Overview

In my previous blog post HERE, I deployed VMware Antrea IDS and demonstrated how it can protect pods running on clusters that use Antrea as CNI against malicious attacks. Although the feature is still in tech preview, it is very promising to see that VMware is committed to the vision of making Tanzu/Kubernetes security an integral part of NSX security.

The VMware Antrea IDS engine leverages Suricata, the world’s no. 1 open-source IDPS engine, which already positions VMware Antrea IDS at the top of container security solutions out there. That being said, the solution is still in its early stages and currently lacks centralised logging and visualisation of IDS events and alerts.

In this blog post, I am going to integrate VMware Antrea IDS with the EFK stack (Elasticsearch, fluentd and Kibana) to centralise Antrea IDS log collection and display the logs in Kibana dashboards for much easier reading and processing.

Lab Inventory

For software versions I used the following:

  • VMware ESXi 7.0U3f
  • vCenter server version 7.0U3f
  • TrueNAS 12.0-U7 used to provision NFS data stores to ESXi hosts.
  • VyOS 1.4 used as lab backbone router and DHCP server.
  • Ubuntu 18.04 LTS as bootstrap machine.
  • Ubuntu 20.04.2 LTS as DNS and internet gateway.
  • Windows Server 2012 R2 Datacenter as management host for UI access.
  • NSX 4.0.0.1

For virtual hosts and appliances sizing I used the following specs:

  • 3 x virtualised ESXi hosts each with 8 vCPUs, 4 x NICs and 96 GB RAM.
  • vCenter server appliance with 2 vCPU and 24 GB RAM.

Prerequisites

  • VMware Antrea CNI deployed and integrated with NSX (to learn how, you can check my previous blog posts HERE and HERE).
  • NSX TP or ATP license.
  • VMware Antrea IDS controller and agents deployed and all in running state.

Kubernetes logging overview

Kubernetes does not have integrated system components to export and visualise logs. Instead, it leverages tools such as fluentd, which collects all logs from the k8s cluster and sends them to a centralised search engine for indexing; in this post I chose Elasticsearch for that role, and finally I integrate Elasticsearch with Kibana to display the logs.

The whole logging stack in K8s can be visualised below:

Deployment steps

Step 1: Deploy elasticsearch and kibana

I have to say, getting the correct deployment files with the correct versions of the components shown above took me quite some time. So, in this blog post I am sharing with you my 100% tested and working YAML deployment files, so all you will have to do is copy, paste and kubectl apply -f the deployment files right away.

We start with the elasticsearch and kibana deployment. Copy the below contents, paste them into a YAML file, then save and exit your text editor.

apiVersion: v1
kind: Namespace
metadata:
  name: kube-logging
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: elastic-storage
  namespace: kube-logging
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-pvc
  namespace: kube-logging
spec:
  storageClassName: elastic-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-pv
  namespace: kube-logging
spec:
  storageClassName: elastic-storage
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch/"
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    matchLabels:
      app: elasticsearch
  serviceName: elasticsearch
  replicas: 1
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        # elasticsearch requires vm.max_map_count >= 262144 on the node,
        # so this init container sets it before the elasticsearch container starts
        - name: init-sysctl
          image: busybox:1.27.2
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: es-data
          image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.8.0
          env:
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx1g"
            - name: cluster.name
              value: "kube-logging"
            - name: bootstrap.memory_lock
              value: "false"
            - name: network.host
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: http.port
              value: "9200"
            - name: discovery.type
              value: "single-node"
            - name: indices.query.bool.max_clause_count
              value: "8192"
            - name: search.max_buckets
              value: "100000"
            - name: action.destructive_requires_name
              value: "true"
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: transport
          livenessProbe:
            tcpSocket:
              port: transport
            initialDelaySeconds: 90
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /_cluster/health
              port: http
            initialDelaySeconds: 90
            timeoutSeconds: 20
          volumeMounts:
            - name: es-data
              mountPath: /data
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: amd64
      volumes:
        - name: es-data
          persistentVolumeClaim:
            claimName: elasticsearch-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  type: NodePort
  selector:
    app: kibana
  ports:
    - port: 5601
      targetPort: 5601
      # the kibana UI will be reachable on any node IP at this port
      nodePort: 30007
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana-oss:7.8.0
          env:
            - name: action.destructive_requires_name
              value: "true"
            - name: SERVER_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVER_PORT
              value: "5601"
            - name: ELASTICSEARCH_URL
              value: "http://elasticsearch:9200"
            - name: KIBANA_DEFAULTAPPID
              value: "dashboard/3b331b30-b987-11ea-b16e-fb06687c3589"
            - name: LOGGING_QUIET
              value: "true"
            - name: KIBANA_AUTOCOMPLETETERMINATEAFTER
              value: "100000"
          ports:
            - containerPort: 5601
              name: http
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: amd64

 

Apply the above YAML file using the command:

kubectl apply -f <filename.yaml>

Wait until all pods under the kube-logging namespace are up and running, and check that the elasticsearch and kibana services are created.
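You can check both with:

kubectl get pods -n kube-logging
kubectl get svc -n kube-logging

Note the CLUSTER-IP of the elasticsearch service; we will need it when configuring fluentd in the next step.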

From a web browser you can verify that the kibana UI is initialised by opening an HTTP session to http://<node-ip>:30007, and to get the node IP you can run, for example:
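kubectl get nodes -o wide

The node IPs are listed under the INTERNAL-IP column.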

Any worker node IP will do. From your web browser you should see output similar to the below.

Step 2: Deploy and configure fluentd

Before deploying the fluentd pods, which will collect logs from our Tanzu/Kubernetes cluster, we need to create a ConfigMap that provides the essential configuration parameters for the fluentd pods. Create a file called kubernetes.conf and paste the following code into it:

# Forward everything fluentd collects to elasticsearch
# (change the host value to your own elasticsearch service ClusterIP, see the note below)
<match *.**>
  @type copy
    <store>
      @type elasticsearch
      host 10.102.160.236
      port 9200
      include_tag_key true
      logstash_format true
      logstash_prefix fluentd
      flush_interval 10s
    </store>
</match>

# Tail the Antrea IDS (Suricata) alert logs from the node and parse each line as JSON
<source>
  @type tail
  read_from_head true
  path /var/log/antrea/suricata/eve.alert.*
  pos_file /var/log/fluentd-suricata.pos
  tag suricata
  <parse>
    @type json
    time_type string
    time_format %Y-%m-%dT%H:%M:%S.%6N%z
  </parse>
</source>

# Enrich container log records with Kubernetes metadata (pod, namespace, labels)
<filter kubernetes.**>
  @type kubernetes_metadata
  @id filter_kube_metadata
  kubernetes_url "#{ENV['FLUENT_FILTER_KUBERNETES_URL'] || 'https://' + ENV.fetch('KUBERNETES_SERVICE_HOST') + ':' + ENV.fetch('KUBERNETES_SERVICE_PORT') + '/api'}"
  verify_ssl "#{ENV['KUBERNETES_VERIFY_SSL'] || true}"
  ca_file "#{ENV['KUBERNETES_CA_FILE']}"
</filter>

 

You do, however, need to change the host value in this file from 10.102.160.236 (the cluster service address of my elasticsearch) to your own cluster service address, which you can get by running kubectl get svc -n kube-logging as shown in the previous step. Create a ConfigMap from the above file using the command:

kubectl create configmap fluentd-conf -n kube-logging --from-file kubernetes.conf
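You can confirm that the ConfigMap was created and contains your configuration with:

kubectl describe configmap fluentd-conf -n kube-logging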

Create another YAML file and paste the below contents into it:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.8.1-debian-elasticsearch7-1.3
          env:
            - name:  FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.kube-logging.svc.cluster.local"
            - name:  FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME
              value: "fluentd"
            - name: FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX
              value: "fluentd"
            - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
              value: "false"
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: config-volume
              mountPath: /fluentd/etc/kubernetes.conf
              subPath: kubernetes.conf
            - name: varlog
              mountPath: /var/log
            - name: dockercontainerlogdirectory
              mountPath: /var/log/pods
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: config-volume
          configMap:
            name: fluentd-conf
        - name: varlog
          hostPath:
            path: /var/log
        - name: dockercontainerlogdirectory
          hostPath:
            path: /var/log/pods

 

Save and exit, then deploy the above YAML using kubectl apply -f <filename.yaml>

Wait for a couple of minutes and then check that all the pods under the kube-logging namespace are in Running state, for example:
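kubectl get pods -n kube-logging -o wide

You should now see one fluentd pod per node (fluentd runs as a DaemonSet) in addition to the elasticsearch and kibana pods. If a fluentd pod keeps restarting, kubectl logs <fluentd-pod-name> -n kube-logging is a good place to start troubleshooting.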

Step 3: Verify fluentd connectivity to elasticsearch and create an index pattern

Before we can start visualising and analysing Kubernetes logs in kibana dashboards, we first need to ensure that the fluentd pods are collecting Kubernetes cluster logs and sending them to the elasticsearch cluster. After that, we need to define what is called an index pattern in kibana, which kibana uses to retrieve data from Elasticsearch indices for things like visualisations (i.e. dashboard creation).

To verify that fluentd is sending logs to elasticsearch, open a web browser pointing to http://<node-ip>:30007, which is the kibana UI we deployed in step 1. From the left pane, click on Discover.

Make sure that you can see the fluentd index appearing in the create index pattern window; if not, it means that the fluentd pods cannot connect to the elasticsearch cluster and you need to troubleshoot. If all is good, you should see a screen similar to the below.
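If you prefer to check from the command line, you can also query elasticsearch directly. A quick sketch (assuming you have kubectl access to the cluster): port-forward the elasticsearch service,

kubectl port-forward svc/elasticsearch -n kube-logging 9200:9200

and then, from a second terminal, list the indices:

curl "http://localhost:9200/_cat/indices?v"

You should see one or more indices named fluentd-<date>, matching the logstash_prefix we set in the fluentd configuration.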

In the index pattern name field type fluentd-* (without quotes) to tell kibana that we want to match all indices with the prefix fluentd, then click Next step. For the time filter choose @timestamp, then click Create index pattern.

Once the index pattern is created, you should be able to see the logs sent from the fluentd agents to elasticsearch, including the Suricata logs that show the captured IDS events.
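For reference, a Suricata alert record in the eve.alert log is a JSON document roughly along these lines (an illustrative, made-up example, not an actual event from my lab):

{
  "timestamp": "2022-09-14T10:15:32.123456+0000",
  "event_type": "alert",
  "src_ip": "10.10.10.5",
  "src_port": 44212,
  "dest_ip": "10.10.20.8",
  "dest_port": 80,
  "proto": "TCP",
  "alert": {
    "signature": "GPL ATTACK_RESPONSE id check returned root",
    "category": "Potentially Bad Traffic",
    "severity": 2
  }
}

Because fluentd parses these records as JSON, each of these fields becomes a searchable field in kibana.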

Final Word

VMware Antrea IDS is currently a tech preview release, and log visibility still has some distance to cover. However, the EFK stack provides a great way to centralise and visualise VMware Antrea IDS log collection.

I hope you have found this blog post useful.