Overview

In this blog post, the last in a series of posts discussing VMware Antrea IDS configuration and visibility, I am going to configure VMware Aria Operations for Logs (formerly known as vRealize Log Insight) to ingest and display VMware Antrea logs, including IDS events captured by the Antrea IDS Suricata engine.

VMware Aria Operations for Logs offers a sophisticated engine for log analysis that automatically identifies structure in machine-generated, unstructured log data (including application logs, network traces, configuration files, etc.). Using Aria Operations for Logs, we can build dashboards of interest for efficient log visibility and analysis.

Lab Inventory

For software versions I used the following:

  • VMware ESXi 7.0U3f
  • vCenter server version 7.0U3f
  • TrueNAS 12.0-U7 used to provision NFS data stores to ESXi hosts.
  • VyOS 1.4 used as lab backbone router and DHCP server.
  • Ubuntu 18.04 LTS as bootstrap machine.
  • Ubuntu 20.04.2 LTS as DNS and internet gateway.
  • Windows Server 2012 R2 Datacenter as management host for UI access.
  • NSX 4.0.0.1
  • Vanilla Kubernetes cluster version 1.24
  • VMware Aria Operations for Logs version 8.8

For virtual host and appliance sizing, I used the following specs:

  • 3 x virtualised ESXi hosts each with 12 vCPUs, 2 x NICs and 128 GB RAM.
  • vCenter server appliance with 2 vCPU and 24 GB RAM.

Prerequisites

  • VMware Antrea CNI deployed and integrated with NSX (to learn how, you may want to check my previous blog posts HERE and HERE).
  • NSX TP or ATP license.
  • VMware Antrea IDS controller and agents deployed and all in running state.
  • VMware Aria Operations for Logs version 8.8 deployed and running (step by step Installation Guide).

Kubernetes logging using Aria Operations for Logs

Aria Operations for Logs utilises the Fluentd open source project to collect Kubernetes cluster logs and send them over to the Aria syslog server. Fluentd uses plugins in order to interact with and send logs to different collection destinations, such as Elasticsearch, Aria and other tools. The concept is pretty straightforward: you need to install the right plugin for the collector that you want to receive the Kubernetes logs collected by Fluentd.

However, this is easier said than done, because installing a plugin into the general fluentd image means that you will need to rebuild the fluentd image with the specific plugin you want to use, which is a cumbersome task. For that reason, VMware has released a specific fluentd image which already has the Log Insight log collection plugin installed, and this is what we are going to use in this blog post. VMware fluentd k8s images are available via the following VMware Harbor repository:

projects.registry.vmware.com/vrealize_loginsight/fluentd:1.0
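If you want to sanity-check access to the repository before deploying anything, you can pull the image manually (this assumes Docker is available on your bootstrap machine):

docker pull projects.registry.vmware.com/vrealize_loginsight/fluentd:1.0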

Deployment steps

Step 1: Deploy and configure fluentd

Before we deploy the actual fluentd daemonset on our k8s cluster, we need to create a configmap from a configuration file in order to pass some configuration parameters to the fluentd pods. Those parameters include:

  • Which logs from the k8s cluster should be collected by the fluentd agents, i.e. the log sources.
  • The log collectors to which fluentd should send the collected logs. By default, fluentd agents send collected logs to the stdout of the fluentd pods; we need to set our VMware Aria Operations for Logs address as the log output destination, and this needs to be added to the initial configuration file.

Log in to your bootstrap machine, create a file called fluent.conf and paste the following contents into it (modify the Log Insight address as per your setup):

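# Tail container logs from every node; the kubelet symlinks them under /var/log/containers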
<source>
  @id in_tail_container_logs
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag raw.kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </pattern>
    <pattern>
      format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </pattern>
  </parse>
</source>

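# Tail Antrea IDS (Suricata) alerts, which are written in EVE JSON format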
<source>
  @type tail
  read_from_head true
  path /var/log/antrea/suricata/eve.alert.*
  pos_file /var/log/fluentd-suricata.pos
  tag suricata
  <parse>
    @type json
    time_type string
    time_format %Y-%m-%dT%H:%M:%S.%6N%z
  </parse>
</source>

# Detect exceptions in the log output and forward them as one log entry.
<match raw.kubernetes.**>
  @id raw.kubernetes
  @type detect_exceptions
  remove_tag_prefix raw
  message log
  stream stream
  multiline_flush_interval 5
  max_bytes 500000
  max_lines 1000
</match>

# Stamp each Kubernetes record with static environment and log type fields
<filter kubernetes.**>
  @type record_transformer
  <record>
    environment tanzu_k8s_grid
    log_type kubernetes
  </record>
</filter>

# Enriches records with Kubernetes metadata
<filter kubernetes.**>
  @id filter_kubernetes_metadata
  @type kubernetes_metadata
  watch false
</filter>

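# Forward everything collected above to Aria Operations for Logs (Log Insight ingestion API)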
<match **>
  @type vmware_loginsight
  scheme https
  ssl_verify false
  host 172.10.40.5
  port 9543
  http_method post
  serializer json
  rate_limit_msec 0
  raise_on_error true
  include_tag_key true
  tag_key tag
  http_conn_debug false
</match>

Save and exit the file, then create a namespace called kube-logging and a configmap from the file in that namespace:

kubectl create ns kube-logging
kubectl create configmap loginsight-fluentd-config -n kube-logging --from-file fluent.conf
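Optionally, you can verify the configmap contents and check that the Log Insight ingestion endpoint is reachable before deploying fluentd. The curl call below posts a test event to the Log Insight ingestion API on port 9543, the same host and port used in fluent.conf (note: the trailing agent ID segment of the ingestion URL can be an arbitrary identifier):

kubectl -n kube-logging describe configmap loginsight-fluentd-config
curl -k -X POST https://172.10.40.5:9543/api/v1/events/ingest/0 -H 'Content-Type: application/json' -d '{"events":[{"text":"fluentd connectivity test"}]}'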


Step 2: Deploy fluentd Daemonset pods and verify deployment

The next step is to deploy fluentd as a DaemonSet. This ensures that fluentd pods run on all available worker nodes and are recreated automatically if deleted. Create a deployment YAML file and paste the below contents into it (make sure to create the deployment file under the same directory as the fluent.conf file).

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: fluentd-loginsight-logging
  name: fluentd-loginsight-logging
  namespace: kube-logging

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-clusterrole
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods"]
  verbs: ["list", "get", "watch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-clusterrole
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd-clusterrole
subjects:
  - kind: ServiceAccount
    name: fluentd-loginsight-logging
    namespace: kube-logging

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-loginsight-logging
  namespace: kube-logging
  labels:
    k8s-app: fluentd-loginsight-logging
    app: fluentd-loginsight-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      name: fluentd-loginsight-logging
  template:
    metadata:
      labels:
        name: fluentd-loginsight-logging
        app: fluentd-loginsight-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: fluentd-loginsight-logging
      tolerations:
      # Kubernetes 1.24 taints control-plane nodes with the control-plane
      # key; the master key is kept for compatibility with older clusters.
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-loginsight
        image: projects.registry.vmware.com/vrealize_loginsight/fluentd:1.0
        command: ["fluentd", "-c", "/etc/fluentd/fluent.conf", "-p", "/fluentd/plugins"]
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: false
        - name: config-volume
          mountPath: /etc/fluentd
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: config-volume
        configMap:
          name: loginsight-fluentd-config

Save and exit the above file and then apply it to your k8s cluster:

kubectl apply -f <filename.yaml>

Wait a couple of minutes, then check the status of the pods running inside the kube-logging namespace; they should all be in a Running state.
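To verify, you can run the following commands (the names below match the DaemonSet manifest above):

kubectl -n kube-logging rollout status daemonset/fluentd-loginsight-logging
kubectl -n kube-logging get pods -o wide
kubectl -n kube-logging logs daemonset/fluentd-loginsight-logging | tail -n 20

The last command tails the fluentd logs of one of the pods; any connection errors towards the Log Insight ingestion endpoint will show up there.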

Step 3: Verify log collection on Aria Operations for logs

Log in to the Aria UI and from the left pane select Explore Logs; you should be able to see logs coming in from your k8s cluster.

In the search field, type suricata and press Enter; you should get output similar to the below, showing the Antrea IDS logs.

Final Word

Having Aria Operations for Logs as a central logging system for all your workloads is a great way of keeping a close eye on all events and alerts across your entire environment. By integrating Antrea IDS logs with Aria, you can create dashboards specifically for IDS events from your Tanzu and/or k8s clusters, and with that you have centralised log collection and visualisation for your containerised workloads as well.

Hope you find this blog post useful.