In this two-part blog post I am going to demonstrate how to set up a Kubernetes cluster using VMware Antrea as the CNI (Container Network Interface) and NSX 3.2.x as a centralised security policy manager for pod workloads running on the cluster.

In part one, I am going to introduce the fundamentals of Kubernetes and Kubernetes cluster components, and install Antrea as the networking plugin for our demo cluster.

In part two, we will integrate our Kubernetes cluster, which runs Antrea as its CNI, with NSX, and test how NSX can be used as a centralised security manager for securing containerised workloads (pods).

Lab Inventory

For software versions I used the following:

  • VMware ESXi 7.0.2.17867351
  • vCenter server version 7.0U3
  • NSX-T 3.2.0.1
  • TrueNAS 12.0-U7 used to provision NFS datastores to ESXi hosts.
  • VyOS 1.4 used as lab backbone router.
  • Ubuntu 20.04 LTS as Linux jumpbox.
  • Ubuntu 20.04.2 LTS as DNS and internet gateway.
  • Windows Server 2012 R2 Datacenter as management host for UI access.
  • 3 x Ubuntu 18.04 VMs as 1 x Kubernetes controller and 2 x worker nodes.

For virtual hosts and appliances sizing I used the following specs:

  • 3 x virtualised ESXi hosts each with 8 vCPUs, 4 x NICs and 32 GB RAM.
  • vCenter server appliance with 2 vCPU and 24 GB RAM.
  • NSX-T Manager medium appliance
  • 2 x medium edges with no resource reservations.

Kubernetes and kubernetes cluster components

Before we jump into the configuration steps, it is important to understand the basics of Kubernetes clusters as well as the need for a container network plugin, which in our case is Antrea. Once the basics are in place, part two of this blog post will walk you through integrating container networking with NSX for centralised networking and security management of your containers.

Kubernetes basics

Kubernetes provides an orchestration, automation and management layer for running containerised workloads. Kubernetes is deployed in clusters, and a Kubernetes cluster is built up of the following:

  • Controller node(s): one or more nodes that function as the control plane for the Kubernetes cluster. Control nodes run no user workloads (containers); instead they run the service containers and other control plane components needed to run and maintain the cluster.
  • Worker node(s): one or more nodes that function as worker (data plane) nodes within the cluster; the actual user containers run on these nodes.

In some lab/test deployments, users can make use of Minikube, a single-node implementation of Kubernetes in which one node acts as both controller and worker; as mentioned, it is intended for lab/test use cases and not for production.

Digging further into the Kubernetes control plane, the following services must be running on the controller nodes for a fully functional Kubernetes cluster:

  • Kube-controller-manager: the core daemon that regulates system state; it is the central component of the Kubernetes control plane.
  • Kube-scheduler: the daemon responsible for scheduling containers onto Kubernetes nodes.
  • Kube-api-server: handles all API calls needed to build up objects in Kubernetes clusters.
  • Container runtime: the daemon responsible for spinning up and running containers (found on both control and worker nodes). There are various runtimes available; Docker is the most famous, but it is no longer supported as a runtime in Kubernetes, so I am using containerd as the container runtime in my lab setup.
  • Container Network Interface (CNI): a plugin responsible for providing network connectivity and policy configuration for containers running in the cluster. Without a CNI, containers cannot communicate with each other or with external networks.

Deploying kubernetes clusters

Kubernetes clusters can be deployed using different methods; the easiest is using the kubeadm tool to build and configure the cluster. Once the cluster is built, kubectl is the tool used to provision pods and deployments and to manage the operational side of the cluster.

Building our kubernetes cluster

In the coming section I will go through setting up a three-node k8s cluster (1 x control and 2 x worker nodes) using kubeadm, and at the end of part one I will install VMware Antrea on that cluster for container networking.

Control node setup

Before we start, make sure that name resolution works properly in your setup, either by means of DNS (which I use and recommend) or by adding entries to the /etc/hosts file on your cluster nodes (which I do not recommend).

Log in to your control node and run the following command to ensure that swap is disabled; this is a very important prerequisite for Kubernetes:

Step 1: disable filesystem swap

sudo swapoff -a
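Note that swapoff -a only disables swap until the next reboot. To keep swap off permanently, any swap entry in /etc/fstab should be commented out as well; a minimal sketch (assuming a standard Ubuntu fstab with a swap line):

```shell
# Comment out any fstab line containing a swap mount so swap stays off after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```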

This needs to be done on all three nodes. Next, we proceed with enabling and loading two important kernel modules required by containerd:

Step 2: enable and activate overlay and br_netfilter modules

cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
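The conf file above only ensures the modules are loaded at boot; to load them immediately without rebooting, run:

```shell
# Load both modules into the running kernel now
sudo modprobe overlay
sudo modprobe br_netfilter

# Verify they are loaded
lsmod | grep -E 'overlay|br_netfilter'
```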


Step 3: configure the required sysctl parameters so they persist across system reboots, and reload sysctl so the changes take effect:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

Step 4: install containerd packages:

sudo apt-get update
sudo apt-get install containerd -y

Now we need to create the containerd configuration directory, generate a default configuration file and restart the service:

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
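One gotcha worth mentioning: on Ubuntu systems running systemd, it is commonly recommended to set SystemdCgroup = true in the generated config so containerd and kubelet agree on the cgroup driver. A hedged sketch (the key lives under the runc runtime options section of config.toml):

```shell
# Flip SystemdCgroup from false to true in containerd's default config,
# then restart containerd to pick up the change
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```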

The apt-transport-https package is also needed and must be installed:

sudo apt update && sudo apt install -y apt-transport-https curl 

Step 5: add the Google GPG key for the Kubernetes repository; this is needed to be able to download the Kubernetes building tools (kubeadm, kubectl and kubelet):

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -


Step 6: set up the Kubernetes repository entry in the Debian package manager:

cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF


Step 7: update the apt repositories to reflect the newly added Kubernetes repo and install the required components:

sudo apt update
sudo apt install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00

The reason I specifically chose version 1.23 (the latest at the time of writing) is that it is among the supported K8s versions listed in the VMware Antrea 1.4 release notes.

After the packages are installed, I am going to hold them at version 1.23 to prevent any automatic upgrade. This step is not a must, but it ensures that all components stay on the same compatible versions:
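Holding the packages can be done with apt-mark:

```shell
# Prevent apt from automatically upgrading the Kubernetes components
sudo apt-mark hold kubelet kubeadm kubectl
```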

Now, we need to repeat all the above steps on the other two worker nodes in the cluster before we proceed with initialising the cluster and installing VMware Antrea.

Step 8: initialise the k8s cluster; this step is completed only on the controller node:

sudo kubeadm init --pod-network-cidr 10.20.0.0/16 --kubernetes-version 1.23.0

Important note: make sure that the pod network CIDR has enough addresses for the controller and worker nodes, otherwise the Antrea agent will fail to initialise. The rule is simple: every node in the cluster consumes a /24 address range.

The above command will take some time to initialise the k8s cluster; if successful, the output ends with instructions for configuring kubectl access and a kubeadm join command for the worker nodes.
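Once the init completes, configure kubectl access for your regular user on the controller node; the standard commands (also printed in kubeadm's own output) are:

```shell
# Copy the cluster admin kubeconfig to the current user's home so kubectl works
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```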

Step 9: join the worker nodes to the Kubernetes cluster. This is done by first generating a cluster join token on the controller node; log in to the controller node and run the following:

kubeadm token create --print-join-command

The above command prints the join command that you need to copy and run on the CLI of your worker nodes as root (i.e. with sudo).

Run this command as root on your worker nodes, then return to the controller node to proceed with the Antrea installation.
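To verify the workers joined successfully, list the nodes from the controller. Note that until a CNI is installed, the nodes will report a NotReady status, which is expected at this stage:

```shell
# All three nodes should be listed; STATUS stays NotReady until Antrea is installed
kubectl get nodes -o wide
```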

Step 10: Installing VMware Antrea

VMware Antrea Container Networking is based on the open source Antrea CNI project. In this blog post I am deploying the VMware Antrea CNI and NOT the open source variant, so you need to stick to the VMware container images and manifests rather than the ones from the open source Antrea community.

First, log in to the VMware download portal and download antrea-advanced-1.5.2+vmware.2.zip, found under Networking & Security > VMware Antrea, version 1.4.

Once the zip file is downloaded, copy it over to your home directory on the controller node and extract its contents.
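A sketch of the copy-and-extract step (hedged: the user name and controller hostname here are placeholders from my lab, adjust them to your environment):

```shell
# Copy the bundle from the jumpbox to the controller node (example hostname)
scp antrea-advanced-1.5.2+vmware.2.zip user@k8s-controller:~/

# On the controller node: install unzip if missing, then extract
sudo apt install -y unzip
unzip antrea-advanced-1.5.2+vmware.2.zip
```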

There are different methods for installing Antrea as the CNI on your K8s cluster; the easiest in my opinion is using VMware’s public Harbor image repository to pull the Antrea image you need. The release notes of VMware Antrea 1.4 list the Harbor repository URL corresponding to each image; in our case we need the following:

projects.registry.vmware.com/antreainterworking/antrea-advanced-debian:v1.5.2_vmware.2

All we need to do now is point the Antrea config YAML file at this repository so it pulls the container image from there. Change to the manifests directory and open the file named antrea-advanced-v1.5.2+vmware.2.yml for editing.

In that file, modify all image entries throughout the file to point to the repository mentioned above.
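If you prefer not to edit by hand, a hedged one-liner sketch (assuming the manifest's image lines follow the usual `image: <repo>:<tag>` form) that rewrites every antrea-advanced image reference and then lets you verify the result:

```shell
# Point every Antrea image reference at VMware's public Harbor repository
sed -i 's|image: .*antrea-advanced-debian.*|image: "projects.registry.vmware.com/antreainterworking/antrea-advanced-debian:v1.5.2_vmware.2"|' antrea-advanced-v1.5.2+vmware.2.yml

# Verify: every image entry should now point at projects.registry.vmware.com
grep 'image:' antrea-advanced-v1.5.2+vmware.2.yml
```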

Once all the image entries in the manifests YAML are updated with the repository above, run the following command from the same directory where the YAML file is:

kubectl apply -f antrea-advanced-v1.5.2+vmware.2.yml

After a couple of minutes, check the status of all the system pods and ensure they are all in the Running state using the following command:

kubectl get pods -n kube-system

It is important to specify the -n kube-system flag to tell kubectl that you want to list pods within the kube-system namespace; otherwise kubectl will search for pods within the default namespace, which is used for user pods and not system pods. Another command which can be used to list all pods across all namespaces in a k8s cluster is kubectl get pods -A.
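Once the Antrea pods are all Running, the cluster nodes should also transition from NotReady to Ready, which you can confirm with:

```shell
# Nodes report Ready once the CNI is up on each of them
kubectl get nodes
```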

With this, we conclude the first part of this blog post series on integrating the VMware Antrea CNI with NSX. In part two we will complete the NSX integration with Antrea and inspect some container networking and security functions using NSX.