In part two of this blog post series, we will integrate our Kubernetes cluster, which runs Antrea as its CNI, with NSX, and use NSX to configure centralised security policies for the container workloads running on the cluster.
For software versions I used the following:
- VMware ESXi 7.0 U3
- vCenter server version 7.0U3
- NSX-T 3.2
- TrueNAS 12.0-U7 used to provision NFS datastores to ESXi hosts.
- VyOS 1.4 used as lab backbone router.
- Ubuntu 20.04 LTS as Linux jumpbox.
- Ubuntu 20.04.2 LTS as DNS and internet gateway.
- Windows Server 2012 R2 Datacenter as management host for UI access.
- 3 x Ubuntu 18.04 VMs: 1 x Kubernetes controller and 2 x worker nodes.
For virtual hosts and appliances sizing I used the following specs:
- 3 x virtualised ESXi hosts each with 8 vCPUs, 4 x NICs and 32 GB RAM.
- vCenter server appliance with 2 vCPU and 24 GB RAM.
- NSX-T Manager medium appliance.
- 2 x medium edges with no resource reservations.
Registering Antrea container cluster to NSX
At this point, we have successfully set up a Kubernetes cluster with 1 controller node and 2 x worker nodes, VMware Antrea is deployed on that cluster, and all system pods are in the Running state. In the following steps we will prepare and connect the Antrea CNI to NSX.
Create a Self-Signed Security Certificate
This step is needed in order to create a principal identity user account in NSX, which will be used by the Antrea interworking plugin running on our Kubernetes cluster to authenticate to the NSX management plane. This user authenticates with a self-signed certificate; to generate that certificate, we need to log in to our Antrea controller node and use openssl. Here we go:
The commands I used are as follows:
openssl genrsa -out k8s-cluster-private.key 2048
openssl rand -writerand .rnd
openssl req -new -key k8s-cluster-private.key -out k8s-cluster.csr -subj "/C=US/ST=CA/L=Palo Alto/O=VMware/OU=Antrea Cluster/CN=homelab-k8s-cluster"
openssl x509 -req -days 3650 -sha256 -in k8s-cluster.csr -signkey k8s-cluster-private.key -out homelab-k8s-cluster.crt
The command "openssl rand -writerand .rnd" is needed to avoid a CSR-generation error in openssl (a missing .rnd random-seed file on some distributions, including Ubuntu 18.04).
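The certificate steps above can be run as a single copy-pasteable block, with a final verification step (not shown above) that confirms the certificate carries the expected subject and validity period:

```shell
# Generate a 2048-bit RSA private key for the cluster's principal identity
openssl genrsa -out k8s-cluster-private.key 2048

# Seed the RNG file to avoid the CSR-generation error on Ubuntu 18.04
openssl rand -writerand .rnd

# Create a CSR with the cluster name as the CN
openssl req -new -key k8s-cluster-private.key -out k8s-cluster.csr \
  -subj "/C=US/ST=CA/L=Palo Alto/O=VMware/OU=Antrea Cluster/CN=homelab-k8s-cluster"

# Self-sign the CSR, valid for 10 years
openssl x509 -req -days 3650 -sha256 -in k8s-cluster.csr \
  -signkey k8s-cluster-private.key -out homelab-k8s-cluster.crt

# Verify the subject and validity dates of the generated certificate
openssl x509 -in homelab-k8s-cluster.crt -noout -subject -dates
```

The last command should print a subject line ending in CN = homelab-k8s-cluster; if it does not, NSX will reject the principal identity authentication later on.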
Create a Principal Identity User in NSX
After generating the certificate, we need to create the principal identity user we discussed earlier, using that certificate.
To create a principal identity user:
- In the NSX Manager UI, click the System tab.
- Under Settings, navigate to User Management > User Role Assignment.
- Click Add > Principal Identity with Role.
- Enter a name and a node ID for the principal identity user, assign a role, and paste in the contents of the homelab-k8s-cluster.crt certificate we generated earlier, then save.
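If you prefer the API over the UI, the same principal identity can be created by POSTing to NSX's trust-management endpoint. A hedged sketch of the request body follows; the field names are taken from the NSX-T trust-management API, but verify them against the API reference for your NSX version before use:

```json
{
  "name": "homelab-k8s-cluster",
  "node_id": "homelab-k8s-cluster",
  "role": "enterprise_admin",
  "certificate_pem": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
}
```

POST this body (with the full PEM contents of homelab-k8s-cluster.crt in certificate_pem) to /api/v1/trust-management/principal-identities/with-certificate as an admin user; NSX imports the certificate and creates the principal identity in one call.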
Configuring NSX Interworking connector
In part I of this blog post series, I downloaded the VMware Antrea and VMware Antrea Interworking ZIP files from VMware Customer Connect and uploaded both ZIP files to my Antrea Kubernetes controller node.
In part I we already used the VMware Antrea manifests and images to set up Antrea as our CNI; now we need to unzip the antrea-interworking ZIP file, which provides the YAML configuration files used to connect the Antrea CNI to NSX Manager.
Using Vim or any other text editor, open both YAML files, interworking.yaml and deregisterjob.yaml, and replace all image references with the following image location:
After updating all image locations in the files mentioned above, we need to edit the bootstrap configuration file. The bootstrap configuration file holds the information needed to connect to NSX Manager, such as the NSX Manager IP address and the SSL certificate and key values.
Using any text editor, open the bootstrap-config.yaml file and fill in the required parameters:
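For orientation, this is an illustrative sketch of the fields you fill in; the exact file ships with the antrea-interworking bundle and the key names can vary between releases, so treat this as a guide rather than a drop-in file. The certificate and key values are the base64-encoded contents of the files generated in the earlier openssl step:

```yaml
# Sketch of the bootstrap ConfigMap shipped with antrea-interworking.
# Key names may differ by release; compare against the file in your bundle.
apiVersion: v1
kind: ConfigMap
metadata:
  name: bootstrap-config
  namespace: vmware-system-antrea
data:
  bootstrap.conf: |
    # Must match the CN used in the self-signed certificate
    clusterName: homelab-k8s-cluster
    # One or more NSX Manager IPs/FQDNs
    NSXManagers: [192.168.10.15]
    # base64-encoded homelab-k8s-cluster.crt and k8s-cluster-private.key
    tls.crt: <base64-encoded certificate>
    tls.key: <base64-encoded private key>
```

The base64 values can be produced with, for example, base64 -w0 homelab-k8s-cluster.crt.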
The next step is to apply the interworking and bootstrap config YAML files to the Kubernetes cluster using the following command:
kubectl apply -f bootstrap-config.yaml -f interworking.yaml
Give this a minute or two and then verify that the Antrea interworking pods are in the Running state; for this you need to specify the correct namespace, which is vmware-system-antrea:
kubectl get pods -n vmware-system-antrea
Viewing Antrea Container Cluster Inventory in NSX Manager
If the connection from the Antrea cluster to NSX Manager is successful, then under NSX's Inventory menu > Containers we should be able to list the Kubernetes clusters and namespaces from NSX Manager:
Generating NSX security policy for pods running on Antrea
In the last section of this post, we use the NSX and Antrea integration to create a security policy for some test pods running on our Kubernetes cluster.
First, I will create a namespace called homelab:
kubectl create namespace homelab
Next, I will deploy 2 simple nginx Pods using the following sample YAML file (let's call it webserver1.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: webserver1
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
On the controller node, you need to create a file called webserver1.yaml and paste the above contents in it using any text editor.
Save and exit webserver1.yaml, then repeat the same steps to create another Pod called webserver2 (do not forget to change the name under the Pod metadata section).
Once done, create both Pods under the homelab namespace using the command:
kubectl create -f webserver1.yaml -f webserver2.yaml -n homelab
Make sure that both Pods are in the Running state:
kubectl get pods -n homelab
Navigate to the NSX UI and, under Inventory > Containers > Clusters, click on the number of Pods shown; this opens a window listing all the Pods on our Antrea cluster, and you should see webserver1 and webserver2 listed:
Once verified, navigate to Security and, under the Distributed Firewall section, add a new policy called K8s-Antrea-DFW-Policy; in the Applied To field, make sure you choose the Antrea container cluster.
I then went back, created a security group based on the Antrea namespace homelab (which we created earlier), added a DFW rule allowing only HTTP, and applied it to that security group:
After you publish the rule (you can add a Deny Any Any rule at the bottom if you want to allow only HTTP; my example is for demo purposes only), navigate back to Inventory > Groups: the webserver Pods should now be visible as members of the security group we created:
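Under the hood, NSX distributes such DFW rules to the cluster, where they are realized as Antrea policies. For comparison, a roughly equivalent hand-written Antrea-native ClusterNetworkPolicy allowing only HTTP into the homelab namespace could look like the sketch below; the crd.antrea.io API version depends on your Antrea release, and the namespace selector assumes the kubernetes.io/metadata.name label is present (Kubernetes 1.21+; on older clusters, label the namespace yourself):

```yaml
# Sketch only: not the policy NSX generates, just an Antrea-native equivalent.
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: allow-http-homelab
spec:
  priority: 5
  appliedTo:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: homelab
  ingress:
  - action: Allow
    name: allow-http
    ports:
    - protocol: TCP
      port: 80
  - action: Drop
    name: drop-other
```

Writing the same intent once in the NSX UI and having it pushed to every registered cluster is exactly the centralisation benefit this integration provides.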
VMware Antrea integration with NSX 3.2 adds efficient centralised security policy management for container workloads, so users can manage complex container networking as simply as any other workload type. This abstraction layer in security configuration is what simplifies complex virtualised data centres and matches VMware's vision of any workload on any cloud.
I hope this blog post was worth your time and has contributed to your knowledge.
Hi Bassem, great blog!
I have a question regarding the Distributed Firewall Rule that is shown as an example; I see “WebServer Pods” as source, whilst on “traditional VMs” they would be the destination. Is this the way it is supposed to be done for K8s or is it just an example and not a “real case”?
Second question: at the moment only policies between Pods can be configured, and not policies between Pods and overlay VMs, correct?
Thanks for the time you took to read through.
For your first question, it is indeed just an example 🙂
and for the second question, AFAIK this is at the moment only between Pods, as it is ultimately handled by Antrea network policies.
Hope this helps!