
Overview
Last year, I wrote a blog post series covering container networking and security using VMware Antrea and NSX-T 3.2. It was the highlight of my blogging work that year, and I received a lot of positive feedback on the topic. Since then, I have been actively tracking the new features that VMware Antrea keeps adding to the Tanzu/Kubernetes networking and security space, and I am very optimistic about how Antrea, together with NSX, will revolutionise the way organisations deploy and manage networking and security for Kubernetes.
On the 6th of February 2023, VMware released VMware Antrea 1.6.0, which is based on the Antrea 1.9.0 open source project and offers very interesting features, especially when it comes to NSX integration. One of the most interesting enhancements (in my opinion) is the ability to create firewall rules matching both NSX objects and Kubernetes objects (DFW rules spanning containers and VMs, for example), along with the ability to create rules matching ingress and egress traffic from/to Kubernetes clusters.
In this blog post series I will revisit the steps needed to deploy and integrate VMware Antrea with NSX on a vanilla Kubernetes cluster, and build on that with an example of how to implement the new DFW features discussed above.
Lab Inventory
For software versions I used the following:
- VMware ESXi 7.0.3g
- vCenter server version 7.0U3h
- VMware NSX 4.1.0
- VMware Antrea 1.6.0
- VMware NSX ALB (Avi) AKO for Ingress provisioning.
- TrueNAS 12.0-U7 as backend storage system.
- VyOS 1.3 used as lab backbone router, NTP and DHCP server.
- Ubuntu 20.04 LTS as Linux jumpbox.
- Windows Server 2019 R2 Standard as DNS server.
- Windows 10 Pro as UI jump box.
- 3 x Ubuntu 18.04 VMs as 1 x Kubernetes controller and 2 x nodes.
For virtual hosts and appliances sizing I used the following specs:
- 3 x virtualised ESXi hosts each with 12 vCPUs, 2x NICs and 128 GB RAM.
- vCenter server appliance with 2 vCPU and 24 GB RAM.
- NSX Manager medium appliance.
Deployment Workflow
In part one of this blog post series I will be covering the following tasks:
- Download Antrea CNI and interworking images.
- Deploy Antrea CNI agents on a vanilla Kubernetes cluster.
- Deploy Antrea Interworking pods and integrate with NSX.
- Deploy a testing microservices application on the Kubernetes cluster and provision an Ingress using NSX ALB AKO.
Download Antrea CNI and Interworking Images
VMware Antrea is composed of the Antrea CNI image package, which contains the Antrea deployment manifests to install the Antrea CNI agents on Kubernetes, and the VMware Antrea Interworking image package, which includes the interworking pods that integrate the Antrea CNI with NSX. Images for both packages are located in the following VMware repositories:
- Antrea CNI Images
- projects.registry.vmware.com/antreainterworking/antrea-advanced-debian:v1.9.0_vmware.2
- Antrea NSX Interworking Pods
- projects.registry.vmware.com/antreainterworking/interworking-debian:0.9.0
The above are the images that I will be using in my setup; however, you can check the VMware Antrea Release Notes if you want to use different images.
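If your Kubernetes nodes cannot reach the VMware registry directly, one option is to mirror the images into a private registry first. A minimal sketch, assuming Docker is available on the jumpbox and registry.lab.local is a placeholder for your own registry:
# Pull the Antrea advanced image from the VMware registry
docker pull projects.registry.vmware.com/antreainterworking/antrea-advanced-debian:v1.9.0_vmware.2
# Retag and push to a private registry (registry name is a placeholder)
docker tag projects.registry.vmware.com/antreainterworking/antrea-advanced-debian:v1.9.0_vmware.2 registry.lab.local/antreainterworking/antrea-advanced-debian:v1.9.0_vmware.2
docker push registry.lab.local/antreainterworking/antrea-advanced-debian:v1.9.0_vmware.2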
Step 1: Download Antrea CNI and Interworking Pods
You can download the Antrea packages from VMware Customer Connect: navigate to Products and Accounts > All Products and then search for Antrea.
Both packages are downloaded in .zip format. Upload and extract them on your jumpbox, and you should see the following directories:
Open the highlighted file above, called antrea-advanced-v1.9.0+vmware.2.yml, and replace all the image references to reflect the Antrea repository location; a snippet from my Antrea deployment manifest looks like the below.
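As a sketch of what the edited references should look like, every image line should end up pointing at the VMware registry; the container name here is representative, so match it against your own manifest:
# Representative excerpt from antrea-advanced-v1.9.0+vmware.2.yml after editing
    spec:
      containers:
        - name: antrea-agent
          image: projects.registry.vmware.com/antreainterworking/antrea-advanced-debian:v1.9.0_vmware.2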
Save and exit the file, then apply the manifest to your Kubernetes cluster using the command “kubectl apply -f <manifest filename>”. Wait a couple of minutes; if the Antrea pods are deployed successfully, you should see all pods in Running state and all your Kubernetes nodes in Ready state. You can use the following commands to verify the status:
kubectl get pods -A | grep -i antrea
kubectl get nodes -o wide
Step 2: Deploy Antrea NSX Interworking Adapter Pods
Next, we need to extract the interworking manifests we downloaded earlier from VMware Customer Connect. The NSX interworking package contains two important YAML files that we need to edit:
- bootstrap-config.yaml, which contains the NSX Manager credentials and certificate info required to connect the Antrea pods to NSX. This file is used to generate a ConfigMap containing the NSX connection details and a Secret needed to connect the interworking pods to NSX Manager.
- interworking.yaml, which contains the image locations and deployment info required to deploy the interworking pods.
VMware Antrea requires a principal identity account with Enterprise Admin rights on the NSX side in order to connect to NSX Manager. Principal identity accounts in NSX are certificate-based, so we need to generate an SSL certificate and key on the Kubernetes node hosting the Antrea interworking pods and use those to create the Enterprise Admin account on the NSX Manager side.
To generate an SSL certificate on your Kubernetes controller, use the below set of commands:
openssl genrsa -out antrea-cluster-private.key 2048
openssl req -new -key antrea-cluster-private.key -out antrea-cluster.csr -subj "/C=US/ST=CA/L=Palo Alto/O=VMware/OU=Antrea Cluster/CN=antrea-cluster"
openssl x509 -req -days 3650 -sha256 -in antrea-cluster.csr -signkey antrea-cluster-private.key -out antrea-cluster.crt
I also included a screenshot from my setup after applying the above commands:
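Before using the certificate, you can optionally confirm its subject and validity dates with standard openssl:
openssl x509 -in antrea-cluster.crt -noout -subject -dates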
We also need base64-encoded versions of the private key and certificate we just generated, since these will be added to bootstrap-config.yaml when connecting to NSX Manager. You can generate the base64 output from both files using the below commands:
cat antrea-cluster-private.key | base64 -w0 > antrea-cluster-key.base64
cat antrea-cluster.crt | base64 -w0 > antrea-cluster-crt.base64
Step 3: Deploy Interworking Pods and connect to NSX Manager
As mentioned, the Antrea interworking pods require a principal identity user defined in NSX in order to be able to connect to NSX Manager. To create a principal identity user, follow these steps:
- In the NSX Manager UI, click the System tab.
- Under Settings, navigate to User Management > User Role Assignment.
- Click Add Principal Identity.
- Enter a name for the principal identity user.
Assign the Enterprise Admin role to the principal identity user, then copy and paste the contents of the certificate file (antrea-cluster.crt) we created earlier.
Click SAVE, and you should then see the principal identity user created.
Deploy Antrea Interworking pods and verify connectivity to NSX
We need to set the NSX Manager IP addresses, along with the base64-formatted certificate and private key we created earlier, in the bootstrap-config.yaml file; my file looks like the below.
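A minimal sketch of the relevant fields, following the sample file shipped with the package I used; the cluster name, NSX Manager IP and base64 values are placeholders you must replace with your own, and you should confirm the exact field names against your own copy of the file:
# Sketch of bootstrap-config.yaml; all values below are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: bootstrap-config
  namespace: vmware-system-antrea
data:
  bootstrap.conf: |
    # Cluster name as it should appear in the NSX inventory
    clusterName: antrea-cluster
    # NSX Manager IP(s) or FQDN(s)
    NSXManagers: [ "10.0.0.10" ]
---
apiVersion: v1
kind: Secret
metadata:
  name: nsx-cert
  namespace: vmware-system-antrea
type: kubernetes.io/tls
data:
  # Paste the contents of the base64 files generated earlier
  tls.crt: <contents of antrea-cluster-crt.base64>
  tls.key: <contents of antrea-cluster-key.base64>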
Save and exit the file, then open the file named interworking.yaml and replace every occurrence of the interworking pods image location with the location mentioned earlier (projects.registry.vmware.com/antreainterworking/interworking-debian:0.9.0); a snippet from my file looks like the below.
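For illustration, each affected image line should end up like the sketch below; the container names shown are representative and may differ in your copy of the manifest:
# Representative excerpt from interworking.yaml after editing
      containers:
        - name: mp-adapter
          image: projects.registry.vmware.com/antreainterworking/interworking-debian:0.9.0
        - name: ccp-adapter
          image: projects.registry.vmware.com/antreainterworking/interworking-debian:0.9.0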
Save and exit the above file and then apply both files to the cluster using the command:
kubectl apply -f interworking.yaml -f bootstrap-config.yaml
Wait a couple of minutes and then verify that the interworking pods under the vmware-system-antrea namespace are in Running state.
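A quick way to check is a standard kubectl query against that namespace:
kubectl get pods -n vmware-system-antrea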
If you want to troubleshoot registration issues with NSX Manager, you can inspect the register pod (register-wzlpx in my setup) using the command:
kubectl logs <register pod name> -n vmware-system-antrea
In NSX Manager, under Inventory > Containers > Clusters, you should be able to see your Antrea cluster listed.
Deploy a test microservices App and Verify Discovered Objects
To test the Antrea and NSX integration, I deployed a test application on my Kubernetes cluster with a Layer 7 Ingress policy. The idea is to identify those components from the NSX Manager side and, eventually in part 2 of this blog post series, to demo how the NSX 4.1 DFW can filter traffic between Kubernetes objects and non-Kubernetes objects in an external network (such as VMs).
My testing application is deployed to a namespace called microservices; a representative version of its Ingress is sketched below.
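This is not my exact manifest, just a minimal sketch of an AKO-served Ingress; the hostname, service name and port are placeholders, and ingressClassName: avi assumes AKO's default IngressClass:
# Hypothetical Ingress for the demo app; all names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices-ingress
  namespace: microservices
spec:
  ingressClassName: avi
  rules:
    - host: demo.lab.local   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend   # placeholder service name
                port:
                  number: 80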
The screenshot above also shows the Layer 7 Ingress rule I configured for my demo app. Now navigate to the NSX Manager UI > Inventory > Containers and verify that you can see the microservices namespace and the Ingress rule we configured; click on any object to get more details.
If you click on the highlighted Ingress rule and choose details, you will be able to see the detailed deployment YAML for that Ingress.
This concludes part one of this blog post series; I hope you found it useful.