This is part two of a blog series I started to cover the most recent security features introduced in VMware Antrea 1.6.0 (based on project Antrea 1.9.0) and NSX 4.1. I find that this release of VMware Antrea and NSX has elevated container security in the enterprise to a higher level by introducing the ability to secure traffic between pods and non-containerised workloads (VMs).
VMware Antrea and NSX 4.1 leverage the DFW and generic groups in order to filter traffic from and to Antrea pods. The idea is to create NSX security groups with Kubernetes member types (resources) in dynamic membership criteria to match traffic entering or leaving Antrea Kubernetes clusters. These generic groups can then be used in distributed firewall rules or gateway firewall rules to secure traffic between VMs in the NSX environment and pods in Antrea Kubernetes clusters.
Below is a list of Kubernetes resource types/objects that can be defined in a generic security group and used in DFW rules:
For software versions I used the following:
- VMware ESXi 8.0
- vCenter server version 8.0
- VMware NSX 4.1.0
- VMware Antrea 1.6.0
- VMware NSX ALB (Avi) AKO for Ingress provisioning.
- TrueNAS 12.0-U7 as the backend storage system.
- VyOS 1.3 as the lab backbone router, NTP, and DHCP server.
- Ubuntu 20.04 LTS as Linux jumpbox.
- Windows Server 2019 Standard as DNS server.
- Windows 10 Pro as UI jump box.
- 3 x Ubuntu 18.04 VMs as 1 x Kubernetes control plane node and 2 x worker nodes.
For virtual hosts and appliances sizing I used the following specs:
- 3 x virtualised ESXi hosts each with 12 vCPUs, 2x NICs and 128 GB RAM.
- vCenter server appliance with 2 vCPU and 24 GB RAM.
- NSX Manager medium appliance.
Below is the reference architecture of the setup I used in this blog post. The left section is my NSX environment with two VMs, one connected to the Dev segment and the other to the Prod segment, each behind its respective T1 gateway. Routing is properly configured and distributed from my T0 gateway to my lab network. On the right side (light green background) is my Kubernetes cluster, which runs VMware Antrea 1.6.0 as its CNI and is integrated with the NSX Manager from the domain on the left.
My Kubernetes cluster has Antrea Egress configured for a Namespace called developers to specify which IP subnet should be used by pods communicating with the external network (more details later on). Kube-Worker02 is configured as my Egress gateway, meaning that any pod running under the developers Namespace that requires connectivity to external network resources will use an IP from that Egress IP pool.
To test incoming traffic from VMs to Antrea pods, I created a microservices demo app exposed through a Layer 7 Ingress provisioned by AKO on the URL http://shop.alb.nsxbaas.homelab. In the incoming traffic filtering scenario, it should only be accessible from the WebProd VM and not from WebDev.
Securing Outgoing Traffic from Antrea Kubernetes Clusters to VMs in NSX
The idea of this scenario is to match traffic generated from the Antrea Kubernetes cluster towards NSX. The generic group members that can be added in NSX in this scenario and used in DFW rules are a Kubernetes Node, an Antrea Egress (the Egress name needs to be specified), or an Antrea IP Pool. In my setup, I used Antrea Egress and assigned node Kube-Worker02 as the egress gateway.
Verifying Antrea Kubernetes Cluster, Egress and NSX Configuration
First, let's verify that all nodes in the cluster are running. I also labeled kube-worker02 with type=egress-gw so I can use that label with Antrea Egress to force all outgoing pod traffic to exit from that node, and I created a simple curl pod called mycurlpod under the Namespace developers so I can curl from within that pod to my external WebDev VM (refer to the reference architecture).
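For reference, the node label and the test pod can be created with standard kubectl commands. This is a sketch of the steps as I ran them in my lab; the Namespace labeling assumes the env=dev label used by the Egress selector later on:

```shell
# Label the node that should act as the Egress gateway
kubectl label node kube-worker02 type=egress-gw

# Create the developers Namespace and label it env=dev
# so the Egress namespaceSelector can match it
kubectl create namespace developers
kubectl label namespace developers env=dev

# Run a simple curl pod in that Namespace, labeled app=dev
# to match the Egress podSelector
kubectl run mycurlpod --image=curlimages/curl -n developers \
  --labels="app=dev" -- sleep infinity
```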
I then deployed the following Egress YAML to create the Egress object:
```yaml
apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: egress-dev-web
spec:
  appliedTo:
    namespaceSelector:
      matchLabels:
        env: dev
    podSelector:
      matchLabels:
        app: dev
  externalIPPool: dev-external-ip-pool
```
In the above Egress YAML I match any Namespace labeled env=dev and, within it, all pods labeled app=dev. Traffic originating from those pods will be SNATed using one of the IP addresses in an External IP Pool called dev-external-ip-pool, which is defined using the below YAML:
```yaml
apiVersion: crd.antrea.io/v1alpha2
kind: ExternalIPPool
metadata:
  name: dev-external-ip-pool
spec:
  ipRanges:
  - start: 10.110.0.2
    end: 10.110.0.10
  - cidr: 10.110.0.0/24
  nodeSelector:
    matchLabels:
      type: egress-gw
```
The IP pool covers the range 10.110.0.2 – 10.110.0.10 and is assigned to any node carrying the label type=egress-gw, which in my setup is node kube-worker02 (see the output of kubectl get nodes kube-worker02 --show-labels).
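Both assignments can be checked from the command line. In recent Antrea versions, kubectl get egress also prints the SNAT IP and the node it landed on, which is an easy way to confirm the gateway selection:

```shell
# Confirm the egress-gw label is on the intended node
kubectl get nodes kube-worker02 --show-labels

# Confirm the Egress object, its assigned SNAT IP, and its gateway node
kubectl get egress egress-dev-web
```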
To generate external traffic from VMs to Antrea cluster pods and vice versa, I deployed two Ubuntu VMs connected to NSX segments, one acting as a production server and the other as a development server.
From the NSX topology view, this is how my setup looks:
Configuring VMware NSX DFW Policies to filter Antrea Cluster Egress traffic
The idea of this setup is to allow only the dev pod mycurlpod, under Namespace developers, to access web pages hosted on a VM called WebDev, which is connected to an NSX segment. This is done by first defining a generic NSX security group matching the egress traffic from mycurlpod, then allowing only HTTP traffic to the WebDev VM, which will be part of another generic NSX security group called Dev Webservers.
Step 1: Create Generic NSX groups for Antrea Egress and WebDev VM
To add a generic NSX group, log in to NSX UI > Inventory > Groups and click ADD GROUP.
Under Compute Members, click SET to define the selection criteria; in my setup I had already defined one as follows:
The name should match the Egress CRD name from the output of kubectl get egress.
Next, repeat the same procedure to create a security group for the WebDev VM, called Dev Webservers.
Step 2: Create DFW policies to allow dev pods to access only Dev Webservers
To allow my dev pod mycurlpod to access only my development web server VM (WebDev), I configured the below DFW policy in NSX:
The allow DNS rule is needed so that the mycurlpod pod (matched via the Antrea Egress group) can contact my DNS server to resolve hostnames. The rule beneath it allows only HTTP traffic to the WebDev VM, which is a member of the generic NSX group Dev Webservers, and the last rule is a drop any/any to block all other traffic.
To verify this, from my Kubernetes cluster I will log in to the mycurlpod shell and run a cURL command against my WebDev VM (http://webdev.nsxbaas.homelab) and another against WebProd (http://webprod.nsxbaas.homelab), another VM running exactly the same HTTP service as WebDev:
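The same check can be run non-interactively with kubectl exec; adding --max-time keeps the blocked request from hanging the prompt indefinitely (curl exits with code 28 when it times out):

```shell
# Allowed by the DFW rule: should return the WebDev page
kubectl exec -n developers mycurlpod -- \
  curl -s --max-time 5 http://webdev.nsxbaas.homelab

# Blocked by the DFW drop rule: times out, curl exit code 28
kubectl exec -n developers mycurlpod -- \
  curl -s --max-time 5 http://webprod.nsxbaas.homelab
```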
From the above screenshot you can see that cURL to http://webdev.nsxbaas.homelab returns a response, while cURL to http://webprod.nsxbaas.homelab returns no response and the prompt keeps hanging; this is because the NSX DFW rule allows egress HTTP traffic only to the WebDev VM.
Configuring VMware NSX DFW Policies to filter Ingress traffic from VMs into Antrea Cluster Pods
In this section I will use NSX DFW policies to allow only HTTP requests originating from the production VM (WebProd) to access a microservices application hosted on my Antrea cluster. The microservices application is exposed over a Layer 7 Ingress, which is provisioned by AKO and is accessible on http://shop.alb.nsxbaas.homelab, as shown below in the output of kubectl get ingress.
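For context, a minimal Ingress of the kind AKO picks up might look like the sketch below. The backend Service name and port are assumptions for the demo app; avi-lb is the IngressClass AKO registers by default:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: onlineshop-ingress
  namespace: microservices
spec:
  ingressClassName: avi-lb      # IngressClass registered by AKO
  rules:
  - host: shop.alb.nsxbaas.homelab
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend      # assumed front-end Service of the demo app
            port:
              number: 80
```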
Step 1: Create Generic NSX groups matching on Kubernetes Ingress and WebProd VM
As we did earlier, we need to define an NSX generic group containing dynamic criteria matching the above Kubernetes Ingress, and another group for the WebProd VM, a VM acting as a production server that is allowed HTTP access to the web application hosted inside a pod in our Antrea cluster and exposed via Ingress on the URL http://shop.alb.nsxbaas.homelab.
From NSX UI > Inventory > Groups, click ADD GROUP and add a group called Prod Server, which matches any VM having “prod” in its name:
I will then repeat the same steps to create a group called “Prod WebApp”, which matches the Kubernetes Ingress called onlineshop-ingress (see the output of kubectl get ingress) under the Namespace microservices, which hosts my demo microservices application.
Step 2: Create DFW policies to allow only Prod server VM to access Microservices App via Ingress
The last step is to create a DFW policy allowing only HTTP traffic from the Prod VM to the WebApp Ingress, as shown below:
To verify the above DFW policy, I will SSH to my prod VM and cURL the Ingress at http://shop.alb.nsxbaas.homelab; this traffic should be allowed by the NSX DFW and I should see output returned:
Then I will SSH to my dev server and cURL the same URL; this time the NSX DFW should block the HTTP request because of the DROP rule highlighted above.
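Both checks can be scripted from the jumpbox as below; the SSH user and VM hostnames are placeholders for my lab values, and -w '%{http_code}' prints just the HTTP status so the allowed and blocked cases are easy to tell apart:

```shell
# From the WebProd VM: allowed by the DFW, prints an HTTP status (e.g. 200)
ssh ubuntu@webprod.nsxbaas.homelab \
  "curl -s --max-time 5 -o /dev/null -w '%{http_code}\n' http://shop.alb.nsxbaas.homelab"

# From the WebDev VM: hits the DFW drop rule, so curl times out instead
ssh ubuntu@webdev.nsxbaas.homelab \
  "curl -s --max-time 5 -o /dev/null -w '%{http_code}\n' http://shop.alb.nsxbaas.homelab"
```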
Hope you have found this blog useful!