In one of my previous blog posts I configured and enabled workload management (aka Kubernetes for vSphere) using NSX-T as the networking provider. However, not every environment runs NSX, and those that do not can use the HA proxy appliance to achieve load balancing across pods.

In this blog post I am going to enable workload management in vSphere using the HA proxy appliance, which can be downloaded from HERE. The appliance needs two main networks, management and workload (there is an optional third network called frontend, but I am skipping it for my lab setup).

Lab Inventory

For software versions I used the following:

  • VMware ESXi 7.0.2, build 17867351
  • vCenter server version 7.0.1.00200
  • TrueNAS 12.0-U7 used to provision NFS datastores to ESXi hosts.
  • VyOS 1.1.8 used as lab backbone router.
  • Ubuntu 20.04 LTS as Linux jumpbox.
  • Ubuntu 20.04.2 LTS as DNS and internet gateway.
  • Windows Server 2012 R2 Datacenter as management host for UI access.
  • HA proxy appliance v0.2.0

For virtual hosts and appliances sizing I used the following specs:

  • 2 x virtualised ESXi hosts each with 8 vCPUs, 4 x NICs and 64 GB RAM.
  • vCenter server appliance with 2 vCPU and 24 GB RAM.

Deployment Steps

Step 1: Deploy HA proxy OVA

  • Download the HA proxy ova appliance from the link above.
  • Login to your vCenter server.
  • Deploy the HA proxy OVA appliance.
    • Import your HA proxy OVA and go through the standard steps 1 to 5. In step 6 we need to choose the configuration of the HA proxy interfaces, either Default or Frontend; choose Default. This option provides the HA proxy VM with 2 x NICs, one for management and one for communicating with the workloads created later. The Frontend option adds a third NIC that can be assigned to a frontend network from which clients access resources behind the HA proxy (recommended for production environments).
  • The next step is to configure the networking parameters for the HA proxy appliance. The management IP address is used for managing the appliance and for accessing the API that deploys virtual servers (load balancers). In my lab this is the 192.168.0.0/16 subnet on a DVS port group called MGMT_DPG.
  • For the workload network I used subnet 172.16.70.0/24 on a DVS port group called workload_management. This subnet is used to assign addresses to the virtual servers (the load balancers created by HA proxy), and any Tanzu workloads (clusters and pods) will also be assigned addresses from it.
  • In section 4 of the HA proxy customisation template, we need to specify the address range that HA proxy will use to deploy load balancers. This must be part of the workload subnet assigned above (172.16.70.0/24), but it must not overlap with the HA proxy IP address assigned on the same workload network. Please note that the addresses need to be specified as a range, not in CIDR notation.
  • Review the deployment parameters and, if all is good, hit FINISH. (If you prefer the CLI, a govc-based deployment sketch is shown below.)
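
For reference, the OVA can also be deployed from the command line with govc. The sketch below is only a rough outline: the OVA file name and port group names are the ones from my lab, and the exact OVF properties (IPs, load balancer range, deployment option) live in the options file that govc generates, so always start from the spec produced for your appliance version rather than assuming the fields named in the comments.

  # assumes GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD already point at your vCenter
  # generate an editable options file describing the OVA properties
  govc import.spec haproxy-v0.2.0.ova > haproxy-options.json

  # edit haproxy-options.json: pick the Default (2-NIC) deployment option, map the
  # Management/Workload NICs to MGMT_DPG and workload_management, and fill in the
  # management IP, workload IP and load balancer address range

  # deploy the appliance using the customised options file
  govc import.ova -name=haproxy -options=haproxy-options.json haproxy-v0.2.0.ova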

Step 2: Enabling vSphere Workload Management using HA Proxy

Before enabling workload management in vSphere, make sure you understand the topology below so you know which subnets we are configuring and why.

In a nutshell, the Management Network below is our 192.168.0.0/16 subnet, to which all appliance management interfaces are connected. This subnet is used while deploying the supervisor cluster control plane VMs, and every control plane VM will have an IP assigned from it.

Besides the Management Network, we need a workload network, called the Primary Workload Network in the diagram below. In my lab setup this is the 172.16.70.0/24 network, from which I assigned the HA proxy workload IP and the IP address range for load balancers; later we will use the same subnet to assign IPs to Tanzu-created pods.

You might be wondering why load balancers are needed in this setup at all. They provide HA and load balancing across Tanzu-created pods and clusters: when a Namespace is deployed, it is assigned an address from the load balancer range, so in case of a failure the resources hosted under that Namespace can still be accessed via the load balancer VIP.
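
To make this concrete, once workload management is enabled, any Kubernetes Service of type LoadBalancer you create in a Namespace gets its external IP from that HA proxy range. The deployment name nginx and namespace demo below are just placeholders for the sake of the example:

  # expose a (hypothetical) deployment through a LoadBalancer Service; the
  # EXTERNAL-IP is allocated from the HA proxy virtual server range
  kubectl -n demo expose deployment nginx --port=80 --type=LoadBalancer

  # the EXTERNAL-IP column should show an address from the 172.16.70.x LB range
  kubectl -n demo get svc nginx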

From your vCenter server navigate to Menu > Workload Management and click on GET STARTED

In Step 1 choose vCenter Server Network; this tells vCenter that you will be using the HA proxy appliance for this deployment.

Choose which cluster you want to enable workload management on

Go through steps 3 and 4 (pretty straightforward) and then in step 5 fill in the HA proxy parameters. Note that for the Data Plane API address you need to enter the HA proxy management address (from the 192.168.0.0/16 subnet) followed by port 5556. Due to a bug, make sure that you use the IP address and not the FQDN in this step.
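
Before filling in this screen it is worth checking that the Data Plane API actually answers on that address and port. The IP address and credentials below are placeholders; use the management IP and the API user/password you set when deploying the OVA:

  # quick reachability check against the HA proxy Data Plane API (port 5556 by default);
  # -k is needed because the appliance uses a self-signed certificate
  curl -k -u admin:'MyPassword' https://192.168.0.10:5556/v2/info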

For the HA proxy appliance certificate, SSH as root into the appliance, issue the command cat /etc/haproxy/ca.crt, then copy and paste the self-signed certificate into the Server Certificate Authority field.
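
In practice this boils down to the following (replace the IP with your own HA proxy management address):

  # print the appliance's self-signed CA certificate so it can be pasted into the wizard
  ssh root@192.168.0.10 cat /etc/haproxy/ca.crt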

Click on NEXT, then fill in the supervisor control plane VM management network configuration

In step 7 you need to define some extra addressing and the workload network. I have to be honest, this was a bit confusing to me and I had to go through the VMware documentation a couple of times to understand it. Simply put: for the services subnet (10.96.0.0/24) the default is fine, while for the workload network you need to assign an IP address range (in my case I just used a range from the 172.16.70.0/24 subnet) and a port group (I used the same workload_management port group I used for HA proxy). This IP address range is used to assign a secondary IP address to the supervisor control plane VMs created by this wizard, and it has nothing to do with the load balancer VIP range defined earlier for HA proxy.
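
To keep the different ranges straight, this is roughly how the addressing in my lab breaks down. The two sub-ranges of 172.16.70.0/24 shown here are illustrative placeholders rather than my exact values; the point is only that they must not overlap with each other or with the HA proxy workload IP:

  192.168.0.0/16   - management network (vCenter, ESXi, HA proxy management, supervisor control plane VMs)
  172.16.70.0/24   - workload network on port group workload_management, carved into:
      172.16.70.32-172.16.70.63    - load balancer virtual server range (given to HA proxy)
      172.16.70.100-172.16.70.127  - workload network range (supervisor VMs' secondary IPs)
  10.96.0.0/24     - Kubernetes services subnet (internal, default is fine)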

Once that is done, in step 8 you assign a content library to be used by Tanzu, and in step 9 you review the configuration; the deployment of vSphere Kubernetes then kicks in.

In about 15 minutes you should have everything up and running. For more details on how to set up Namespaces, download kubectl and the Docker credential helper, and enable Harbor, you can visit my previous blog post HERE.
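
Once the cluster shows a Config Status of Running, a quick smoke test from the jumpbox looks like this. The control plane VIP (an address from the HA proxy load balancer range) and the user name are placeholders for your own environment:

  # log in to the supervisor cluster through the control plane VIP allocated from the LB range
  kubectl vsphere login --server=172.16.70.33 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify

  # list the supervisor control plane nodes to confirm the cluster is up
  kubectl get nodes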