In one of my previous blog posts I configured and enabled workload management (aka Kubernetes for vSphere) using NSX-T as the networking provider; however, not every environment runs NSX, and such environments will need to use the HAProxy appliance to achieve load balancing across pods.
In this blog post I am going to enable workload management in vSphere using the HAProxy appliance, which can be downloaded from HERE. The appliance needs two main networks: management and workload (there is an optional third network, called frontend, which I am skipping for my lab setup).
For software versions I used the following:
For virtual hosts and appliances sizing I used the following specs:
Before enabling workload management in vSphere, make sure you understand the topology below so that you know which subnets we are configuring and why.
In a nutshell, the Management Network below is our 192.168.0.0/16 subnet, where all appliance management interfaces are connected. This subnet is used while deploying the supervisor cluster control plane VMs, and every control plane VM will have an IP assigned from it.
Besides the Management Network, we need a workload network, called the Primary Workload Network in the diagram below. In my lab setup this is the 172.16.70.0/24 network, from which I assigned the HAProxy workload IP and the IP address range for load balancers; later we will use the same subnet to assign IPs to Tanzu-created pods.
You might be wondering why load balancers are needed at all. They provide HA and load balancing across Tanzu-created pods and clusters: when a Namespace is deployed, it is assigned an address from the load balancer range, so in case of failure the resources hosted under that Namespace can still be accessed via the load balancer VIP.
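To keep the subnets above straight, here is a small sanity-check sketch of the addressing plan using Python's standard `ipaddress` module. The specific ranges carved out of 172.16.70.0/24 are hypothetical examples, not the exact ranges from my lab; substitute your own.

```python
import ipaddress

# Hypothetical lab addressing plan (assumption: adjust values to your environment).
management_net = ipaddress.ip_network("192.168.0.0/16")   # appliance management interfaces
workload_net = ipaddress.ip_network("172.16.70.0/24")     # Primary Workload Network

# Carve the workload /24 into two non-overlapping pools: one for the
# HAProxy load balancer VIPs, one for workload node/pod addresses.
lb_vip_range = ipaddress.ip_network("172.16.70.128/25")
node_range = ipaddress.ip_network("172.16.70.0/26")

# Sanity checks: both pools sit inside the workload network, the VIP pool
# never overlaps the node pool, and workload and management don't collide.
assert lb_vip_range.subnet_of(workload_net)
assert node_range.subnet_of(workload_net)
assert not lb_vip_range.overlaps(node_range)
assert not workload_net.overlaps(management_net)
print("addressing plan is consistent")
```

Running a check like this before starting the wizard saves a failed deployment later, since overlapping ranges are a common cause of supervisor cluster rollout errors.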
From your vCenter Server navigate to Menu > Workload Management and click GET STARTED.
In step 1 choose vCenter Server Network; this tells vCenter that you will be using the HAProxy appliance for this deployment.
Choose the cluster on which you want to enable workload management.
Go through steps 3 and 4 (pretty straightforward), then in step 5 fill in the HAProxy parameters. Note that for the Data plane API address you need to enter the HAProxy management address (from the 192.168.0.0 subnet) followed by port 5556. Due to a bug, make sure you use the IP address and not the FQDN in this step.
For the HAProxy appliance certificate, SSH to the appliance as root, issue the command cat /etc/haproxy/ca.crt, then copy and paste the self-signed certificate into the Server Certificate Authority field.
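A truncated paste into the Server Certificate Authority field is an easy mistake to make here. The following is my own small sketch (not part of the wizard) for sanity-checking that the text you copied from /etc/haproxy/ca.crt is a complete PEM block before pasting it in; the `looks_like_pem` helper is a hypothetical name I made up for illustration.

```python
def looks_like_pem(cert_text: str) -> bool:
    """Rough check that the copied text is a complete PEM certificate block."""
    lines = cert_text.strip().splitlines()
    return (
        len(lines) >= 3
        and lines[0] == "-----BEGIN CERTIFICATE-----"
        and lines[-1] == "-----END CERTIFICATE-----"
    )

# Example with a placeholder body (a real certificate has base64 lines here).
sample = "-----BEGIN CERTIFICATE-----\nMIIB...placeholder...\n-----END CERTIFICATE-----"
print(looks_like_pem(sample))  # → True
```

If the check fails, re-copy the file contents; the wizard will reject a certificate that is missing either the BEGIN or END marker line.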
Click NEXT, then fill in the supervisor control plane VM management network configuration.
In step 7 you need to define a workload network and some extra addressing. I have to be honest, this was a bit confusing to me, and I had to go through the VMware documentation a couple of times to understand it. In short: for the services subnet (10.96.0.0/24) the default is okay, while for the workload network you need to assign an IP range (in my case I just used a range from the 172.16.70.0 subnet) and a port group (I used the same workload port group I used for HAProxy). This IP address range is used to assign a secondary IP address to the supervisor control plane VMs created by this wizard, and it has nothing to do with the load balancer VIP range defined earlier for HAProxy.
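Since this step confused me, here is a sketch of how the step-7 ranges relate to each other, again using the standard `ipaddress` module. The starting address 172.16.70.64 and the count of three control plane VMs are illustrative assumptions, not values from the wizard.

```python
import ipaddress

# Step-7 ranges (hypothetical values; adjust to your lab).
services_net = ipaddress.ip_network("10.96.0.0/24")    # default, cluster-internal only
workload_net = ipaddress.ip_network("172.16.70.0/24")  # workload network port group

# A small range inside the workload network for the supervisor control plane
# VMs' secondary addresses; assuming three VMs, each takes one IP.
first = ipaddress.ip_address("172.16.70.64")
secondary_ips = [first + i for i in range(3)]

# The secondary IPs must come from the workload network, and the services
# subnet must not collide with it.
assert all(ip in workload_net for ip in secondary_ips)
assert not services_net.overlaps(workload_net)
print(secondary_ips)
```

The key point the code captures: the services subnet never touches your physical networks, while the secondary-IP range lives inside the workload subnet but is separate from the HAProxy load balancer VIP range.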
Once done, in step 8 you assign a content library to be used by Tanzu, and in step 9 you review the configuration; then the deployment of vSphere Kubernetes kicks in.
In about 15 minutes you should have everything up and running. For more details on setting up Namespaces, downloading Kubectl and the Docker helper, and enabling Harbor, you can visit my previous blog post HERE.