As a continuation of my blog posts on setting up TKG clusters and deploying containerised workloads, in this post I discuss how to set up TKG workload clusters on vSphere 7 supervisor management clusters (workload management).
TKG requires a supervisor/management cluster to be up and running before you can spin up TKG workload clusters. There are two methods to deploy a supervisor management cluster:
- Using vSphere 7 supervisor management clusters (by enabling workload management).
- Or using standalone TKG management clusters deployed from the Tanzu OVA.
VMware recommends the first method, and that is what we are going to discuss further in this blog post.
If you want to learn how to enable vSphere 7 workload management supervisor clusters using NSX-T or standard vSphere networking, have a look at my previous blog posts HERE and HERE.
For software versions I used the following:
- VMware ESXi, 7.0.2, 17867351
- vCenter server version 7.0.1.00200
- TrueNAS 12.0-U7 used to provision NFS datastores to ESXi hosts.
- VyOS 1.1.8 used as lab backbone router.
- Ubuntu 20.04 LTS as Linux jumpbox.
- Ubuntu 20.04.2 LTS as DNS and internet gateway.
- Windows Server 2012 R2 Datacenter as management host for UI access.
- HAProxy appliance v0.2.0
For virtual hosts and appliances sizing I used the following specs:
- 2 x virtualised ESXi hosts each with 8 vCPUs, 4 x NICs and 64 GB RAM.
- vCenter server appliance with 2 vCPU and 24 GB RAM.
Before jumping into the deployment steps, it is important to mention that if you are not familiar with basic Kubernetes concepts and tools, the deployment steps might be challenging to follow. However, I will try to explain the commands used to make things a bit clearer.
Tanzu clusters offer the same functionality as Kubernetes clusters: orchestrating containers and managing how they run and behave within a cluster. Tanzu clusters consist of control plane and worker nodes. When you deploy a Tanzu cluster, you must specify at least one control plane node and one worker node. The control plane node is responsible for cluster management and for scheduling pods onto worker nodes; worker nodes are the data plane nodes that run the actual pods.
Tanzu has its own tanzu CLI which you can use to deploy clusters; however, I will be using the standard Kubernetes kubectl CLI tool to spin up Tanzu clusters and workload pods.
kubectl with the vSphere plugin can be downloaded by opening a web browser and navigating to your vSphere supervisor management cluster IP (for more details, check my previous blog post HERE).
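For reference, the same download can be scripted from a Linux jumpbox. This is a sketch, assuming the supervisor cluster IP 172.16.70.64 used later in this post and the standard Linux plugin download path served by the supervisor cluster:

```shell
# Fetch the CLI tools bundle from the supervisor cluster
# (self-signed certificate, hence --no-check-certificate)
wget --no-check-certificate https://172.16.70.64/wcp/plugin/linux-amd64/vsphere-plugin.zip

# Unpack and install kubectl plus the vSphere plugin onto the PATH
unzip vsphere-plugin.zip
sudo install bin/kubectl bin/kubectl-vsphere /usr/local/bin/
```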
Step 1: Namespace creation with Tanzu content library
Once the workload management configuration process is finalised, we can start creating Namespaces by clicking on Create Namespace.
Create a Namespace called homelab
Once the Namespace is created, you need to download the Kubernetes CLI tools with the vSphere plugin, assign the correct credentials and permissions for a specific user on the Namespace (for my lab I used email@example.com), and then assign a content library to the Namespace. This is very important, as it is where WCP (the vSphere workload management process) will store the Tanzu OVA images used to build the Tanzu control plane and worker nodes.
Step 2: Content library creation
For the Tanzu images, you need to create a content library with a subscription to the following URL. This is very important, as it is the location from which the WCP service will fetch the Tanzu cluster images used to deploy the control plane and worker VMs.
Once you add the content library, a task should start synchronising the content library contents from the subscribed URL above.
Step 3: Creating Tanzu workload cluster
Now switch to the Linux jumpbox and check the available contexts/namespaces:
```
bassem@jumpbox:~$ kubectl vsphere login -u firstname.lastname@example.org --server=https://172.16.70.64 --insecure-skip-tls-verify

Password:
Logged in successfully.

You have access to the following contexts:
   172.16.70.64
   homelab
   homelab-tkg01
   tanzu01

If the context you wish to use is not in this list, you may need to
try logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`
```
From the output above, you can see that I already have some Tanzu clusters created. For the rest of this blog post, I will create a new Tanzu cluster called tanzu02 under the homelab namespace. For that purpose, we first need to switch context to the homelab namespace:
```
bassem@jumpbox:~$ kubectl config use-context homelab
Switched to context "homelab".
bassem@jumpbox:~$
```
Verify the available images (which will be pulled from the content library defined earlier under the homelab namespace) that can be used to deploy Tanzu control and worker VMs:
```
bassem@jumpbox:~$ kubectl get virtualmachineimages
```
From the output, note the image version string, which you will need when deploying your Tanzu cluster (it goes into the distribution.version field of the cluster manifest).
Next is to deploy our Tanzu cluster, and for this I am going to use the below YAML configuration example:
```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tanzu02
spec:
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: nfs-storagepolicy
    workers:
      count: 3
      class: best-effort-small
      storageClass: nfs-storagepolicy
  distribution:
    version: v1.17.13+vmware.1-tkg.2.2c133ed
```
In order to build a similar configuration file, you need to know the following parameters and how to collect them:
- VM classes: the sizing profile (vCPU and memory) used for the control plane and worker node VMs, for example best-effort-small.
- VM storage policy: the vSphere storage policy, surfaced in the namespace as a Kubernetes storage class, on which the node VM disks are placed.
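Both values can be read straight from the supervisor namespace with kubectl. A sketch, assuming you are logged in and have switched to the homelab context as shown above:

```shell
# VM classes available to the namespace (e.g. best-effort-small)
kubectl get virtualmachineclasses

# Storage classes exposed from the vSphere storage policies
# assigned to the namespace (e.g. nfs-storagepolicy)
kubectl get storageclasses
```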
With these values in hand, we can build the YAML file above and then run the following command to start the Tanzu cluster deployment:
```
bassem@jumpbox:~/Tanzu-manifests$ kubectl create -f tanzu02.yml
tanzukubernetescluster.run.tanzu.vmware.com/tanzu02 created
bassem@jumpbox:~/Tanzu-manifests$
```
As you can see, the cluster object is created. However, if you switch to your vCenter you will see that the process of creating the control plane and worker nodes is still ongoing; it takes around 10 minutes (depending on the size of the nodes) to finalise.
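While the rollout is in progress, you can also follow the cluster state from the jumpbox. A sketch, assuming the tanzu02 name and homelab context used above:

```shell
# Watch the TanzuKubernetesCluster object while nodes are provisioned
kubectl get tanzukubernetescluster tanzu02

# More detail on the rollout phase, node status and any errors
kubectl describe tanzukubernetescluster tanzu02
```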
Once all Tanzu nodes are deployed, you should see a setup similar to the below under Hosts and Clusters in vCenter:
To verify the cluster settings and nodes from CLI, I switched to my Linux jumpbox and used the following commands:
You may need to zoom in to view the commands I used to verify the cluster status and nodes, but in summary these are the steps:
- Login to the created Tanzu cluster using the following command:
```
kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP \
  --tanzu-kubernetes-cluster-name TANZU-KUBERNETES-CLUSTER-NAME \
  --tanzu-kubernetes-cluster-namespace SUPERVISOR-NAMESPACE-WHERE-THE-CLUSTER-IS-DEPLOYED \
  --vsphere-username VCENTER-SSO-USER-NAME \
  --insecure-skip-tls-verify
```
- Then switch context to the newly created Tanzu cluster:
```
kubectl config use-context tanzu02
```
- Verify the status of the control plane and worker nodes, using the -o wide switch for a more detailed output:
```
kubectl get nodes -o wide
```
By now your Tanzu workload cluster should be up and running. You can enable the Harbor image registry (check my previous blog post here on how to enable Harbor) and start deploying pods on top of your Tanzu cluster.
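As a quick smoke test, a minimal Deployment manifest like the one below can be applied to the new cluster with kubectl create -f. All names and the image tag are illustrative; also note that TKGS clusters ship with pod security policies enabled, so depending on your setup you may first need to bind your users to one of the default PSPs before pods will schedule.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.21     # any public image reachable from the cluster
        ports:
        - containerPort: 80
```

Once applied, `kubectl get pods -o wide` should show the replicas spread across the worker nodes created earlier.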