
Overview
With the release of vSphere 8, VMware introduced Tanzu Kubernetes Grid 2.0. With TKG 2 you can provision two types of workload clusters on a Supervisor cluster: traditional Tanzu Kubernetes clusters (TKCs) and clusters based on a ClusterClass. The introduction of the ClusterClass deployment API provides a unified method of creating and managing both flavours of Tanzu clusters, TKGs and TKGm. This basically means that whether Tanzu guest clusters are created on top of Supervisor clusters (TKGs) or deployed on top of a VM-based management cluster (TKGm), both deployments utilise the same v1beta1 API. This is an evolution of the Cluster API that lets you define templates for managing the life cycle of sets of clusters.
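The practical difference between the two cluster types is visible right at the top of their manifests. A traditional TKC uses the run.tanzu.vmware.com API group, while a ClusterClass-based cluster uses the upstream Cluster API group (header snippets shown for illustration only):

# Traditional TKC
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster

# ClusterClass-based cluster
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster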
Lab Inventory
For software versions I used the following:
- VMware ESXi 8.0 IA
- vCenter server version 8.0 IA
- VMware NSX 4.0.1.1
- TrueNAS 12.0-U7 used to provision NFS data stores to ESXi hosts.
- VyOS 1.4 used as lab backbone router and DHCP server.
- Ubuntu 20.04.2 LTS as DNS and internet gateway.
- Windows Server 2012 R2 Datacenter as management host for UI access.
For virtual host and appliance sizing I used the following specs:
- 7 x ESXi hosts each with 12 vCPUs, 2 x NICs and 128 GB RAM.
- vCenter server appliance with 2 vCPU and 24 GB RAM.
Preparing TKG 2.0 ClusterClass deployment on a zonal Supervisor cluster
If you have been following my recent blog posts, I have created three vSphere 8 availability zones and deployed a Supervisor cluster on top, with a namespace called Pindakaas which hosts all my Tanzu Kubernetes Grid guest clusters. I have also created a multi-zonal TKC (Tanzu Kubernetes Cluster) called multizone-tkc01 which spans the three availability zones. That cluster, however, is of type TKC and utilises the v1alpha3 API; in this blog post I will deploy another multi-zone Tanzu cluster, this time using the ClusterClass v1beta1 API.
Below are screenshots from my current Supervisor cluster, namespace and multi-zone TKC. For step by step instructions on how I set up the below, you can reference my previous blog posts HERE and HERE.
Step 1: Log in to the Supervisor cluster and the Pindakaas namespace
kubectl-vsphere login --server=https://172.10.200.2 --insecure-skip-tls-verify -u administrator@vsphere.local
kubectl config use-context pindakaas
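To confirm that the login succeeded and that the pindakaas context is now active, you can list the contexts known to kubectl (standard kubectl commands):

kubectl config get-contexts
kubectl config current-context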
Before proceeding, let’s review the hosting namespace requirements that you need in order to create and configure TKG 2 clusters:
Each vSphere Namespace must be configured with:
- Cluster users and roles
- Synchronised TKR content library
- Bound VM classes
- vSphere storage policy for TKG cluster nodes and persistent volumes
We can also verify the above from the command line:
kubectl get virtualmachineimages.vmoperator.vmware.com
kubectl get virtualmachineclasses.vmoperator.vmware.com
kubectl get storageclasses.storage.k8s.io
kubectl describe storageclasses.storage.k8s.io pindakaas-storagepolicy
One thing to note from the above output is that the storage class type is “Zonal”, a special class type introduced in vSphere 8 alongside vSphere availability zones. The storage policy shown (pindakaas-storagepolicy) was manually created in vCenter (review my blog post HERE to learn more about that).
Now that we have all the requirements in place, the next step is to deploy the zonal Tanzu cluster using the new v1beta1 ClusterClass API.
Step 2: Create and apply YAML deployment file for v1beta1 Tanzu Cluster across vSphere Zones
The YAML deployment file that I am going to use is quite similar to the one I built in my previous blog post HERE. However, in this example we will be using the ClusterClass (v1beta1) API to create our zonal Tanzu cluster.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: v1beta1-zoned-cluster
  namespace: pindakaas
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["198.52.100.0/12"]
    pods:
      cidrBlocks: ["192.101.2.0/16"]
    serviceDomain: "cluster.local"
  topology:
    class: tanzukubernetescluster
    version: v1.23.8+vmware.2-tkg.2-zshippable
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
        - class: node-pool
          name: worker-pool-1
          replicas: 3
          failureDomain: tonychocoloney
        - class: node-pool
          name: worker-pool-2
          replicas: 3
          failureDomain: stroopwaffels
        - class: node-pool
          name: worker-pool-3
          replicas: 3
          failureDomain: pindas
    variables:
      - name: vmClass
        value: best-effort-medium
      - name: storageClass
        value: pindakaas-storagepolicy
The first thing to notice about the above YAML is that it is shorter than the traditional v1alpha2/3 YAMLs while achieving the same rollout result; compare it to the TKC deployment YAML in my previous blog post HERE.
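Before applying, you can optionally validate the manifest against the Supervisor API server without creating anything; --dry-run=server is a standard kubectl option that will surface schema or admission errors early:

kubectl apply -f <aboveDeploymentFile.yaml> --dry-run=server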
To apply the above YAML, from the command line of your Bootstrap machine run the command:
kubectl apply -f <aboveDeploymentFile.yaml>
Once you apply the above command, the Tanzu cluster creation process will kick in, and after some time (depending on the size of your nodes) you should see your newly created Tanzu cluster rolled out nicely across the 3 availability zones:
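While the rollout is in progress, you can also follow it from the Supervisor context. The sketch below assumes the cluster and namespace names used in this post; machinedeployments and machines are standard Cluster API resources exposed by the Supervisor cluster:

kubectl get cluster v1beta1-zoned-cluster -n pindakaas
kubectl get machinedeployments -n pindakaas
kubectl get machines -n pindakaas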
Step 3: Tanzu cluster status verification
It is important to mention that Tanzu clusters created using a ClusterClass are not considered TKC guest clusters, and hence you cannot see or verify their status under Workload Management > Namespaces in the vCenter UI.
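Instead, you can verify the cluster status from the Supervisor context; the describe output should include Cluster API conditions such as ControlPlaneReady and InfrastructureReady (assuming the names used in this post):

kubectl get cluster -n pindakaas
kubectl describe cluster v1beta1-zoned-cluster -n pindakaas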
Apart from the above, you can also log in to your newly created Tanzu cluster via the command line as follows:
kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME --tanzu-kubernetes-cluster-name CLUSTER-NAME --tanzu-kubernetes-cluster-namespace NAMESPACE-NAME
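For example, with the values from my lab (the server IP and names below are specific to this environment), followed by a node listing; the -L flag prints the standard topology.kubernetes.io/zone label, which should reflect the three failure domains:

kubectl vsphere login --server=https://172.10.200.2 --insecure-skip-tls-verify --vsphere-username administrator@vsphere.local --tanzu-kubernetes-cluster-name v1beta1-zoned-cluster --tanzu-kubernetes-cluster-namespace pindakaas
kubectl get nodes -L topology.kubernetes.io/zone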
From this point forward you can start deploying your Pods, Deployments, StatefulSets or any other Kubernetes resources on that newly created Tanzu cluster.
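As a quick smoke test (a generic example, not specific to this setup), you can roll out a small nginx Deployment and watch the Pods get scheduled across the worker nodes in the different zones:

kubectl create deployment nginx --image=nginx --replicas=3
kubectl get pods -o wide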
Hope you have found this post useful.