Deploying Tanzu Community Edition Managed Clusters on vSphere

Tanzu Community Edition is an open source Kubernetes platform based on the successful VMware Tanzu offering. It is freely available and community supported, and it can be deployed on multiple clouds (vSphere, AWS, Azure, etc.).

Tanzu Community Edition can be deployed either as an unmanaged (standalone) cluster or as managed clusters. Standalone mode is only suitable for development, testing, or a lab environment (a single-node cluster on a local workstation), while managed clusters are meant for production: the management cluster takes care of deploying, scaling, and lifecycle management of the workload clusters.

In this blog post I will focus on deploying TCE managed clusters. Standalone clusters are very easy to deploy as well, and if you understand the steps here, deploying a standalone TCE cluster in your home lab or test environment should not be an issue.

Lab Inventory

For software versions I used the following:

  • VMware ESXi 7.0U3d
  • vCenter server version 7.0U3
  • TrueNAS 12.0-U7 used to provision NFS datastores to ESXi hosts.
  • VyOS 1.4 used as lab backbone router and DHCP server.
  • Ubuntu 20.04 LTS as bootstrap machine.
  • Ubuntu 20.04.2 LTS as DNS and internet gateway.
  • Windows Server 2012 R2 Datacenter as management host for UI access.
  • Tanzu Community Edition version 0.10.0

For virtual hosts and appliances sizing I used the following specs:

  • 3 x virtualised ESXi hosts each with 8 vCPUs, 4 x NICs and 32 GB RAM.
  • vCenter server appliance with 2 vCPU and 24 GB RAM.

Important note: starting from TCE version 0.11.0 there is a known issue causing a race condition while initialising the management cluster (specifically with the cert-manager pods); this apparently happens with newer Linux and Docker releases. I initially tried to deploy TCE 0.12.1 (as shown in the screenshots below) but ended up using TCE 0.10.0 in order to avoid this issue.

You will need a DHCP server running on the same network on which you are creating your Tanzu Community Edition management and workload clusters. However, the control plane endpoint IPs of the clusters must be excluded from the DHCP server's lease range.
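On VyOS this boils down to making the lease pool smaller than the subnet so the endpoint IPs stay free. A sketch of such a scope (the network name, subnet, and range below are placeholders for my lab; the exact syntax varies between VyOS releases, so check yours):

```
# Hypothetical DHCP scope for the TCE port group: lease only .100-.200,
# leaving addresses such as 172.16.110.4 (control plane endpoint) outside the pool.
set service dhcp-server shared-network-name TCE subnet 172.16.110.0/24 range 0 start 172.16.110.100
set service dhcp-server shared-network-name TCE subnet 172.16.110.0/24 range 0 stop 172.16.110.200
```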

Preparing the bootstrap machine

The bootstrap machine is the VM from which you manage your Tanzu cluster deployments. It needs to run Docker (since this is what Tanzu Community Edition uses as its container runtime) and the Tanzu CLI to interface with Tanzu clusters.

Your bootstrap machine can be Linux, macOS, or Windows; in my home lab I am using Ubuntu 20.04 LTS (if you have not noticed by now, I am a big Debian/Ubuntu fan).

On your bootstrap machine you need to have the following packages installed:

  • Docker engine (installation steps can be found HERE).
  • Install Kubectl from HERE (this is needed to manage and deploy Tanzu workload pods and containers).
  • Upload a Tanzu Community Edition OVA (it can be downloaded from VMware Customer Connect). Download either a Photon OS or an Ubuntu Tanzu OVA, upload it to your vCenter Server, and convert it to a template. Tanzu uses this template to roll out the VMs for Tanzu clusters (see the screenshots below).

Download the OVA that matches your TCE version and deploy it as an OVF template in your vCenter:

Convert the deployed VM to a template.

  • Tanzu CLI; the easiest way is to pull the Tanzu CLI using Homebrew on Ubuntu.
    • To install Homebrew on Ubuntu, run the following command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Add Homebrew to your PATH and install the Tanzu CLI:

eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"

brew tap vmware-tanzu/tanzu

brew install tanzu-community-edition

The last step is to run the Tanzu configuration script (configure-tce.sh); the default location for the script is /home/linuxbrew/.linuxbrew/Cellar/tanzu-community-edition/v0.12.1/libexec

Deploying the management cluster

You can deploy the Tanzu Community Edition management cluster either by using a YAML configuration file or via the UI that is included in the Tanzu packages downloaded above. I will be using the UI as it is easier, of course, and later I will reuse the generated configuration file to roll out workload clusters.

To launch the installer UI, use the command:

tanzu management-cluster create -u

Using a web browser, navigate to the address shown in the output of that command; by default it is http://127.0.0.1:8080. If you would like to access the UI from another machine, you can add --bind <interface-address:port> to the above command.

The UI should look like this:

Click Deploy under VMware vSphere to deploy to a vSphere environment. You will then be prompted to add your vSphere credentials (there is no need to verify the SSL thumbprint as long as you are authenticating against your own vCenter). The public SSH key is only needed to be able to SSH into the Tanzu-deployed nodes later on.

Connecting to your IaaS provider (vCenter in my case)
Choosing your deployment size and end-point provider
Specifying deployment vCenter location and resources
Specifying workload cluster pod range and attachment port group in vCenter
Selecting the Tanzu image template that we deployed earlier

Once the management cluster deployment is done, you can verify its deployment using the below command:

tanzu management-cluster get

Log in to your newly created management cluster

Deploying your first Tanzu workload cluster

In order to create your workload cluster, we need to get a sample cluster deployment YAML file, fill it in with the workload cluster parameters (cluster name, control plane endpoint IP, CIDRs, node count and size, etc.), and then deploy it.

The YAML file of the management cluster can be found under “~/.config/tanzu/tkg/clusterconfigs/<MGMT-CLUSTER-NAME>.yaml”; it should look something like the below YAML file:

AVI_CA_DATA_B64: ""
AVI_CLOUD_NAME: ""
AVI_CONTROL_PLANE_HA_PROVIDER: ""
AVI_CONTROLLER: ""
AVI_DATA_NETWORK: ""
AVI_DATA_NETWORK_CIDR: ""
AVI_ENABLE: "false"
AVI_LABELS: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: ""
AVI_PASSWORD: ""
AVI_SERVICE_ENGINE_GROUP: ""
AVI_USERNAME: ""
CLUSTER_CIDR: 100.96.0.0/16
CLUSTER_NAME: tce-cluster01
CLUSTER_PLAN: dev
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "true"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: vsphere
LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
LDAP_GROUP_SEARCH_BASE_DN: ""
LDAP_GROUP_SEARCH_FILTER: ""
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_HOST: ""
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ""
LDAP_USER_SEARCH_FILTER: ""
LDAP_USER_SEARCH_NAME_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_CLIENT_ID: ""
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: ""
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: ""
OIDC_IDENTITY_PROVIDER_ISSUER_URL: ""
OIDC_IDENTITY_PROVIDER_NAME: ""
OIDC_IDENTITY_PROVIDER_SCOPES: ""
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: ""
OS_ARCH: amd64
OS_NAME: photon
OS_VERSION: "3"
SERVICE_CIDR: 100.64.0.0/16
TKG_HTTP_PROXY_ENABLED: "false"
TKG_IP_FAMILY: ipv4
VSPHERE_CONTROL_PLANE_DISK_GIB: "40"
VSPHERE_CONTROL_PLANE_ENDPOINT: 172.16.110.4
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_DATACENTER: /dc-01a
VSPHERE_DATASTORE: /dc-01a/datastore/TrueNAS_Pool2
VSPHERE_FOLDER: /dc-01a/vm/Tanzu Community
VSPHERE_INSECURE: "true"
VSPHERE_NETWORK: /dc-01a/network/TCE
VSPHERE_PASSWORD:
VSPHERE_RESOURCE_POOL: /dc-01a/host/Tanzu/Resources/Tanzu Community
VSPHERE_SERVER: vc-l-01a.corp.local
VSPHERE_SSH_AUTHORIZED_KEY: no key
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_WORKER_DISK_GIB: "40"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "2"
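To reuse this configuration for a workload cluster, copy the file and change the cluster-specific fields, most importantly the cluster name and the control plane endpoint IP (which must be outside the DHCP lease range). A minimal sketch (for illustration it creates a two-line stand-in for the management cluster file; in practice copy your real file from ~/.config/tanzu/tkg/clusterconfigs/, and note that the names and IPs here are placeholders):

```shell
set -eu

# Stand-in for the real management-cluster config file.
cat > mgmt-cluster.yaml <<'EOF'
CLUSTER_NAME: tce-mgmt
VSPHERE_CONTROL_PLANE_ENDPOINT: 172.16.110.4
EOF

# Copy the config and give the workload cluster its own name and
# its own control plane endpoint IP.
cp mgmt-cluster.yaml tce-cluster01.yaml
sed -i 's/^CLUSTER_NAME:.*/CLUSTER_NAME: tce-cluster01/' tce-cluster01.yaml
sed -i 's/^VSPHERE_CONTROL_PLANE_ENDPOINT:.*/VSPHERE_CONTROL_PLANE_ENDPOINT: 172.16.110.5/' tce-cluster01.yaml

cat tce-cluster01.yaml
```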

Once the above file is created, you can simply deploy the cluster using the tanzu cluster create --file command (see below screenshot)

Check that the cluster is created using the command “tanzu cluster list”.

To inspect the workload cluster deployment further, use the command tanzu cluster get <workload-cluster-name>

Deploying pods on workload cluster

The point of having a workload cluster is ultimately to run development pods and containers on it, so we are going to implement the following steps:

  • Authenticate to the newly created workload cluster.
  • Switch the kubectl context to the newly created cluster.
  • Create a namespace where our pods will be created and running.
  • Deploy my famous workload deployment (nginx, busybox and curl pods).

To authenticate and switch context to my tce-cluster01 workload cluster, use the following two commands:

tanzu cluster kubeconfig get tce-cluster01 --admin
kubectl config use-context tce-cluster01-admin@tce-cluster01

Create a namespace to host the deployment:

kubectl create namespace vexpert-tce
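Equivalently, the namespace can be created declaratively from a manifest (same name as in the command above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: vexpert-tce
```

Save it as namespace.yaml and apply it with kubectl apply -f namespace.yaml.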

For the pod deployments I used a simple set of Deployments (rolling out 6 pods in total). Create a file called tce-deployment.yaml and copy and paste the following lines into it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: quay.io/testing-farm/nginx
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-deployment
  labels:
    app: busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: quay.io/libpod/busybox
        command: ['sh', '-c', 'while true; do sleep 5; done']
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: curl-deployment
  labels:
    app: curl
spec:
  replicas: 2
  selector:
    matchLabels:
      app: curl
  template:
    metadata:
      labels:
        app: curl
    spec:
      containers:
      - name: curl
        image: quay.io/cilium/alpine-curl
        command: ['sh', '-c', 'while true; do sleep 5; done']

Now, deploy the above using the command kubectl create -f tce-deployment.yaml -n vexpert-tce

Give it a couple of seconds and verify the pod deployment using the following command:

kubectl get pods -n vexpert-tce

Final words

Tanzu Community Edition is a very powerful open source container orchestration and management platform and fits very well into VMware's multi-cloud vision. With TCE you can spin up workloads across various cloud providers while keeping your application development platforms consistent, in addition to making use of extra Tanzu services such as Tanzu Mission Control. Definitely worth trying out in your environment.

Bassem Rezkalla
