
Overview
Managing multiple Tanzu clusters can be a challenge if you do not have the right tools in place. VMware offers Tanzu Mission Control (TMC) as a SaaS offering for managing multiple Tanzu deployments across multi-cloud and on-prem environments. However, TMC requires internet connectivity from your Tanzu clusters to the TMC SaaS, which is not possible in air-gapped (isolated) environments, so TMC cannot be used there.
The VMware fling community has developed a fling called vSphere Console for Kubernetes, a graphical tool that allows you to manage both TKGs and TKGm in an air-gapped environment. The main functions include cluster creation, upgrade, scaling, backup and recovery, and the management of add-on packages (fluent-bit, Prometheus, Octant, Contour, etc.). The tool also supports provisioning TKGs/TKGm clusters with NSX ALB as the underlying load balancer.
The MAP (Modern Application Platform) appliance also comes with an embedded Harbor registry, which you can use in your air-gapped environment to upload the images and templates used to create node or deployment images.
In this two-part blog post series, I will be test-driving this fling in my home lab and sharing my experience and feedback with you. Please remember that flings are best-effort, community-developed tools and VMware does not offer official support for them, so test first before deploying or using in production.
Lab Inventory
For software versions I used the following:
- VMware ESXi 7.0U3f
- vCenter server version 7.0U3f
- TrueNAS 12.0-U7 used to provision NFS data stores to ESXi hosts.
- VyOS 1.4 used as lab backbone router and DHCP server.
- Ubuntu 18.04 LTS as bootstrap machine.
- Ubuntu 20.04.2 LTS as DNS and internet gateway.
- Windows Server 2012 R2 Datacenter as management host for UI access.
For virtual host and appliance sizing I used the following specs:
- 3 x ESXi hosts each with 8 vCPUs, 2 x NICs and 96 GB RAM.
- vCenter server appliance with 2 vCPU and 24 GB RAM.
Prerequisites
We need to ensure that we meet the below requirements:
- TKGs Supervisor and/or TKGm management cluster(s) installed; this is required only if you want to use the fling to provision new Tanzu clusters.
- TKGs: vSphere 7.0.2 (7u2) ~ 7.0.3 – TKGm: TKG 1.5.1 ~ 1.5.4
- Avi: >= 20.1.6, < 21.0; Avi is only needed if you want to deploy clusters with a load balancer.
- Linux host as a Jump Box
Deploy and Configure MAP appliance
Step 1: Download the OVA installer from the VMware Flings website
The OVA appliance and the user installation and administration guide can be downloaded from HERE
Step 2: Deploy the MAP OVA in vCenter

Upload the OVA appliance file that you downloaded from the VMware Flings website (the download is a ZIP file, so you need to extract it first).

Select a location to deploy the MAP (Modern Application Platform) appliance.

Choose a compute resource

Review the deployment details

Accept the EULA

Choose a deployment size; I used Medium, which is enough for my home lab.

Choose a data store in which the MAP appliance will be stored.

Set networking parameters for the MAP appliance


Set the various credentials that you will need later on to access Harbor and the MAP UI, then click on NEXT and FINISH to start the deployment of MAP.

To access the MAP UI, navigate to https://map-fqdn-or-ip-address:8443 and log in with the username admin@tanzu.local and the password you set during the OVA deployment.

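If the login page does not load, a quick reachability check from the bootstrap machine looks like the following (the FQDN is an example, use your own):
# verify DNS resolution and that the MAP UI answers on port 8443 (-k skips the self-signed certificate check)
curl -kI https://map.corp.local:8443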
Once you are logged in, you should see the initial MAP screen showing that no clusters have been added yet.

Step 4: Prepare Harbor and add it as a private Image Repository in MAP
Before you can start creating clusters using MAP, you need to define an image registry from which MAP will pull templates and its own deployment images, which it uses to install its management namespace and pods in newly created clusters. MAP comes with an embedded Harbor registry, and this is what I am going to use as the default repository.
Before we proceed with uploading the dependencies to Harbor, we need to avoid any Docker TLS certificate trust errors with our local private Harbor registry. On your bootstrap machine, create a file called /etc/docker/daemon.json with the following content (replace map.corp.local with your local Harbor FQDN or IP address):
cat /etc/docker/daemon.json
{
"insecure-registries" : ["https://map.corp.local", "https://wdc-10-191-204-103.nimbus.eng.vmware.com"]
}
Make sure to restart Docker (sudo systemctl restart docker) so that the above changes take effect. Once done, switch to the directory shown below and run the prepare.sh script to copy the required images to our private Harbor registry (your bootstrap machine does, however, need to be connected to the internet to pull those images).

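As a rough sketch, the command sequence looks like the following (the scripts directory below is hypothetical; use the actual path from the extracted fling bundle shown above):
# restart Docker so the insecure-registries change takes effect
sudo systemctl restart docker
# confirm the registry now appears under "Insecure Registries"
docker info | grep -A 3 "Insecure Registries"
# switch to the directory containing the fling scripts (example path) and pull the required images
cd ~/map-scripts        # hypothetical path
./prepare.sh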
Once the prepare script is done, run the upload_dependency.sh script against your Harbor private registry:
./upload_dependency.sh -r map.corp.local
Once the upload is done, log in to your Harbor UI and you should see the list of projects created by the upload script:

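If you prefer a quick command-line check over the UI, Harbor's REST API can also list the projects (the v2.0 API path assumes a recent Harbor release; credentials and FQDN are the ones set during the OVA deployment, and jq is optional):
# list the Harbor projects created by the upload script (-k because of the self-signed certificate)
curl -sk -u '<harbor-user>:<harbor-password>' https://map.corp.local/api/v2.0/projects | jq '.[].name'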
Now, switch to the MAP UI and add Harbor as a new repository.
To add a repository, navigate to System > Repositories and click on ADD REPOSITORY, then fill out your Harbor information as follows:

Testing the connection to Harbor should be successful. Add the CA certificate from Harbor so that MAP includes it in the Tanzu cluster nodes while creating clusters (I omitted the certificate contents from the below screenshot).

To obtain the Harbor certificate, navigate back to Harbor, choose Configuration > System Settings from the left pane and download the Registry Root Certificate. This downloads a ca.crt file; open it with a text editor, copy its contents and paste them into the CA certificate field above.

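Alternatively, the certificate chain can be pulled straight from the registry endpoint with openssl (assuming Harbor is reachable on port 443 at map.corp.local); note that this prints the chain the server presents, which may differ slightly from the Registry Root Certificate downloaded from the UI:
# print the certificate chain served by Harbor; copy the CA certificate block into the MAP repository form
openssl s_client -connect map.corp.local:443 -showcerts </dev/null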
Navigate back to the MAP UI and verify that the repository has been added.

Step 5: Add Cluster Provider
A cluster provider is your underlying Tanzu management cluster (the Supervisor Cluster for TKGs, or the TKG management cluster for TKGm), and you need to define one if you want to be able to provision clusters using MAP. This step is not needed if you only want to attach existing clusters to MAP for management (monitoring, patching, upgrading and so on). In this blog post, I will be using my TKGs Supervisor as the cluster provider and will provision a test cluster using MAP. To add a cluster provider, navigate to System > Cluster Providers and click on Add Provider. You then need to enter your TKGs Supervisor cluster details as follows:

Click on Next to add the TKGs supervisor cluster information and test connectivity

Once the connectivity test is successful, you should be able to see one or more of the configured namespaces (you must have at least one namespace configured under Workload Management in vCenter to be able to proceed). Click on Next and validate the settings; if all is good, you should see a screen similar to the one below.

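If the namespace list comes back empty, you can double-check from the bootstrap machine that at least one vSphere Namespace exists on the Supervisor (the Supervisor address and credentials below are placeholders; this requires the kubectl vsphere plugin):
# log in to the Supervisor cluster
kubectl vsphere login --server=<supervisor-ip> --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify
# the namespaces configured under Workload Management should appear in this list
kubectl get namespaces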
Create your first Tanzu Cluster using MAP
In the last section of this blog post, I will be creating a TKG cluster using MAP on top of the TKGs cluster provider that we created above. In the second part of this blog, I will be demoing some operations on the clusters we created.
To create a TKG cluster, from the MAP UI left pane under Cluster Management, click on Clusters and then Create Cluster. This opens the Create Cluster window; fill in the cluster info as shown below:

Next, define your cluster parameters:


Validate and review the cluster parameters and then click on CREATE

Now, this is a lengthy step, as MAP will provision the new TKG cluster by rolling out VMs from templates, bringing up the Tanzu cluster, fetching MAP images from Harbor and connecting the newly created cluster to MAP. Your newly created cluster should show as Healthy once deployed:



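The provisioning progress can also be followed from the Supervisor cluster context (the vSphere Namespace name below is a placeholder):
# watch the TanzuKubernetesCluster object until it reports ready
kubectl get tanzukubernetescluster -n <vsphere-namespace> -w
# the node VMs being rolled out show up as VirtualMachine objects
kubectl get virtualmachines -n <vsphere-namespace>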
From your bootstrap machine, log in to your newly created TKG cluster and verify the state of the pods created by MAP; all should be running:

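A minimal sketch of that check, assuming a TKGs guest cluster (Supervisor address, namespace and cluster name are placeholders):
# log in to the newly created guest cluster through the Supervisor
kubectl vsphere login --server=<supervisor-ip> --vsphere-username administrator@vsphere.local \
  --tanzu-kubernetes-cluster-namespace=<vsphere-namespace> \
  --tanzu-kubernetes-cluster-name=<cluster-name> --insecure-skip-tls-verify
kubectl config use-context <cluster-name>
# the pods deployed by MAP should all be in Running state
kubectl get pods -A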
What’s Next
In the second part of this blog post series, I am going to demo some basic operations on Tanzu clusters using MAP. I hope you have found this post useful, and thanks for taking the time to read it.