Regardless of the type of cloud services your organisation makes use of (public, private or hybrid), the ability to offer your infrastructure services as multi-tenant ready is crucial to the success of your service offerings. VMware has recently been busy adding multi-tenancy capabilities to the solutions it offers for that purpose, and the Tanzu portfolio is no exception. With the release of VMware Cloud Director 10.2, customers can integrate natively (without the need for any extra Cloud Director extensions or plugins) with vSphere instances running clusters that have vSphere with Tanzu (workload management) enabled, and offer tenants the ability to provision Tanzu Kubernetes clusters. This eliminates the need to deploy and configure Container Service Extension (CSE) for Cloud Director, which makes offering Tanzu as a Service a much easier and faster workflow.
In this two-part blog post I will deploy a Cloud Director instance from scratch, connect it to a vSphere 8 vCenter instance with Tanzu enabled, and eventually allow users in an organisation VDC to provision their own Tanzu guest clusters through their Cloud Director tenant portal. In part one, I will deploy and prepare Cloud Director with two tenants; in part two, I will prepare Cloud Director to offer Tanzu services to tenants and roll out a test Tanzu Kubernetes guest cluster from a tenant VDC.
For software versions I used the following:
- VMware ESXi 8.0a
- vCenter server version 8.0
- VMware NSX-T 126.96.36.199
- VMware Cloud Director 10.4.1
- TrueNAS 12.0-U7 used to provision NFS datastores to ESXi hosts.
- VyOS 1.4 used as lab backbone router and DHCP server.
- Ubuntu 20.04.2 LTS as DNS and internet gateway.
- Ubuntu 18.04 LTS as Jumpbox and running kubectl to manage Tanzu clusters.
- Windows Server 2012 R2 Datacenter as management host for UI access.
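As a preview of how the jumpbox fits into this setup: once workload management is enabled, the jumpbox authenticates to the Supervisor cluster with the kubectl-vsphere plugin before it can manage Tanzu clusters. The Supervisor address below is a lab placeholder; substitute your own control plane address:

```shell
#!/bin/sh
# Log the jumpbox in to the Supervisor cluster using the kubectl-vsphere
# plugin. The Supervisor address below is a lab placeholder.
SUPERVISOR="192.168.10.10"
LOGIN_CMD="kubectl vsphere login --server=${SUPERVISOR} \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify"

# Only attempt the login if the plugin is actually installed:
if command -v kubectl-vsphere >/dev/null 2>&1; then
  $LOGIN_CMD || true
fi
echo "$LOGIN_CMD"
```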
For virtual hosts and appliances sizing I used the following specs:
- 3 x ESXi hosts each with 12 vCPUs, 2 x NICs and 128 GB RAM.
- vCenter server appliance with 2 vCPU and 24 GB RAM.
Downloading and Deploying VMware Cloud Director 10.4.1
Assuming that you do not have Cloud Director installed, I will guide you through the steps of deploying and configuring it in this section. The first step is to log in to the VMware Customer Connect portal and download the Cloud Director OVA file. Navigate to Products and Accounts > All Products > VMware Cloud Director, and from View Download Components choose VMware Cloud Director 10.4.1 virtual appliance.
Step 1: Deploy Cloud Director OVA to your vSphere environment
For this step, I created a resource pool under which I will deploy my Cloud Director appliance. This is just to keep a tidy view in my vCenter UI, not to limit or reserve any resources for the Cloud Director appliance.
Right click the resource pool and choose Deploy OVF Template, then upload the OVA file we just downloaded from Customer Connect
Click NEXT and then assign a name to your Cloud Director appliance
Then choose the compute resource to where the appliance will be deployed, in my setup it is going to be the resource pool I created earlier
Review the appliance details
Press NEXT and accept the license agreement
Choose the deployment size, I chose Primary medium
Click NEXT and select the datastore which will host the cloud director appliance
Click NEXT and then assign Cloud Director interfaces eth0 and eth1 to corresponding VDS port groups.
Once you click on NEXT you will be required to enter the networking parameters for the appliance. Click NEXT again, review the installation details and, once ready, click FINISH
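For repeatable lab rebuilds, the wizard steps above can also be scripted with VMware's ovftool. Everything below is a placeholder from my lab: the OVA file name, the target inventory path, the port group names and the --deploymentOption key (the exact key for "Primary medium" may differ, so probe the OVA first with `ovftool <ova-file>` to list its real deployment options and networks):

```shell
#!/bin/sh
# Sketch of an unattended Cloud Director OVA deployment with ovftool.
# All names are lab placeholders; list the OVA's deployment options and
# network names first with: ovftool <ova-file>
OVA="VMware_Cloud_Director-10.4.1_OVF10.ova"
TARGET="vi://administrator@vsphere.local@vcenter.nsxbaas.homelab/DC/host/Cluster/Resources/VCD"

CMD="ovftool --acceptAllEulas --powerOn \
  --name=vcloud \
  --datastore=nfs-ds01 \
  --deploymentOption=primary-medium \
  --net:eth0=vcd-mgmt-pg \
  --net:eth1=vcd-data-pg \
  $OVA $TARGET"

# Only attempt the deployment if ovftool is actually installed:
if command -v ovftool >/dev/null 2>&1; then
  eval "$CMD" || true
fi
echo "$CMD"
```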
Step 2: Finalising Post-Installation Configuration for Cloud Director Appliance
Once our Cloud Director appliance is deployed and powered on, log in to the address of your eth0 interface on port 5480 (in my setup https://vcloud.nsxbaas.homelab:5480) to manage the appliance VM; you should see the below screen
Cloud Director requires an NFS share to be used as a transfer location for files related to the cloud operations Cloud Director needs to perform; for that purpose I used TrueNAS in my lab, which is very easy to set up. In addition to the NFS share, you will need to specify a database password for the default vcloud user which Cloud Director uses for its embedded PostgreSQL database.
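Before handing the share to the setup wizard, it is worth a quick sanity check that it is actually exported and reachable. The server name and export path below are from my lab, and note that Cloud Director needs root access to the export (so disable root squash on the TrueNAS side):

```shell
#!/bin/sh
# Verify the NFS transfer share is visible before giving it to Cloud Director.
# Hostname and export path are lab placeholders.
NFS_SERVER="truenas.nsxbaas.homelab"
NFS_EXPORT="/mnt/pool01/vcd-transfer"

# List the server's exports if the NFS client tools are present:
if command -v showmount >/dev/null 2>&1; then
  showmount -e "$NFS_SERVER" || true
fi

# Cloud Director's setup wizard expects the share in server:/path form:
TRANSFER_SHARE="${NFS_SERVER}:${NFS_EXPORT}"
echo "$TRANSFER_SHARE"
```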
Click on NEXT and finalise the initial setup of the Cloud Director appliance as shown below:
Click on SUBMIT; you should then see the status of the VCD server as Running
If you click on the link provided in the above window, you should be directed to Cloud Director provider login portal
Sign in as administrator with the corresponding password and you should be presented with the VMware Cloud Director page
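The provider login portal also has an API counterpart, which is handy for verifying the deployment from the CLI. The hostname below is from my lab and the API version is what I would expect for a 10.4.x release; adjust both to your environment:

```shell
#!/bin/sh
# Build the provider-session login request for the Cloud Director API.
# Hostname is a lab placeholder; check your release notes for the API version.
VCD_HOST="vcloud.nsxbaas.homelab"
VCD_USER="administrator@System"   # system administrator logs in against the System org
API_VERSION="37.0"                # assumed API version for VCD 10.4.x

LOGIN_URL="https://${VCD_HOST}/cloudapi/1.0.0/sessions/provider"
echo "$LOGIN_URL"

# Uncomment to actually log in; the bearer token is returned in the
# X-VMWARE-VCLOUD-ACCESS-TOKEN response header:
# curl -sk -X POST "$LOGIN_URL" -u "$VCD_USER" \
#   -H "Accept: application/json;version=${API_VERSION}" -D -
```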
Adding Infrastructure Resources and Deploying Provider and Org VDCs to Cloud Director
Before we can actually start creating and offering Tanzu clusters to tenants, we need to assign vCenter compute and networking resources to Cloud Director. For compute, we need to add a vCenter to which Cloud Director can deploy provider and tenant VDC related objects, including Tanzu guest clusters. Make sure the vCenter instance you add has a cluster with workload management (vSphere with Tanzu) enabled; this allows Cloud Director to make use of the Supervisor cluster and deploy Tanzu guest clusters for tenants.
In my setup, I will be adding a vCenter instance which has the cluster “TonyChocoloney” enabled for vSphere with Tanzu. To review how to enable workload management (vSphere with Tanzu) on vSphere clusters, see my blog post HERE for more details.
Step 1: Add vCenter resources to Cloud Director
Login to Cloud Director portal, navigate to Resources > Infrastructure Resources > vCenter Server Instances and then click on ADD and follow the steps to add a vCenter server instance:
Add your vCenter server connection details and then click NEXT
Make sure to disable NSX-V networking since we are going to use NSX-T to provide tenant networking resources
Leave Access Configuration disabled
Click NEXT, then you should see vCenter successfully added to your Cloud Director instance
Step 2: Create Provider VDC
After adding our vCenter instance, I went ahead and added an NSX-T Manager instance along with the rest of my networking resources for Cloud Director, which will later be used by org VDCs. However, for the purpose of provisioning TKGs clusters (vSphere with Tanzu guest clusters), no NSX-T configuration is needed on the Cloud Director side; all Tanzu cluster networking requirements are backed and provided by the NSX-T Manager instance I used to enable workload management on my vSphere cluster (shown earlier in this post). In order to provide multi-tenancy in Cloud Director we need to add a provider VDC (Virtual Data Center), one or more organisations and organisation VDCs (tenants), and then publish resources and policies to those org VDCs.
From Cloud Director provider portal under Cloud Resources, click on Provider VDC and then NEW to create a new provider VDC
Assign a name to your provider VDC, mine is called nsxbaas-pVDC
Click NEXT and choose which vCenter instance you want to add to this provider VDC to provide compute resources
Click NEXT, you then need to choose the location (either cluster or resource pool) to where provider VDC objects will be deployed.
Notice the blue Kubernetes icon, which indicates that this cluster and all its sub resource pools support Tanzu clusters
You then choose which storage policy should be applied to VMs provisioned under your provider VDC (those must be pre-configured in vCenter)
Click NEXT and then choose which network pool you want to use for VMs provisioned under the provider VDC. As mentioned earlier I am using NSX-T backed networking, but you can choose any other networking provider shown in the list below.
Click NEXT and review your provider VDC configuration; if all is good, click FINISH to start creating your pVDC
Once pVDC is successfully created, you should see the status of it similar to the below (notice the blue Kubernetes icon).
To make sure that your pVDC has created its default Kubernetes policy, which can later be published to tenants, click on your pVDC name, then under Policies click on Kubernetes. You should see the default Kubernetes policy created but not yet published to any tenant organisations
Note: In my lab I ran into an issue caused by certificate trust between the vCenter TLS management certificate and Cloud Director, which resulted in the below error:
I was able to solve this issue by following the steps in KB https://kb.vmware.com/s/article/83583 and then performing a disconnect, reconnect and refresh for my vCenter instance added under Cloud Director infrastructure resources.
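For reference, the KB essentially has you import the vSphere certificate chain into Cloud Director's trust store from the appliance console using cell-management-tool. The subcommand below is what the KB describes for 10.4.x; verify it against your exact version before running:

```shell
#!/bin/sh
# Import vSphere infrastructure certificates into Cloud Director's trust
# store (run on the VCD appliance). Subcommand per KB 83583; verify it
# matches your Cloud Director version.
CMT="/opt/vmware/vcloud-director/bin/cell-management-tool"
TRUST_CMD="$CMT trust-infra-certs --vsphere --unattended"

# Only run when on an actual Cloud Director appliance:
if [ -x "$CMT" ]; then
  $TRUST_CMD || true
fi
echo "$TRUST_CMD"
```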
Step 3: Create Organisation and Organisation VDC
In this step we will be creating our organisation which will be hosting two organisation VDCs. From Cloud Director provider portal navigate to Cloud Resources and then under Organizations click NEW to create a new Org
Click CREATE to create your organisation. Next, click on Organization VDCs and then NEW to create a new organisation VDC
Click NEXT to move to step 2 and assign the org VDC to an organisation which we created above
Click NEXT to move to step 3 and choose your provider VDC to attach the new org VDC to
Click NEXT. In step 4, “Allocation Model”, it is very important to choose Flex as the allocation model for your org VDC; this is a requirement in order to be able to publish Kubernetes policies and deploy Kubernetes/Tanzu from within your org VDCs
Click NEXT and then in step 5 you can change any reservations or allocations for your org VDC
In step 6 you choose a storage policy for your org VDC, in my lab this is pulled from a pre-configured storage policy in vCenter
Click NEXT. In step 7 you need to choose a network pool for your org VDC; I already have a network pool defined, backed by an NSX-T Geneve overlay transport zone
In step 8, verify that all the org VDC parameters are as you expect and then click FINISH
Give the process a minute and you should see your newly created org VDC with status Ready and state Enabled
Publishing Container UI plugin and Kubernetes Policies to Tenants
In this step, I will publish the Container UI plugin and the default Kubernetes policy that was created under the provider VDC to my two tenants, so that tenant users can deploy TKGs clusters from their organisation portal.
Step 1: Publish Container UI plugin
From your Cloud Director provider portal, click on More and then click on Customize Portal
Scroll down through the plugin list and choose Container UI Plugin and then click on PUBLISH
Choose the publishing scope and to which tenants you want to publish and then click on SAVE and trust the plugin
Step 2: Publish default Kubernetes policy from provider VDC to tenants org VDCs
From Cloud Director provider portal navigate to Resources > Provider VDCs and click on the provider VDC we created earlier in this blog post, under Policies Click on Kubernetes, then choose the default Kubernetes policy and then PUBLISH
This will open up a publish to org VDC wizard, fill in the name by which this Kubernetes policy will be published to org VDCs
Click NEXT then choose to which tenant you want to publish this kubernetes policy, I will choose Tenant-PindaKaas
Click Next, then choose CPU & Memory reservations/limitations that will be applied as part of this policy
Choose which VM classes you want to be available to your tenants under this policy. These VM classes define the t-shirt sizes of the control plane and worker nodes of the TKGs clusters that tenants can deploy. Notice that all the available classes are best-effort, which means no resource guarantees; this is because I did not reserve any resources in the allocation model of this provider VDC.
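The same VM classes can be inspected on the vSphere side. From a machine logged in to the Supervisor cluster with kubectl (my Ubuntu jumpbox, in this lab), the class objects behind the wizard's list can be shown, assuming a working kubeconfig:

```shell
#!/bin/sh
# List the VM classes defined on the Supervisor cluster; these are the
# t-shirt sizes Cloud Director surfaces in the Kubernetes policy wizard.
LIST_CMD="kubectl get virtualmachineclasses"

# Requires kubectl plus a kubeconfig pointing at the Supervisor cluster:
if command -v kubectl >/dev/null 2>&1; then
  $LIST_CMD || true
fi
echo "$LIST_CMD"
```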
Click NEXT and then choose the storage policy that will be applied to the deployed VMs (i.e. TKGs control plane and worker nodes)
Click NEXT, review the policy publish parameters and then click PUBLISH
The result should look similar to the below
At this point, our Cloud Director deployment is complete and prepared with two running org VDCs. In part two of this blog post series we will add Tanzu cluster creation capabilities to our org VDCs and test the creation of a Tanzu guest cluster from one of them. Stay tuned!