In a previous blog post I demoed a vSphere K8s deployment on top of NSX-T networking. As a follow-up, in this post I will showcase setting up Namespaces and Pods, ultimately deploying a containerised Nginx webserver.
Once the workload management configuration process is finalised, we can start creating Namespaces by clicking on Create Namespace.

Create a Namespace called homelab

Once the Namespace is created, you need to download the Kubernetes CLI Tools with the vSphere plugin; just click Open under the Summary tab of your newly created Namespace.

This will open a new tab where you can download the kubectl CLI tools for your operating system.

Choose the zip file with the kubectl plugin that matches your operating system. I personally prefer Ubuntu Linux for managing all Kubernetes-related deployments, so I will use Linux for the rest of this blog post.
Once you have downloaded the zip file, upload it to your Linux machine and extract it using unzip (if you do not have unzip installed, you can install it with sudo apt install unzip).
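If your Ubuntu box can reach the Supervisor Cluster address directly, you can also skip the upload step and pull the zip straight from the CLI Tools page; a minimal sketch, assuming the 172.20.10.1 address used later in this post (confirm the exact URL on the download page, as the path may differ in your environment):
~# wget --no-check-certificate https://172.20.10.1/wcp/plugin/linux-amd64/vsphere-plugin.zip
~# unzip vsphere-plugin.zip -d /root/kubectl-vsphere   # extraction path is an assumption matching the PATH used below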
Once you have extracted the kubectl vSphere plugin, add the location where you unzipped the package to your PATH variable in ~/.bashrc so that you can run the kubectl commands from any directory.
# pwd
/root/kubectl-vsphere
# export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/kubectl-vsphere/bin"
# source .bashrc
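To make the PATH change permanent as described above, append the export to ~/.bashrc rather than typing it in each session; a minimal sketch, assuming the plugin was extracted to /root/kubectl-vsphere:
~# echo 'export PATH="$PATH:/root/kubectl-vsphere/bin"' >> ~/.bashrc
~# source ~/.bashrc
~# which kubectl kubectl-vsphere   # both binaries should now resolve from any directory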
Next, we log in to our Supervisor Cluster and start a new deployment. From our Ubuntu machine, we run the following command to log in to the Supervisor Cluster:
~# kubectl vsphere login --server=https://172.20.10.1 -u administrator@vsphere.local --insecure-skip-tls-verify
Password:
Logged in successfully.
You have access to the following contexts:
172.20.10.1
homelab
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.
To change context, use `kubectl config use-context <workload name>`
~# kubectl config use-context homelab
Switched to context "homelab".
The --insecure-skip-tls-verify flag is needed if you have not installed the vCenter CA certificate on your management host (in my case, the Ubuntu box).
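If you would rather not skip verification, one option is to trust the vCenter root CA on the client instead; a rough sketch, where the vCenter FQDN is a placeholder and the layout of the downloaded zip may vary slightly between vCenter versions:
~# wget --no-check-certificate https://<vcenter-fqdn>/certs/download.zip   # <vcenter-fqdn> is a placeholder
~# unzip download.zip
~# cp certs/lin/*.0 /usr/local/share/ca-certificates/vcenter-root.crt   # assumes the zip contains a single root CA file
~# update-ca-certificates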
Having switched the context to the homelab Namespace, we can now create our first deployment.
Deploy your first webserver Pod
To deploy Pods under our Namespace, we need to be able to pull K8s images. This can be done either by pulling images over the Internet from Docker Hub or by using an image registry. In this blog post I will be setting up Harbor, the image registry integrated in vCenter.
Enabling Harbor image registry
To enable Harbor, navigate to Hosts and Clusters, select the cluster on which you enabled Workload Management, choose Configure, click Image Registry, and then click Enable Harbor.

Choose your Storage Policy and click OK; this will trigger the Harbor enablement process.

Once Harbor is enabled, you should see a screen similar to the one below.

Click Download SSL Root Certificate to download the Harbor certificate and import it into Docker's certs.d directory on your management client, in my case the Ubuntu box. The steps below cover installing Docker on Ubuntu and installing the docker-credential-vsphere plugin. These preparation steps are needed in order to fetch an Nginx image from the Docker repository and then push it to Harbor.
Install Docker and upload Nginx image to Harbor
Install Docker on your Ubuntu client:
#sudo apt update
#sudo apt install docker.io
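Optionally, confirm that the Docker service is enabled and running before pulling any images:
#sudo systemctl enable --now docker
#docker version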
Now that Docker is installed, we need to pull a Docker image which we will then push to Harbor in order to create our first Nginx webserver Pod.
You will need to create a Docker account on Docker Hub in order to proceed from here.
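If you have never authenticated from this machine's CLI before, the login exchange looks roughly like this (the username shown is a placeholder):
~# docker login
Username: <your-docker-hub-username>
Password:
Login Succeeded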
Once logged in with your Docker account, fetch an Nginx image as below:
~# docker pull nginx
~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 605c77e624dd 3 weeks ago 141MB
debian latest 6f4986d78878 4 weeks ago 124MB
hello-world latest feb5d9fea6a5 3 months ago 13.3kB
If not already created, create a certs.d directory and, underneath it, another directory named after your Harbor image registry IP (in my lab it is 172.20.10.2):
#mkdir -p /etc/docker/certs.d/172.20.10.2
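The Harbor root certificate downloaded earlier now needs to land in that directory as ca.crt; a minimal sketch using scp, run from the machine where the certificate was downloaded (the file name root.crt and <ubuntu-client-ip> are placeholders):
# scp root.crt root@<ubuntu-client-ip>:/etc/docker/certs.d/172.20.10.2/ca.crt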
Whether you use scp as sketched above or any other method, upload the Harbor root certificate to the directory above. Once done, log in to the Harbor registry using docker-credential-vsphere as follows:
#docker-credential-vsphere login 172.20.10.2
Username: administrator@vsphere.local
Password: **********
INFO[0016] Fetched username and password
INFO[0017] Fetched auth token
INFO[0017] Saved auth token
The next steps push the nginx image to Harbor: we first tag the image and then push it. Note that Docker tags every image with a default tag of latest; for my lab I tagged the nginx image with the tag “homelab”.
~# docker tag nginx 172.20.10.2/homelab/nginx:homelab
~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
172.20.10.2/homelab/nginx homelab 605c77e624dd 3 weeks ago 141MB
nginx latest 605c77e624dd 3 weeks ago 141MB
debian latest 6f4986d78878 4 weeks ago 124MB
hello-world latest feb5d9fea6a5 4 months ago 13.3kB
# docker push 172.20.10.2/homelab/nginx:homelab
The push refers to repository [172.20.10.2/homelab/nginx]
d874fd2bc83b: Pushed
32ce5f6a5106: Pushed
f1db227348d0: Pushed
b8d6e692a25e: Pushed
e379e8aedd4d: Pushed
2edcec3590a4: Pushed
homelab: digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3 size: 1570
~#
From your web browser, open the Harbor page (https://172.20.10.2); under Projects > Repositories you should be able to see the image we pushed.

Create an Nginx webserver deployment using embedded Harbor image registry
After we have pushed our Nginx image to Harbor, we will use kubectl to create a webserver from this image.
From my Ubuntu box I run the following commands to create an Nginx webserver; the deployment might take 30 to 60 seconds to show as Ready.
~# kubectl create deployment webserver --image 172.20.10.2/homelab/nginx:homelab
deployment.apps/webserver created
~# kubectl get deployments.apps webserver
NAME READY UP-TO-DATE AVAILABLE AGE
webserver 1/1 1 1 62s
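For reference, the same Deployment can also be expressed declaratively; the sketch below approximately mirrors what kubectl create deployment generates (the containerPort is added for readability and is an assumption, not something the imperative command sets):
~# cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  namespace: homelab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: nginx
        image: 172.20.10.2/homelab/nginx:homelab
        ports:
        - containerPort: 80
EOF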
The next step is to expose our deployment so that we can access it from outside Kubernetes, and then test an HTTP connection to the external IP assigned to our deployment from the Ingress CIDR block:
~# kubectl expose deployment webserver --port=80 --type=LoadBalancer
service/webserver exposed
~# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
webserver LoadBalancer 10.96.0.81 172.20.10.3 80:30905/TCP 11s
~#
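Before testing with curl, you can also confirm that the Pod behind the service is up; the Pod name suffix below is a placeholder, since the real one is generated by the ReplicaSet:
~# kubectl get pods
NAME READY STATUS RESTARTS AGE
webserver-xxxxxxxxxx-xxxxx 1/1 Running 0 2m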
~# curl http://172.20.10.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
~#
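Once you are done testing, scaling the Deployment or cleaning the lab back down is a single command each; a quick sketch:
~# kubectl scale deployment webserver --replicas=3
~# kubectl delete service webserver
~# kubectl delete deployment webserver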