Monitoring Tanzu Community Edition using Prometheus and Grafana

If you have been in the Tanzu/Kubernetes world for a while, then you have definitely come across Prometheus and Grafana, the open-source monitoring and visualisation tools available for Kubernetes.

Prometheus is a free and open-source event monitoring tool for containers and microservices, while Grafana is a multi-platform visualisation tool which provides graphs and charts for visualising your data sources, no matter where they are stored.

During my setup of Prometheus and Grafana to monitor my Tanzu Community Edition clusters, I made use of Helm. Helm is a package manager for Kubernetes which is part of the Artifact Hub open project (more about it HERE). I also used a Tanzu Community Edition workload cluster which I deployed in a previous blog post (HERE).

Lab Inventory

For software versions I used the following:

  • VMware ESXi 7.0U3d
  • vCenter server version 7.0U3
  • TrueNAS 12.0-U7 used to provision NFS datastores to ESXi hosts.
  • VyOS 1.4 used as lab backbone router and DHCP server.
  • Ubuntu 20.04 LTS as bootstrap machine.
  • Ubuntu 20.04.2 LTS as DNS and internet gateway.
  • Windows Server 2012 R2 Datacenter as management host for UI access.
  • Tanzu Community Edition version 0.11.0

For virtual hosts and appliances sizing I used the following specs:

  • 3 x virtualised ESXi hosts each with 8 vCPUs, 4 x NICs and 32 GB RAM.
  • vCenter server appliance with 2 vCPU and 24 GB RAM.

Prerequisites

We need to ensure that we have the following components installed and running before we proceed:

  • TCE workload cluster.
  • Kubectl command line tool.
  • Helm package manager.

Deploying Prometheus & Grafana to TCE workload cluster

Before we proceed with installing and configuring the Prometheus and Grafana pods, we will inspect our Tanzu cluster status and switch context to our workload cluster:

tanzu cluster list
kubectl config use-context tce-wld-cluster01-admin@tce-wld-cluster01
kubectl get ns

Step 1: Create a namespace for Prometheus and Grafana pods

Step 2: Verify helm version and install Prometheus operator

We need to ensure that Helm is installed before proceeding with installing Prometheus and Grafana. It is important to note that installing the prometheus-operator packages also installs Grafana, so there is no need for an extra step to deploy the Grafana pods.

Add the prometheus-community Helm repo and install the Prometheus operator CRDs into the monitoring namespace:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install my-kube-prometheus-stack \
  prometheus-community/kube-prometheus-stack --version 36.0.1 \
  --namespace monitoring

Once all relevant packages are fetched and the Prometheus pods are deployed, the output of kubectl get pods -n monitoring should look similar to the following:

Step 3: Expose Prometheus and Grafana UI

By default, the Prometheus server (dashboard UI) listens on port 9090 and Grafana on port 3000. These ports are not exposed outside the Kubernetes cluster by default, so in order to access them we need to use one of the following methods:

  • Using Kubectl port forwarding
  • Exposing the Prometheus deployment as a service with NodePort or a Load Balancer.
  • Adding an ingress object if you have an Ingress controller deployed.

I will be using the first method as it is the simplest one for a lab test. First, note down the names of your Prometheus and Grafana pods; in my case they are highlighted below in green:

Next, we need to expose both pods on ports 9090 and 3000 respectively:

kubectl port-forward --address 0.0.0.0 -n monitoring prometheus-my-kube-prometheus-stack-prometheus-0 9090 &

kubectl port-forward --address 0.0.0.0 -n monitoring my-kube-prometheus-stack-grafana-5d7d448b46-77x5g 3000 &

The above commands forward ports 9090 and 3000 on all interfaces of the machine running kubectl to the Prometheus and Grafana pods, so both UIs can be reached from outside the cluster.

Step 4: Accessing Prometheus and Grafana UI

From your web browser, navigate to http://<machine running the port-forward>:9090

Navigate to Status and then Targets to verify the K8s cluster service discovery.

To access Grafana, open an extra tab and then navigate to http://<machine running the port-forward>:3000

The default login credentials are:

admin / prom-operator

Navigate to Dashboards and then Browse

You then get a list of the dashboards which are added by default and populated with the metrics collected by Prometheus.

Click on any metric of interest and browse through the dashboards and graphs.

Final words

Prometheus and Grafana are very powerful open-source metric collection and analysis tools, and for customers adopting Tanzu Community Edition they are a perfect match to complete an open-source Tanzu deployment.

Bassem Rezkalla
