
Overview
In this blog post I am going to walk you through the configuration of HTTPS Layer 7 Ingress for Tanzu workloads using the VMware NSX ALB (Avi) Kubernetes Operator (AKO). Ingress is a Kubernetes resource which allows users to define Layer 7 routing rules and/or load balancing options for their HTTP/HTTPS backed services. HTTPS is of course the preferred method for production workloads, as it offers an SSL/TLS layer of security by encrypting HTTP traffic, in addition to identity verification and trust by means of certificates.
HTTPS offers privacy, integrity, and identification. This is achieved by making use of SSL certificates, which ensure that both the sender and receiver are legitimate and that their two-way communication has not been tampered with in transit. The legitimacy of the communication is verified and guaranteed by means of certificates, and for this purpose, Ingress endpoints that require HTTPS need to make use of a certificate authority service which is responsible for distributing and managing SSL certificates within the Kubernetes cluster.
Cert-manager is an open-source project used to manage X.509 certificates in Kubernetes. It adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing and using those certificates. It can issue certificates from a variety of supported issuers in addition to private PKI. In my lab setup I will be using HashiCorp Vault as my PKI engine and Certificate Authority. A Certificate Authority is an entity that validates, signs and generates identities, which in the digital world are called certificates. Discussing how a CA works is beyond the scope of this blog post, but there are plenty of resources online that explain how CAs work.
How It Works
The below diagram is a very simplified representation of how cert-manager interacts with Kubernetes resources requesting certificates (in my case an Ingress resource requiring HTTPS access) and an external issuer (CA), which in my lab is a HashiCorp Vault pod running inside my Tanzu Kubernetes cluster.
The above workflow can be summarised as follows:
- A certificate is required by a Kubernetes resource (Ingress).
- Cert-manager sends the request for a certificate to the issuer (Vault).
- The issuer validates and authenticates the request and then generates/signs the requested certificate and returns it to cert-manager.
- Cert-Manager then stores the newly signed certificate as a Kubernetes secret.
- The Ingress resource references the above secret under the TLS section of its Ingress YAML.
Lab Inventory
For software versions I used the following:
- VMware ESXi 8.0U1
- vCenter server version 8.0U1
- VMware NSX ALB (Avi) 22.1.3 and AKO version 1.10.1
- TrueNAS 12.0-U7 used to provision NFS data stores to ESXi hosts.
- VyOS 1.3 used as lab backbone router and DHCP server.
- Windows Server 2019 as DNS server.
- Windows 10 pro as management host for UI access.
For virtual hosts and appliances sizing I used the following specs:
- 6 x ESXi hosts each with 12 vCPUs, 2 x NICs and 128 GB RAM.
- vCenter server appliance with 2 vCPU and 24 GB RAM.
Deployment Workflow
- Deploy Vault as local CA & PKI engine.
- Deploy and configure cert-manager.
- Configure & Verify Secure Ingress Resource.
Deploy Vault as local CA & PKI Engine
Step 1: Deploy Vault Helm Chart
The steps below are modified to match my lab environment, reference can be found on HashiCorp website.
I will be deploying Vault in my Tanzu cluster via Helm. If you do not have Helm installed, please refer to the Helm documentation on how to install it. Once Helm is installed, you can deploy Vault using the Vault Helm chart as follows:
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault --set "injector.enabled=false"
The above commands will create a pod named vault-0 in the default namespace. If the pod is stuck in a Pending state, this is most likely because no persistent volume is available for it. The Vault deployment has a PV claim called data-vault-0 which might need to be modified to match the CSI driver used in your Kubernetes cluster. In my setup I am using Tanzu Kubernetes Grid Service on vSphere 8 with the vSphere CSI driver for persistent volume provisioning, so I needed to replace the default data-vault-0 claim with the following:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
  name: data-vault-0
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: zonal-sp
  resources:
    requests:
      storage: 10Gi
You will need of course to modify the above to match your CSI driver and storage policies.
Step 2: Initialise and unseal Vault
By default, the Vault pod will be deployed and running but not in a ready state. This is because Vault first needs to be initialised and unsealed. Run the following commands, in this exact sequence, to initialise and unseal Vault:
kubectl exec vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > init-keys.json
VAULT_UNSEAL_KEY=$(cat init-keys.json | jq -r ".unseal_keys_b64[]")
kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY
VAULT_ROOT_TOKEN=$(cat init-keys.json | jq -r ".root_token")
kubectl exec vault-0 -- vault login $VAULT_ROOT_TOKEN
Once you apply the above commands, your vault-0 pod should be ready and in a running state.
Vault will also create a ClusterIP service which other services (for example cert-manager) can use to interact with it and request certificates. This ClusterIP service can be verified as follows:
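As a quick sketch (assuming the default Helm release name vault in the default namespace, as used above), the service can be listed with kubectl:

```shell
# the Helm chart creates a ClusterIP service named "vault" in the default namespace;
# it exposes the Vault API on port 8200
kubectl get svc vault -n default
```

This is the service (vault.default:8200) that cert-manager will later be pointed at in the Issuer definition.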
Step 3: Configure PKI secrets engine
In this step we will configure the Public Key Infrastructure engine in Vault which is responsible for generating dynamic X.509 certificates. With this, services can get certificates without going through the usual manual process of generating a private key and CSR, submitting to a CA, and waiting for a verification and signing process to complete.
Start an interactive shell session on the vault-0 pod
kubectl exec --stdin=true --tty=true vault-0 -- /bin/sh
Once you are inside the vault-0 pod shell, copy and paste the following commands after modifying them to match your environment:
vault secrets enable pki
vault secrets tune -max-lease-ttl=8760h pki
vault write pki/root/generate/internal \
    common_name=nsxbaas.homelab \
    ttl=8760h
vault write pki/config/urls \
    issuing_certificates="http://vault.default:8200/v1/pki/ca" \
    crl_distribution_points="http://vault.default:8200/v1/pki/crl"
vault write pki/roles/nsxbaas-dot-homelab \
    allowed_domains=nsxbaas.homelab \
    allow_subdomains=true \
    max_ttl=72h
vault policy write pki - <<EOF
path "pki*"                          { capabilities = ["read", "list"] }
path "pki/sign/nsxbaas-dot-homelab"  { capabilities = ["create", "update"] }
path "pki/issue/nsxbaas-dot-homelab" { capabilities = ["create"] }
EOF
vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
vault write auth/kubernetes/role/issuer \
    bound_service_account_names=issuer \
    bound_service_account_namespaces="*" \
    policies=pki \
    ttl=20m
Exit the vault-0 interactive shell using the command exit. At this point, we have configured Vault as our local CA and PKI engine to handle the certificate requests that cert-manager will be sending to the Vault API.
Deploy and configure cert-manager
Step 1: Deploy Cert Manager
Since I am using VMware Tanzu Kubernetes Grid Service (TKGS), I need to deploy cert-manager from the VMware package repositories rather than from the cert-manager open-source project on GitHub. To deploy cert-manager inside your Tanzu cluster, first log in to the cluster:
kubectl vsphere login --vsphere-username administrator@vsphere.local --server=https://192.168.26.51 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace stretched-homelab --tanzu-kubernetes-cluster-name zonal-cluster01
Once logged in, create a YAML file with the following contents and then apply it to the cluster:
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: tanzu-standard
  namespace: tkg-system
spec:
  fetch:
    imgpkgBundle:
      image: projects.registry.vmware.com/tkg/packages/standard/repo:v1.6.0
This will add the VMware package repository to your cluster; from this repo we can deploy services such as cert-manager. If the repo has been added successfully, it should show a state of Reconcile succeeded.
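The reconciliation state can be checked with kubectl (the resource name tanzu-standard matches the PackageRepository created above):

```shell
# kapp-controller reconciles the repository; DESCRIPTION should
# eventually read "Reconcile succeeded"
kubectl get packagerepository tanzu-standard -n tkg-system
```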
At this point, create a YAML file, paste the following contents into it and apply it to your cluster:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cert-manager-sa
  namespace: tkg-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cert-manager-sa
  namespace: tkg-system
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: cert-manager
  namespace: tkg-system
spec:
  serviceAccountName: cert-manager-sa
  packageRef:
    refName: cert-manager.tanzu.vmware.com
    versionSelection:
      constraints: 1.7.2+vmware.1-tkg.1
  values:
  - secretRef:
      name: cert-manager-data-values
---
apiVersion: v1
kind: Secret
metadata:
  name: cert-manager-data-values
  namespace: tkg-system
stringData:
  values.yml: |
    ---
    namespace: cert-manager
The above is the cert-manager deployment YAML. If all is good, all the pods inside the cert-manager namespace should be in a Running state.
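A quick sanity check (the cert-manager namespace follows from the values secret above):

```shell
# the PackageInstall should report "Reconcile succeeded"
kubectl get packageinstall cert-manager -n tkg-system
# all cert-manager pods (controller, cainjector, webhook) should be Running
kubectl get pods -n cert-manager
```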
Step 2: Configure an issuer and generate a certificate
An issuer is a resource that represents a certificate authority (CA) able to sign certificates in response to certificate signing requests. Cert-manager supports different types of issuers; in my setup I will be using Vault as my cert-manager issuer. The issuer interfaces with Vault's certificate-generating endpoint and is invoked whenever a certificate is created. In step 3 of the previous section, we configured Vault's Kubernetes authentication so that a Kubernetes service account named issuer is granted the policy named pki on the certificate generation endpoints.
Create a service account named issuer within the namespace in which resources will be requesting certificates. In my lab I will deploy an Ingress resource in a namespace called microservices:
kubectl create serviceaccount issuer -n microservices
In Kubernetes 1.23 and earlier, a token secret is generated automatically for the service account (from 1.24 onwards it has to be created manually); the Issuer requires this secret. Get its name using the following command:
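One way to find it, assuming the auto-generated token secret naming convention issuer-token-&lt;suffix&gt;:

```shell
# list secrets in the target namespace; look for the service account
# token secret named issuer-token-<random suffix>
kubectl get secrets -n microservices
```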
Note down the name of the secret highlighted above and create the following YAML in order to define the Vault issuer for cert-manager:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: microservices
spec:
  vault:
    server: http://vault.default:8200
    path: pki/sign/nsxbaas-dot-homelab
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: issuer
        secretRef:
          name: issuer-token-wxgr5
          key: token
The Issuer can be verified as seen below:
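For reference, the verification boils down to checking the Issuer's readiness condition:

```shell
# READY should be True once cert-manager has authenticated against Vault
kubectl get issuer vault-issuer -n microservices
```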
Next, we need to create a Kubernetes Certificate resource, which is what actually requests the certificate from Vault through the configured Issuer. The name of this certificate will also be referenced in the Ingress created later on, so that the Ingress can make use of the certificate Vault returns to cert-manager. As explained earlier, the returned certificate is stored in a secret.
To define the certificate resource, I used the below YAML:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: onlineshop-ingress-secret
  namespace: microservices
spec:
  secretName: onlineshop-ingress-secret
  issuerRef:
    name: vault-issuer
  commonName: onlineshop.zonal.nsxbaas.homelab
  dnsNames:
  - onlineshop.zonal.nsxbaas.homelab
The common name and DNS names must fall within the allowed domains of the Vault role we configured earlier in the section "Configure PKI secrets engine".
Apply the above YAML, and if all goes well you should see your certificate successfully issued.
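The issuance can be confirmed from the Certificate resource and its backing secret (names as defined in the YAML above):

```shell
# READY True means Vault signed the certificate and cert-manager stored it
kubectl get certificate onlineshop-ingress-secret -n microservices
# the kubernetes.io/tls secret holding the signed certificate and key
kubectl get secret onlineshop-ingress-secret -n microservices
```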
Configure & Verify Secure Ingress Resource
For the last part of this blog post I will configure an Ingress resource (based on NSX ALB AKO 1.10.1) for a demo microservices application which I deployed. The Ingress resource will expose a webpage hosted inside a pod in the microservices namespace and fronted by a ClusterIP service called frontend.
Here is my Ingress resource YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use
    cert-manager.io/issuer: vault-issuer
  name: onlineshop-ingress
  namespace: microservices
spec:
  rules:
  - host: onlineshop.zonal.nsxbaas.homelab
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: frontend
            port:
              number: 80
  tls:
  - hosts:
    - onlineshop.zonal.nsxbaas.homelab
    # the secret name below must match the certificate we created earlier,
    # in which cert-manager stored the certificate issued by Vault
    secretName: onlineshop-ingress-secret
I then applied the above YAML and my Ingress was successfully provisioned. Notice port 443 added to the list of ports, indicating that this Ingress accepts SSL connections.
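The provisioned Ingress can be inspected as follows (the port list is what the screenshot highlights):

```shell
# PORTS should list 80, 443 once the TLS section has been admitted,
# and ADDRESS should show the VIP allocated by NSX ALB (Avi)
kubectl get ingress onlineshop-ingress -n microservices
```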
To verify Ingress HTTPS access, from a web browser navigate to https://onlineshop.zonal.nsxbaas.homelab
Although it is in Dutch, the message should look familiar: it is the standard warning you get when accessing a secure HTTPS website whose certificate issuer your browser cannot verify. This is expected, since I have not added my Vault root CA certificate to my web browser. However, this message indicates that our Ingress has presented a certificate to the browser, so let's export this certificate and examine it further. In Chrome, click on the warning message next to the URL, inspect the certificate details and click on Export.
I then uploaded this certificate to a Linux machine and ran the below command:
openssl x509 -in onlineshop.zonal.nsxbaas.homelab.crt -text -noout
The highlighted sections show the relevant certificate information: the common name, the DNS names and our Vault issuer details.
Hope you have found this blog post useful!