
Overview
VMware Antrea and NSX extend advanced data centre networking and security capabilities to containerised workloads and offer a single pane of glass, so security admins can apply micro-segmentation policy rules to containers and standard workloads (VMs and bare-metal servers) from the same NSX UI. The VMware Antrea and NSX integration is Kubernetes platform agnostic and can be used with most available Kubernetes platforms, ranging from vanilla K8s to Tanzu, Openshift, Rancher, etc.
In previous blog posts I discussed leveraging VMware Antrea and NSX to secure native K8s workloads (HERE and HERE), however I received requests from a couple of my customers to cover the same topic for Openshift. Redhat Openshift is the Kubernetes platform offered by Redhat, available in commercial and community editions, and it is widely adopted due to its early presence in the containers world. Although I am a big fan of Tanzu due to the strong enterprise-grade ecosystem that VMware has built around it, I decided to write this two-part blog post to cover how VMware Antrea and VMware NSX can uplift Openshift workload security.
In part one of this two-part blog post I will cover deploying an OKD cluster (Openshift community edition) with VMware Antrea as the CNI on top of a vSphere cloud, while in part two I will integrate my OKD cluster with VMware NSX and use the NSX DFW (Distributed Firewall) to micro-segment and secure containers running on top of my OKD cluster.
Lab Inventory
For software versions I used the following:
- VMware ESXi 8.0U1
- vCenter server version 8.0U1a
- VMware NSX 4.1.0.2
- VMware Antrea 1.7.0
- VMware NSX ALB 22.1.3 for Openshift Cluster load balancing requirements.
- Redhat OKD 4.12
- TrueNAS 12.0-U7 as backend storage system.
- VyOS 1.3 used as lab backbone router, NTP and DHCP server.
- Ubuntu 20.04 LTS as Linux jumpbox.
- Windows Server 2019 R2 Standard as DNS server.
- Windows 10 Pro as UI jump box.
For virtual hosts and appliances sizing I used the following specs:
- 3 x virtualised ESXi hosts each with 12 vCPUs, 2x NICs and 128 GB RAM.
- vCenter server appliance with 2 vCPU and 24 GB RAM.
- NSX Manager medium appliance
Deployment Workflow
- Deploy an OKD 4.12 Cluster with Antrea Operator on vSphere Cloud
- Connect Openshift Cluster to NSX Manager
- Deploy test microservices App
Deploy an OKD 4.12 Cluster with Antrea Operator
Introduction to OKD deployment methods
Openshift is a Kubernetes platform, so it has the same construct of controller (called master) and worker nodes, and it can be installed on different clouds and platforms such as vSphere, AWS, Azure and bare metal. As mentioned earlier, I am deploying OKD, which is the community-supported edition of Redhat Openshift; it is the same as the Openshift that runs on RHEL, except for the support part. I was a bit surprised to find that almost all online references for OKD deployment on vSphere use the bare metal method, which basically means that you need to manually deploy your master and worker node VMs in vSphere and manually install the OKD binaries inside each one of them, which is a really cumbersome process. In this blog post, I am using a more dynamic and much easier method to deploy my OKD cluster to my vSphere environment, driven by the openshift-install configuration file.
Openshift uses a temporary bootstrap machine which gets spun up first and pulls all the binaries/packages needed for the master and worker nodes; it then creates the master and worker nodes accordingly, and once the OKD cluster is up and running, the bootstrap machine gets automatically deleted.
Step 1: Download Openshift OKD packages
To prepare for our deployment we need a Linux machine (in my home lab it is Fedora CoreOS, but you can use another OS if you want) on which we download and unpack the “oc” command line tool and the OKD 4.12 binaries from the OKD Github repository. I chose OKD 4.12 because this is the highest OKD version that VMware Antrea 1.7.0 currently supports. I downloaded the following tar archives to my Fedora jumpbox:
The first package (openshift-client-linux-4.12.0-0.okd-2023-04-16-041331.tar.gz) is the Openshift command line tool (Kubectl is also included) while the second one (openshift-install-linux-4.12.0-0.okd-2023-04-16-041331.tar.gz) contains the Openshift installation scripts.
Unpack both tar archives, add the contents of the first package (oc and kubectl) to your PATH, and ensure that the oc command executes successfully.
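As a minimal sketch (assuming the archive names listed above and /usr/local/bin as the install location):

tar -xzf openshift-client-linux-4.12.0-0.okd-2023-04-16-041331.tar.gz      # extracts oc and kubectl
tar -xzf openshift-install-linux-4.12.0-0.okd-2023-04-16-041331.tar.gz     # extracts the openshift-install script
sudo mv oc kubectl /usr/local/bin/
oc version --client                                                        # confirm oc is on the PATH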
The second tar archive should contain the openshift-install script along with an install directory, which holds the Openshift deployment file.
Step 2: OKD Networking Requirements
Openshift has some strict networking requirements that need to be met to ensure a smooth deployment; a summary of those requirements is listed below:
- Bootstrap, master and worker nodes all need to be connected and reachable in the same layer 2 domain (VLAN); this also applies to the load balancer VIPs that will be created later for the OKD cluster endpoints.
- DNS must be updated with A and PTR records for the bootstrap, master and worker nodes before the Openshift cluster deployment.
- DHCP with long lease expiry or static address reservations is recommended so that node addresses remain stable.
- A load balancer/HA proxy to provide VIP addresses for the Kube-API and machine configuration endpoints of the OKD master and worker nodes.
- Internet connectivity to pull installation images (or a local registry mirror for restricted environments).
DNS and DHCP preparation for OKD Cluster Deployment
Before we can deploy our OKD cluster we need to ensure that our DNS is updated with the following entries (both A and PTR records):
DNS Record | IP Address
--- | ---
api.ocp01.nsxbaas.homelab | 192.168.31.100
api-init.ocp01.nsxbaas.homelab | 192.168.31.100
*.apps.ocp01.nsxbaas.homelab | 192.168.31.101
bootstrap.ocp01.nsxbaas.homelab | 192.168.31.1
master1.ocp01.nsxbaas.homelab | 192.168.31.2
master2.ocp01.nsxbaas.homelab | 192.168.31.3
master3.ocp01.nsxbaas.homelab | 192.168.31.4
worker1.ocp01.nsxbaas.homelab | 192.168.31.5
worker2.ocp01.nsxbaas.homelab | 192.168.31.6
The bootstrap, master and worker entries are standard A records (along with their PTR records) pointing to the DHCP addresses that will be assigned to the OKD nodes. I added those entries before creating the OKD cluster (this is a requirement), and since the order of node provisioning in OKD always starts with the bootstrap machine, then the masters, followed by the workers, I reserved the addresses 192.168.31.1 – 192.168.31.10 for OKD nodes.
The remaining 3 entries (api, api-init and the *.apps wildcard) are required by OKD and will be used to access the OKD cluster endpoints hosting kube-api and node configuration (for example). These entries have the form <endpoint name exactly as shown in the above table>.<okd cluster name>.<domain-name>; in my case the OKD cluster is called ocp01 and my domain is nsxbaas.homelab.
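As a quick sanity check (assuming your jumpbox resolves against the same DNS server as the OKD nodes), you can verify the records before starting the install:

nslookup api.ocp01.nsxbaas.homelab         # should return 192.168.31.100
nslookup master1.ocp01.nsxbaas.homelab     # should return 192.168.31.2
nslookup 192.168.31.2                      # PTR lookup should return master1.ocp01.nsxbaas.homelab
nslookup test.apps.ocp01.nsxbaas.homelab   # any name under the *.apps wildcard should return 192.168.31.101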
Note: Because I am using a DHCP server in my environment and the Openshift IPI method (automated install), I could not map the MAC addresses of my VMs (simply because I did not know them) to specific DHCP addresses, and since the bootstrap machine used by Openshift is deleted after installation, some re-addressing and address shuffling across the master nodes might take place. This is not a problem because we include all master nodes in the server pools configured later in Avi. It also clarifies why the bootstrap machine IP (192.168.31.1) is still present in the screenshots of the Avi virtual services.
Load Balancer preparation for OKD Cluster Deployment
As with any Kubernetes platform, Openshift requires a layer 4 load balancer for cluster access. The load balancer's function is to provide a virtual IP address (VIP) backed by the OKD cluster master and worker nodes (depending on the access type). The OKD cluster needs to be accessible via VIPs mapped to the last 3 FQDNs above; this is also important during the bootstrap phase, since the bootstrap machine performs cluster configuration tasks by accessing those URLs.
You can use any HA proxy or load balancer for this task; in my setup I am using NSX ALB (Avi) to configure the virtual services and VIPs needed for the above listed URLs. We need to create the below virtual services and the corresponding pool members and VIPs before we start deploying the OKD cluster. I will not go through the details of creating them, otherwise I would end up with an unreadably long blog post; you can reference the VMware NSX ALB and/or Avi Networks documentation on how to create virtual services, server pools and assign VIPs.
Below are screenshots from my NSX ALB configuration required by Openshift
General Load Balancer Configuration
I added my vCenter server as a cloud in NSX ALB (Avi) and configured IPAM and DNS profiles. IPAM is used so that VIP and Service Engine IP addresses can be assigned and managed dynamically, and DNS so that I can bind my *.apps.ocp01.nsxbaas.homelab wildcard entries to the Ingress VIPs which Openshift will create as part of the cluster deployment.
api.ocp01.nsxbaas.homelab
This is the cluster API URL and is accessed on TCP port 6443; this needs to be configured as the frontend and backend port on the virtual service on your load balancer as well as the backend port on the server pool. The server pool in this case contains your bootstrap machine and master nodes; once the OKD cluster is provisioned, make sure to remove the bootstrap machine from the server pool (in my setup it is 192.168.31.4).
Below is the VIP I assigned to the above virtual service. The VIP address is 192.168.31.100; this address will be added to the OKD cluster deployment YAML as the API VIP and is allocated by Avi from the configured VIP pool as shown below:
The server pool configuration is also shown below; make sure a health probe is configured so that the VIP does not send traffic to a dead server. For the health probe I used System-TCP.
Below is a screenshot of the virtual service configuration used in my setup.
api-init.ocp01.nsxbaas.homelab
Same as the previous URL, but this one is on port 22623 (frontend and backend) and is used for machine configuration.
We have to use the same VIP address as the above VS, which is 192.168.31.100, while the virtual service will be using TCP port 22623. We also need to configure a new server pool including the master nodes, but this time listening on port 22623.
*.apps.ocp01.nsxbaas.homelab
This is a wildcard DNS record that points to the load balancer targeting the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by clients external to the cluster and from all the nodes within the cluster. We need to create two virtual services pointing to two server pools on port 80 (HTTP) and port 443 (HTTPS) respectively. For this Ingress I used VIP 192.168.31.101.
Repeat the same but for Ingress with HTTPS (port 443)
Step 3: OKD Platform Deployment
Below is the installation YAML that I modified to deploy the OKD cluster to my vSphere environment:
additionalTrustBundlePolicy: Proxyonly
apiVersion: v1
baseDomain: nsxbaas.homelab
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    vsphere:
      cpus: 4
      coresPerSocket: 2
      memoryMB: 8192
      osDisk:
        diskSizeGB: 120
  replicas: 2
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    vsphere:
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
  replicas: 3
metadata:
  creationTimestamp: null
  name: ocp01
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.31.0/24
  networkType: antrea
  serviceNetwork:
  - 172.30.0.0/16
platform:
  vsphere:
    apiVIPs:
    - 192.168.31.100
    cluster: Twix
    datacenter: Homelab
    resourcePool: /Homelab/host/Twix/Resources/OKD
    folder: /Homelab/vm/OKD
    defaultDatastore: DS01
    diskType: thin
    ingressVIPs:
    - 192.168.31.101
    network: OKD
    password: VMware1!
    username: administrator@vsphere.local
    vCenter: vc-l-02a.nsxbaas.homelab
publish: External
pullSecret: #omitted
sshKey: |
  ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG+QQFdso4r2OqnnQ25iWmZ7a3HwOLT8e6M3pvhseSck core@bootstrap.nsxbaas.homelab
Cluster Network: This is the CIDR from which Antrea will be assigning IPs to pods.
Machine Network: This is the subnet on which bootstrap, master and worker nodes are connected.
Network Type: This is the CNI to be used; make sure to change it to antrea.
Service Network: This is the CIDR from which services will be assigned IPs in the OKD cluster.
apiVIPs: This is the VIP we created earlier and assigned to api.ocp01.nsxbaas.homelab and to api-init.ocp01.nsxbaas.homelab
IngressVIPs: This is the VIP we created earlier and assigned to the wildcard DNS entry *.apps.ocp01.nsxbaas.homelab
Make sure to add your vCenter SSL certificates to the trust store of the Linux jumpbox from which openshift-install will run. You should also generate a public and private SSH key pair and add the public key to the deployment file (as shown above) so that you can SSH into the Openshift nodes once the cluster is deployed (the username is core).
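A minimal sketch of both steps, assuming a Fedora/RHEL-based jumpbox (the CA trust paths and commands differ on other distributions) and the default vCenter certificate download URL:

curl -kO https://vc-l-02a.nsxbaas.homelab/certs/download.zip       # download the vCenter CA bundle
unzip download.zip
sudo cp certs/lin/* /etc/pki/ca-trust/source/anchors/              # trust the vCenter CA certificates
sudo update-ca-trust extract
ssh-keygen -t ed25519 -N "" -f ~/.ssh/okd_id_ed25519               # illustrative key name; add the .pub contents to sshKey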
Once the file is ready, navigate to the directory where the openshift-install script is extracted and run the following command:
./openshift-install create manifests
Make sure that the installation YAML above is saved as install-config.yaml in the directory from which you run the above command (or pass --dir to openshift-install). If successful, you should see that the openshift-install script has generated the manifest YAMLs for cluster deployment.
Step 4: Download and Configure VMware Antrea Openshift Operator
Go to https://downloads.vmware.com, search for “VMware Container Networking with Antrea” and select “VMware Antrea 1.x Product Binaries”. Download the following:
- VMware Container Networking with Antrea, K8s Operator Manifests
Once downloaded, extract the tar archive; you should see the Antrea Openshift deployment manifests as below.
- In operator.yaml, update the antrea-operator image with the URI of the Antrea operator container image.
- In operator.antrea.vmware.com_v1_antreainstall_cr.yaml, change antreaImage to the URI of the Antrea container image.
Once you have modified the above, save and exit both files, then copy all the Antrea manifest files to the manifests directory which was created earlier by the openshift-install script (see the example below).
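For example, assuming the Antrea operator manifests were extracted into a directory named antrea-operator-manifests (the directory name will vary) and that you are in the openshift-install asset directory:

cp antrea-operator-manifests/*.yaml manifests/
ls manifests/ | grep -i antrea     # confirm the operator, CRD and AntreaInstall CR manifests are present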
Step 5: Finalise Openshift Cluster Deployment
Once all the Antrea manifests are copied under the main manifests directory for the OKD installation, run the following command to start the OKD cluster deployment:
./openshift-install create cluster --log-level info
The process takes somewhere between 45 minutes and 1 hour. Once completed, the bootstrap machine is deleted and you should be able to see your OKD master and worker nodes created and running in vCenter.
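If the create cluster run is interrupted, or you simply want to re-check progress, openshift-install can also wait on the individual milestones from the same asset directory:

./openshift-install wait-for bootstrap-complete --log-level info
./openshift-install wait-for install-complete --log-level info     # prints the console URL and kubeadmin password again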
From the command line, using the oc CLI tool, you can also verify node status (see below). The oc CLI tool uses the same arguments and command syntax as kubectl, and you can also use kubectl to interact with your OKD clusters.
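For example, after pointing oc at the kubeconfig that openshift-install writes under the asset directory:

export KUBECONFIG=$(pwd)/auth/kubeconfig    # generated by openshift-install in <install dir>/auth/
oc get nodes -o wide                        # all master and worker nodes should report Ready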
We can also verify the Antrea pods deployment:
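A quick check that does not assume the exact namespaces used by the operator:

oc get pods -A | grep -i antrea    # antrea-agent, antrea-controller and antrea-operator pods should be Running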
OKD also has a UI which can be accessed using the URL and login credentials returned after a successful cluster creation (see the two screenshots above). From a web browser you should also be able to log in to your newly deployed OKD cluster.
Connect Openshift Cluster to NSX Manager
VMware Antrea requires a principal identity account with Enterprise Admin rights on the NSX side in order to connect to NSX Manager. Principal identity accounts are certificate-based accounts in NSX, so we need to generate an SSL certificate and key on the Kubernetes nodes hosting the Antrea interworking pods and use those to create the enterprise admin account on the NSX Manager side.
To generate an SSL certificate on your Kubernetes controller, use the below set of commands:
openssl genrsa -out antrea-cluster-private.key 2048
openssl req -new -key antrea-cluster-private.key -out antrea-cluster.csr -subj "/C=US/ST=CA/L=Palo Alto/O=VMware/OU=Antrea Cluster/CN=antrea-cluster"
openssl x509 -req -days 3650 -sha256 -in antrea-cluster.csr -signkey antrea-cluster-private.key -out antrea-cluster.crt
We also need to generate base64-encoded versions of the private key and certificate, since these will need to be added to the bootstrap configuration when connecting to NSX Manager. You can generate the base64 output from both files using the below commands:
cat antrea-cluster-private.key | base64 -w0 > antrea-cluster-key.base64
cat antrea-cluster.crt | base64 -w0 > antrea-cluster-crt.base64
Next, navigate to the NSX UI to create a principal identity account with the Enterprise Admin role; this will be used by the interworking pods to connect our Openshift cluster to NSX Manager.
- In the NSX Manager UI, click the System tab.
- Under Settings, navigate to User Management > User Role Assignment.
- Click Add Principal Identity.
- Enter a name for the principal identity user, assign the Enterprise Admin role, and paste the contents of the antrea-cluster.crt certificate generated earlier.
Navigate back to your jumpbox and switch to the directory into which you extracted the Antrea Openshift manifests earlier; there we need to edit two files:
- operator.antrea.vmware.com_v1_antreainstall_cr.yaml
- nsx-cert.yaml
With a text editor, open operator.antrea.vmware.com_v1_antreainstall_cr.yaml and, under the BootstrapConfig section, modify the highlighted parameters as shown below, then save and exit the file.
Open nsx-cert.yaml and under the tls.crt and tls.key fields paste the contents of the base64-encoded certificate and key we generated earlier in this section, then save and exit the file.
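For orientation, the file follows the standard Kubernetes TLS Secret layout; the sketch below is illustrative only (keep the metadata exactly as shipped with the operator manifests), and the placeholders stand for the single-line base64 strings generated above:

apiVersion: v1
kind: Secret
metadata:
  name: nsx-cert        # keep the name from the shipped nsx-cert.yaml
  namespace: ...        # keep the namespace from the shipped nsx-cert.yaml
type: kubernetes.io/tls
data:
  tls.crt: <contents of antrea-cluster-crt.base64>
  tls.key: <contents of antrea-cluster-key.base64>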
You then need to apply both files to your openshift cluster:
oc apply -f operator.antrea.vmware.com_v1_antreainstall_cr.yaml -f nsx-cert.yaml
It can take up to 15 minutes for the registration job to run and for the Antrea interworking pods that connect to NSX to be pulled, but once done you should see your Openshift cluster appear in the NSX UI.
Deploying a test microservices App
For testing Ingress and eventually applying NSX DFW rules, I used the below manifest to deploy a test microservices app into a namespace called microservices.
Note: I modified this YAML to include Pod Security admission settings for Openshift. If you are deploying to another Kubernetes platform you might need to modify the security-related parameters under the Deployment and container sections.
# Copyright 2018 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ---------------------------------------------------------- # WARNING: This file is autogenerated. Do not manually edit. # ---------------------------------------------------------- # [START gke_release_kubernetes_manifests_microservices_demo] --- apiVersion: apps/v1 kind: Deployment metadata: name: emailservice spec: selector: matchLabels: app: emailservice template: metadata: labels: app: emailservice spec: serviceAccountName: default terminationGracePeriodSeconds: 5 securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: server securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/emailservice:v0.5.2 ports: - containerPort: 8080 env: - name: PORT value: "8080" - name: DISABLE_PROFILER value: "1" readinessProbe: periodSeconds: 5 exec: command: ["/bin/grpc_health_probe", "-addr=:8080"] livenessProbe: periodSeconds: 5 exec: command: ["/bin/grpc_health_probe", "-addr=:8080"] resources: requests: cpu: 100m memory: 64Mi limits: cpu: 200m memory: 128Mi --- apiVersion: v1 kind: Service metadata: name: emailservice spec: type: ClusterIP selector: app: emailservice ports: - name: grpc port: 5000 targetPort: 8080 --- apiVersion: apps/v1 kind: Deployment metadata: name: checkoutservice spec: selector: matchLabels: app: checkoutservice template: metadata: labels: app: checkoutservice spec: serviceAccountName: default securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: server securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/checkoutservice:v0.5.2 ports: - containerPort: 5050 readinessProbe: exec: command: ["/bin/grpc_health_probe", "-addr=:5050"] livenessProbe: exec: command: ["/bin/grpc_health_probe", "-addr=:5050"] env: - name: PORT value: "5050" - name: PRODUCT_CATALOG_SERVICE_ADDR value: "productcatalogservice:3550" - name: SHIPPING_SERVICE_ADDR value: "shippingservice:50051" - name: PAYMENT_SERVICE_ADDR value: "paymentservice:50051" - name: EMAIL_SERVICE_ADDR value: "emailservice:5000" - name: CURRENCY_SERVICE_ADDR value: "currencyservice:7000" - name: CART_SERVICE_ADDR value: "cartservice:7070" resources: requests: cpu: 100m memory: 64Mi limits: cpu: 200m memory: 128Mi --- apiVersion: v1 kind: Service metadata: name: checkoutservice spec: type: ClusterIP selector: app: checkoutservice ports: - name: grpc port: 5050 targetPort: 5050 --- apiVersion: apps/v1 kind: Deployment metadata: name: recommendationservice spec: selector: matchLabels: app: recommendationservice template: metadata: labels: app: recommendationservice spec: serviceAccountName: default terminationGracePeriodSeconds: 5 securityContext: fsGroup: 1000 runAsGroup: 1000 
runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: server securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/recommendationservice:v0.5.2 ports: - containerPort: 8080 readinessProbe: periodSeconds: 5 exec: command: ["/bin/grpc_health_probe", "-addr=:8080"] livenessProbe: periodSeconds: 5 exec: command: ["/bin/grpc_health_probe", "-addr=:8080"] env: - name: PORT value: "8080" - name: PRODUCT_CATALOG_SERVICE_ADDR value: "productcatalogservice:3550" - name: DISABLE_PROFILER value: "1" resources: requests: cpu: 100m memory: 220Mi limits: cpu: 200m memory: 450Mi --- apiVersion: v1 kind: Service metadata: name: recommendationservice spec: type: ClusterIP selector: app: recommendationservice ports: - name: grpc port: 8080 targetPort: 8080 --- apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: selector: matchLabels: app: frontend template: metadata: labels: app: frontend annotations: sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: serviceAccountName: default securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: server securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/frontend:v0.5.2 ports: - containerPort: 8080 readinessProbe: initialDelaySeconds: 10 httpGet: path: "/_healthz" port: 8080 httpHeaders: - name: "Cookie" value: "shop_session-id=x-readiness-probe" livenessProbe: initialDelaySeconds: 10 httpGet: path: "/_healthz" port: 8080 httpHeaders: - name: "Cookie" value: "shop_session-id=x-liveness-probe" env: - name: PORT value: "8080" - name: PRODUCT_CATALOG_SERVICE_ADDR value: "productcatalogservice:3550" - name: CURRENCY_SERVICE_ADDR value: "currencyservice:7000" - name: CART_SERVICE_ADDR value: "cartservice:7070" - name: RECOMMENDATION_SERVICE_ADDR value: "recommendationservice:8080" - name: SHIPPING_SERVICE_ADDR value: "shippingservice:50051" - name: CHECKOUT_SERVICE_ADDR value: "checkoutservice:5050" - name: AD_SERVICE_ADDR value: "adservice:9555" # # ENV_PLATFORM: One of: local, gcp, aws, azure, onprem, alibaba # # When not set, defaults to "local" unless running in GKE, otherwies auto-sets to gcp # - name: ENV_PLATFORM # value: "aws" - name: ENABLE_PROFILER value: "0" # - name: CYMBAL_BRANDING # value: "true" resources: requests: cpu: 100m memory: 64Mi limits: cpu: 200m memory: 128Mi --- apiVersion: v1 kind: Service metadata: name: frontend spec: type: ClusterIP selector: app: frontend ports: - name: http port: 80 targetPort: 8080 --- apiVersion: v1 kind: Service metadata: name: frontend-external spec: type: NodePort selector: app: frontend ports: - name: http port: 80 targetPort: 8080 --- apiVersion: apps/v1 kind: Deployment metadata: name: paymentservice spec: selector: matchLabels: app: paymentservice template: metadata: labels: app: paymentservice spec: serviceAccountName: default terminationGracePeriodSeconds: 5 securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: server securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/paymentservice:v0.5.2 ports: - containerPort: 50051 env: - name: PORT value: 
"50051" - name: DISABLE_PROFILER value: "1" readinessProbe: exec: command: ["/bin/grpc_health_probe", "-addr=:50051"] livenessProbe: exec: command: ["/bin/grpc_health_probe", "-addr=:50051"] resources: requests: cpu: 100m memory: 64Mi limits: cpu: 200m memory: 128Mi --- apiVersion: v1 kind: Service metadata: name: paymentservice spec: type: ClusterIP selector: app: paymentservice ports: - name: grpc port: 50051 targetPort: 50051 --- apiVersion: apps/v1 kind: Deployment metadata: name: productcatalogservice spec: selector: matchLabels: app: productcatalogservice template: metadata: labels: app: productcatalogservice spec: serviceAccountName: default terminationGracePeriodSeconds: 5 securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: server securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/productcatalogservice:v0.5.2 ports: - containerPort: 3550 env: - name: PORT value: "3550" - name: DISABLE_PROFILER value: "1" readinessProbe: exec: command: ["/bin/grpc_health_probe", "-addr=:3550"] livenessProbe: exec: command: ["/bin/grpc_health_probe", "-addr=:3550"] resources: requests: cpu: 100m memory: 64Mi limits: cpu: 200m memory: 128Mi --- apiVersion: v1 kind: Service metadata: name: productcatalogservice spec: type: ClusterIP selector: app: productcatalogservice ports: - name: grpc port: 3550 targetPort: 3550 --- apiVersion: apps/v1 kind: Deployment metadata: name: cartservice spec: selector: matchLabels: app: cartservice template: metadata: labels: app: cartservice spec: serviceAccountName: default terminationGracePeriodSeconds: 5 securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: server securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/cartservice:v0.5.2 ports: - containerPort: 7070 env: - name: REDIS_ADDR value: "redis-cart:6379" resources: requests: cpu: 200m memory: 64Mi limits: cpu: 300m memory: 128Mi readinessProbe: initialDelaySeconds: 15 exec: command: ["/bin/grpc_health_probe", "-addr=:7070", "-rpc-timeout=5s"] livenessProbe: initialDelaySeconds: 15 periodSeconds: 10 exec: command: ["/bin/grpc_health_probe", "-addr=:7070", "-rpc-timeout=5s"] --- apiVersion: v1 kind: Service metadata: name: cartservice spec: type: ClusterIP selector: app: cartservice ports: - name: grpc port: 7070 targetPort: 7070 --- apiVersion: apps/v1 kind: Deployment metadata: name: loadgenerator spec: selector: matchLabels: app: loadgenerator replicas: 1 template: metadata: labels: app: loadgenerator annotations: sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: serviceAccountName: default terminationGracePeriodSeconds: 5 restartPolicy: Always securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault initContainers: - command: - /bin/sh - -exc - | echo "Init container pinging frontend: ${FRONTEND_ADDR}..." 
STATUSCODE=$(wget --server-response http://${FRONTEND_ADDR} 2>&1 | awk '/^ HTTP/{print $2}') if test $STATUSCODE -ne 200; then echo "Error: Could not reach frontend - Status code: ${STATUSCODE}" exit 1 fi name: frontend-check securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: quay.io/quay/busybox env: - name: FRONTEND_ADDR value: "frontend:80" containers: - name: main securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/loadgenerator:v0.5.2 env: - name: FRONTEND_ADDR value: "frontend:80" - name: USERS value: "10" resources: requests: cpu: 300m memory: 256Mi limits: cpu: 500m memory: 512Mi --- apiVersion: apps/v1 kind: Deployment metadata: name: currencyservice spec: selector: matchLabels: app: currencyservice template: metadata: labels: app: currencyservice spec: serviceAccountName: default terminationGracePeriodSeconds: 5 securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: server securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/currencyservice:v0.5.2 ports: - name: grpc containerPort: 7000 env: - name: PORT value: "7000" - name: DISABLE_PROFILER value: "1" readinessProbe: exec: command: ["/bin/grpc_health_probe", "-addr=:7000"] livenessProbe: exec: command: ["/bin/grpc_health_probe", "-addr=:7000"] resources: requests: cpu: 100m memory: 64Mi limits: cpu: 200m memory: 128Mi --- apiVersion: v1 kind: Service metadata: name: currencyservice spec: type: ClusterIP selector: app: currencyservice ports: - name: grpc port: 7000 targetPort: 7000 --- apiVersion: apps/v1 kind: Deployment metadata: name: shippingservice spec: selector: matchLabels: app: shippingservice template: metadata: labels: app: shippingservice spec: serviceAccountName: default securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: server securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/shippingservice:v0.5.2 ports: - containerPort: 50051 env: - name: PORT value: "50051" - name: DISABLE_PROFILER value: "1" readinessProbe: periodSeconds: 5 exec: command: ["/bin/grpc_health_probe", "-addr=:50051"] livenessProbe: exec: command: ["/bin/grpc_health_probe", "-addr=:50051"] resources: requests: cpu: 100m memory: 64Mi limits: cpu: 200m memory: 128Mi --- apiVersion: v1 kind: Service metadata: name: shippingservice spec: type: ClusterIP selector: app: shippingservice ports: - name: grpc port: 50051 targetPort: 50051 --- apiVersion: apps/v1 kind: Deployment metadata: name: redis-cart spec: selector: matchLabels: app: redis-cart template: metadata: labels: app: redis-cart spec: securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: redis securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: redis:latest ports: - containerPort: 6379 readinessProbe: periodSeconds: 5 tcpSocket: port: 6379 livenessProbe: periodSeconds: 5 tcpSocket: port: 6379 volumeMounts: - mountPath: /data name: 
redis-data resources: limits: memory: 256Mi cpu: 125m requests: cpu: 70m memory: 200Mi volumes: - name: redis-data emptyDir: {} --- apiVersion: v1 kind: Service metadata: name: redis-cart spec: type: ClusterIP selector: app: redis-cart ports: - name: tcp-redis port: 6379 targetPort: 6379 --- apiVersion: apps/v1 kind: Deployment metadata: name: adservice spec: selector: matchLabels: app: adservice template: metadata: labels: app: adservice spec: serviceAccountName: default terminationGracePeriodSeconds: 5 securityContext: fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault containers: - name: server securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true image: gcr.io/google-samples/microservices-demo/adservice:v0.5.2 ports: - containerPort: 9555 env: - name: PORT value: "9555" resources: requests: cpu: 200m memory: 180Mi limits: cpu: 300m memory: 300Mi readinessProbe: initialDelaySeconds: 20 periodSeconds: 15 exec: command: ["/bin/grpc_health_probe", "-addr=:9555"] livenessProbe: initialDelaySeconds: 20 periodSeconds: 15 exec: command: ["/bin/grpc_health_probe", "-addr=:9555"] --- apiVersion: v1 kind: Service metadata: name: adservice spec: type: ClusterIP selector: app: adservice ports: - name: grpc port: 9555 targetPort: 9555 # [END gke_release_kubernetes_manifests_microservices_demo]
Create a namespace called microservices and apply the above file to it; you should then see your application deployed.
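Assuming the manifest above is saved locally as microservices-demo.yaml (any file name works):

oc new-project microservices                           # creates the microservices namespace/project
oc apply -f microservices-demo.yaml -n microservices
oc get pods -n microservices                           # all pods should eventually reach Running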
Note: if you do not see your pods deployed, i.e. the above command returns an empty namespace, then grant the privileged security context constraint (SCC) to the default service account in the microservices namespace with the following command:
oc adm policy add-scc-to-user privileged -z default -n microservices
Verify that you can see the microservices namespace and the deployed pods in the NSX UI.
This concludes part one of this blog post series. In part two I will deploy production and development applications on my Openshift cluster and use the NSX DFW to control access to the microservices application created above from both namespaces, stay tuned.
Hope you have found this blog post helpful.