Overview

With the release of VMware NSX 4.0, VMware announced the deprecation of the NSX standard load balancer and the intention to replace it with NSX Advanced Load Balancer (formerly Avi Networks). This is not quite a new message, since the advice to adopt NSX ALB for greenfield deployments has been out and followed for quite some time. The NSX standard load balancer is a built-in load balancing service that is configured from the NSX UI and must reside on a Tier-1 gateway. Although it offers a variety of load balancing features, it does not provide the advanced load balancing, WAF, Kubernetes Ingress and analytics functionality offered by NSX ALB, hence the adoption plans around NSX Advanced Load Balancer. NSX Advanced Load Balancer (Avi Networks) is an independent solution and does not require NSX in order to provide load balancing for your physical, virtual or containerised workloads. In this post, however, I am focusing on how you can migrate existing NSX standard load balancer instances to NSX Advanced Load Balancer.

Before I talk about the migration method and tool used in this post, let me first recap the fundamental NSX standard load balancer components:

Load Balancer: A load balancer distributes incoming service requests among multiple servers in such a way that the load distribution is transparent to users. The NSX standard load balancer is attached to a Tier-1 gateway and hosts one or more virtual servers.

Virtual Server: A virtual server is an abstraction of an application service that maps to a unique combination of IP address, port and protocol. A virtual server is associated with one or more server pools.

Server Pool: A server pool consists of a group of servers (physical, virtual or Pods). 

Application Profile: An application profile is needed if the load balancer should distribute traffic across the servers in a pool based on layer 7 information (an HTTP URL, for example).

Health Monitors: Health monitors check the availability of pool members; they can be active (HTTP, HTTPS, TCP, UDP or ICMP probes) or passive.
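To make these relationships concrete, below is a rough sketch of how the same objects hang together in the NSX Policy API. Treat it as illustrative only: the object names (WebServerPool, HTTP_VS, Prod-LB) are placeholders from my lab, and while the field names shown (pool_path, lb_service_path, application_profile_path and so on) are representative of the Policy API load balancer objects, always check the API guide for your NSX version.

# Illustrative Policy API calls (hypothetical object names, representative fields).
# Create a server pool with two members:
curl -k -u admin -X PATCH https://<nsx-manager>/policy/api/v1/infra/lb-pools/WebServerPool \
  -H 'Content-Type: application/json' \
  -d '{"members": [{"ip_address": "172.160.110.10", "port": "80"},
                   {"ip_address": "172.160.110.20", "port": "80"}]}'

# Create a virtual server that ties the VIP, port, pool and load balancer service together:
curl -k -u admin -X PATCH https://<nsx-manager>/policy/api/v1/infra/lb-virtual-servers/HTTP_VS \
  -H 'Content-Type: application/json' \
  -d '{"ip_address": "172.160.130.10", "ports": ["80"],
       "pool_path": "/infra/lb-pools/WebServerPool",
       "lb_service_path": "/infra/lb-services/Prod-LB",
       "application_profile_path": "/infra/lb-app-profiles/default-tcp-lb-app-profile"}'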

NSX ALB (Avi) Migration Tool

The Avi migration tool is an open-source tool written in Python that provides a simple two-phase migration workflow (configuration migration and cutover) to move NSX standard load balancer services to NSX Advanced Load Balancer. The tool supports specific NSX standard load balancer topologies and can be executed from a Linux/Mac machine or directly from the NSX ALB controller (from versions 22.1.2 and 21.1.6 onwards). In this blog post I will be deploying the Avi migration tool on an Ubuntu Linux machine so that it is independent of the NSX ALB controller version.

The Avi migration tool requires the following packages to be installed on the client from which you are going to launch it:

The migration tool supports most of the commonly used NSX standard load balancer topologies, in both one-arm and in-line modes of operation and with either VLAN- or overlay-backed server pools. Below is a summary of the supported NSX load balancer topologies:

  • One-Arm topologies

On the left is the NSX standard load balancer topology, while on the right is the corresponding topology after migration to NSX ALB.

 

  • In-line topologies:

The above is also supported if the web server pool is connected to a VLAN-backed segment. Please note that in my lab setup SNAT is enabled in the NSX standard load balancer configuration, which means that the load balancer rewrites the client source address before sending requests to the backend servers in the server pool. If you have a topology where SNAT is disabled, you need to make sure that the preserve client IP option is enabled in NSX ALB (Avi), and you will need to pay extra attention to routing and packet flows in that scenario.

Reference Architecture

Below is the architecture I am using in my setup. The right-hand side is the Avi components domain; it is not a prerequisite to have a dedicated Tier-1 for the Avi service engines, but it is recommended. This setup is needed because, for the migration, you will need to add your NSX instance as an NSX-T Cloud (cloud provider) in Avi and specify which logical segments will be used to connect the service engine management and data interfaces.

The left-hand side is my NSX standard load balancer setup: an in-line deployment with a service interface on a Tier-1 gateway (named Prod) providing standard L4 load balancing to backend web servers connected to an overlay segment (172.160.110.0/24).

A breakdown of every segment is as follows:

  • 172.160.110.0/24: an overlay segment to which the two web server VMs are connected (172.160.110.10 and 172.160.110.20) and to which the Avi service engine data interfaces will also be connected.
  • 172.160.140.0/24: a DHCP-enabled overlay segment used to connect the Avi service engine management interfaces. This subnet must be advertised, or source NAT must be configured, on the Tier-1 called Avi_SE_mgmt so that the service engines can reach the Avi controller residing on my external management/infra network (see the sketch after this list).
  • 172.160.130.10/32: a floating VIP used by the NSX standard load balancer; clients connect to this IP address to access the web pages hosted on the backend web servers. This IP address needs to be advertised to external networks as well.
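As referenced in the segment list above, here is a hedged sketch of one way to make the SE management subnet reachable: enabling route advertisement of connected segments on the Avi_SE_mgmt Tier-1 via the NSX Policy API. The call is representative only (merge it with your existing Tier-1 settings, and an SNAT rule is an equally valid alternative):

# Advertise connected segments (including 172.160.140.0/24) from the Avi_SE_mgmt Tier-1.
# Representative Policy API call; verify against the API guide for your NSX version.
curl -k -u admin -X PATCH https://<nsx-manager>/policy/api/v1/infra/tier-1s/Avi_SE_mgmt \
  -H 'Content-Type: application/json' \
  -d '{"route_advertisement_types": ["TIER1_CONNECTED"]}'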

Lab Inventory

For software versions I used the following:

    • VMware ESXi 8.0U1
    • vCenter server version 8.0U1
    • VMware NSX 4.1.0.2
    • NSX Advanced Load Balancer 22.1.3
    • TrueNAS 12.0-U7 used to provision NFS data stores to ESXi hosts.
    • VyOS 1.3 used as lab backbone router and DHCP server.
    • Windows Server 2019 as DNS server.
    • Windows 10 pro as management host for UI access.
    • Ubuntu Linux 18.04 LTS as Linux client.

For virtual hosts and appliances sizing I used the following specs:

    • 2 x ESXi hosts each with 12 vCPUs, 2 x NICs and 128 GB RAM.
    • vCenter server appliance with 2 vCPU and 24 GB RAM.

Deployment Workflow

  • Prepare Linux client & install Avi Migration Tools.
  • Check NSX standard Load Balancer topology.
  • Prepare NSX ALB (Avi).
  • Run & verify migration workflow.
  • Run cutover & verify access to web servers.

Prepare Linux client & install Avi Migration Tools.

Step 1: Install Python3 and Pip 

The Avi migration tool is written in Python and requires Python 3 to run. I am using Ubuntu Linux, so I used the following commands to install Python 3 and pip:

sudo apt update
sudo apt install python3-pip
pip3 install --upgrade pip
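
A quick sanity check that the interpreter and package manager are in place before moving on:

python3 --version   # the migration tool requires Python 3
pip3 --version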

Step 2: Download NSX and vSphere SDKs

From VMware Customer Connect, navigate to Products and Accounts > All Products > Networking & Security > VMware NSX, choose the version of NSX that you have (I am running 4.1.0.2), click on Drivers & Tools, then expand Automation Tools and SDK(s) and click on GO TO DOWNLOADS for the Python tools:

You then need to download both of the highlighted .whl files shown below:

 
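Once the wheel files have been downloaded, install them with pip before installing the migration tool itself. This is just a sketch assuming the wheels sit in your current directory; the actual filenames depend on the NSX version you downloaded:

# Install the downloaded NSX SDK wheels (placeholder pattern, use your actual .whl filenames).
pip3 install ./*.whl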

Step 3: Install Avi Migration Tools 

mkdir avi_migration_tool
cd avi_migration_tool/
git clone https://github.com/vmware/alb-sdk.git

The git clone command clones the above GitHub repo to your local machine. Navigate to the directory where you cloned the repo and run the following command to install the migration tool:

pip3 install avimigrationtools

If the tool installed successfully, you can navigate to <installation directory>/alb-sdk/python/avi/migrationtools/nsxt_converter and run the following command:

python3 nsxt_converter.py -h

The above output indicates that Avi migration tool has been installed successfully.

Check NSX Standard Load Balancer Topology

In this step we will verify that our NSX standard load balancer topology is supported by the migration tool. As seen below, I am using an in-line load balancer deployment with a service-interface VIP, which is a very common deployment mode and, as mentioned earlier, is supported by the migration tool.

Navigating to Networking > Load Balancing, you can see that I have a small form-factor load balancer configured with one virtual server defined.

The Virtual Servers tab shows a virtual server called HTTP_VS (VIP 172.160.130.10), configured for standard L4 load balancing and handling traffic to a backend server pool called WebServerPool.

This server pool is composed of two web servers named WebServer1 and WebServer2, with IP addresses 172.160.110.10 and 172.160.110.20 respectively.

At this stage we have confirmed our NSX standard load balancer topology and that it is supported by the Avi migration tool. The next step is to prepare NSX ALB (Avi) for the migration.

Prepare NSX ALB (Avi)

The following prerequisites need to be in place before we run the Avi migration tool to migrate NSX standard load balancer workloads:

  • NSX needs to be added as cloud provider in NSX ALB (Avi).
  • If SNAT is disabled in the NSX standard load balancer configuration (i.e. preserving the client IP is desired), then you need to enable the preserve client IP feature in the virtual service that will be created as part of the migration workflow. In this scenario the service engines must be configured with a floating IP, which is used as the default gateway for the backend servers.
  • The machine from which we will be running the migration tool must be able to reach both the NSX manager and the NSX ALB (Avi) controller (a quick reachability check is sketched after this list).
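As mentioned in the last prerequisite, a quick reachability check from the client toward both management planes can save a failed run later. The IP addresses below are the NSX manager and NSX ALB controller used in this lab:

# Expect an HTTP status code back from both endpoints (e.g. 200 or a redirect);
# a timeout here means the migration tool will not be able to connect either.
curl -k -s -o /dev/null -w "NSX manager:    %{http_code}\n" https://192.168.11.50
curl -k -s -o /dev/null -w "Avi controller: %{http_code}\n" https://192.168.11.2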

Step 1: Add vCenter and NSX credentials to NSX ALB

Log in to the NSX ALB UI, navigate to Administration > User Credentials, click on CREATE and create credentials for both vCenter and NSX. The vCenter credentials are required since NSX ALB needs to spin up service engine VMs under a vCenter instance. Below are my vCenter and NSX credentials.

Once configured you should see the credentials listed as below

Step 2: Add NSX as Cloud Provider

The next step is to add our NSX instance as a cloud provider under NSX ALB infrastructure clouds. From the NSX ALB UI, navigate to Infrastructure > Clouds and, from the CREATE drop-down menu, choose NSX-T Cloud.

Give the cloud instance a name and choose DHCP if this cloud allows DHCP IP address management; this is used for IP address assignment to the service engine management interfaces. You then need to add the NSX manager IP address and the credentials NSX ALB will use to connect to it.

Further, you need to specify the logical segment we created for the Avi service engine management interfaces and the logical segment to be used for the service engine data interfaces. The latter is the same logical segment to which my web servers are connected (refer to the reference architecture). At the bottom you will need to add a vCenter with its credentials (which we created earlier).

Click on SAVE and wait until the NSX cloud is successfully added, then navigate to Cloud Resources, choose your NSX cloud, expand Networks and add subnet 172.160.110.0/24 with a corresponding address pool. This subnet will be used for service engine data NIC address assignment. The VIP of the virtual service migrated from the NSX standard load balancer will keep the same IP address that was configured in NSX, in my case 172.160.130.10, which is treated as a floating IP by the service engines. The service engines also insert a static route on our Tier-1 gateway (called Prod) so that the Tier-1 routes any traffic destined to 172.160.130.10 to the service engines.

After defining the above subnet, we need to create an IPAM profile with this network as a usable network and attach it to our NSX cloud instance in Avi. From the Avi UI, navigate to Templates > IPAM/DNS Profiles and create an IPAM profile as shown below.

At this point, we need to navigate back to Infrastructure and edit our NSX cloud to include the above created IPAM profile.

At this stage, our NSX ALB (Avi) is ready to receive migrated NSX load balancer configuration from our Avi migration tool which we will run in the next section.

Run & Verify Migration Workflow

On the machine where you installed the Avi migration tool, define the following two environment variables:

export alb_controller_password=<STRONG_AVI_PSSWD>
export nsxt_password=<STRONG_NSX_PSSWD>

These environment variables hold the NSX and NSX ALB login passwords, which the tool uses to pull the load balancer configuration from NSX and apply it to NSX ALB.

Change to the directory containing the Avi migration tool script (for NSX this is alb-sdk/python/avi/migrationtools/nsxt_converter by default) and run the following command to start the migration workflow:

python3 nsxt_converter.py --nsxt_ip 192.168.11.50 --nsxt_user admin --alb_controller_ip 192.168.11.2 --alb_controller_user admin --option auto-upload

From the above output you can see that the migration process completed successfully and that the migration tool has written the generated Avi configuration to a JSON file called avi_config.json. Inspecting the contents of this file, you can see the configuration that the migration tool applied to NSX ALB to migrate the NSX standard load balancer config.

nsxbaas@services-linux:~/avi_migration_tool/alb-sdk/python/avi/migrationtools/nsxt_converter/output/192.168.11.50/output$ cat avi_config.json |jq
{
  "ApplicationProfile": [],
  "NetworkProfile": [
    {
      "name": "default-tcp-lb-app-profile",
      "tenant_ref": "/api/tenant/?name=admin",
      "profile": {
        "type": "PROTOCOL_TYPE_TCP_FAST_PATH",
        "tcp_fast_path_profile": {
          "session_idle_timeout": 1800
        }
      },
      "connection_mirror": false
    }
  ],
  "SSLProfile": [],
  "PKIProfile": [],
  "SSLKeyAndCertificate": [],
  "ApplicationPersistenceProfile": [],
  "HealthMonitor": [
    {
      "name": "default-icmp-lb-monitor",
      "failed_checks": 3,
      "receive_timeout": 5,
      "send_interval": 5,
      "successful_checks": 3,
      "tenant_ref": "/api/tenant/?name=admin",
      "type": "HEALTH_MONITOR_PING"
    }
  ],
  "IpAddrGroup": [],
  "VSDataScriptSet": [],
  "Pool": [
    {
      "lb_algorithm": "LB_ALGORITHM_ROUND_ROBIN",
      "name": "WebServerPool",
      "servers": [
        {
          "ip": {
            "addr": "172.160.110.20",
            "type": "V4"
          },
          "description": "member-1",
          "port": 80,
          "enabled": true
        },
        {
          "ip": {
            "addr": "172.160.110.10",
            "type": "V4"
          },
          "description": "member-1",
          "port": 80,
          "enabled": true
        }
      ],
      "tenant_ref": "/api/tenant/?name=admin",
      "conn_pool_properties": {
        "upstream_connpool_server_max_cache": 6
      },
      "min_servers_up": 1,
      "health_monitor_refs": [
        "/api/healthmonitor/?tenant=admin&name=default-icmp-lb-monitor"
      ],
      "tier1_lr": "/infra/tier-1s/Prod",
      "cloud_ref": "/api/cloud/?tenant=admin&name=nsx-l-04a"
    }
  ],
  "PoolGroup": [],
  "VirtualService": [
    {
      "name": "HTTP_VS",
      "traffic_enabled": false,
      "enabled": true,
      "cloud_ref": "/api/cloud/?tenant=admin&name=nsx-l-04a",
      "tenant_ref": "/api/tenant/?name=admin",
      "vsvip_ref": "/api/vsvip/?tenant=admin&name=HTTP_VS-vsvip&cloud=nsx-l-04a",
      "services": [
        {
          "port": 80,
          "enable_ssl": false
        }
      ],
      "network_profile_ref": "/api/networkprofile/?tenant=admin&name=default-tcp-lb-app-profile",
      "application_profile_ref": "/api/applicationprofile/?tenant=admin&name=System-L4-Application",
      "se_group_ref": "/api/serviceenginegroup/?tenant=admin&name=Default-Group&cloud=nsx-l-04a",
      "pool_ref": "/api/pool/?tenant=admin&name=WebServerPool&cloud=nsx-l-04a"
    }
  ],
  "VsVip": [
    {
      "name": "HTTP_VS-vsvip",
      "tier1_lr": "/infra/tier-1s/Prod",
      "cloud_ref": "/api/cloud/?tenant=admin&name=nsx-l-04a",
      "tenant_ref": "/api/tenant/?name=admin",
      "vip": [
        {
          "vip_id": "1",
          "ip_address": {
            "addr": "172.160.130.10",
            "type": "V4"
          }
        }
      ]
    }
  ],
  "HTTPPolicySet": [],
  "ServiceEngineGroup": [],
  "NetworkService": [],
  "NetworkSecurityPolicy": []
}
nsxbaas@services-linux:~/avi_migration_tool/alb-sdk/python/avi/migrationtools/nsxt_converter/output/192.168.11.50/output$
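
If you just want a quick summary of which objects were migrated instead of reading the whole file, a small jq query over avi_config.json (purely a convenience on the client, not part of the migration tool) lists the object names:

jq '{virtual_services: [.VirtualService[].name], pools: [.Pool[].name], health_monitors: [.HealthMonitor[].name]}' avi_config.json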

Once you execute the migration workflow, NSX ALB (Avi) starts deploying service engines as part of the default service engine group under the NSX cloud that we defined earlier.

Wait until the deployment has finished and then move on to the next step.

Run Cutover Workflow & Verify Access to Web Servers

Before you start the cutover phase in the migration tool, make sure that your NSX standard load balancer configuration has been properly migrated to NSX ALB (Avi). Navigate to the NSX ALB UI and, under Applications > Dashboards, check that the virtual service corresponding to the migrated NSX virtual server object is green, which means the virtual service in Avi has been successfully configured and can reach the backend pool members (172.160.110.10 and 172.160.110.20).

You should also be able to see the migrated WebServerPool under NSX ALB Pools

At this stage, switch to the machine on which the Avi migration tool is running and start the cutover phase. This is when the migration tool disables the virtual server instance in the NSX standard load balancer and switches traffic over to the virtual service in NSX ALB.
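The cutover is triggered from the same converter script. The sketch below is only indicative: I am assuming a --cutover option that takes the virtual service name(s), so confirm the exact flag with python3 nsxt_converter.py -h for your tool version before running it.

# Assumed flag: --cutover with the virtual service name(s) to switch over; verify with -h first.
python3 nsxt_converter.py --nsxt_ip 192.168.11.50 --nsxt_user admin \
  --alb_controller_ip 192.168.11.2 --alb_controller_user admin --cutover HTTP_VS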

The virtual server in NSX should now be marked as disabled.

Now, to verify access to the backend web servers, open two web browsers and open an HTTP session to the VIP address 172.160.130.10; you should see a response similar to the one below.

 
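Alternatively, from a Linux client, a few repeated requests to the VIP show the round-robin behaviour from the command line:

# Each request should be answered by one of the backend web servers,
# alternating between WebServer1 and WebServer2.
for i in 1 2 3 4; do curl -s http://172.160.130.10 | head -1; done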

Hope you have found this post useful.