Categories: NSX

Migrating N-VDS ESXi Host Switch to VDS (vSphere Distributed Switch)

Overview

N-VDS (or NSX Virtual Distributed Switch) was introduced with the release of NSX-T, and its main function was to provide the NSX data plane on the host for handling NSX-managed traffic (VMs connected to NSX segments and handled by NSX policies). This meant that for every NSX-enabled host, administrators had to manage a standard VDS (from vCenter) and its uplinks in addition to the new N-VDS and its assigned uplinks, which added to the complexity of operating and troubleshooting NSX-enabled hosts.

With the release of vSphere 7 (VDS 7.0) and NSX-T 3.0, VMware introduced the ability to enable the NSX data plane on a standard VDS, provided that the NSX-enabled hosts are connected to a VDS of version 7.0 or later. This simplifies host traffic management and allows consistent configuration and management of logical segments from both the vCenter and NSX UIs.

It has been known for a while that VMware would deprecate N-VDS in favour of converged VDS (VDS 7.0 or later) for NSX-enabled ESXi hosts. This removes the need to manage different virtual switches within an NSX-enabled host and allows all physical uplinks to be handled by a single distributed switch (in this case VDS). In addition, Tanzu with NSX networking is only supported for ESXi hosts using VDS, not N-VDS.

With the release of VMware NSX 4.0, N-VDS is no longer supported for ESXi hosts (it is still supported for Edges, though), and customers who need to upgrade to NSX 4.x must migrate their ESXi hosts running N-VDS to VDS.

In this blog post I am going to show you how to migrate NSX ESXi hosts from N-VDS to VDS using the manual method (invoking API calls to trigger and finalise host migration). I chose the manual method because it is easy to integrate into scripts to perform bulk and customised N-VDS to VDS migrations.

Lab Inventory

For software versions I used the following:

  • VMware ESXi 7.0U3f
  • vCenter server version 7.0U3f
  • TrueNAS 12.0-U7 used to provision NFS datastores to ESXi hosts.
  • VyOS 1.4 used as lab backbone router and DHCP server.
  • Ubuntu 18.04 LTS as jumpbox machine.
  • Ubuntu 20.04.2 LTS as DNS and internet gateway.
  • Windows Server 2012 R2 Datacenter as management host for UI access.
  • NSX-T 3.1.3.7.

For virtual hosts and appliances sizing I used the following specs:

  • 2 x virtualised ESXi hosts each with 8 vCPUs, 3 x NICs and 64 GB RAM.
  • vCenter server appliance with 2 vCPU and 24 GB RAM.

Prerequisites

We need to ensure that we have the following components installed and running before we proceed:

  • vCenter Server 7.0 or later
  • ESXi 7.0 or later
  • Note: NSX-T segments are no longer represented as opaque networks after migration. You may need to update any scripts that manage the migrated representation of the NSX-T hosts.
  • A machine where you can invoke API calls to NSX manager cluster. I personally prefer cURL but Postman is also a good option.

Preparing your N-VDS environment for migration

Step 1: Verify the status of the hosts to be migrated

From NSX UI (System > Nodes > Host Transport Nodes) make sure that the hosts you want to migrate are stable and are not reporting any warnings or alerts:

Verify that both hosts are indeed making use of N-VDS: click on any of the hosts to be migrated (in my example I will choose esx-07.corp.local) and, from the right-hand pane, click on Switch Virtualization.

Repeat the same for the second host, esx-08.corp.local.

As you can see, both hosts have an N-VDS on which the NSX TEPs are configured, with an active uplink (vmnic2 on both hosts) assigned. After the migration, the hosts along with their uplinks and TEPs will be moved to a newly created VDS.

Note: you can only replace an N-VDS with a new VDS and cannot use an existing VDS.

Step 2: Running migration pre-checks and retrieving the migration topology

Switch to your machine on which you are going to invoke the APIs to NSX manager. In my case I am using an Ubuntu 18.04 LTS with cURL.

If you are going to invoke the API calls by means of cURL, it is a best practice to store your NSX manager username and password in the file ~/.netrc so that you do not need to type them with every API call you invoke. The structure of your ~/.netrc should look like the below:

$ cat ~/.netrc
machine nsx-l-01a
login admin
password <type your nsx manager password here>

Next, invoke the following API call (the -n switch tells cURL to use the netrc file) to run the VDS migration pre-check:

curl -k -n --request POST https://nsx-l-01a/api/v1/nvds-urt/precheck

The above output shows that no issues were returned from the migration pre-checks. If the API call returns any issues/alarms, you need to fix those first before proceeding.

Record the pre-check id (in my case 73e56fea-a7b0-418c-bb56-116a93c7c212) as you will need it later on.
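If you are scripting the workflow, the pre-check id can be captured from the response instead of copying it by hand. A sketch, assuming the response carries a precheck_id field (a sample response string stands in here for the live API call):

```shell
# Sample response standing in for the live POST .../nvds-urt/precheck call;
# the precheck_id field name is an assumption based on the output shown above.
RESPONSE='{"precheck_id": "73e56fea-a7b0-418c-bb56-116a93c7c212"}'

# Extract the id with python3 (avoids a jq dependency)
PRECHECK_ID=$(printf '%s' "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["precheck_id"])')
echo "$PRECHECK_ID"
```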

Verify the status of the pre-check above:

The pre-check will build a recommended migration topology based on the discovered N-VDS switches and host states. Use the following command to view the recommended topology:

curl -k -n -X GET https://nsx-l-01a/api/v1/nvds-urt/topology/73e56fea-a7b0-418c-bb56-116a93c7c212

The highlighted value nVDS-Overlay is the existing N-VDS that will be migrated to a new VDS called CVDS-nVDS-Overlay-datacenter-3. The name can of course be changed, as we will do in the following section.

Step 3: Building and applying your own topology

With the output of the previous command, copy and paste the contents into a json file. Below is how my topology json file (vexpert-nvds-migration.json) looks (I just modified the VDS name).

The above is the topology that I want the migration process to use, so I will invoke my cURL POST API call and point cURL to read the contents from the above file. This can be done by using the --data switch and pointing to the location of the json file (the @ at the beginning of the path and file name is mandatory):

curl -i -k -n -X POST -H "Content-Type: application/json" --data "@vexpert-nvds-migration.json" https://nsx-l-01a/api/v1/nvds-urt/topology?action=apply
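If you are scripting this step, a quick sanity check that the edited file is still valid JSON before applying it can save a failed call. A sketch with a minimal stand-in topology file; the real file from the previous step is larger and its exact field names follow the GET output above:

```shell
# Stand-in topology file; in practice point this at the file you saved
# from the GET .../nvds-urt/topology/<precheck-id> call.
TOPOLOGY_FILE=$(mktemp)
cat > "$TOPOLOGY_FILE" <<'EOF'
{"vds_name": "vExpert-nVDS-VDS"}
EOF

# Fail fast if the file is not valid JSON
python3 -m json.tool "$TOPOLOGY_FILE" > /dev/null && echo "topology file is valid JSON"

# Then apply it, e.g.:
# curl -i -k -n -X POST -H "Content-Type: application/json" \
#   --data "@$TOPOLOGY_FILE" "https://nsx-l-01a/api/v1/nvds-urt/topology?action=apply"
```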

Step 4: Starting Host Migration Workflow

To check the status of the migration, make the following API call:

curl -n -k -X GET https://nsx-l-01a/api/v1/nvds-urt/status-summary/73e56fea-a7b0-418c-bb56-116a93c7c212

When the host is ready for migration, precheck_status changes from APPLYING_TOPOLOGY to UPGRADE_READY.
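In a script, rather than re-running the call by hand, you can poll until the status flips. A sketch with a stubbed next_status function standing in for the status-summary cURL call (the status transition is simulated locally, so the snippet runs without an NSX manager):

```shell
# Stub standing in for GET .../nvds-urt/status-summary/<precheck-id>;
# it returns APPLYING_TOPOLOGY twice and then UPGRADE_READY, mimicking the
# transition described above. Replace it with the real cURL call in a live run.
POLLS=0
next_status() {
  POLLS=$((POLLS + 1))
  if [ "$POLLS" -lt 3 ]; then
    STATUS="APPLYING_TOPOLOGY"
  else
    STATUS="UPGRADE_READY"
  fi
}

next_status
while [ "$STATUS" != "UPGRADE_READY" ]; do
  # sleep 10   # poll interval for a real environment
  next_status
done
echo "precheck_status: $STATUS"
```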

Next, place your hosts in maintenance mode from vCenter; I will start with host esx-07. Once the host is in maintenance mode, invoke the following API call to start migrating it:

curl -i -n -k -X POST https://nsx-l-01a/api/v1/transport-nodes/8b5964d0-b5b2-4d6a-909f-1b7ab72d4095?action=migrate_to_vds

In the above screenshot you can see that I also invoked an API call to check the status of the migration, and the host for which we kicked off the migration is in the UPGRADE_IN_PROGRESS status. Keep checking the status with the same command until it shows SUCCESS, as shown below:

Take the host out of maintenance mode and repeat the above steps for all the other hosts to be migrated.
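This per-host sequence (enter maintenance mode, trigger migration, poll until SUCCESS, exit maintenance mode) is what makes the API method easy to script for bulk migrations. A dry-run sketch that only prints the calls it would make; the second host UUID is a placeholder, and the maintenance-mode steps (done in vCenter or via its API) are left as comments:

```shell
NSX_MANAGER="nsx-l-01a"
# Transport-node UUIDs; the first is from this post, the second is a
# placeholder. Look yours up in the NSX UI or via GET /api/v1/transport-nodes.
HOST_IDS="8b5964d0-b5b2-4d6a-909f-1b7ab72d4095 1c2d3e4f-aaaa-bbbb-cccc-ddddeeeeffff"

for ID in $HOST_IDS; do
  # 1. Put the host in maintenance mode from vCenter first
  # 2. Trigger the migration (dry run: echo instead of invoking cURL)
  echo "curl -i -n -k -X POST https://$NSX_MANAGER/api/v1/transport-nodes/$ID?action=migrate_to_vds"
  # 3. Poll status-summary until SUCCESS, then exit maintenance mode
done
```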

Step 5: Verify the migration results

From NSX UI (System > Nodes > Host Transport Nodes) verify the node has been migrated to the newly created VDS:

From vCenter under Networking you can see the newly created VDS (vExpert-nVDS-VDS)

Final word

Migrating host switches from N-VDS to VDS is quite a straightforward task, and I would advise you to consider it if your organisation is still making use of N-VDS switches on its NSX-enabled ESXi hosts.

Bassem Rezkalla

