Starting with NSX-T 3.0, NSX users can add and secure physical servers (Windows and Linux) using NSX. NIC teaming (LAG) on Windows servers works just fine; however, under NSX-T 3.2 the documentation states that this is not supported, so if you need to use LAG (teaming) on your physical Windows machines connected to NSX, it is better to check the supportability of this setup with VMware GSS.
In addition to the VMware documentation, there are a couple of blogs that cover adding and securing Windows physical workloads using NSX-T. However, in my own experience there are some important notes and lessons learned that I want to shed more light on in this blog post. Although the process of adding Windows bare metal servers to NSX-T 3.2 has become pretty straightforward, you need to properly prepare your Windows server before adding it to NSX Manager as a transport node.
Before proceeding, you might want to familiarise yourself with the physical server concepts detailed in the VMware documentation; for your convenience, I copied and pasted them here:
Physical Server Concepts:
- Application – represents the actual application running on the physical server, such as a web server or a database server.
- Application Interface – represents the network interface card (NIC) which the application uses for sending and receiving traffic. Only one application interface per physical server is supported.
- Management Interface – represents the NIC which is used to manage the physical server.
- VIF – the peer of the application interface which is attached to the logical switch. This is similar to a VM vNIC.
Lab Inventory
For software versions I used the following:
- VMware ESXi, 7.0.2, 17867351
- vCenter server version 7.0.3
- NSX-T 3.2.0.1
- TrueNAS 12.0-U7 used to provision NFS datastores to ESXi hosts.
- VyOS 1.1.8 used as lab core router.
- Two Ubuntu 20.04.2 LTS VMs as destination test servers.
- One Windows 2016 DC edition VM which is used as the physical server (deployed on a different vCenter than the one connected to NSX), with one management NIC (192.168.50.11) and another NIC, without an IP address, for connecting to the NSX segment.
- Ubuntu 20.04.2 LTS as DNS and Internet gateway.
- Ubuntu 20.04.2 LTS as Linux management host.
- Windows Server 2012 R2 Datacenter as management host for UI access.
For virtual hosts and appliances sizing I used the following specs:
- 2 x virtualised ESXi hosts, each with 8 vCPUs, 4 x NICs and 32 GB RAM.
- vCenter server appliance with 2 vCPU and 24 GB RAM.
- NSX-T Manager medium sized deployment with 6 vCPUs and 24 GB RAM (no reservations).
Supported Windows Servers:
- Windows Server 2016 (minor version 14393.2248 and later)
- Windows Server 2019
Prerequisites
- A transport zone must be configured.
- An uplink profile must be configured, or you can use the default uplink profile.
- An IP pool must be configured, or DHCP must be available in the network deployment.
- At least one physical NIC must be available on the host node.
- The following host details: hostname, management IP address, user name and password.
- A segment (VLAN or Overlay), depending upon your requirement, must be available to attach to the application interface of the physical server.
- Verify that WinRM is enabled and configured on your Windows Server (a quick pre-check is shown after this list).
- If you are adding a Windows VM as your physical server (as in my lab scenario), ensure that the VM is NOT running VMware Tools.
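Before handing the server over to NSX Manager, it is worth quickly validating the last two points. The checks below are standard built-in cmdlets (VMTools is the default service name used by the VMware Tools installer):
# Confirm WinRM is up and answering locally (throws an error if nothing is listening)
Test-WSMan -ComputerName localhost
# Confirm VMware Tools is not installed or running (this should return nothing)
Get-Service -Name VMTools -ErrorAction SilentlyContinue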
Windows Server Preparation
Step 1: Ensure that your Windows machine runs the latest available updates from Microsoft.
In my lab setup I tried to add the Windows server to NSX Manager without checking the minor release, and I was getting errors in NSX-T while it was running the NSX switch configuration. The error in the NSX UI is shown below:
Host configuration: Command [%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy ByPass -File "C:\Program Files\VMware\NSX\\win-bms-install.ps1" -operation tn-install -transport_bridge nsx-switch.0 -pnic "Ethernet1 2" -overlay true -vtep_on_static_ip true -ip 50.50.50.24 -netmask 255.255.255.0 -gateway 50.50.50.254 -transport_vlan 50 -mtu 1600] failed with return code [1],
reply:[win-bms tn-install transport_bridge: nsx-switch.0 pnic: Ethernet1 2 overlay: true vtep_on_static_ip: true Ethernet1 2 is not a teaming interface pnic: Ethernet1 2 pnic Ethernet1 2 do no need do teaming,
skip C:\ProgramData\VMware\NSX\Data\win-bms-tn-config.txt doesn't existing ============== Starting NSX TN install ============== C:\ProgramData\VMware\NSX\Data\win-bms-tn-config.txt is not existing, create a new one curr_transport_bridge: nsx-switch.0 curr_overlay: true curr_vtep_on_static_ip: true curr_ip: 50.50.50.24 curr_netmask: 255.255.255.0 curr_gateway: 50.50.50.254 curr_mtu: 1600 transport_vlan: 50 Name Value ---- ----- Ethernet1 2 overlay Status Name DisplayName ------ ---- ----------- Stopped ovsdb-server Open vSwitch Database Server Stopped ovs-vswitchd Open vSwitch Daemon Task ovsdb-server-watchdog not present Task ovs-vswitchd-watchdog not present Done stopping OVS service watchdogs [SC] ChangeServiceConfig2 SUCCESS
config ovsdb-server state to stop Stopped ovsdb-server Open vSwitch Database Server [SC] ControlService FAILED 1062: The service has not been started.
[SC] ChangeServiceConfig2 SUCCESS config ovs-vswitchd state to stop Stopped ovs-vswitchd Open vSwitch Daemon [SC] ControlService FAILED 1062: The service has not been started.
Done stopping OVS services stop nsx service: nsx-agent install path: C:\Program Files\VMware\NSX\ Task nsx-agent-watchdog not present config nsx-agent state to stop Stopped nsx-agent VMware NSX Agent Service [SC] ControlService FAILED 1062: The service has not been started. tn uninstall driver isn't installed not tn, no need to uninstall curr_pnic Ethernet1 2 is not teaming nic name: Ethernet1 2, ifIndex: 19 try to set teaming_interface Ethernet1 2 subinterfaces mtu Ethernet1 2 is not a teaming interface Ethernet1 2 is not a teaming interface curr_pnic Ethernet1 2 is not teaming get_dns_server dns server: Installing ovsim driver .... installed = False, attempt = 0 C:\Program Files\VMware\NSX\openvswitch\ovsim\win10_x64\ovsim.inf Trying to install ovsim ... ...
C:\Program Files\VMware\NSX\openvswitch\ovsim\win10_x64\ovsim.inf was copied to C:\Windows\INF\oem4.inf. ...
failed. Error code: 0x800706ef. Failed to install OVSIM driver ] errno:[]
The above error message indicates that installing the OVSIM driver on the Windows machine failed, which by itself was not clear enough to understand what was actually going wrong.
After spending a couple of hours playing around with Ansible modules and reconfiguring WinRM on the Windows machine, I decided to run Windows Update to update my Windows 2016 server; once I did that, everything went smoothly.
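If you want to confirm the build and update revision before retrying (remember, Windows Server 2016 requires minor version 14393.2248 or later), the registry exposes both values:
# CurrentBuild should read 14393 on Server 2016 and UBR (Update Build Revision) should be 2248 or higher
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' | Select-Object CurrentBuild, UBR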
Step 2: Enable Windows Remote Management (WinRM)
Log in to your Windows machine, open a PowerShell session as an administrator and run the following command:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
The above command enables TLS v1.2 for the current session so that the WinRM enablement script can be downloaded from GitHub.
Download and run the following script to enable WinRM:
Download script:
wget -OutFile ConfigureWinRMService.ps1 https://raw.githubusercontent.com/vmware/bare-metal-server-integration-with-nsxt/master/bms-ansible-nsx/windows/ConfigureWinRMService.ps1
Run script:
powershell.exe -ExecutionPolicy ByPass -File ConfigureWinRMService.ps1
You should get an output similar to the below:
Self-signed SSL certificate generated; thumbprint: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Ok.
If running the script generates an error similar to the one below:
The '<' operator is reserved for future use.
you can get in touch with me via the contact form on this page or on LinkedIn/Twitter and I will provide you with a modified script that resolves the above issue (it has to do with PowerShell misinterpreting some symbols in the original script on GitHub).
The last step is to verify that WinRM is properly listening for HTTP and HTTPS. From your PowerShell session, run the following command:
winrm e winrm/config/listener
This should result in output similar to the below (you might need to zoom in your browser to be able to read the screenshot contents):
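Essentially you are looking for two Listener entries, one for HTTP on port 5985 and one for HTTPS on port 5986 carrying the self-signed certificate thumbprint. The output has roughly this shape (addresses and thumbprint will of course differ):
Listener
    Address = *
    Transport = HTTP
    Port = 5985
    Hostname
    Enabled = true
    URLPrefix = wsman
    CertificateThumbprint
    ListeningOn = 127.0.0.1, 192.168.50.11, ::1

Listener
    Address = *
    Transport = HTTPS
    Port = 5986
    Hostname
    Enabled = true
    URLPrefix = wsman
    CertificateThumbprint = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    ListeningOn = 127.0.0.1, 192.168.50.11, ::1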

Step 3: Add Windows physical server as Transport node from NSX UI
Log in to your NSX UI and navigate to System > Fabric > Nodes, make sure that Managed by is set to Standalone Hosts, and then click on ADD HOST NODE.
You need to fill in the Windows machine's information. For connecting to the Windows machine you need to use an administrator account, or an account which is a member of the local or domain admins group, because this account will be creating the NSX switch configuration on the Windows machine. In this lab I used an account called “ansible” which I had already added to the local admins group on the Windows machine. After filling in the machine information, click on Next and ADD the SHA-256 thumbprint of the machine.
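If you want to double check the thumbprint NSX presents before accepting it, you can compute it on the Windows machine itself. The sketch below assumes that the value NSX displays is the SHA-256 fingerprint of the WinRM HTTPS listener certificate (verify the expected format in your environment):
# Find the HTTPS listener created by the ConfigureWinRMService.ps1 script
$listener = Get-ChildItem WSMan:\localhost\Listener | Where-Object { $_.Keys -like '*Transport=HTTPS*' } | Select-Object -First 1
# Grab the (SHA-1) thumbprint WinRM stores and look the certificate up in the machine store
$sha1 = (Get-ChildItem $listener.PSPath | Where-Object Name -eq 'CertificateThumbprint').Value -replace ' ', ''
$cert = Get-Item "Cert:\LocalMachine\My\$sha1"
# Compute and print the SHA-256 fingerprint of that certificate
$sha256 = [System.Security.Cryptography.SHA256]::Create().ComputeHash($cert.RawData)
([System.BitConverter]::ToString($sha256) -replace '-', ':').ToLower()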

Click on Next and proceed with host preparation. In this step we will be configuring the NSX N-VDS on the Windows machine. For that purpose I had initially created an uplink profile with one active uplink, an IP address pool for the VTEPs, and an overlay transport zone (depending on your use case you can use an Overlay or a VLAN TZ) to which the Windows host will be connected.
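As a side note, the same uplink profile can also be created through the NSX manager API instead of the UI. A minimal sketch (the profile name and NSX Manager FQDN are placeholders; the MTU and transport VLAN match the values used in this lab):
$nsx  = 'https://nsx-manager.lab.local'   # placeholder NSX Manager FQDN
$cred = Get-Credential                    # NSX admin credentials
$body = @{
    resource_type  = 'UplinkHostSwitchProfile'
    display_name   = 'win-bms-uplink-profile'
    mtu            = 1600
    transport_vlan = 50
    teaming        = @{
        policy      = 'FAILOVER_ORDER'
        active_list = @(@{ uplink_name = 'uplink-1'; uplink_type = 'PNIC' })
    }
} | ConvertTo-Json -Depth 5
# -Authentication Basic and -SkipCertificateCheck require PowerShell 7; on Windows PowerShell trust the NSX certificate instead
Invoke-RestMethod -Method Post -Uri "$nsx/api/v1/host-switch-profiles" -Credential $cred -Authentication Basic -ContentType 'application/json' -Body $body -SkipCertificateCheck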

Click on Next and wait until NSX is installed and configured on the Windows host.


In the following steps we need to create a segment and a segment port (representing the VIF on the Windows machine) and eventually connect that segment to a T1. You can proceed from the above wizard by choosing Select Segment, or choose Continue later to manually create the segment and segment port. I will choose Continue later to walk you through it step by step.
By now, you should see your Windows host added as a standalone transport node. Click on Actions and choose Manage segment.

Click on ADD SEGMENT and create a segment, connect it to a gateway (in my case a T1) and assign a gateway IP address (172.30.10.254/24).

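For completeness, the same segment can also be defined through the Policy API. A rough sketch (the Tier-1 ID and overlay transport zone UUID are placeholders for whatever exists in your environment):
$nsx  = 'https://nsx-manager.lab.local'   # placeholder NSX Manager FQDN
$cred = Get-Credential
$body = @{
    display_name        = 'BareMetal'
    connectivity_path   = '/infra/tier-1s/T1-GW'   # placeholder Tier-1 gateway ID
    transport_zone_path = '/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-uuid>'
    subnets             = @(@{ gateway_address = '172.30.10.254/24' })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Put -Uri "$nsx/policy/api/v1/infra/segments/BareMetal" -Credential $cred -Authentication Basic -ContentType 'application/json' -Body $body -SkipCertificateCheck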
After that we need to add a segment port for the Windows machine and connect it to the BareMetal segment we created. Click on ADD Segment Port, give it a name, assign static IP address parameters under ATTACH APPLICATION INTERFACE and then click on Save.

Your segment port configuration should look like the one below:

Step 4: Verify VIF & VTEP creation on the Windows machine and connectivity to other workloads over the overlay
Log in to your Windows machine, open PowerShell and run the following commands to verify that the virtual interfaces were created correctly:



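If you prefer typing commands over reading screenshots, the built-in NetAdapter/NetTCPIP cmdlets give the same picture (not necessarily the exact commands shown above):
# List all adapters; you should see the physical NICs plus the virtual interfaces created during NSX installation
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, Status, MacAddress, LinkSpeed
# Check which interface carries the VTEP address assigned from the NSX IP pool (50.50.50.0/24 in this lab)
Get-NetIPAddress -AddressFamily IPv4 | Format-Table InterfaceAlias, IPAddress, PrefixLength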
Verify that you are able to ping the T1 gateway interface (172.30.10.254) and a VM with IP address 100.100.100.10 which is connected to another segment attached to the same T1. This verifies that traffic is flowing over the overlay:

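The same checks from PowerShell, for reference:
# Ping the T1 downlink of the BareMetal segment and a VM on another segment behind the same T1
Test-NetConnection -ComputerName 172.30.10.254
Test-NetConnection -ComputerName 100.100.100.10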
Securing Windows physical servers using DFW
At this point, our physical Windows machine is just another transport node. In the following steps I am going to create a security group with the physical server as a member, then create a DFW policy with a simple rule blocking ICMP traffic from the Windows machine to the multicast VMs group, which contains the machine with the 100.100.100.10 IP address.


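I did all of this in the UI, but if you prefer automation the Policy API equivalent looks roughly like the sketch below. Treat it as an illustration only: the group IDs, the path of the multicast VMs group, the application interface IP and the built-in ICMP service path are placeholders/assumptions (you can list the predefined services with GET /policy/api/v1/infra/services):
$nsx  = 'https://nsx-manager.lab.local'   # placeholder NSX Manager FQDN
$cred = Get-Credential
# Group containing the physical server (here matched by the IP of its application interface; a tag would also work)
$group = @{
    display_name = 'Win-BMS'
    expression   = @(@{ resource_type = 'IPAddressExpression'; ip_addresses = @('<application-interface-IP>') })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Put -Uri "$nsx/policy/api/v1/infra/domains/default/groups/Win-BMS" -Credential $cred -Authentication Basic -ContentType 'application/json' -Body $group -SkipCertificateCheck
# DFW policy with a single rule dropping ICMP from the physical server to the multicast VMs group
$policy = @{
    display_name = 'Win-BMS-Policy'
    category     = 'Application'
    rules        = @(@{
        display_name       = 'Block-ICMP-to-Multicast-VMs'
        source_groups      = @('/infra/domains/default/groups/Win-BMS')
        destination_groups = @('/infra/domains/default/groups/<multicast-vms-group-id>')
        services           = @('/infra/services/ICMP-ALL')   # assumed ID of the predefined ICMP service
        action             = 'DROP'
        scope              = @('ANY')
    })
} | ConvertTo-Json -Depth 6
Invoke-RestMethod -Method Put -Uri "$nsx/policy/api/v1/infra/domains/default/security-policies/Win-BMS-Policy" -Credential $cred -Authentication Basic -ContentType 'application/json' -Body $policy -SkipCertificateCheck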
Now switch back to your Windows machine and retry pinging 100.100.100.10; the ping should fail this time.