Building Hyper-V Hosts: Converged Networking

Previously we looked at the networking basics for building Hyper-V hosts, specifically hosts in a failover cluster. We saw that dedicating high-speed 10 GbE adapters to some or all networks can be costly and inefficient. In this article we will tackle the concept of converged networking.

With Hyper-V on Windows Server 2012 and newer you can collapse your networks onto fewer high-speed NICs using converged networking. Traditionally you would have a team of 1 GbE NICs for each network (management, cluster, VM, etc.), requiring 6-8 physical 1 GbE NICs in the server. With converged networking you can combine all of these networks onto a pair (or more) of 10 GbE NICs while ensuring each network gets its fair share of bandwidth using QoS. Converged networking is implemented through PowerShell only, and we will walk through an example script that you can use to get your converged networking up and running.

But first let’s cover the basic concepts of building converged networks.

We take two or more high-speed adapters (10 GbE or faster) and put them into a team. Next we create a Hyper-V virtual switch and attach it to this team. The switch is configured to use weighted bandwidth QoS mode (more on that later). Finally we create the virtual network adapters the host OS uses for the networks you need (e.g., management, cluster, live migration, virtual machines).
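
As a minimal sketch of those steps (the adapter, team, and switch names here are placeholders, and the Dynamic load balancing algorithm assumes Windows Server 2012 R2 or newer):

    # 1. Team two (or more) high-speed adapters.
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "10GbE1","10GbE2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # 2. Attach a Hyper-V virtual switch to the team in weighted bandwidth mode.
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # 3. Create host (ManagementOS) virtual network adapters for each network.
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -SwitchName "ConvergedSwitch"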

So what is weighted bandwidth QoS?

Weighted bandwidth mode lets you share the total bandwidth of the physical NICs underlying a Hyper-V virtual switch among the networks you create. The total weight of all the networks plus the default (VM traffic) must equal 100. QoS is only enforced when bandwidth is under contention. This means a live migration could use nearly all the bandwidth of a 10 GbE physical adapter, and when no live migration is occurring that bandwidth is available for the other networks to use.

Using the table below as an example, we will guarantee a percentage of bandwidth for each network. Adjust the weights and the number of networks to match your environment.

Network          Weight
Management           15
Cluster               5
Live Migration       20
iSCSI A              20
iSCSI B              20
Default (VM)         20
Total               100

The Default (VM) “network” is set to 20 (referenced as DefaultFlowMinimumBandwidthWeight in PowerShell). This is the bandwidth reserved for anything that doesn’t match the other networks specified. Since we are specifying bandwidth for management, cluster, live migration, and two iSCSI networks, what is “left over” is the virtual machine traffic. This means that 20% of the bandwidth will be reserved for virtual machine traffic.
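
In PowerShell, the weights from the table translate into something like the following (the switch and adapter names follow the earlier sketch and are assumptions, not requirements):

    # Reserve 20% for traffic that doesn't match a host vNIC, i.e. the VM traffic.
    Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 20

    # Per-network weights from the table above.
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 15
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -MinimumBandwidthWeight 20
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -MinimumBandwidthWeight 20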

Now that we have our networks and weights mapped out, it’s time to get into the PowerShell script.

First Things

You should copy this script to the Hyper-V host on which you wish to configure converged networking and modify the parameters (names, IP addresses, VLANs, DNS servers, etc.) to match your environment. Note that running this script WILL disrupt network access to the server, so you will need to run it locally on the server from an out-of-band (OOB) management console (DRAC, iLO, CIMC).

Hyper-V Converged Networking PowerShell Script

Here’s a basic overview of what the script does. Based upon the parameters you specify, the script builds a network team from two pre-defined NICs and connects a Hyper-V virtual switch to it in weighted bandwidth mode. It then creates the individual virtual network adapters with their appropriate VLAN, IP address, subnet mask, weight, et cetera.
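
As an illustration of the per-network work for a single interface, here is roughly what happens for the management network (the VLAN ID, IP addressing, and DNS servers below are made-up values):

    # Tag the host vNIC with its VLAN.
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10

    # Assign the IP address, gateway, and DNS servers to the vEthernet interface.
    New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 10.0.10.11 `
        -PrefixLength 24 -DefaultGateway 10.0.10.1
    Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses 10.0.10.5,10.0.10.6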

To use the script we need to edit the parameters. Although you can specify most of the parameters when you run the script, you may find it simpler to open the script in the ISE and modify the IP address information for each of the networks to match your environment.
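
The parameter block at the top of such a script might look something like this (the names and values are illustrative, not the script’s actual parameters):

    param (
        [string]   $ManagementIP      = "10.0.10.11",
        [string]   $ManagementGateway = "10.0.10.1",
        [int]      $ManagementVlan    = 10,
        [string[]] $DnsServers        = @("10.0.10.5","10.0.10.6"),
        [string]   $LiveMigrationIP   = "10.0.30.11",
        [int]      $LiveMigrationVlan = 30
        # ...and so on for the cluster and iSCSI networks.
    )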

If you are not using iSCSI networks:

  • Adjust the weight parameter of each network to what you need.
  • Comment out (or don’t run) the last two network sections where the iSCSI interfaces are created.

The script assumes you have two 10 GbE network adapters named 10GbE1 and 10GbE2. You can adjust this based upon the number and names of your network adapters; find the line that begins with New-NetLbfoTeam, as shown below.
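
For example, with four adapters named NIC1 through NIC4, the teaming line would change to something like this (team name and teaming mode carried over from the earlier sketch):

    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic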

The script also enables jumbo frames on all the virtual network adapters it creates, but you will need to enable jumbo frames manually on the underlying physical network adapters. (Because the registry property names for jumbo frames vary across the multitude of NIC manufacturers, it is difficult to add that to the template script.) Comment out these lines if you want to skip the jumbo frame configuration.
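
If you do set it by hand, one approach is to check what your driver calls the property and then set it, assuming it exposes the common *JumboPacket keyword:

    # See what the driver calls its jumbo frame property.
    Get-NetAdapterAdvancedProperty -Name "10GbE1" | Where-Object { $_.DisplayName -like "*umbo*" }

    # If it uses the standard keyword, enable jumbo frames on both physical NICs.
    Set-NetAdapterAdvancedProperty -Name "10GbE1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
    Set-NetAdapterAdvancedProperty -Name "10GbE2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014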

Once you have made all of the necessary parameter changes and given everything a thorough second look, you’re ready to run the script.

STOP!

Make sure the script is stored locally on the server, AND you’re connected via OOB management just in case something goes wrong. OK. Take a deep breath and run the script. Within a minute or two the script should finish, and you can then validate your network connectivity. If something went wrong, review the script output and work through the troubleshooting steps below.

Troubleshooting Steps

  • Make sure the virtual network adapters have been created and have the correct IP addresses.
  • Verify the network adapters have the correct VLAN ID associated.
  • Verify the network team is healthy (lbfoadmin.exe or Get-NetLbfoTeam).
  • Start over by removing the virtual network adapters (Remove-VMNetworkAdapter), removing the virtual switch, and removing the team, as sketched below.
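
A rough check-and-reset sequence, using the names assumed in the earlier sketches:

    # Checks
    Get-NetLbfoTeam                          # team and member status should be Up
    Get-VMNetworkAdapter -ManagementOS       # are all host vNICs present?
    Get-VMNetworkAdapterVlan -ManagementOS   # do they carry the right VLAN IDs?
    Get-NetIPAddress -AddressFamily IPv4     # do the vEthernet interfaces have the right IPs?

    # Full rollback: remove the host vNICs, then the virtual switch, then the team.
    Get-VMNetworkAdapter -ManagementOS | ForEach-Object { Remove-VMNetworkAdapter -ManagementOS -Name $_.Name }
    Remove-VMSwitch -Name "ConvergedSwitch" -Force
    Remove-NetLbfoTeam -Name "ConvergedTeam" -Confirm:$false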

Once you’re satisfied that your first Hyper-V host is configured and working with converged networking, repeat this process for every node in your Hyper-V cluster, changing the IP addresses and such where appropriate. After each node has been configured with converged networking you should run the cluster validation wizard to verify network connectivity between each node.
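
The network tests of the validation wizard can also be run from PowerShell; for example (node names are placeholders):

    Test-Cluster -Node "HV01","HV02","HV03" -Include "Network"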

TIP

Document your cluster networking and save your converged networking scripts should you need to rebuild or adjust your cluster networks.

Now that you have accomplished converged networking, your next step should be to configure VMQ to ensure that the processing of high-bandwidth network traffic is spread properly across the CPU cores on your hosts. We’ll save that for another blog post.