Previously we looked at the networking basics for building Hyper-V hosts, specifically hosts in a failover cluster. We saw that having dedicated high speed 10 GbE adapters for some or all networks could be costly and inefficient. In this article we will tackle the concept of converged networking.
With Hyper-V on Windows Server 2012 and newer you can collapse your networks onto fewer high speed NICs using converged networking. Traditionally you would have a team of 1 GbE NICs for each network (management, cluster, VM, etc.), requiring six to eight physical 1 GbE NICs on the server. With converged networking you can combine all of the networks across a pair (or more) of 10 GbE NICs while ensuring each network gets its fair share of bandwidth using QoS. Converged networking is implemented through PowerShell only, and we will walk through an example script that you can use to get your converged networking up and running.
But first let’s cover the basic concepts of building converged networks.
We take two or more high speed adapters (10 GbE or faster) and put them into a team. Next we create a Hyper-V virtual switch and attach it to this team. The switch is configured to use weighted bandwidth QoS mode (more on that later). Finally we create the virtual network adapters that the host OS uses for the networks you need (e.g. management, cluster, live migration, virtual machines).
So what is weighted bandwidth QoS?
Weighted bandwidth mode allows you to share the total bandwidth of the underlying physical NICs of a Hyper-V virtual switch between the networks you create. The total weight of all the networks plus the default (VM traffic) must equal 100. QoS is only enforced when bandwidth is in contention. This means that a live migration could use nearly all the bandwidth of a 10 GbE physical adapter when nothing else needs it, and that bandwidth becomes available to the other networks when no live migrations are occurring.
Using the below table as an example, we will guarantee a percentage of bandwidth for each network. Adjust the weights and number of networks to match your environment.
The Default(VM) “network” is set to 20 (referenced as DefaultFlowMinimumBandwidthWeight in PowerShell). This is the reserved bandwidth for anything that doesn’t match the other networks specified. Since we are specifying bandwidth for management, cluster, live migration, and two iSCSI networks, what is “left over” is the virtual machine traffic. This means that 20% of the bandwidth will be reserved for virtual machine traffic.
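The arithmetic is worth double-checking before you run anything. Here is a quick sketch that sums the example weights; the values mirror the example table, not necessarily your environment:

```powershell
# Example weights from the table above; adjust to match your environment.
$weights = [ordered]@{
    'Default (VM)'   = 20   # DefaultFlowMinimumBandwidthWeight
    'Management'     = 15
    'Cluster'        = 5
    'Live Migration' = 20
    'iSCSI A'        = 20
    'iSCSI B'        = 20
}
# The total must come out to 100.
($weights.Values | Measure-Object -Sum).Sum
```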
Now that we have our networks and weights mapped out, it’s time to get into the PowerShell script.
You should copy this script to the Hyper-V host on which you wish to configure converged networking, and modify the parameters (names, IP addresses, VLANs, DNS servers, etc.) to match your environment. Note that running this script WILL disrupt your network access to the server, so you will need to run it locally on the server from an out-of-band (OOB) management console (DRAC, iLO, or CIMC).
Created by: Jason Wasser @wasserja
Modified: 3/4/2016 04:10:28 PM
This script can be used as a template for deploying Hyper-V Converged networking.
With Windows Server 2012 and above you can combine your 10 GbE (or 1 GbE) NICs
and utilize the bandwidth of all of your network adapters for management, cluster,
live migration, and even iSCSI traffic.
You can further customize the script to add/remove networks. You could add a backup
network or remove iSCSI networks. Be sure to adjust the bandwidth weights to total 100.
Due to the network changes made in this script it is best to copy this script to the server
and run it from the server.
param(
    $TeamName = 'ConvergedNetTeam',
    $SwitchName = 'ConvergedNetSwitch',
    $DefaultFlowMinimumBandwidthWeight = 20,
    $ManagementNetAdapterName = 'Management',
    $ManagementNetAdapterVlan = 100,
    $ManagementNetAdapterIp = '10.100.100.50',
    $ManagementNetAdapterPrefix = 24,
    $ManagementNetAdapterGateway = '10.100.100.1',
    $ManagementNetAdapterDnsServers = @('10.100.100.11','10.100.100.12'),
    $ManagementNetAdapterWeight = 15,
    $ClusterNetAdapterName = 'Cluster',
    $ClusterNetAdapterVlan = 101,
    $ClusterNetAdapterIp = '10.100.101.50',
    $ClusterNetAdapterPrefix = 24,
    $ClusterNetAdapterWeight = 5,
    $LiveMigrationNetAdapterName = 'LiveMigration',
    $LiveMigrationNetAdapterVlan = 102,
    $LiveMigrationNetAdapterIp = '10.100.102.50',
    $LiveMigrationNetAdapterPrefix = 24,
    $LiveMigrationNetAdapterWeight = 20,
    $iScsiANetAdapterName = 'iSCSI A',
    $iScsiANetAdapterVlan = 103,
    $iScsiANetAdapterIp = '10.100.103.50',
    $iScsiANetAdapterPrefix = 24,
    $iScsiANetAdapterWeight = 20,
    $iScsiBNetAdapterName = 'iSCSI B',
    $iScsiBNetAdapterVlan = 104,
    $iScsiBNetAdapterIp = '10.100.104.50',
    $iScsiBNetAdapterPrefix = 24,
    $iScsiBNetAdapterWeight = 20
)
# Change these IP addresses and netmask to parameters.
### Rename Physical Adapters First
# Rename-NetAdapter "Slot 2" -NewName VM1
# Rename-NetAdapter "Slot 2 2" -NewName VM2
# Rename-NetAdapter "LAN 1" -NewName "10GBE1"
# Rename-NetAdapter "LAN 2" -NewName "10GBE2"
### Disable Unused NIC Adapters
# Disable-NetAdapter -Name "LAN 4"
# You may need to remove any existing switches and VM network adapters first.
### Delete existing VMNetworkAdapters from the management OS
# Get-VMNetworkAdapter -ManagementOS | Remove-VMNetworkAdapter
### Delete existing VM Switches
# Get-VMSwitch | Remove-VMSwitch
### Remove Existing Team
# Get-NetLbfoTeam | Remove-NetLbfoTeam
### Create the team from two 10 GbE adapters. Adjust the adapter names ("10GBE1", "10GBE2") to match your environment.
New-NetLbfoTeam -Name $TeamName -TeamMembers "10GBE1","10GBE2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
############ Create Hyper-V Switches for converged networking using weight based QoS. ##############
# Creating Hyper-V Switch for Management, cluster, Live migration, iSCSI, and VM converged networks.
# The switch uses weight bandwidth mode for QoS
# Bandwidth weight should total 100
# Management 15
# Cluster 5
# Live Migration 20
# iSCSI A 20
# iSCSI B 20
# Default(VM) 20
# Assuming all networks will go through this team.
New-VMSwitch $SwitchName -NetAdapterName $TeamName -AllowManagementOS 0 -MinimumBandwidthMode Weight -Notes "Management, cluster, live migration, iSCSI, and VM networks."
# Set default QoS bucket which will be used by VM traffic
Set-VMSwitch $SwitchName -DefaultFlowMinimumBandwidthWeight $DefaultFlowMinimumBandwidthWeight
# Create and configure Management network
Add-VMNetworkAdapter -ManagementOS -Name $ManagementNetAdapterName -SwitchName $SwitchName
Set-VMNetworkAdapter -ManagementOS -Name $ManagementNetAdapterName -MinimumBandwidthWeight $ManagementNetAdapterWeight
New-NetIPAddress -InterfaceAlias "vEthernet ($ManagementNetAdapterName)" -IPAddress $ManagementNetAdapterIp -PrefixLength $ManagementNetAdapterPrefix -DefaultGateway $ManagementNetAdapterGateway
Set-DnsClientServerAddress -InterfaceAlias "vEthernet ($ManagementNetAdapterName)" -ServerAddresses $ManagementNetAdapterDnsServers
Get-VMNetworkAdapter -ManagementOS $ManagementNetAdapterName | Set-VMNetworkAdapterVlan -Access -VlanId $ManagementNetAdapterVlan
Get-NetAdapter -Name "vEthernet ($ManagementNetAdapterName)" | Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
# Create and configure the Cluster network
Add-VMNetworkAdapter -ManagementOS -Name $ClusterNetAdapterName -SwitchName $SwitchName
Set-VMNetworkAdapter -ManagementOS -Name $ClusterNetAdapterName -MinimumBandwidthWeight $ClusterNetAdapterWeight
New-NetIPAddress -InterfaceAlias "vEthernet ($ClusterNetAdapterName)" -IPAddress $ClusterNetAdapterIp -PrefixLength $ClusterNetAdapterPrefix
Get-VMNetworkAdapter -ManagementOS $ClusterNetAdapterName | Set-VMNetworkAdapterVlan -Access -VlanId $ClusterNetAdapterVlan
Get-NetAdapter -Name "vEthernet ($ClusterNetAdapterName)" | Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
# Create and configure the Live Migration network
Add-VMNetworkAdapter -ManagementOS -Name $LiveMigrationNetAdapterName -SwitchName $SwitchName
Set-VMNetworkAdapter -ManagementOS -Name $LiveMigrationNetAdapterName -MinimumBandwidthWeight $LiveMigrationNetAdapterWeight
New-NetIPAddress -InterfaceAlias "vEthernet ($LiveMigrationNetAdapterName)" -IPAddress $LiveMigrationNetAdapterIp -PrefixLength $LiveMigrationNetAdapterPrefix
Get-VMNetworkAdapter -ManagementOS $LiveMigrationNetAdapterName | Set-VMNetworkAdapterVlan -Access -VlanId $LiveMigrationNetAdapterVlan
Get-NetAdapter -Name "vEthernet ($LiveMigrationNetAdapterName)" | Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
# Create and configure iSCSI A network.
Add-VMNetworkAdapter -ManagementOS -Name $iScsiANetAdapterName -SwitchName $SwitchName
Set-VMNetworkAdapter -ManagementOS -Name $iScsiANetAdapterName -MinimumBandwidthWeight $iScsiANetAdapterWeight
New-NetIPAddress -InterfaceAlias "vEthernet ($iScsiANetAdapterName)" -IPAddress $iScsiANetAdapterIp -PrefixLength $iScsiANetAdapterPrefix
Get-VMNetworkAdapter -ManagementOS $iScsiANetAdapterName | Set-VMNetworkAdapterVlan -Access -VlanId $iScsiANetAdapterVlan
Get-NetAdapter -Name "vEthernet ($iScsiANetAdapterName)" | Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
# Create and configure the iSCSI B network.
Add-VMNetworkAdapter -ManagementOS -Name $iScsiBNetAdapterName -SwitchName $SwitchName
Set-VMNetworkAdapter -ManagementOS -Name $iScsiBNetAdapterName -MinimumBandwidthWeight $iScsiBNetAdapterWeight
New-NetIPAddress -InterfaceAlias "vEthernet ($iScsiBNetAdapterName)" -IPAddress $iScsiBNetAdapterIp -PrefixLength $iScsiBNetAdapterPrefix
Get-VMNetworkAdapter -ManagementOS $iScsiBNetAdapterName | Set-VMNetworkAdapterVlan -Access -VlanId $iScsiBNetAdapterVlan
Get-NetAdapter -Name "vEthernet ($iScsiBNetAdapterName)" | Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
# Verify the bandwidth percentage of the newly created NICs
Get-VMNetworkAdapter -ManagementOS | Select-Object -Property Name,BandwidthPercentage
Here’s a basic overview of what the script does. Based upon the parameters you specify, the script builds a network team from two predefined NICs and attaches a Hyper-V virtual switch to it in weighted bandwidth mode. It then creates the individual virtual network adapters with their appropriate VLAN, IP address, subnet prefix, and bandwidth weight.
To use the script we need to edit the parameters. Although you can specify most of the parameters when you run the script, you may find it simpler to open the script in the ISE and modify the IP address information for each of the networks to match your environment.
If you are not using iSCSI networks:
- Adjust the weight parameter of each remaining network so the weights still total 100.
- Comment out (or don’t run) the last two network sections where the iSCSI interfaces are created.
The script assumes you have two 10 GbE network adapters named 10GBE1 and 10GBE2. You can adjust this based upon the number and names of your network adapters; find the line that begins with New-NetLbfoTeam.
The script also enables jumbo frames on all virtual network adapters that are created, but you will need to enable jumbo frames manually on the underlying physical network adapters. (Due to the varying names of the jumbo frame registry properties across the multitude of NIC manufacturers, it is difficult to add that to the template script.) Comment out these lines if you wish to skip the jumbo frame configuration.
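As a starting point, something like the following often works for drivers that expose a "Jumbo Packet" advanced property, but verify the property name and legal values with Get-NetAdapterAdvancedProperty on your hardware first (the adapter names assume the script defaults):

```powershell
# Hypothetical sketch: enable jumbo frames on the physical team members.
# The advanced property display name and values vary by NIC vendor and driver.
Get-NetAdapter -Name "10GBE1","10GBE2" |
    Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" |
    Set-NetAdapterAdvancedProperty -RegistryValue 9014
```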
Once you have made all of the necessary parameter changes and given everything a thorough second look, you’re ready to run the script.
Make sure the script is stored locally on the server AND that you’re connected via OOB management just in case something goes wrong. OK. Take a deep breath and run the script. Within a minute or two the script should finish, and you can validate your network connectivity. If something went wrong, review the script output, then:
- Make sure the virtual network adapters have been created and have the correct IP addresses.
- Verify the network adapters have the correct VLAN ID associated.
- Verify the network team is healthy (lbfoadmin.exe).
- Start over again by removing the virtual network adapters (Remove-VMNetworkAdapter), removing the virtual switch, and removing the team.
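A few commands can help with each of those checks. This is a sketch; the interface name assumes the script’s default 'Management' network and the gateway address is the example value from the parameters:

```powershell
# Team health, VLAN assignments, and IP configuration at a glance.
Get-NetLbfoTeam | Select-Object Name, Status, TeamingMode, LoadBalancingAlgorithm
Get-VMNetworkAdapterVlan -ManagementOS
Get-NetIPAddress -InterfaceAlias "vEthernet (Management)" |
    Select-Object IPAddress, PrefixLength
# Basic reachability test against the management gateway.
Test-NetConnection 10.100.100.1
```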
Once you’re satisfied that your first Hyper-V host is configured and working with converged networking, repeat this process for every node in your Hyper-V cluster, changing the IP addresses and such where appropriate. After each node has been configured with converged networking you should run the cluster validation wizard to verify network connectivity between each node.
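The validation wizard can also be driven from PowerShell. A sketch limited to the network tests, with hypothetical node names:

```powershell
# Run only the network category of cluster validation against every node.
Test-Cluster -Node HV01,HV02,HV03 -Include "Network"
```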
Document your cluster networking and save your converged networking scripts should you need to rebuild or adjust your cluster networks.
Now that you have accomplished converged networking, your next step should be to configure VMQ to ensure the high IO networking bandwidth is spread out properly amongst the CPU cores on your hosts. We’ll save that for another blog post.