Building Hyper-V Hosts: Networking Basics

In this second article of the Building Hyper-V Hosts series, we focus on network design for Hyper-V servers, specifically for failover clusters.

Before determining the number of required NICs, we need to decide how many networks we plan to have. The number of networks you need depends on your existing network infrastructure as well as your existing SAN technology. Standalone Hyper-V hosts require fewer networks, since they do not have to carry cluster traffic or dedicated live migration traffic. A Hyper-V failover cluster, however, will require a greater number of networks.

Here’s a list of possible networks you may wish to consider:

  • Management
  • Cluster communication
  • Live Migration (Hyper-V's equivalent of VMware vMotion)
  • Virtual Machine Network(s)
  • Storage Networking Fabric A
  • Storage Networking Fabric B
  • Backup

If you plan to use an iSCSI or SMB SAN instead of Fibre Channel (FC), you'll need to account for those storage networks. Many iSCSI SAN vendors require separate VLANs and hardware for each storage fabric path. Some environments may also require you to separate backup traffic from management traffic. Depending on your needs, you may choose to share a network for cluster communication and live migration.

Consider which network(s) should be used for cluster communication. Best practice is to exclude cluster communication from storage networks (e.g. iSCSI).
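As a minimal sketch, the FailoverClusters PowerShell module can show which role each cluster network currently has and keep cluster communication off the storage fabrics; the network names below are placeholders for your environment:

  # List the cluster networks and their current roles
  # Role 0 = none (cluster use disabled), 1 = cluster only, 3 = cluster and client
  Get-ClusterNetwork | Format-Table Name, Role, Address

  # Keep cluster communication off the iSCSI fabrics
  (Get-ClusterNetwork "Storage Fabric A").Role = 0
  (Get-ClusterNetwork "Storage Fabric B").Role = 0

  # Allow cluster and client traffic on the management network
  (Get-ClusterNetwork "Management").Role = 3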


Specify a cluster network for Cluster Shared Volume (CSV) traffic. Every Cluster Shared Volume has an owner node in your failover cluster. While every node can access a volume simultaneously, metadata changes (adding or removing files, expanding a VHD, etc.) must be sent over the network to that volume's owner node. The cluster network with the lowest metric is used for CSV traffic.

To route this traffic over a specific network, set that network's metric to the lowest value in the cluster.
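A minimal sketch of checking and overriding the metrics with the same PowerShell module, assuming a dedicated network named "Cluster" (run from any node in the cluster):

  # Show the assigned metrics; the network with the lowest metric carries CSV traffic
  Get-ClusterNetwork | Format-Table Name, Metric, AutoMetric

  # Force the dedicated cluster/CSV network to the lowest metric
  # (setting Metric by hand also switches AutoMetric to False for that network)
  (Get-ClusterNetwork "Cluster").Metric = 900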

Once you have determined how many networks you need, you can determine the number and type of network interfaces you require.

A traditional, simple design is to dedicate a pair of 1 GbE NICs to each network (a sketch of the teaming commands follows the list):

  • 1 Team for Management
  • 1 Team for Cluster and Live Migration
  • 1 Team for Virtual Machine traffic
  • 2 separate NICs for SMB/iSCSI storage (if applicable)
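Assuming the built-in NIC teaming (LBFO) in Windows Server 2012 R2, the three teams above could be created roughly as follows; the team names, adapter names, and teaming mode are placeholders and should match your hardware and switch configuration:

  # Create the three 1 GbE teams (adapter names are examples from Get-NetAdapter)
  New-NetLbfoTeam -Name "Team-Mgmt" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
  New-NetLbfoTeam -Name "Team-Cluster-LM" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
  New-NetLbfoTeam -Name "Team-VM" -TeamMembers "NIC5","NIC6" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

  # Leave the iSCSI/SMB NICs un-teamed and rely on MPIO or SMB Multichannel instead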

This design is great for a smaller cluster since it doesn't require 10 GbE infrastructure. Unfortunately, it requires a server with eight 1 GbE interfaces as well as eight switch ports, multiplied by the number of hosts in your cluster. You can see how this model could exhaust your switch ports pretty quickly. In addition, you'll find that it takes a long time to drain a host of large VMs when you can only transfer at 1 Gbps.

You could upgrade this model by moving your storage NICs to 10 GbE, or all of your NICs to 10 GbE if you can. This model still requires a large number of NICs and switch ports, which are even more costly at 10 GbE. On top of that, you could have expensive 10 GbE ports tied up for traffic types that are only used sparingly (e.g. cluster communication and live migration).

If only there were a way to take advantage of these high-speed network interfaces and share their bandwidth across the networks while giving each traffic type its own priority. Good news! Converged networking lets us do exactly that, and we'll discuss the implementation in the next post.

As always when building and validating Hyper-V clusters, it is helpful to step through the best practices checklist for each host.
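For example, the cluster validation wizard can be limited to just the networking tests from PowerShell; a minimal sketch with placeholder host names:

  # Run only the network validation tests against the intended cluster nodes
  Test-Cluster -Node "HV-HOST1","HV-HOST2" -Include "Network"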