Sharing Physical NICs with Guest VMs for iSCSI Traffic on Hyper-V

Now that Microsoft has introduced support for virtualizing many more workloads in Lync Server 2010, we'll probably begin to see more and more deployments done on a hypervisor. One challenge is providing shared storage access to a virtual machine guest operating system running on top of a hypervisor such as Hyper-V. An example would be a company that wants to provide high availability for the Lync pool back-end by running a SQL Server cluster, which requires shared storage for the quorum, MSDTC, and data volumes.

It's not uncommon for Hyper-V hosts to be configured in a failover cluster using dedicated NICs for iSCSI access, but how often does a guest VM need access to shared storage? Typically not often. Usually, if a VM needs access to the SAN, you can connect to the LUN from the host and then pass it through to the guest. With a SQL cluster, though, the two VMs should reside on separate hosts, and each needs direct access to the SAN. To accomplish this, you have to initiate iSCSI connections from within the guest VM directly to the SAN.

If you have NICs to spare, this is really easy: bind a physical NIC (pNIC) connected to the storage network to a Hyper-V virtual switch, create a new virtual NIC (vNIC) on that switch for the guest, and away you go - the guest now has iSCSI network access. In more realistic scenarios you may be limited on NIC count and have to get creative, because you can't dedicate a pNIC to VMs only.
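If you do have a pNIC to dedicate, the setup can be sketched with the Hyper-V PowerShell module (which ships with Server 2012 and later; on 2008 R2 you'd do the same through Hyper-V Manager). The switch, adapter, and VM names below are just placeholders:

```powershell
# Bind the storage-network pNIC to a new external virtual switch.
# -AllowManagementOS $false keeps this switch dedicated to VMs.
New-VMSwitch -Name "iSCSI Switch" -NetAdapterName "Ethernet 2" -AllowManagementOS $false

# Add a vNIC on that switch to the guest; the guest can now reach the SAN.
Add-VMNetworkAdapter -VMName "SQL-NODE1" -Name "iSCSI" -SwitchName "iSCSI Switch"
```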

If that's the case, the first option that comes to mind is to leverage VLAN tagging on a virtual switch that's already using a pNIC. This way you can tag LAN traffic and storage network traffic separately. The downside to this approach is that the VMs now share the same pNIC for both LAN and storage traffic (the host OS still has a dedicated NIC for iSCSI here). If LAN traffic starts maxing out the adapter's capacity, you could lose connectivity to the SAN and cause some serious problems for the cluster. This configuration looks like this:
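As a rough sketch of the VLAN-tagging approach, again using the Server 2012+ cmdlets (the names and VLAN IDs here are examples, not recommendations):

```powershell
# One external switch carries both LAN and storage traffic for VMs;
# the host keeps a management vNIC on it as well.
New-VMSwitch -Name "Converged Switch" -NetAdapterName "Ethernet 1" -AllowManagementOS $true

# Two vNICs in the guest, each tagged onto its own VLAN.
Add-VMNetworkAdapter -VMName "SQL-NODE1" -Name "LAN" -SwitchName "Converged Switch"
Set-VMNetworkAdapterVlan -VMName "SQL-NODE1" -VMNetworkAdapterName "LAN" -Access -VlanId 10

Add-VMNetworkAdapter -VMName "SQL-NODE1" -Name "iSCSI" -SwitchName "Converged Switch"
Set-VMNetworkAdapterVlan -VMName "SQL-NODE1" -VMNetworkAdapterName "iSCSI" -Access -VlanId 20
```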

Another option is to leverage the network adapters the host system already uses for iSCSI access - that is, the same adapters accessing the SAN and providing high availability for the guest VMs. That may seem like a poor idea, but I think the alternative of sharing LAN and iSCSI traffic on the same adapter is far worse. In this configuration the Hyper-V host and the guest VM both access the storage network through the same physical NIC, dedicated purely to storage traffic. A virtual NIC is created on the host, and the host's traffic also passes through the Hyper-V virtual switch in this scenario. This host-and-guest shared pNIC setup is depicted here:

To accomplish this setup, follow these steps:

  1. Shut down any VMs running on the host OS.
  2. Stop any iSCSI access from the host.
  3. Create a new Hyper-V virtual switch using the pNIC previously dedicated to iSCSI. Be sure to check the box "Allow management operating system to share this network adapter."
  4. Configure the new network adapter on the host OS with the old pNIC IP address and settings.
  5. Reconnect iSCSI targets.
  6. Edit the VM guest and add a new network adapter. Assign the network adapter to the iSCSI network virtual switch.
  7. Configure iSCSI targets from within the guest VM.

At this point, all your storage traffic is isolated to the same physical NIC.
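Steps 3 through 6 above can be sketched with the Hyper-V, NetTCPIP, and iSCSI cmdlets available in Server 2012 and later (on 2008 R2 you'd use Hyper-V Manager and the iSCSI Initiator control panel instead). The adapter names, IP addresses, and portal address are placeholders for your environment:

```powershell
# Step 3: new virtual switch on the pNIC previously dedicated to iSCSI,
# shared with the management OS (the "Allow management operating system..." checkbox).
New-VMSwitch -Name "iSCSI Switch" -NetAdapterName "Ethernet 2" -AllowManagementOS $true

# Step 4: re-apply the old pNIC's IP settings to the host's new vNIC.
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI Switch)" -IPAddress 10.0.10.11 -PrefixLength 24

# Step 5: reconnect the host's iSCSI targets.
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.100
Get-IscsiTarget | Connect-IscsiTarget

# Step 6: add an iSCSI vNIC to the guest on the new switch.
Add-VMNetworkAdapter -VMName "SQL-NODE1" -Name "iSCSI" -SwitchName "iSCSI Switch"
```

Step 7 is the same `New-IscsiTargetPortal` / `Connect-IscsiTarget` pair run from inside the guest OS.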

Warning: I cannot for the life of me find documentation from Microsoft on whether or not this is actually supported. If I were to wager a guess, it's on the unsupported side, but probably for the "we haven't tested this" reason more than the "it doesn't work" reason. Your mileage may vary, but in my deployment this appears to be working just fine. I was a little reluctant to try it, thinking the host OS might have iSCSI performance problems going through the Hyper-V virtual switch, but all seems well so far.