Lync MX and Skype Crash on Windows 8.1

I got around to upgrading my main laptop to Windows 8.1 this week (finally!) and found I was unable to open the Metro versions of both Lync and Skype. They would start to open, show me the splash screen, and then exit. After a bit of digging, it appears the crash is caused by the display adapter driver. It seems Intel still has some issues with their HD 4000 Graphics driver on 8.1, and there doesn't seem to be a fix.

In case it saves someone else the time, I've already tried all of the following revisions, the latest of which was released less than two weeks ago:

  • 10.18.10.3379
  • 10.18.10.3345
  • 10.18.10.3316
  • 9.18.10.3190 (Windows 8 version)
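If you want to check which display driver revision you're currently running before deciding whether to upgrade, here's a rough sketch that shells out to WMIC from Python; Device Manager or the Intel control panel will show you the same information.

    # Query the installed display adapter driver version via WMIC so you can
    # compare it against the revisions listed above.
    import subprocess

    def display_driver_versions() -> str:
        """Return the name and driver version of each video controller."""
        result = subprocess.run(
            ["wmic", "path", "win32_videocontroller", "get", "name,driverversion"],
            capture_output=True,
            text=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(display_driver_versions())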

So you may want to hold off on that upgrade if you have an Intel HD 4000 chip and rely heavily on Lync MX or Skype.

Broadcom NIC Teaming and Hyper-V on Server 2008 R2

The short of it: if you're trying to use NIC teaming for the Hyper-V virtual switch on Server 2008 R2, save yourself the headache, pony up a few extra dollars, and buy Intel NICs. The Broadcom driver has a bug that prevents teaming from working correctly when the team is used for the Hyper-V virtual switch. Per the Broadcom driver release notes this is supposed to be a supported configuration now, but it does not work correctly. There are two scenarios so far where I've been able to reproduce the problem:

  • VM guest has a static MAC assigned and is running on a VM host. Shut down the VM, assign it a dynamic MAC and start it again on the same host. You’ll find it has no network connectivity.

  • VM guest is running on VM Host A with a dynamic MAC. Live Migrate the VM guest to Host B. It has network connectivity at this point, but if you restart the VM on the opposite host you’ll find it receives a new MAC and no longer has network connectivity.

Take a look at this diagram (showing only the NICs relevant to Hyper-V) and you'll see the setup that causes the issue. We have two Broadcom NICs on Dell R710s, each connected to a different physical switch to protect against a port, NIC, or switch failure. They are teamed in an Active/Passive configuration; no load balancing or link aggregation is going on here. The virtual adapter composed of the two team members is then passed through as a virtual switch to Hyper-V, and it is not shared with the host operating system. The host itself has a team for its own management and for the Live Migration network, both of which, I'll point out, work flawlessly - the issue here is purely related to Broadcom's teaming through a Hyper-V virtual switch.

[Diagram: two teamed Broadcom NICs per host, each connected to a different physical switch, with the team presented to the Hyper-V virtual switch]

Say I have a VM running on Host A where the NIC team has a hypothetical MAC called MAC A. When it boots up, it receives a dynamic MAC address we'll call MAC C from Host A’s pool. If you try to ping the VM guest’s IP 1.1.1.1 and then look at your ARP table you’ll see something like:

Internet Address      Physical Address      Type
1.1.1.1               MAC A                 Dynamic

This is because the NIC team is responsible for answering requests on behalf of the VM. When the NIC team receives traffic for the VM's IP it accepts it and then passes it along to the Hyper-V virtual switch. If you take a packet trace off the NIC you'll see the team has rewritten the Layer 2 destination address to MAC C, the dynamic MAC the VM received when it booted. This is how the teaming is supposed to work.
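If you want to reproduce that observation yourself, here's a rough sketch that pings the guest and then pulls its entry out of the local ARP cache; 1.1.1.1 is just the example IP from this post, so substitute your VM's address.

    # Ping the guest to populate the ARP cache, then show which physical
    # address is answering for it. On a healthy setup this should be the
    # active team member's MAC, not the guest's own dynamic MAC.
    import subprocess

    GUEST_IP = "1.1.1.1"  # example IP from the post; use your VM's address

    def arp_entry_for(ip: str) -> str:
        """Return the arp -a line for the given address, if any."""
        subprocess.run(["ping", "-n", "1", ip], capture_output=True)
        output = subprocess.run(["arp", "-a", ip], capture_output=True, text=True).stdout
        for line in output.splitlines():
            if line.strip().startswith(ip):
                return line.strip()
        return f"no ARP entry found for {ip}"

    if __name__ == "__main__":
        print(arp_entry_for(GUEST_IP))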

Now say I migrate the VM to Host B (where the NIC team has a MAC called MAC B) via Live or Quick migration. The VM retains connectivity, and if you take a look at your ARP table you'll now see something like:

Internet Address      Physical Address      Type
1.1.1.1               MAC B                 Dynamic

Yup, the MAC for Host B’s NIC team is now answering requests for the VM’s IP. Again, this is how the teaming is supposed to work. Everything is peachy and you might think your clustering is working out great, until you restart the VM.

[image]

When the VM restarts, it receives a new dynamic MAC from Host B's pool - we'll call it MAC D - and you'll find it has no network connectivity. Your ARP table hasn't changed (it shouldn't; the same team is still responsible for the VM), but the guest has been effectively dropped. When I pulled a packet trace, I noticed the team was still receiving traffic for the VM's IP, which ruled out a switching problem, but it was still rewriting the packets and sending them to MAC C - even though the restarted VM now has MAC D. The problem is that somebody (the driver) apparently never notices the VM has a new MAC and keeps sending packets to the wrong destination, so the VM never receives any traffic.

[image]
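For the curious, here's a rough sketch of that packet-trace check using scapy rather than a full capture tool - an assumption on my part, since I used a regular packet trace, and scapy also needs a capture driver installed on the host. Run it while pinging the guest and compare the Layer 2 destination against the MAC the guest currently reports; with this bug you'll see the stale MAC C even though the guest is now on MAC D.

    # Watch frames destined for the guest's IP and flag any whose Layer 2
    # destination doesn't match the MAC the guest currently reports.
    from scapy.all import Ether, IP, sniff

    GUEST_IP = "1.1.1.1"                      # example IP from the post
    GUEST_CURRENT_MAC = "00:15:5d:01:02:0d"   # hypothetical: whatever the guest shows now

    def report(pkt) -> None:
        if pkt.haslayer(Ether) and pkt.haslayer(IP) and pkt[IP].dst == GUEST_IP:
            status = "ok" if pkt[Ether].dst.lower() == GUEST_CURRENT_MAC else "STALE"
            print(f"{pkt[IP].src} -> {GUEST_IP}  L2 dst {pkt[Ether].dst}  [{status}]")

    # Capture a handful of frames headed for the guest.
    sniff(filter=f"ip host {GUEST_IP}", prn=report, count=20)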

I found that toggling the NIC team within the host actually fixes the problem. If you simply disable the virtual team adapter and then re-enable it, the VM instantly gets its connectivity back, so it seems that during startup the team reads the VM MACs it's supposed to service. I would think this is something it should be doing constantly to prevent this exact issue, but for now it looks like it's done only at initialization.
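If you need to do that bounce in a pinch (say, right after a restarted VM drops off the network), it's easy enough to script. A minimal sketch, assuming the teamed virtual adapter is named "Team Adapter" on your host - check Network Connections for the real name:

    # Disable and re-enable the team's virtual adapter so the driver re-reads
    # the VM MACs it needs to service.
    import subprocess
    import time

    TEAM_ADAPTER = "Team Adapter"  # hypothetical name; use your adapter's actual name

    def toggle_team_adapter(name: str, settle_seconds: int = 5) -> None:
        """Bounce a network interface with netsh."""
        subprocess.run(
            ["netsh", "interface", "set", "interface", name, "admin=disabled"],
            check=True,
        )
        time.sleep(settle_seconds)  # give the driver a moment before re-enabling
        subprocess.run(
            ["netsh", "interface", "set", "interface", name, "admin=enabled"],
            check=True,
        )

    if __name__ == "__main__":
        toggle_team_adapter(TEAM_ADAPTER)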

The most practical workaround I've found so far is to just set static MAC addresses on the VMs within the Hyper-V settings. If the VM's MAC never changes, this problem simply doesn't exist. While that defeats the purpose of the dynamic MAC pool on a Hyper-V host, it allows the teaming failover to operate properly while you restart VMs and move them between cluster nodes.
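If you'd rather hand out your own addresses than freeze whatever dynamic MAC each VM happens to have, one low-risk convention (my own, not anything from Broadcom or Microsoft) is to use locally administered addresses, which can't collide with vendor OUIs or Hyper-V's 00-15-5D dynamic pool. Just make sure each one is unique in your environment. A quick generator:

    # Generate locally administered, unicast MAC addresses suitable for
    # entering as static MACs in the VM's Hyper-V settings. A first octet of
    # 0x02 sets the locally administered bit and keeps the multicast bit clear.
    import random

    def static_mac() -> str:
        """Return a random locally administered, unicast MAC address."""
        octets = [0x02] + [random.randint(0x00, 0xFF) for _ in range(5)]
        return "-".join(f"{o:02X}" for o in octets)

    if __name__ == "__main__":
        for _ in range(3):
            print(static_mac())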

I’ve raised the issue with Dell/Broadcom and they agree it’s a driver problem. There is supposedly a driver update due mid-March, but there are no guarantees this will be addressed in it. The update after that isn’t slated until June, which is a long time to wait - hence the recommendation to just use Intel NICs.

Other notes for the inquisitive:

  • Disabling the team and using only a single adapter makes this work properly.
  • Happens whether TOE, checksum offload, and RSS features are enabled or disabled.
  • No VLAN tagging in use.
  • Issue persists when team members are plugged into the same switch.
  • Latest drivers from Dell/Broadcom (12/15/2009) as of this writing.
  • Happens whether teaming is configured before or after Hyper-V role is installed.

Device Review: Plantronics Voyager PRO UC

Disclaimer: Plantronics sent me a sample device to test out, but this post is not a paid review in any way.

Prior to my poor experience with the Jabra GO 6430 and Communicator, I had picked up a Plantronics Voyager PRO for use with my iPhone in the car because of California’s hands-free driving laws. I had been extremely happy with the quality of that device and was surprised to see Plantronics had also released a UC-certified version for Communicator. My favorite headset up until then had been the Plantronics Savi Go, but I needed something a lot more portable on a day-to-day basis and the Savi Go charging stand was a bit bulky. I definitely needed to replace that Jabra, so I picked up a Voyager PRO UC to try with Communicator, with high hopes based on my experience with the Savi Go.

Unboxing photos:

[Photos: IMG_0367, IMG_0368, IMG_0370, IMG_0371]

I was very happy to see that the Voyager PRO UC worked well with MOC right out of the box – no installation or drivers needed, just the way it should be. The multi-function button worked great and the headset was extremely comfortable to wear for long periods with the felt ear bud cover. The sound quality is definitely on par with the Savi Go, which was already the best device out there, so you can’t go wrong with this headset. As an added bonus it also pairs with a mobile phone, so I can now get by with a single headset - for work calls when I have Communicator open, and for calls on my mobile when I’m on the road driving.

There really isn’t much to say. The device works as advertised, it looks good and the sound quality is outstanding. For someone who is constantly mobile this is the headset I’d recommend using, but if you’re at a desk more often the Savi Go is still a great choice.