It seems like everyone is moving to virtualize as much as they can. Whether they do it on-premises or in the cloud, reducing physical machine counts in favour of increasing virtual machine counts is a trend that continues to grow. This is a three-part series where we look at best practices for virtualizing Microsoft Exchange. You can catch up with part one of the series – Virtualizing on VMware – by clicking here.

As mentioned in the previous post we’re going to try to take a vendor agnostic approach, covering both VMware and Hyper-V, but not comparing the two. We point out where support differs, but we’re not going to try to convince you one platform is better than the other.

We’re also not going to talk about the merits of physical versus virtual. Instead, this series is going to go with three presumptions in place.

  1. You are going to virtualize your Microsoft Exchange server(s)
  2. You already have a virtualization platform in place
  3. You just want to know what to watch out for, and what to be sure you do, to have the best experience doing this.

If you’re good with that approach, this is part two – Exchange on Hyper-V. Note that some of the wording below is CTRL+C, CTRL+V from our first post, because the advice is the same no matter which hypervisor you use.


If you want to ensure you have the best possible support experience – where that is defined as having a single vendor to deal with and no excuses about “that’s not our product” or “best-effort only” – then Hyper-V is the way to go. There’s nothing wrong with VMware, but if your virtualization platform is VMware and your messaging platform is Microsoft, then you have to rely upon both vendors to help you figure out what went wrong, and there are plenty of stories online about finger-pointing between the two. Even in the best of situations, it’s on you to engage your vendors. You can’t expect Microsoft to open a case with VMware if they think the problem’s root lies in the virtualization layer. You can expect them to conference in a Hyper-V engineer when the Exchange engineer thinks it’s the underlying host, though!


As with our post on VMware, our recommendations are based on Exchange 2013. You can, to a limited extent, use the same guidelines for Exchange 2010, but the disk I/O load for mailbox servers running Exchange 2010 is so much greater than for those running 2013 that my advice is to stick with physical boxes for any production mailbox servers. Don’t even try to virtualize Exchange 2007; you should be getting off that platform ASAP! And if you are still running Exchange 2003, you have far more problems than trying to figure out how to virtualize it! As for Exchange 2016… these guidelines should hold true for that platform as well (if something major changes after GA, we will update this post in the comments section).


Here’s the bottom line on what you want to do with resource allocation to Exchange on Hyper-V.


  • Use processors that support hardware-assisted virtualization, like Intel VT and AMD-V.
  • Use only processors that support Data Execution Prevention and make sure it is enabled.
  • Keep your CPU allocation 1:1 and don’t over-commit. Cores per socket should be 1. But do note that up to a 2:1 ratio is officially supported by Microsoft when using Hyper-V.
  • Enable hyper-threading on both the host and the VM.
  • Enable non-uniform memory access.
  • Keep the VM sized to fit within a single NUMA node. If the NUMA node has 8 cores, don’t assign more than that to the Exchange server.
  • Fewer servers with more cores are better than more servers with fewer cores.
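As a hedged sketch of the CPU advice above, here is what it might look like with the Hyper-V PowerShell module. The VM name EX1 and the 8-vCPU count are placeholders; match the count to your own NUMA node size:

```powershell
# Keep the VM within one NUMA node: assign no more vCPUs than the node has cores.
# Here we assume an 8-core NUMA node and a VM named EX1 (placeholder values).
Set-VMProcessor -VMName "EX1" -Count 8

# Optionally prevent Hyper-V from spanning NUMA nodes on the host, so a VM
# can never be scheduled across node boundaries.
Set-VMHost -NumaSpanningEnabled $false
```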


  • Do not over-commit memory. Not now, not ever. Don’t be that guy.
  • Reserve memory allocations to ensure your VM will always have the amount of memory it should.
  • Let the Hyper-V host handle page file sizing. Don’t try to tune this yourself.
  • Keep the paging file on its own disk, but don’t worry about mirroring this. You are going for speed.
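On Hyper-V, “reserving” memory effectively means using a static allocation rather than Dynamic Memory, which Exchange does not support. A minimal sketch, assuming a VM named EX1 sized at 64 GB (both placeholders):

```powershell
# Give the Exchange VM a fixed, static memory allocation; Dynamic Memory
# is not supported for Exchange workloads.
Set-VMMemory -VMName "EX1" -DynamicMemoryEnabled $false -StartupBytes 64GB
```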


  • Do not thin provision disks unless you are accessing them directly, via SCSI pass-through, or via iSCSI. In any case, ensure storage is presented as block-level storage.
  • Use SMB 3.0 to access VHDs over the network, rather than allocating expensive storage directly to the Hyper-V host. With this, you can even set up single-, dual-, or multi-node file servers. Single-node servers are cheap and fast, while dual- and multi-node servers offer fault tolerance.
  • Invest in RDMA-capable NICs to maximize throughput while minimizing latency and host CPU load.
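To illustrate the thick-provisioning point, here is one way to create a fixed-size (fully allocated) VHDX with the Hyper-V cmdlets; the path and size below are placeholders:

```powershell
# A fixed-size VHDX allocates all of its space up front, avoiding the
# growth penalties of a thin-provisioned (dynamic) disk on a database volume.
New-VHD -Path "D:\VHDs\EX1-DB01.vhdx" -SizeBytes 2TB -Fixed
```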


  • Use NIC teaming with the built-in capabilities of Windows Server 2012.
  • Use only the synthetic network adapter; avoid the legacy (emulated) adapter.
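The built-in teaming mentioned above can be configured in one line from PowerShell; the team name and adapter names here are placeholders for your own:

```powershell
# Create a switch-independent team from two physical adapters
# (team and member names are placeholders).
New-NetLbfoTeam -Name "ExchangeTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
```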


To avoid a DAG failover when you migrate a server:

  • Dedicate at least two network interfaces on each host for live migration
  • Set your cluster heartbeat to 2000ms
  • Use an anti-affinity rule for each DAG so you don’t have both DAG members on the same host
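The heartbeat and anti-affinity settings above can be applied with the FailoverClusters module. A sketch under stated assumptions: the cluster group name EX1 and the class name DAG1 are placeholders:

```powershell
# Relax the cluster heartbeat to 2000ms so the brief pause during a live
# migration is not mistaken for a node failure.
(Get-Cluster).SameSubnetDelay = 2000

# Anti-affinity: tag both DAG members' VM cluster groups with the same class
# name so the cluster tries to keep them on different hosts.
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("DAG1") | Out-Null
(Get-ClusterGroup "EX1").AntiAffinityClassNames = $class
```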


  • Deploy your Hyper-V host servers using Windows Server Core.
  • Enable non-uniform memory access.
  • Set the power policy to high-performance. This is not the time to try to save a few kilowatts.
  • Run your appropriate sizing calculations, jetstress your storage, and ensure you have sufficient Global Catalog servers for your Exchange environment. Virtualization does not make any of these less important than with physical servers.
  • Always refer to https://technet.microsoft.com/en-us/library/jj619301(v=exchg.150).aspx for the most up-to-date guidance, as it does change from time to time.
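For the power policy recommendation, the built-in High Performance plan can be activated from an elevated prompt on each host; the GUID below is the well-known identifier for that plan:

```powershell
# Activate the built-in High Performance power plan on the Hyper-V host.
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
```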


  • Don’t go past the 2:1 ratio of virtual processors to physical ones. Hyper-V may support it, but Exchange won’t!
  • Don’t use snapshots.
  • Don’t put two DAG members on the same host.
  • Don’t configure the guest operating system to “save state” when shutting down the host.
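The “save state” behaviour is controlled per VM. A sketch of how to switch it to a clean shutdown instead, with EX1 as a placeholder VM name:

```powershell
# Shut the guest operating system down cleanly instead of saving state
# when the Hyper-V host stops.
Set-VM -Name "EX1" -AutomaticStopAction ShutDown
```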

In our last post in this series, we will go over best practices/recommendations for virtualizing Exchange on cloud IaaS. Check back soon!

Get your free 30-day GFI MailEssentials trial

Email opens you up to threats. See how you can protect yourself against malware and time-wasting spam.