[Resolved] Call for help — removing VM reservations & reclaiming resources

Update (05/24/2013)

As it turns out, this was simply a misunderstanding on my part of how the metrics in the advanced memory performance graphs should look after releasing VM memory reservations.

I was basing my assumption that resources were not being freed on the fact that the granted and consumed advanced performance metrics were not dropping from their full reservation values.  After working with VMware Support, and being referred to a VMware Communities post, I now realize where I went wrong.

From the Communities post:

Note that for a host that is not memory overcommitted, the Consumed memory represents a “high water mark” of the memory usage by the VM. It is possible that in the past, the VM was actively using a large amount of host physical memory but currently it is not. Because host memory is not overcommitted, the Consumed memory will not be shrunk through ballooning or swapping. Hence, the Consumed memory could be much higher than the Active memory when host memory is not overcommitted.

As soon as I read the above paragraph, I realized my error: there is absolutely no contention in this environment, since all VMs have been provisioned with full memory reservations from day one.  The VMs are not required to give up their memory dynamically until there is some form of contention or overcommitment within the environment.

The VMs did, in fact, release their memory reservations, and the environment is now ready for some overcommitment to be introduced.
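For anyone wanting to verify the same thing, here is a rough PowerCLI sketch that compares consumed vs. active memory per VM so you can see the "high water mark" behavior described above for yourself.  This is an illustration, not a tested script: it assumes the VMware.PowerCLI module and an existing Connect-VIServer session, and it pulls only one realtime sample per VM.

```powershell
# Sketch only — assumes an active Connect-VIServer session.
# mem.consumed.average and mem.active.average are standard vSphere
# performance counters; values are reported in KB.
Get-VM | ForEach-Object {
    $stats = Get-Stat -Entity $_ -Stat mem.consumed.average, mem.active.average `
        -Realtime -MaxSamples 1
    [PSCustomObject]@{
        VM         = $_.Name
        ConsumedKB = ($stats | Where-Object { $_.MetricId -eq 'mem.consumed.average' }).Value
        ActiveKB   = ($stats | Where-Object { $_.MetricId -eq 'mem.active.average' }).Value
    }
} | Sort-Object ConsumedKB -Descending | Format-Table -AutoSize
```

On a host that is not overcommitted, you should expect ConsumedKB to sit well above ActiveKB for many VMs — and that is normal, not a sign that reservations failed to release.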

Moral of the story:  Make sure you understand all factors that may be yielding a specific result.

Thanks goes out to VMware’s Support team for pointing me in the right direction.


— Original Post —

I am composing this post as a call for help!  I have been tasked with removing full memory reservations from every virtual machine across multiple clusters in a client’s environment.

Here is a little background information:

  • Client previously had a standard operating policy that stated every virtual machine should run with full memory reservations
  • These memory reservations were set on a per-VM basis, on every VM in the environment (regardless of use case)
  • Client recently upgraded from vSphere 4.1 to 5.0

After much persuasion, begging, and an animal sacrifice, I was able to convince the IT folks at this site that there were many gains to be had in letting vSphere manage its own resources.  Reservations of any kind (CPU, memory, etc.) should be used only when appropriate.  According to vCOps, many of the clusters in this client’s environment were operating with roughly 75% to 93% wasted resources due to oversized VMs.

Now that I have been given the green light to proceed, I need to find the most efficient and least disruptive procedure to remove the memory reservations from all of these VMs (we’re talking 1,000 – 2,000 VMs).  I was able to successfully remove the reservations with the following PowerCLI snippet:

Get-VM | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationMB 0
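At this scale, a slightly more careful variant of that one-liner may be worth considering.  The sketch below — untested, and assuming the same PowerCLI session — skips VMs that already have no memory reservation and prints each change so the run can be audited afterward:

```powershell
# Sketch only — same idea as the one-liner above, but it touches only
# VMs that actually have a memory reservation and logs each change.
Get-VM |
    Get-VMResourceConfiguration |
    Where-Object { $_.MemReservationMB -gt 0 } |
    ForEach-Object {
        Write-Host ("Clearing {0} MB reservation on {1}" -f `
            $_.MemReservationMB, $_.VM.Name)
        $_ | Set-VMResourceConfiguration -MemReservationMB 0
    }
```

Filtering first also makes the operation safely re-runnable: a second pass simply finds nothing left to change.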

My problem:

After removing all of the reservations with PowerCLI, I now need to find a way to get the VMs to release the resources that they have already claimed.  This is where I am hoping that someone out there can help me!

As you and I know, VMs are hoarders when it comes to resources claimed via reservations.  Once a VM claims resources, it will not release them until the VM is fully powered down.  A simple in-guest reboot will not work, as the hypervisor (ESXi) is not aware of what is happening inside the VM itself.  Reservations can always be expanded on the fly, but I am looking to eliminate them completely.

I have tried a vMotion, thinking that maybe the new host would reload the VM’s resource settings, but that did not help.

Plea for help!

I am seriously hoping that there is an API call out there, or some other black magic someone knows about, that can aid me in my goal.  Telling management at this client’s site that a full power-off is required for every VM would kill this effort: they would immediately reject it, stick with full reservations, and continue wasting hundreds of thousands of dollars on unnecessary hardware purchases for extra capacity.

Even if they decided that, going forward, reservations would only be used for new VM standups, this could become problematic: new VMs would be competing against each other for the very small pool of resources not claimed by the existing reservations.

I am not afraid of digging deep to get this problem solved; programming against the APIs is not out of my reach.  I am holding out hope!  I will also attempt to get this resolved via VMware Support, and will report back any findings from that avenue.