By now, we all know that VMware has introduced a percentage-based cluster resource reservation model. With this setting, you specify what percentage of cluster resources to reserve to accommodate a host failure, and you can select different percentages for CPU and memory.
You might wonder how to calculate how many resources to reserve for your HA cluster. The older approach, selecting a number of host failures to tolerate, was straightforward, but it had disadvantages: the capacity of the reserved hosts is set aside entirely, so it cannot be tuned or put to best use in your HA cluster. The percentage-based model also avoids the commonly experienced slot-size issue, where slot values are skewed by a single large reservation.
Percentage-based reservation is also more efficient because it uses each VM's actual reservation to calculate the available failover resources, which means the cluster's failover capacity adjusts dynamically as resources are added.
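As a rough sketch of that calculation (the function and variable names here are illustrative, not a vSphere API), the current failover capacity is the share of total cluster capacity not claimed by powered-on VM reservations; admission control then checks it against the configured failover percentage:

```python
# Illustrative sketch of percentage-based admission control; names are
# assumptions, not vSphere APIs. Current failover capacity is the share
# of total cluster resources still unreserved by powered-on VMs.
def current_failover_capacity(total_mhz: float,
                              vm_cpu_reservations: list[float]) -> float:
    """Percentage of total CPU capacity not claimed by VM reservations."""
    reserved = sum(vm_cpu_reservations)
    return (total_mhz - reserved) / total_mhz * 100

# Example: a cluster with 40,000 MHz total and 10,000 MHz reserved by VMs
# has 75% failover capacity; with a configured threshold of 25%, new VMs
# are admitted as long as capacity stays at or above that threshold.
capacity = current_failover_capacity(40000, [4000, 3000, 3000])
print(round(capacity))  # 75
```

The same calculation is performed separately for memory, which is why the CPU and memory percentages can be configured independently.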
What you get is an option to specify a percentage of failover resources for both CPU and memory.
The default host failover capacity percentage is dynamic, based on the number of hosts in the cluster and the host failures to tolerate. A cluster with three hosts and one host failure to tolerate would result in a failover capacity of 33 percent. A cluster with four hosts and one host failure to tolerate would result in a failover capacity of 25 percent. You can override this default and enter your own percentage if you wish.
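The default described above can be sketched as a one-line calculation (a minimal illustration, assuming the default is simply host failures to tolerate divided by host count, expressed as a whole percentage):

```python
# Sketch (assumption): the default failover capacity percentage is
# host_failures_to_tolerate / number_of_hosts, as a whole percent.
def default_failover_capacity(num_hosts: int,
                              failures_to_tolerate: int = 1) -> int:
    """Return the default host failover capacity as a whole percentage."""
    if failures_to_tolerate >= num_hosts:
        raise ValueError("failures to tolerate must be less than host count")
    return failures_to_tolerate * 100 // num_hosts

print(default_failover_capacity(3))  # 33 (three hosts, one failure)
print(default_failover_capacity(4))  # 25 (four hosts, one failure)
```

Overriding this default with your own percentage simply replaces the computed value with a fixed one.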