To create a resource pool in a cluster (this procedure is similar for the single ESXi host), proceed as follows:
- Right-click the cluster and select the New Resource Pool option.
- Specify a name for the resource pool; choose a meaningful name that clearly identifies the resource pool's scope.
- Specify how CPU and RAM resources should be allocated, and then click OK. Once the resource pool has been created, you can start adding VMs to it. The High, Normal, and Low share levels assign shares in a 4:2:1 ratio, as shown in the following screenshot:
Reservations, Limits, and Shares work the same way as they do for VMs. The Reservation Type can be set to Expandable (we will discuss this specific type later).
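To make the 4:2:1 ratio concrete, here is a minimal sketch of share-based distribution. The pool names and the 7 GHz figure are purely illustrative, not taken from a real cluster:

```python
# Sketch of how the High/Normal/Low share levels map to a 4:2:1 ratio.
SHARE_WEIGHTS = {"High": 4, "Normal": 2, "Low": 1}

def distribute(amount, share_levels):
    """Split 'amount' among pools proportionally to their share weights."""
    weights = {pool: SHARE_WEIGHTS[level] for pool, level in share_levels.items()}
    total = sum(weights.values())
    return {pool: amount * w / total for pool, w in weights.items()}

# Three contending pools with High, Normal, and Low shares and 7 GHz to divide:
print(distribute(7.0, {"A": "High", "B": "Normal", "C": "Low"}))
# {'A': 4.0, 'B': 2.0, 'C': 1.0}
```

Note that shares only matter under contention; if resources are plentiful, every pool simply gets what it asks for.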
Once you have created a resource pool, you can quickly move VMs into it by dragging and dropping them in the inventory.
You can also build a more complex structure: inside a resource pool, you can create child resource pools as well:
You may be wondering how resource pools work. To explain, let's take a look at the following example. Three resource pools have been created, corresponding to three different departments: RP-PROD, RP-Internal, and RP-DEV.
The configuration of the resource pools is shown in the following table:
| Resource pool | CPU shares | CPU limit | CPU reservation | Memory shares | Memory limit | Memory reservation |
|---|---|---|---|---|---|---|
| RP-PROD | Normal | Unlimited | 20 GHz | Normal | Unlimited | 32 GB |
| RP-Internal | Normal | Unlimited | 10 GHz | Normal | Unlimited | 16 GB |
| RP-DEV | Normal | 10 GHz | 0 | Normal | 16 GB | 0 |
The following VMs are created:

| VM | CPU | RAM | RP |
|---|---|---|---|
| DB1 | 8 vCPU / 19.2 GHz | 16 GB | RP-PROD |
| DB2 | 8 vCPU / 19.2 GHz | 16 GB | RP-PROD |
| WWW1 | 4 vCPU / 9.6 GHz | 8 GB | RP-PROD |
| WWW2 | 4 vCPU / 9.6 GHz | 8 GB | RP-PROD |
| DC1 | 1 vCPU / 2.4 GHz | 4 GB | RP-Internal |
| DC2 | 1 vCPU / 2.4 GHz | 4 GB | RP-Internal |
| TS1 | 4 vCPU / 9.6 GHz | 12 GB | RP-Internal |
| FS1 | 4 vCPU / 9.6 GHz | 8 GB | RP-Internal |
| WWW1 | 2 vCPU / 4.8 GHz | 8 GB | RP-DEV |
| WWW2 | 2 vCPU / 4.8 GHz | 8 GB | RP-DEV |
| DB1 | 4 vCPU / 9.6 GHz | 16 GB | RP-DEV |
| DB2 | 4 vCPU / 9.6 GHz | 16 GB | RP-DEV |
Our cluster consists of three ESXi hosts, each with eight physical CPU cores running at 2.4 GHz and 32 GB of RAM.
The overall cluster capacity is 57.6 GHz of CPU power and 96 GB of RAM. If every VM consumes 100% of its configured resources, the total amount of required resources is 110.4 GHz and 124 GB of memory, which exceeds the cluster's capacity. This is where the resource pools come into play. Let's assume that all VMs are 100% utilized; what will the resource allocation for the VMs be?
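The capacity and demand figures can be checked with a few lines of arithmetic; the per-VM numbers below are simply the CPU (GHz) and RAM (GB) columns from the VM table:

```python
# Cluster capacity: 3 hosts x 8 cores x 2.4 GHz, and 3 x 32 GB of RAM.
hosts, cores_per_host, ghz_per_core, ram_per_host_gb = 3, 8, 2.4, 32

cluster_cpu_ghz = hosts * cores_per_host * ghz_per_core  # ~57.6 GHz
cluster_ram_gb = hosts * ram_per_host_gb                 # 96 GB

# Configured resources of all twelve VMs from the table above:
vm_cpu_ghz = [19.2, 19.2, 9.6, 9.6, 2.4, 2.4, 9.6, 9.6, 4.8, 4.8, 9.6, 9.6]
vm_ram_gb = [16, 16, 8, 8, 4, 4, 12, 8, 8, 8, 16, 16]

print(round(cluster_cpu_ghz, 1), cluster_ram_gb)     # 57.6 96
print(round(sum(vm_cpu_ghz), 1), sum(vm_ram_gb))     # 110.4 124
```

The demand (110.4 GHz, 124 GB) clearly exceeds the capacity (57.6 GHz, 96 GB), so under full load something has to give.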
First, the reservations must be satisfied: 30 GHz of CPU and 48 GB of memory are reserved in total, which leaves 27.6 GHz of CPU power and 48 GB of memory to be distributed.
All three resource pools are configured at the same level, so the remaining resources are divided among them based on their shares:
| Resource pool | Reservation for CPU | Remaining resources based on shares for CPU | Total available resources for CPU | Reservation for memory | Remaining resources based on shares for memory | Total available resources for memory |
|---|---|---|---|---|---|---|
| RP-PROD | 20 GHz | 9.2 GHz | 29.2 GHz | 32 GB | 16 GB | 48 GB |
| RP-Internal | 10 GHz | 9.2 GHz | 19.2 GHz | 16 GB | 16 GB | 32 GB |
| RP-DEV | 0 | 9.2 GHz | 9.2 GHz | 0 | 16 GB | 16 GB |
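The table values follow a simple two-step rule: satisfy each pool's reservation first, then split the remainder equally, since all three pools use Normal shares. A short sketch, using only the figures from this example:

```python
# Reservations per pool, taken from the resource pool configuration table.
pools = {
    "RP-PROD":     {"cpu_res": 20.0, "mem_res": 32},
    "RP-Internal": {"cpu_res": 10.0, "mem_res": 16},
    "RP-DEV":      {"cpu_res": 0.0,  "mem_res": 0},
}
cluster_cpu_ghz, cluster_mem_gb = 57.6, 96

# Step 1: subtract all reservations from the cluster capacity.
cpu_left = cluster_cpu_ghz - sum(p["cpu_res"] for p in pools.values())  # ~27.6 GHz
mem_left = cluster_mem_gb - sum(p["mem_res"] for p in pools.values())   # 48 GB

# Step 2: equal shares, so each pool gets one third of the remainder.
totals = {
    name: (round(p["cpu_res"] + cpu_left / len(pools), 1),   # total CPU, GHz
           round(p["mem_res"] + mem_left / len(pools), 1))   # total memory, GB
    for name, p in pools.items()
}
print(totals)
# {'RP-PROD': (29.2, 48.0), 'RP-Internal': (19.2, 32.0), 'RP-DEV': (9.2, 16.0)}
```

If the share levels differed, step 2 would divide proportionally to the 4:2:1 weights instead of equally.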
Next, shares, reservations, and limits are applied if they are configured on the individual VMs within the resource pool. If no reservations, limits, or shares (RLS) are configured on the VMs, each VM gets an equal amount of resources. So, in the case of the RP-PROD VMs, the allocation will be as follows (no RLS is configured on any VM):
| VM | CPU | RAM | RP |
|---|---|---|---|
| DB1 | 7.3 GHz | 16 GB | RP-PROD |
| DB2 | 7.3 GHz | 16 GB | RP-PROD |
| WWW1 | 7.3 GHz | 8 GB | RP-PROD |
| WWW2 | 7.3 GHz | 8 GB | RP-PROD |
Memory allocation is not affected, because the total configured memory (48 GB) does not exceed the total available resources for memory. The CPU, however, will be throttled: the VMs require 57.6 GHz, while the total available CPU resources for the pool amount to only 29.2 GHz.
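The per-VM CPU figure in the table is just the pool's total divided evenly, because no per-VM RLS settings are in place:

```python
# RP-PROD's total available CPU, divided equally among its four VMs.
pool_cpu_ghz = 29.2
vms = ["DB1", "DB2", "WWW1", "WWW2"]

per_vm_ghz = pool_cpu_ghz / len(vms)
print(f"{per_vm_ghz:.1f} GHz per VM")  # 7.3 GHz per VM
```

Setting a reservation or higher shares on, say, DB1 would skew this split in its favor at the expense of the other three VMs.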
Using resource pools, resources assigned to a group of VMs can be adjusted from a single point with no need to edit every single VM.
Keep in mind that you can configure RLS settings on multiple levels, so the resource hierarchy might be quite complicated to calculate.