Let's see how resource requests and limits affect resource management. The preceding Burstable YAML configuration declares both requests and limits, each with a different threshold, as follows:
| Type of resource definition | Resource name | Value | Description |
|---|---|---|---|
| requests | CPU | 0.1 | At least 10% of 1 CPU core |
| requests | Memory | 10Mi | At least 10 Mbytes of memory |
| limits | CPU | 0.5 | Maximum 50% of 1 CPU core |
| limits | Memory | 300Mi | Maximum 300 Mbytes of memory |
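As a sketch, a Burstable pod manifest carrying these values might look like the following (the pod and container names here are illustrative, not taken from the preceding example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-burstable
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        cpu: 0.1        # at least 10% of 1 CPU core
        memory: 10Mi    # at least 10 Mbytes of memory
      limits:
        cpu: 0.5        # at most 50% of 1 CPU core
        memory: 300Mi   # at most 300 Mbytes of memory
```

Because requests are set lower than limits, Kubernetes classifies this pod's QoS class as Burstable.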
For CPU resources, acceptable value expressions are either cores (0.1, 0.2 ... 1.0, 2.0) or millicpu (100m, 200m ... 1000m, 2000m), where 1000m is equivalent to 1.0 core. For example, if a Kubernetes node has a 2-core CPU (or 1 core with hyperthreading), there is a total of 2.0 cores, or 2000 millicpu, as shown in the following figure:
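To make the two notations concrete, here is a small helper (hypothetical, not part of any Kubernetes library) that normalizes both forms to cores:

```python
def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity to cores.

    Accepts either the core form ("0.1", "2") or the
    millicpu form ("100m", "2000m"); 1000m == 1.0 core.
    """
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000.0
    return float(quantity)

print(parse_cpu("100m"))   # 0.1
print(parse_cpu("2000m"))  # 2.0
print(parse_cpu("0.5"))    # 0.5
```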
By typing kubectl describe node <node name>, you can check what resources are available on the node:
//Find a node name
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready <none> 22h v1.9.0
//Specify node name 'minikube'
$ kubectl describe nodes minikube
Name: minikube
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
...
...
Allocatable:
cpu: 2
memory: 1945652Ki
pods: 110
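The memory value above is reported in Ki. To sanity-check such quantities, a small conversion helper (hypothetical, not part of kubectl) might look like this:

```python
def parse_memory(quantity: str) -> int:
    """Convert a Kubernetes memory quantity ("10Mi", "1945652Ki") to bytes."""
    units = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain byte count

print(parse_memory("10Mi"))       # 10485760
print(parse_memory("1945652Ki"))  # 1992347648
```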
This shows that the node minikube has 2.0 CPU cores and approximately 1,945 MB of memory allocatable. If you run the nginx example (requests.cpu: 0.1), it occupies at least 0.1 cores, as shown in the following figure:
As long as the node has enough spare CPU capacity, the pod may consume up to 0.5 cores (limits.cpu: 0.5), as shown in the following figure:
Therefore, if you set requests.cpu of another pod to more than 1.9, that pod won't be assigned to this node, because the allocatable CPU is 2.0 and the nginx pod already occupies at least 0.1 CPU.
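This scheduling rule can be sketched as a simple check: the sum of all CPU requests on a node must not exceed its allocatable CPU. The function below is an illustration of that rule, not the actual scheduler code:

```python
def fits_on_node(allocatable_cpu: float,
                 existing_requests: float,
                 new_request: float) -> bool:
    """Return True if a pod's CPU request fits on the node.

    Mirrors the scheduler's invariant: total requested CPU
    must stay within the node's allocatable CPU.
    """
    return existing_requests + new_request <= allocatable_cpu

# minikube: 2.0 cores allocatable, nginx already requests 0.1
print(fits_on_node(2.0, 0.1, 1.9))  # True  -- exactly fills the node
print(fits_on_node(2.0, 0.1, 2.0))  # False -- would exceed 2.0 cores
```

Note that only requests count here; limits may oversubscribe the node, since pods are not guaranteed to burst simultaneously.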