Creating a ResourceQuota

The syntax of a ResourceQuota is as follows. Note that it's a namespaced object:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: <name>
spec:
  hard:
    <quota_1>: <count> or <quantity>
    ...
  scopes:
  - <scope name>
  ...
  scopeSelector:
    matchExpressions:
    - scopeName: PriorityClass
      operator: <In, NotIn, Exists, DoesNotExist>
      values:
      - <PriorityClass name>

Only .spec.hard is a required field; .spec.scopes and .spec.scopeSelector are optional. The quota names for .spec.hard are those listed in the preceding table, and only counts or quantities are valid as their values. For example, count/pods: 10 limits the pod count in a namespace to 10, and requests.cpu: 10000m ensures that the sum of all CPU requests in the namespace doesn't exceed the specified amount.
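
For instance, a quota mixing an object count with compute quantities might look like the following minimal sketch (the name demo-quota and the values are purely illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
spec:
  hard:
    # at most 10 pods may exist in the namespace
    count/pods: "10"
    # the sum of all containers' requests must stay within these amounts
    requests.cpu: 10000m
    requests.memory: 10Gi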

The two optional fields are used to associate a resource quota with certain scopes, so that only objects and usages within those scopes are counted against the associated quota. Currently, there are four different scopes for the .spec.scopes field:

  • Terminating/NotTerminating: The Terminating scope matches pods whose .spec.activeDeadlineSeconds is >= 0, while NotTerminating matches pods without that field set. Bear in mind that a Job also has an activeDeadlineSeconds field, but it isn't propagated to the pods created by the Job.
  • BestEffort/NotBestEffort: The former matches pods in the BestEffort QoS class, and the latter matches pods in the other QoS classes. Since setting either requests or limits on a pod elevates its QoS class above BestEffort, the BestEffort scope doesn't apply to compute quotas, only to object counts (see the sketch after this list).
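
As a minimal sketch, a quota that only counts BestEffort pods could be scoped like this (the name besteffort-pods and the count are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort-pods
spec:
  hard:
    # only an object-count quota makes sense here, since
    # BestEffort pods have no requests or limits to sum up
    count/pods: "5"
  scopes:
  - BestEffort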

Another scope configuration, scopeSelector, is for choosing objects with a freer and more flexible syntax, although only PriorityClass is supported as of Kubernetes 1.13. With scopeSelector, we're able to bind a resource quota to certain priority classes, matched against the pods' priorityClassName.
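
As a sketch of the scopeSelector syntax, the following quota would apply only to pods whose priorityClassName is high-priority (both the quota name and the priority class name are assumptions for illustration):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota
spec:
  hard:
    requests.cpu: "4"
  scopeSelector:
    matchExpressions:
    # PriorityClass is the only scopeName supported here as of 1.13
    - scopeName: PriorityClass
      operator: In
      values:
      - high-priority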

So, let's see how a quota works in an example, which can be found at chapter8/8-3_management/resource_quota.yml. In the template, two resource quotas restrict pod counts (quota-pods) and resource requests (quota-resources) for the BestEffort and non-BestEffort QoS classes, respectively. The desired outcome of this configuration is to confine workloads that have no requests by pod count, and to cap the total amount of requested resources for workloads that do have requests. As a result, the two Jobs in the example, capybara and politer-capybara, which set a high parallelism but fall into different QoS classes, will each be capped by a different resource quota:
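
The full template ships with the book's example repository; judging from the quota usage reported below, the two quotas are equivalent to something like this sketch (the Namespace and Job definitions are omitted):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-pods
  namespace: team-capybara
spec:
  hard:
    # BestEffort pods are only limited by count
    count/pods: "1"
  scopes:
  - BestEffort
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-resources
  namespace: team-capybara
spec:
  hard:
    # non-BestEffort pods are limited by their total requests
    requests.cpu: 100m
    requests.memory: 1Gi
  scopes:
  - NotBestEffort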

$ kubectl apply -f resource_quota.yml
namespace/team-capybara created
resourcequota/quota-pods created
resourcequota/quota-resources created
job.batch/capybara created
job.batch/politer-capybara created

$ kubectl get pod -n team-capybara
NAME                     READY   STATUS              RESTARTS   AGE
capybara-4wfnj           0/1     Completed           0          13s
politer-capybara-lbf48   0/1     ContainerCreating   0          13s
politer-capybara-md9c7   0/1     ContainerCreating   0          12s
politer-capybara-xkg7g   1/1     Running             0          12s
politer-capybara-zf42k   1/1     Running             0          12s

As we can see, only a few pods were created for the two jobs, even though their parallelism is set to 20. The events from their controller confirm that they hit the resource quotas:

$ kubectl describe jobs.batch -n team-capybara capybara
...
Events:
  Type     Reason            Age   From            Message
  ----     ------            ----  ----            -------
  Normal   SuccessfulCreate  98s   job-controller  Created pod: capybara-4wfnj
  Warning  FailedCreate      97s   job-controller  Error creating: pods "capybara-ds7zk" is forbidden: exceeded quota: quota-pods, requested: count/pods=1, used: count/pods=1, limited: count/pods=1
...

$ kubectl describe jobs.batch -n team-capybara politer-capybara
...
Events:
  Type     Reason        Age   From            Message
  ----     ------        ----  ----            -------
  Warning  FailedCreate  86s   job-controller  Error creating: pods "politer-capybara-xmm66" is forbidden: exceeded quota: quota-resources, requested: requests.cpu=25m, used: requests.cpu=100m, limited: requests.cpu=100m
...

We can also find the consumption stats by running describe on the Namespace or the ResourceQuota objects:

## from namespace
$ kubectl describe namespaces team-capybara
Name: team-capybara
...
Resource Quotas
Name: quota-pods
Scopes: BestEffort
* Matches all pods that do not have resource requirements set. These pods have a best effort quality of service.
 Resource    Used  Hard
 --------    ----  ----
 count/pods  1     1

Name: quota-resources
Scopes: NotBestEffort
* Matches all pods that have at least one resource requirement set. These pods have a burstable or guaranteed quality of service.
 Resource         Used  Hard
 --------         ----  ----
 requests.cpu     100m  100m
 requests.memory  100M  1Gi

No resource limits.

## from resourcequotas
$ kubectl describe -n team-capybara resourcequotas
Name: quota-pods
Namespace: team-capybara
...
(information here is the same as above)