The name of a resource is a unique identifier within a namespace in a Kubernetes cluster. Kubernetes namespaces let you isolate different environments within the same cluster, giving you the flexibility to create isolated environments and partition resources across projects and teams. Pods, services, and replication controllers are contained in a namespace. Some resources, such as nodes and persistent volumes (PVs), do not belong to any namespace.
By default, Kubernetes creates a namespace named default. All objects created without specifying a namespace are put into the default namespace. You can use kubectl to list namespaces:
// check all namespaces
# kubectl get namespaces
NAME            LABELS    STATUS    AGE
default         <none>    Active    8d
Kubernetes also creates another initial namespace called kube-system for locating Kubernetes system objects, such as the Kubernetes UI pod.
The name of a namespace must be a valid DNS label, following these rules:
- At most 63 characters
- Consisting of lowercase alphanumeric characters or dashes, beginning and ending with an alphanumeric character
We can create a namespace named new-namespace by using a configuration file:
# cat newNamespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: new-namespace

// create the resource by kubectl
# kubectl create -f newNamespace.yaml
// list namespaces
# kubectl get namespaces
NAME            LABELS    STATUS    AGE
default         <none>    Active    8d
new-namespace   <none>    Active    12m
You can see now that we have two namespaces.
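Since namespaces are often used to separate environments in the same cluster, it can also help to label them by their purpose. The following manifest is an illustrative sketch; the environment label key and the staging value are our own convention, not anything Kubernetes requires:

```yaml
# a hypothetical namespace labeled by environment
apiVersion: v1
kind: Namespace
metadata:
  name: staging-namespace   # illustrative name
  labels:
    environment: staging    # our own convention for grouping namespaces
```

Labels like this can later be used to select namespaces, for example with kubectl get namespaces -l environment=staging.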
// run an nginx RC in namespace=new-namespace
# kubectl run nginx --image=nginx --namespace=new-namespace
If we list the pods without specifying a namespace, nothing shows up, because the replication controller was created in new-namespace rather than the current (default) namespace:
# kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
To see pods in other namespaces, use the --all-namespaces flag or the --namespace parameter:
// to list pods in all namespaces
# kubectl get pods --all-namespaces
NAMESPACE       NAME          READY     STATUS    RESTARTS   AGE
new-namespace   nginx-ns0ig   1/1       Running   0          17m

// to get pods from new-namespace
# kubectl get pods --namespace=new-namespace
NAME          READY     STATUS    RESTARTS   AGE
nginx-ns0ig   1/1       Running   0          18m
We can see our pods now.
Similarly, you can specify the target namespace when creating any resource with kubectl create:
# kubectl create -f myResource.yaml --namespace=new-namespace
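Alternatively, the target namespace can be recorded in the manifest itself, so the resource always lands in the right namespace even without the --namespace flag. A minimal sketch, assuming a simple nginx pod (the pod name here is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-in-ns          # illustrative name
  namespace: new-namespace   # the resource is created here regardless of the current context
spec:
  containers:
  - name: nginx
    image: nginx
```

Pinning the namespace in metadata is a common way to avoid accidentally creating resources in the wrong namespace.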
It is possible to switch the default namespace in Kubernetes:
# kubectl config view | grep current-context
current-context: ""
It reveals that we don't have any context setting now.
The set-context subcommand can create a new context or overwrite an existing one:
# kubectl config set-context <current context or new context name> --namespace=new-namespace
After setting the context, check the configuration:
# kubectl config view
apiVersion: v1
clusters: []
contexts:
- context:
    cluster: ""
    namespace: new-namespace
    user: ""
  name: new-context
current-context: ""
kind: Config
preferences: {}
users: []
We can see the namespace is set properly in the contexts section.
Next, switch to the new context with use-context:
# kubectl config use-context new-context
Then check the current context again:
# kubectl config view | grep current-context
current-context: new-context
We can see that current-context is new-context now.
From now on, all kubectl operations use the namespace new-namespace by default, without the need for the --namespace parameter, as we can list the pods in new-namespace directly:
# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-ns0ig   1/1       Running   0          54m
# kubectl describe pod nginx-ns0ig
Name:         nginx-ns0ig
Namespace:    new-namespace
Image(s):     nginx
Node:         ip-10-96-219-156/10.96.219.156
Start Time:   Sun, 20 Dec 2015 15:03:40 +0000
Labels:       run=nginx
Status:       Running
kubectl delete can delete resources, including namespaces. Deleting a namespace erases all the resources under that namespace:
# kubectl delete namespaces new-namespace
namespace "new-namespace" deleted
After the namespace is deleted, our nginx pod is gone as well:
# kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
However, the current context still points to the deleted namespace new-namespace:
# kubectl config view | grep current-context
current-context: new-context
Will it be a problem?
Let's try running the nginx replication controller again:
# kubectl run nginx --image=nginx
Error from server: namespaces "new-namespace" not found
kubectl tries to create the nginx replication controller and replica pod in the current namespace, which we just deleted; Kubernetes throws an error because that namespace no longer exists.
To recover, reset the namespace of the context to the default (empty) value:
# kubectl config set-context new-context --namespace=""
context "new-context" set.
Now let's run nginx again:
# kubectl run nginx --image=nginx
replicationcontroller "nginx" created
Does it really run in the default namespace? Let's describe the pod:
# kubectl describe pods nginx-ymqeh
Name:         nginx-ymqeh
Namespace:    default
Image(s):     nginx
Node:         ip-10-96-219-156/10.96.219.156
Start Time:   Sun, 20 Dec 2015 16:13:33 +0000
Labels:       run=nginx
Status:       Running
...
We can see the pod is currently running in Namespace: default. Everything looks fine.
Sometimes you'll need to limit the resource quota for each team; separate namespaces make this possible. After you create a new namespace, its details look like this:
$ kubectl describe namespaces new-namespace
Name:     new-namespace
Labels:   <none>
Status:   Active

No resource quota.

No resource limits.
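The "No resource quota" line refers to the ResourceQuota object, which caps aggregate usage across a whole namespace, as opposed to the per-pod and per-container constraints discussed next. A sketch of what such a quota might look like; the object name and all the values here are illustrative:

```yaml
# quota.yaml -- illustrative values for a per-team cap
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota           # illustrative name
  namespace: new-namespace
spec:
  hard:
    pods: "10"      # at most 10 pods may exist in this namespace
    cpu: "4"        # total cpu requested across the namespace
    memory: 8Gi     # total memory requested across the namespace
```

Note that enforcing a quota assumes the ResourceQuota admission controller is enabled in the API server, as shown in the admission-control settings below.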
Resource quotas and limits are not set by default. Kubernetes supports constraints at the container and pod level. The LimitRanger admission controller in the Kubernetes API server has to be enabled before constraints can be set. You can enable it either on the command line or in the configuration file:
// using command line
# kube-apiserver --admission-control=LimitRanger

// using configuration file
# cat /etc/kubernetes/apiserver
...
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
...
The following is a good example for creating a limit in a namespace.
We will then limit the resources in a pod to a maximum of 2 and a minimum of 200m for cpu, and a maximum of 1Gi and a minimum of 6Mi for memory. For containers, cpu is limited to between 100m and 2, and memory to between 3Mi and 1Gi. If a max is set, then you have to specify the limit in the pod/container spec during resource creation; if a min is set, then the request has to be specified during pod/container creation. The default and defaultRequest sections in a LimitRange specify the default limit and request in the container spec:
# cat limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
  namespace: new-namespace
spec:
  limits:
  - max:
      cpu: "2"
      memory: 1Gi
    min:
      cpu: 200m
      memory: 6Mi
    type: Pod
  - default:
      cpu: 300m
      memory: 200Mi
    defaultRequest:
      cpu: 200m
      memory: 100Mi
    max:
      cpu: "2"
      memory: 1Gi
    min:
      cpu: 100m
      memory: 3Mi
    type: Container

// create LimitRange
# kubectl create -f limits.yaml
limitrange "limits" created
After the LimitRange is created, we can list it just like any other resource:
// list LimitRange
# kubectl get LimitRange --namespace=new-namespace
NAME      AGE
limits    22m
When you describe the new namespace you will now be able to see the constraint:
# kubectl describe namespace new-namespace
Name:     new-namespace
Labels:   <none>
Status:   Active

No resource quota.

Resource Limits
 Type        Resource   Min    Max   Request   Limit   Limit/Request
 ----        --------   ---    ---   -------   -----   -------------
 Pod         memory     6Mi    1Gi   -         -       -
 Pod         cpu        200m   2     -         -       -
 Container   cpu        100m   2     200m      300m    -
 Container   memory     3Mi    1Gi   100Mi     200Mi   -
All the pods and containers created in this namespace have to follow the resource limits listed here. If the definitions violate the rule, a validation error will be thrown accordingly.
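For example, assuming the LimitRange above is in place in new-namespace, a pod whose container asks for more cpu than the container max of 2 would be rejected at creation time. This manifest is purely illustrative:

```yaml
# a hypothetical pod that violates the LimitRange above
apiVersion: v1
kind: Pod
metadata:
  name: too-big              # illustrative name
  namespace: new-namespace
spec:
  containers:
  - name: busy
    image: nginx
    resources:
      limits:
        cpu: "3"             # exceeds the container max of cpu=2 -> validation error
        memory: 512Mi
      requests:
        cpu: "1"
        memory: 128Mi
```

Conversely, a container that specifies no resources at all would have the default (300m cpu, 200Mi memory) and defaultRequest (200m cpu, 100Mi memory) values filled in by the LimitRanger admission controller.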
We can delete the LimitRange resource via:
# kubectl delete LimitRange <limit name> --namespace=<namespace>
Here, the limit name is limits and the namespace is new-namespace. After that, when you describe the namespace, the constraint is gone:
# kubectl describe namespace <namespace>
Name:     new-namespace
Labels:   <none>
Status:   Active

No resource quota.

No resource limits.
Many resources run under a namespace; check out the following recipes: