Network policy

The network policy acts as a software firewall for pods. By default, every pod can communicate with every other pod without any restrictions. A network policy is one of the isolation mechanisms you can apply to pods. It defines who can access which pods on which ports, using namespace selectors and pod selectors. Network policies in a namespace are additive, and once a pod is selected by a network policy, it denies any ingress traffic that isn't explicitly allowed (also known as deny all).

Currently, there are multiple network providers that support the network policy, such as Calico (https://www.projectcalico.org/calico-network-policy-comes-to-kubernetes/), Romana (https://github.com/romana/romana), Weave Net (https://www.weave.works/docs/net/latest/kube-addon/#npc), Contiv (http://contiv.github.io/documents/networking/policies.html), and Trireme (https://github.com/aporeto-inc/trireme-kubernetes). Users are free to choose any of these. For simplicity, though, we're going to use Calico with minikube. To do that, we'll have to launch minikube with the --network-plugin=cni option. The network policy is still pretty new in Kubernetes at this point. We're running Kubernetes v1.7.0 with the v1.0.7 minikube ISO to deploy Calico via the self-hosted solution (http://docs.projectcalico.org/v1.5/getting-started/kubernetes/installation/hosted/). Calico can be installed with an etcd datastore or the Kubernetes API datastore. For convenience, we'll demonstrate how to install Calico with the Kubernetes API datastore here. Since RBAC is enabled in minikube, we'll have to configure the roles and bindings for Calico:

# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
clusterrole.rbac.authorization.k8s.io/calico-node configured
clusterrolebinding.rbac.authorization.k8s.io/calico-node configured

Now, let's deploy Calico:

# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico....
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

After doing this, we can list the Calico pods and see whether it's launched successfully:

# kubectl get pods --namespace=kube-system
NAME                READY     STATUS    RESTARTS   AGE
calico-node-ctxq8   2/2       Running   0          14m

Let's reuse 6-2-1_nginx.yaml for our example:

# kubectl create -f chapter6/6-2-1_nginx.yaml
replicaset "nginx" created
service "nginx" created
// list the services
# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        36m
nginx        NodePort    10.96.51.143   <none>        80:31452/TCP   5s

We can see that our nginx service has a ClusterIP of 10.96.51.143. Let's launch a busybox pod with an interactive shell and use wget to see whether we can access our nginx:

# kubectl run busybox -i -t --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider 10.96.51.143
Connecting to 10.96.51.143 (10.96.51.143:80)

The --spider parameter is used to check whether the URL exists. In this case, busybox can access nginx successfully. Next, let's apply a NetworkPolicy to our nginx pods:

// declare a network policy
# cat chapter6/6-3-1_networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: nginx-networkpolicy
spec:
  podSelector:
    matchLabels:
      service: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          project: chapter6

We can see some important syntax here. The podSelector is used to select the pods the policy applies to, by matching their labels. The other is ingress[].from[].podSelector, which defines who can access these pods. In this case, all pods with the project=chapter6 label are eligible to access the pods with the service=nginx label. If we go back to our busybox pod, we're unable to contact nginx anymore because the nginx pods now have a NetworkPolicy applied to them.

By default, any traffic not allowed by the policy is denied, and busybox doesn't yet carry the project=chapter6 label, so it won't be able to talk to nginx:

// in busybox pod, or you could use `kubectl attach <pod_name> -c busybox -i -t` to re-attach to the pod 
# wget --spider --timeout=1 10.96.51.143
Connecting to 10.96.51.143 (10.96.51.143:80)
wget: download timed out  

We can use kubectl edit deployment busybox to add the project=chapter6 label to the busybox pods.
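Inside the editor, the change amounts to adding the label to the deployment's pod template, roughly like the following fragment (the run: busybox label shown here is what kubectl run sets by default; your deployment may differ):

```yaml
# fragment of the busybox deployment spec after editing
spec:
  template:
    metadata:
      labels:
        run: busybox
        project: chapter6   # newly added label, matched by the ingress rule
```

Because the label is added to the pod template rather than to a running pod, the deployment rolls out a replacement busybox pod that carries the new label.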

After that, we can contact the nginx pod again:

// inside busybox pod
/ # wget --spider 10.96.51.143 
Connecting to 10.96.51.143 (10.96.51.143:80)  
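Ingress rules can also restrict ports and admit traffic from other namespaces. As a sketch, a variant of the preceding policy might allow only TCP port 80 from pods in namespaces carrying a project=chapter6 label (the namespace label here is a hypothetical example, not something set up earlier in this chapter):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: nginx-networkpolicy-ns
spec:
  podSelector:
    matchLabels:
      service: nginx
  ingress:
  - from:
    # allow traffic from any pod in a namespace with this (hypothetical) label
    - namespaceSelector:
        matchLabels:
          project: chapter6
    # only port 80 on the selected nginx pods is reachable
    ports:
    - protocol: TCP
      port: 80
```

Combining namespaceSelector and ports in one rule narrows both the source of the traffic and the destination port at the same time.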

With the help of the preceding example, we now have an idea of how to apply a network policy. We could also apply some default policies to deny all, or allow all, traffic by tweaking the selector to select nobody or everybody. For example, the deny-all behavior can be achieved as follows:

# cat chapter6/6-3-1_np_denyall.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Here, the empty podSelector selects every pod in the namespace, so any pod that isn't allowed traffic by another policy will deny all ingress. Alternatively, we could create a NetworkPolicy whose ingress rule allows traffic from everywhere. By doing this, the pods running in this namespace can be accessed by anyone:

# cat chapter6/6-3-1_np_allowall.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  ingress:
  - {}
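The same default-deny pattern extends to outbound traffic as well. As a sketch going beyond the examples above, listing both policy types isolates the namespace's pods in both directions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```

With no ingress or egress rules present, every pod in the namespace denies all incoming and outgoing traffic unless another policy allows it.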