Service mesh

A service mesh is an infrastructure layer for handling service-to-service communication. In the microservice world especially, the application at hand might consist of hundreds or even thousands of services, so the network topology can become very complicated. A service mesh can provide traffic management, security, and observability for such an environment:

There are two major service mesh implementations on the market—Istio (https://istio.io) and Linkerd (https://linkerd.io). Both of these deploy a network proxy container alongside the application container (the so-called sidecar container) and provide Kubernetes support. The following diagram shows a simplified, common architecture of a service mesh:

A service mesh normally contains a control plane, which is the brain of the mesh. The control plane manages and enforces the policies for routing traffic, collects telemetry data that can be integrated with other systems, and carries out identity and credential management for services or end users. The service mesh sidecar container, which acts as a network proxy, lives side by side with the application container. The communication between services passes through the sidecar containers, which means that they can control traffic according to user-defined policies, secure the traffic via TLS encryption, perform load balancing and retries, control ingress/egress, collect metrics, and so on.
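
To make "user-defined policies" more concrete, the following is a minimal sketch of what such a policy could look like in Istio's API. It tells the sidecars to secure all traffic to a service with Istio's mutual TLS; the reviews service name is hypothetical and only for illustration:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-mtls
spec:
  host: reviews            # hypothetical target service
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # let the sidecars handle mutual TLS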

In the following section, we'll use Istio as an example, but you're free to use any implementation in your organization. First, let's get the latest version of Istio. At the time of writing, the latest version is 1.0.5:

// get the latest istio
# curl -L https://git.io/getLatestIstio | sh -
Downloading istio-1.0.5 from https://github.com/istio/istio/releases/download/1.0.5/istio-1.0.5-osx.tar.gz ...

// get into the folder
# cd istio-1.0.5/

Next, let's create the Custom Resource Definitions (CRDs) for Istio:

# kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
customresourcedefinition.apiextensions.k8s.io/virtualservices.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/destinationrules.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/serviceentries.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/gateways.networking.istio.io created
...
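
Before moving on, we can double-check that the CRDs have been registered (the exact list varies between Istio versions):

// verify the Istio CRDs have been registered
# kubectl get crd | grep 'istio.io'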

In the following example, we're installing Istio with default mutual TLS authentication. The resource definitions are in the install/kubernetes/istio-demo-auth.yaml file. If you'd like to deploy Istio without TLS authentication, you can use install/kubernetes/istio-demo.yaml instead:

# kubectl apply -f install/kubernetes/istio-demo-auth.yaml
namespace/istio-system created
configmap/istio-galley-configuration created
...
kubernetes.config.istio.io/attributes created
destinationrule.networking.istio.io/istio-policy created
destinationrule.networking.istio.io/istio-telemetry created

After deployment, let's check that all the services and pods have been created successfully in the istio-system namespace:

// check services are launched successfully
# kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.98.182.66 <none> 3000/TCP 13s
istio-citadel ClusterIP 10.105.65.6 <none> 8060/TCP,9093/TCP 13s
istio-egressgateway ClusterIP 10.105.178.212 <none> 80/TCP,443/TCP 13s
istio-galley ClusterIP 10.103.123.213 <none> 443/TCP,9093/TCP 13s
istio-ingressgateway LoadBalancer 10.107.243.112 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:32320/TCP,8060:31750/TCP,853:30790/TCP,15030:30313/TCP,15031:30851/TCP 13s
istio-pilot ClusterIP 10.104.123.60 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 13s
istio-policy ClusterIP 10.111.227.237 <none> 9091/TCP,15004/TCP,9093/TCP 13s
istio-sidecar-injector ClusterIP 10.107.43.206 <none> 443/TCP 13s
istio-telemetry ClusterIP 10.103.118.119 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 13s
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 11s
jaeger-collector ClusterIP 10.110.234.134 <none> 14267/TCP,14268/TCP 11s
jaeger-query ClusterIP 10.103.19.74 <none> 16686/TCP 12s
prometheus ClusterIP 10.96.62.77 <none> 9090/TCP 13s
servicegraph ClusterIP 10.100.191.216 <none> 8088/TCP 13s
tracing ClusterIP 10.107.99.50 <none> 80/TCP 11s
zipkin ClusterIP 10.98.206.168 <none> 9411/TCP 11s
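
Note that the EXTERNAL-IP of istio-ingressgateway remains <pending> on clusters without a cloud load balancer; in that case, the gateway is still reachable through its node ports, such as 31380 for HTTP in the preceding output. Assuming the HTTP port is named http2, as in this release, it can be looked up as follows:

// look up the HTTP node port of the ingress gateway
# kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
31380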

After waiting for a few minutes, check that the pods are all in the Running or Completed state, as follows:

# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-7ffdd5fb74-hzwcn 1/1 Running 0 5m1s
istio-citadel-55cdfdd57c-zzs2s 1/1 Running 0 5m1s
istio-cleanup-secrets-qhbvk 0/1 Completed 0 5m3s
istio-egressgateway-687499c95f-fbbwq 1/1 Running 0 5m1s
istio-galley-76bbb946c8-9mw2g 1/1 Running 0 5m1s
istio-grafana-post-install-8xxps 0/1 Completed 0 5m3s
istio-ingressgateway-54f5457d68-n7xsj 1/1 Running 0 5m1s
istio-pilot-7bf5674b9f-jnnvx 2/2 Running 0 5m1s
istio-policy-75dfcf6f6d-nwvdn 2/2 Running 0 5m1s
istio-security-post-install-stv2c 0/1 Completed 0 5m3s
istio-sidecar-injector-9c6698858-gr86p 1/1 Running 0 5m1s
istio-telemetry-67f94c555b-4mt4l 2/2 Running 0 5m1s
istio-tracing-6445d6dbbf-8r5r4 1/1 Running 0 5m1s
prometheus-65d6f6b6c-qrp6f 1/1 Running 0 5m1s
servicegraph-5c6f47859-qzlml 1/1 Running 2 5m1s

Since we have istio-sidecar-injector deployed, we can simply use kubectl label namespace default istio-injection=enabled to enable sidecar container injection for every pod in the default namespace. istio-sidecar-injector acts as a mutating admission webhook, which injects the sidecar container into any pod created in a namespace labelled with istio-injection=enabled. The samples folder we downloaded contains more elaborate applications, such as helloworld, which demonstrates canary deployment (https://en.wikipedia.org/wiki/Deployment_environment) by distributing traffic between the helloworld-v1 and helloworld-v2 services. For now, let's label the default namespace and then launch a simple nginx deployment to see the injection in action:
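
// enable automatic sidecar injection in the default namespace
# kubectl label namespace default istio-injection=enabled
namespace/default labeled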

// launch sample application
# kubectl run nginx --image=nginx
deployment.apps/nginx created

// list pods
# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-64f497f8fd-b7d4k 2/2 Running 0 3s
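
The 2/2 in the READY column already suggests that a second container is running inside the pod. A quick way to list the container names is shown below (the pod name comes from the preceding output and will differ in your environment):

// list the names of the containers inside the pod
# kubectl get po nginx-64f497f8fd-b7d4k -o jsonpath='{.spec.containers[*].name}'
nginx istio-proxy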

If we inspect one of the pods, we'll find that the istio-proxy container was injected into it:

# kubectl describe po nginx-64f497f8fd-b7d4k
Name:         nginx-64f497f8fd-b7d4k
Namespace:    default
Labels:       pod-template-hash=2090539498
              run=nginx
Annotations:  kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container nginx
              sidecar.istio.io/status:
                {"version":"50128f63e7b050c58e1cdce95b577358054109ad2aff4bc4995158c06924a43b","initContainers":["istio-init"],"containers":["istio-proxy"]...
Status:       Running
Init Containers:
  istio-init:
    Container ID:  docker://3ec33c4cbc66682f9a6846ae6f310808da3a2a600b3d107a0d361b5deb6d3018
    Image:         docker.io/istio/proxy_init:1.0.5
    ...
Containers:
  nginx:
    Container ID:  docker://42ab7df7366c1838489be0c7264a91235d8e5d79510f3d0f078726165e95665a
    Image:         nginx
    ...
  istio-proxy:
    Container ID:  docker://7bdf7b82ce3678174dea12fafd2c7f0726bfffc562ed3505a69991b06cf32d0d
    Image:         docker.io/istio/proxyv2:1.0.5
    Image ID:      docker-pullable://istio/proxyv2@sha256:8b7d549100638a3697886e549c149fb588800861de8c83605557a9b4b20343d4
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      istio-proxy
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istio-pilot.istio-system:15005
      --discoveryRefreshDelay
      1s
      --zipkinAddress
      zipkin.istio-system:9411
      --connectTimeout
      10s
      --proxyAdminPort
      15000
      --controlPlaneAuthPolicy
      MUTUAL_TLS

Taking a closer look, we can see that the istio-proxy container was launched with its control plane address, tracing system address, and connection settings. Istio has now been verified. There is much more to do with Istio traffic management, which is beyond the scope of this book. Istio provides a variety of detailed samples for us to try, which can be found in the istio-1.0.5/samples folder that we just downloaded.
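
As a small taste of that traffic management, the following is a minimal sketch, modelled on the helloworld sample, of how a canary-style split between the helloworld-v1 and helloworld-v2 services mentioned earlier could be expressed; the subset names and weights here are illustrative assumptions, and the complete manifests live in the samples folder:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 90        # send 90% of the traffic to v1
    - destination:
        host: helloworld
        subset: v2
      weight: 10        # send 10% of the traffic to v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:              # map subsets to pod labels
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2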
