2

Getting Started with Istio

In the previous chapter, we discussed monolithic architecture and its drawbacks, and how microservice architecture provides modularity to large, complex applications. Microservice architectures are scalable, easier to deploy, and resilient and fault-tolerant through isolation and modularization, leveraging containers and Kubernetes. Containers are the default packaging format for cloud-native applications, and Kubernetes is the de facto platform for container life cycle management and deployment orchestration. However, the fact that microservices are distributed, highly scalable, and run in parallel with one another amplifies the communication challenges between them, as well as operational challenges such as visibility into their communication and execution.

Microservices need to have secure communication with each other to avoid exploitation and attacks such as man-in-the-middle attacks. To solve such challenges in a cost-efficient and performant manner, there is a need for an application networking infrastructure, also called a Service Mesh. Istio is one such implementation of the Service Mesh that has been developed and supported by some great organizations, including Google, Red Hat, VMware, IBM, Lyft, Yahoo, and AT&T.

In this chapter, we will install and run Istio, and while doing that, we will also go through its architecture and its various components. This chapter will help you understand how Istio differs from other Service Mesh implementations. By the end, you should be able to set up your environment and install Istio, with a good understanding of how the installation works. Once it is installed, you will enable Istio sidecar injection for a sample application that ships with the Istio installation. We will take a step-by-step look at the application before and after enabling Istio and get an idea of how Istio works.

We will be doing this by exploring the following topics:

  • Why is Istio the most popular Service Mesh?
  • Preparation of your workstation environment to install and run Istio
  • Installing Istio
  • Installing observability tools
  • An introduction to Istio architecture

Why is Istio the most popular Service Mesh?

Istio comes from the Greek word ιστίο, pronounced Iss-tee-oh, which means sail: a structure made of fabric or similar material that propels a sailing ship via the lift and drag produced by the wind. The choice of name probably has something to do with the naming of Kubernetes, which also has a Greek origin. Written as κυβερνήτης and pronounced koo-burr-net-eez, Kubernetes means helmsman, the person standing at the helm of a ship and steering it.

Istio is an open source Service Mesh distributed under Apache License 2.0. It is platform-independent, meaning it is independent of the underlying Kubernetes provider. It supports not only Kubernetes but also non-Kubernetes environments such as virtual machines. Having said that, Istio development is much more mature for the Kubernetes environment and is adapting and evolving very quickly for other environments. Istio has a very mature development community, a strong user base, and is highly extensible and configurable, providing solid operational control of traffic and security within a Service Mesh. Istio also provides behavioral insights using advanced and fine-grained metrics. It supports WebAssembly, which is very useful for extensibility and tailoring for specific requirements. Istio also offers support and easy configuration for multi-cluster and multi-network environments.

Exploring alternatives to Istio

There are various other alternatives to Istio, all with their own pros and cons. Here, we will list a few of the other Service Mesh implementations available.

Kuma

At the time of writing (2022), Kuma is a Cloud Native Computing Foundation (CNCF) sandbox project and was originally created by Kong Inc., the company that also provides the Kong API management gateway both in open source and commercial variants. Kuma is advertised by Kong Inc. as a modern distributed control plane with bundled Envoy proxy integration. It supports multi-cloud and multi-zone connectivity for highly distributed applications. The Kuma data plane is composed of Envoy proxies, which are then managed by Kuma control planes, and it supports workloads deployed on not only Kubernetes but also virtual machines and bare-metal environments. Kong Inc also provides an enterprise Service Mesh offering called Kong Mesh, which extends CNCF’s Kuma and Envoy.

Linkerd

Linkerd was originally created by Buoyant, Inc. but was later made open source, and it is now licensed under Apache V2. Buoyant, Inc. also provides a managed cloud offering of Linkerd, as well as enterprise support for customers who want to run Linkerd themselves. Linkerd makes running services easier and safer by providing runtime debugging, observability, reliability, and security. Like Istio, you don’t need to change your application source code; instead, you install an ultralight, transparent Linkerd2-proxy next to every service.

The Linkerd2-proxy is a micro-proxy written in Rust and deployed as a sidecar in the Pod along with the application. Linkerd proxies have been written specifically for Service Mesh use cases and are arguably faster than Envoy, which is used as a sidecar in Istio and many other Service Mesh implementations such as Kuma. Envoy is a great proxy but is designed for multiple use cases; for example, Istio uses Envoy as an Ingress gateway, an Egress gateway, and a sidecar alongside applications. Many Linkerd deployments combine Linkerd as the Service Mesh with an Envoy-based Ingress controller.

Consul

Consul is a Service Mesh solution from HashiCorp; it is open source but also comes with cloud and enterprise support offerings from HashiCorp. Consul can be deployed on Kubernetes as well as in VM-based environments. On top of the Service Mesh, Consul also provides functionality for service catalogs, TLS certificates, and service-to-service authorization. The Consul data plane provides two options: the user can either choose an Envoy-based sidecar model similar to Istio, or native integration via the Consul Connect SDKs, which removes the need to inject a sidecar and provides better performance than Envoy proxies. Another difference is that you need to run a Consul agent as a daemon on every worker node in the Kubernetes cluster and on every node in non-Kubernetes environments.

AWS App Mesh

App Mesh is a Service Mesh offering from AWS and, of course, is available for workloads deployed in AWS on Elastic Container Service (ECS), Elastic Kubernetes Service (EKS), or self-managed Kubernetes clusters running in AWS. Like Istio, App Mesh uses Envoy as a sidecar proxy in the Pod, while the control plane is provided as a managed service by AWS, similar to EKS. App Mesh provides integration with various other AWS services, such as Amazon CloudWatch and AWS X-Ray.

OpenShift Service Mesh

Red Hat OpenShift Service Mesh is based on Istio; in fact, Red Hat is also a contributor to Istio open source projects. The offering is bundled with Jaeger for distributed tracing and Kiali for visualizing the mesh, viewing configuration, and traffic monitoring. As with other products from Red Hat, you can buy enterprise support for OpenShift Service Mesh.

F5 NGINX Service Mesh

NGINX is part of F5, and hence, its Service Mesh offering is called F5 NGINX Service Mesh. It uses the NGINX Ingress controller with NGINX App Protect to secure traffic at the edge and route it into the mesh. NGINX Plus is used as a sidecar to the application, providing seamless and transparent load balancing, reverse proxying, traffic routing, and encryption. Metrics collection and analysis are performed using OpenTracing and Prometheus, while inbuilt Grafana dashboards are provided for the visualization of Prometheus metrics.

This briefly covers the available Service Mesh implementations; we will cover some of them in greater depth in Appendix A. For now, let’s return our focus to Istio. We will read more about the benefits of Istio in the upcoming sections and the rest of the book, but let’s first get things going by installing Istio and enabling it for an application that is packaged along with Istio.

Preparing your workstation for Istio installation

We will be using minikube for installing and playing with Istio in the first few chapters. In later chapters, we will install Istio on AWS EKS to mimic real-life scenarios. First, let’s prepare your laptop/desktop with minikube. If you already have minikube installed in your environment, it is strongly recommended to upgrade to the latest version.

If you don’t have minikube installed, then follow the instructions to install minikube. minikube is a local Kubernetes installed on your workstation that makes it easy for you to learn and play with Kubernetes and Istio, without needing a contingent of computers to install a Kubernetes cluster.

System specifications

You will need Linux, macOS, or Windows. This book will primarily follow macOS as the target operating system; where there is a big difference in commands between Linux and macOS, you will find the corresponding steps/commands in the form of small notes. You will need at least two CPUs, 2 GB of available RAM, and either Docker Desktop (on macOS or Windows) or Docker Engine (on Linux). If you don’t have Docker installed, then follow the instructions at https://docs.docker.com/ to install Docker on your computer for your operating system.

Installing minikube and the Kubernetes command-line tool

We will be using Homebrew to install minikube. However, if you don’t have Homebrew installed, you can install Homebrew using the following command:

$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Let’s get started:

  1. Install minikube using brew install minikube:
    $ brew install minikube
    Running `brew update --preinstall`...
    ..
    ==> minikube cask is installed, skipping link.
    ==> Caveats
    Bash completion has been installed to:
      /usr/local/etc/bash_completion.d
    ==> Summary
      /usr/local/Cellar/minikube/1.25.1: 9 files, 70.3MB
    ==> Running `brew cleanup minikube`...

Once installed, create a symlink to the newly installed binary in the Homebrew Cellar folder:

$ brew link minikube
Linking /usr/local/Cellar/minikube/1.25.1... 4 symlinks created.
$ which minikube
/usr/local/bin/minikube
$ ls -la /usr/local/bin/minikube
lrwxr-xr-x  1 arai  admin  38 22 Feb 22:12 /usr/local/bin/minikube -> ../Cellar/minikube/1.25.1/bin/minikube

To test the installation, use the following command to find the minikube version:

$ minikube version
minikube version: v1.25.1
commit: 3e64b11ed75e56e4898ea85f96b2e4af0301f43d

Attention, Linux users!

If you are installing on Linux, you can use the following commands to install minikube:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

$ sudo install minikube-linux-amd64 /usr/local/bin/minikube

  2. The next step is to install kubectl if you do not already have it installed on your machine.

kubectl is a short form of the Kubernetes command-line tool and is pronounced as kube-control. kubectl allows you to run commands against Kubernetes clusters. You can install kubectl on Linux, Windows, or macOS. The following steps install kubectl on macOS using Brew:

$ brew install kubectl

You can use the following steps to install kubectl on Debian-based machines:

  1. sudo apt-get update
  2. sudo apt-get install -y apt-transport-https ca-certificates curl
  3. sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
  4. echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  5. sudo apt-get update
  6. sudo apt-get install -y kubectl

The following steps can be used to install kubectl on Red Hat machines:

  1. cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
  2. [kubernetes]
  3. name=Kubernetes
  4. baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
  5. enabled=1
  6. gpgcheck=1
  7. repo_gpgcheck=1
  8. gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
  9. EOF
  10. sudo yum install -y kubectl
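Regardless of the operating system you used, a quick sanity check is to print the client version; the exact output depends on the kubectl release you installed:

$ kubectl version --client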

You now have all that you need to run Kubernetes locally. Make sure you are logged in as a user with administrative access, and then go ahead and run the following command.

You can use minikube start with the Kubernetes version as follows:

$ minikube start --kubernetes-version=v1.23.1
  minikube v1.25.1 on Darwin 11.5.2
  Automatically selected the hyperkit driver
..
  Done! kubectl is now configured to use the "minikube" cluster and "default" namespace by default

You can see in the console output that minikube is using the HyperKit driver. HyperKit is an open source hypervisor used on macOS. We could also have explicitly told minikube to use the HyperKit driver by passing --driver=hyperkit.

For Linux users

For Linux, you can use minikube start --driver=docker. In this case, minikube will run as a Docker container. For Windows, you can use minikube start --driver=virtualbox. To avoid typing --driver during every minikube start, you can configure the default driver by using minikube config set driver DRIVERNAME, where DRIVERNAME can be hyperkit, docker, or virtualbox.
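As a quick illustration (assuming you want Docker as the default driver), setting and verifying the default looks roughly like this; the change takes effect the next time you start a cluster:

$ minikube config set driver docker
$ minikube config view
- driver: docker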

You can verify that kubectl is working properly and that minikube has also started properly by using the following:

$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.64.6:8443
CoreDNS is running at https://192.168.64.6:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

In the output, you can see that both the Kubernetes control plane and the DNS servers are running. This concludes the installation of minikube and kubernetes-cli. You now have a locally running Kubernetes cluster and a means to communicate with it via kubectl.

Installing Istio

This section is the one you must have been eagerly waiting to read. The wait is over, and you are all set to install Istio. Just follow the instructions provided.

The first step is to download Istio from https://github.com/istio/istio/releases. Alternatively, you can download it using curl with the following command. It is a good idea to create a directory for the downloaded binaries and to run the command from within that directory. Let’s name that directory ISTIO_DOWNLOAD and run the following commands from there:

$ curl -L https://istio.io/downloadIstio | sh -
Downloading istio-1.13.1 from https://github.com/istio/istio/releases/download/1.13.1/istio-1.13.1-osx.tar.gz ...
Istio 1.13.1 Download Complete!

The preceding command downloads the latest version of Istio into the ISTIO_DOWNLOAD location. If we dissect this command, it has two parts:

$ curl -L https://istio.io/downloadIstio

The first part of the command downloads a script from https://raw.githubusercontent.com/istio/istio/master/release/downloadIstioCandidate.sh (the location might change), which is then fed to sh for execution. The script analyzes the processor architecture and operating system and, based on that, determines the appropriate values for the Istio version (ISTIO_VERSION), the operating system (OSEXT), and the processor architecture (ISTIO_ARCH). It then substitutes these values into the URL https://github.com/istio/istio/releases/download/${ISTIO_VERSION}/istio-${ISTIO_VERSION}-${OSEXT}-${ISTIO_ARCH}.tar.gz, downloads the tar.gz file, and decompresses it.
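If you want to pin a specific version or architecture rather than taking the latest, the downloadIstio script honors environment variables for both; for example, the following should fetch exactly the release used in this chapter (ISTIO_VERSION and TARGET_ARCH are the variable names documented for this script):

$ curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.13.1 TARGET_ARCH=x86_64 sh -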

Let’s investigate what has been downloaded into the ISTIO_DOWNLOAD location:

$ ls
istio-1.13.1
$ ls istio-1.13.1/
LICENSE  README.md bin  manifest.yaml manifests samples  tools

The following is a brief description of the folders:

  • bin contains istioctl, also called Istio-control, which is the Istio command-line tool used to debug and diagnose Istio, as well as to create, list, modify, and delete configuration resources.
  • samples contains a sample application that we will be using for learning.
  • manifests contains Helm charts, which you don’t need to worry about for now. They become relevant when we want the installation process to pick up the charts from this folder rather than the default ones.

Since we will be making use of istioctl to perform the installation, let’s add it to the executable path:

$ pwd
/Users/arai/istio/istio-1.13.1
$ export PATH=$PWD/bin:$PATH
$ istioctl version
no running Istio pods in "istio-system"
1.13.1

We are one command away from installing Istio. Go ahead and type in the following command to complete the installation:

$ istioctl install --set profile=demo
This will install the Istio 1.13.1 demo profile with ["Istio core" "Istiod" "Ingress gateways" "Egress gateways"] components into the cluster. Proceed? (y/N) y
 Istio core installed
 Istiod installed
 Egress gateways installed
 Ingress gateways installed
 Installation complete
Making this installation the default for injection and validation.
Thank you for installing Istio 1.13.

Tip

You can pass -y to avoid the (Y/N) question. Just use istioctl install --set profile=demo -y.

Voilà! You have successfully completed the installation of Istio, including platform setup, in eight commands. If you already had minikube and kubectl installed, you should have been able to do it in three. Note that it is advisable to install Istio on a fresh cluster rather than on an existing one that is running your other applications.
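Optionally, you can sanity-check the installation before moving on. istioctl provides commands to list the built-in configuration profiles and to verify that what is running in the cluster matches what was installed; the exact checks performed vary by Istio version, so treat this only as a quick confirmation:

$ istioctl profile list
$ istioctl verify-install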

Let’s look at what has been installed. We’ll start first by analyzing the namespaces:

$ kubectl get ns
NAME              STATUS   AGE
default           Active   19h
istio-system      Active   88m
kube-node-lease   Active   19h
kube-public       Active   19h
kube-system       Active   19h

We can see that the installation has created a new namespace called istio-system.

Let’s check what Pods and Services are in the istio-system namespace:

$ kubectl get pods -n istio-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/istio-egressgateway-76c96658fd-pgfbn   1/1     Running   0          88m
pod/istio-ingressgateway-569d7bfb4-8bzww   1/1     Running   0          88m
pod/istiod-74c64d89cb-m44ks                1/1     Running   0          89m

While the preceding part of the output shows various Pods running under the istio-system namespace, the following will show Services in the istio-system namespace:

$ kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                       AGE
istio-egressgateway    ClusterIP      10.97.150.168    <none>        80/TCP,443/TCP                                                                88m
istio-ingressgateway   LoadBalancer   10.100.113.119   <pending>     15021:31391/TCP,80:32295/TCP,443:31860/TCP,31400:31503/TCP,15443:31574/TCP   88m
istiod                 ClusterIP      10.110.59.167    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                         89m

You can check all resources by using the following command:

$ kubectl get all -n istio-system

In the istio-system namespace, Istio installs the istiod component, which is the control plane of Istio. Various other resources are also installed, such as Kubernetes Custom Resource Definitions, ConfigMaps, admission webhooks, service accounts, role bindings, and Secrets.
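If you are curious, you can list the custom resource definitions and ConfigMaps that the installation added; the exact set depends on the Istio version, so use the following simply as a way to explore:

$ kubectl get crd | grep istio.io
$ kubectl get configmaps -n istio-system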

We will look into istiod and other control plane components in more detail in the next chapter. For now, let’s enable Istio for a sample application that is packaged with it.

Enabling Istio for a sample application

To keep our work in a sample application segregated from other resources, we will first create a Kubernetes namespace called bookinfons. After creating the namespace, we will deploy the sample application in the bookinfons namespace.

You need to run the second command from within the Istio installation directory – that is, $ISTIO_DOWNLOAD/istio-1.13.1:

$ kubectl create ns bookinfons
namespace/bookinfons created
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfons

All the created resources are defined in samples/bookinfo/platform/kube/bookinfo.yaml.

Check what Pods and Services have been created using the following commands:

$ kubectl get po -n bookinfons
$ kubectl get svc -n bookinfons

Note that there is one Pod each for details, productpage, and ratings, and three Pods for the various versions of reviews. There is one Service for each microservice. All of them are similar, except for the reviews Service, which has three endpoints. Using the following commands, let’s check how the reviews Service differs from the other Services:

$ kubectl describe svc/reviews -n bookinfons
...
Endpoints:         172.17.0.10:9080,172.17.0.8:9080,172.17.0.9:9080
...
$ kubectl get endpoints -n bookinfons
NAME          ENDPOINTS                                           AGE
details       172.17.0.6:9080                                     18h
productpage   172.17.0.11:9080                                    18h
ratings       172.17.0.7:9080                                     18h
reviews       172.17.0.10:9080,172.17.0.8:9080,172.17.0.9:9080    18h

Now that the bookinfo application has successfully deployed, let’s access the product page of the bookinfo application using the following commands:

$ kubectl port-forward svc/productpage 9080:9080 -n bookinfons
Forwarding from 127.0.0.1:9080 -> 9080
Forwarding from [::1]:9080 -> 9080
Handling connection for 9080

Go ahead and open http://localhost:9080/productpage in your web browser; if you don’t have a browser, see the curl command in the note that follows the figure:

Figure 2.1 – The product page of the BookInfo app

If you can see productpage, then you have successfully deployed the sample application.

What if I do not have a browser?

If you don’t have a browser, you can use this:

curl -sS localhost:9080/productpage

So, now that we have successfully deployed the sample application that comes along with Istio, let’s move on to enabling Istio for it.

Sidecar injection

Sidecar injection is the means through which istio-proxy is injected into the Kubernetes Pod as a sidecar. Sidecars are additional containers that run alongside the main container in a Kubernetes Pod. By running alongside the main container, the sidecars can share the network interfaces with other containers in the Pod; this flexibility is leveraged by the istio-proxy container to mediate and control all communication to and from the main container. We will read more about sidecars in Chapter 3. For now, we will keep the ball rolling by enabling Istio for the sample application.

Let’s check out some interesting details before and after we enable Istio for this application:

$ kubectl get ns bookinfons --show-labels
NAME         STATUS   AGE    LABELS
bookinfons   Active   114m   kubernetes.io/metadata.name=bookinfons

Let’s look at one of the Pods, productpage:

$ kubectl describe pod/productpage-v1-65b75f6885-8pt66 -n bookinfons

Copy the output to a safe place. We will use this information to compare the findings once you have enabled Istio for the bookinfo application.

We will need to delete what we have deployed:

$ kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfons

Wait for a few seconds and check that all the resources in the bookinfons namespace have been terminated. After that, enable istio-injection for bookinfons:

$ kubectl label namespace bookinfons istio-injection=enabled
namespace/bookinfons labeled
$ kubectl get ns bookinfons --show-labels
NAME         STATUS   AGE   LABELS
bookinfons   Active   21h   istio-injection=enabled,kubernetes.io/metadata.name=bookinfons
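Note that injection can also be tuned per workload. If a namespace is labeled for injection but you want to exclude a particular Deployment, Istio honors a Pod-level annotation for this. The following is a minimal sketch; the Deployment shown is hypothetical, and newer Istio versions also support a sidecar.istio.io/inject label with the same effect:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app          # hypothetical workload that should stay mesh-free
spec:
  ...
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"   # skip sidecar injection for these Pods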

Manual injection of sidecars

The other option is to manually inject the sidecar by making use of istioctl kube-inject to augment the deployment descriptor file and then applying it using kubectl:

$ istioctl kube-inject -f deployment.yaml | kubectl apply -f -

Go ahead and deploy the bookinfo application:

$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfons

Let’s check what has been created:

$ kubectl get po -n bookinfons

We can see that each Pod now has two containers instead of one; before we enabled istio-injection, each Pod had only one. We will discuss shortly what the additional container is. Let’s also check whether there is any change in the Services:

$ kubectl get svc -n bookinfons

Alright, so there is a change in Pod behavior but no noticeable change in service behavior. Let’s look deeper into one of the Pods:

$ kubectl describe po/productpage-v1-65b75f6885-57vnb -n bookinfons

The complete output of this command can be found in Output references/Chapter 2/productpage pod.docx on the GitHub repository of this chapter.

Note that the productpage Pod, as well as every other Pod in the bookinfons namespace, now has an additional container named istio-proxy and an init container named istio-init. They were absent when we initially created the Pods but were added after we applied the istio-injection=enabled label, using the following command:

kubectl label namespace bookinfons istio-injection=enabled

The sidecars can be injected either manually or automatically. Automatic is the easier way to inject sidecars. However, once we have familiarized ourselves with Istio, we will look at injecting sidecars manually by modifying application resource descriptor files in Part 2 of the book. For now, let’s briefly look at how automatic sidecar injection works.

Istio makes use of Kubernetes admission controllers, which intercept requests to the Kubernetes API server. Interception happens after authentication and authorization but before objects are created, modified, or deleted. You can find the enabled admission controllers using the following:

$ kubectl describe po/kube-apiserver-minikube -n kube-system | grep enable-admission-plugins
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota

Istio makes use of mutating admission webhooks for automatic sidecar injection. Let’s find out what mutating admission webhooks are configured in our cluster:

$ kubectl get --raw /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations | jq '.items[].metadata.name'
"istio-revision-tag-default"
"istio-sidecar-injector"

The following diagram describes the role of admission controllers during API calls to the Kubernetes API server. The mutating admission webhook controller is responsible for injecting the sidecar.

Figure 2.2 – Admission controllers in Kubernetes

We will cover sidecar injection in more detail in Chapter 3. For now, let’s switch our focus back to what has changed in Pod descriptors due to istio-injection.

You may have noticed istio-iptables in the istio-init configuration when describing the productpage Pod with the following command:

kubectl describe po/productpage-v1-65b75f6885-57vnb -n bookinfons

The following is a snippet from the Pod descriptor:

istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -I '*' -x "" -b '*' -d 15090,15021,15020

istio-iptables is an initialization script responsible for setting up port forwarding via iptables for the Istio sidecar proxy. The following are various arguments that are passed during the execution of the script:

  • -p specifies the Envoy port to which all TCP traffic will be redirected
  • -z specifies the port to which all inbound traffic to the Pod should be redirected
  • -u is the UID of the user for which the redirection is not to be applied
  • -m is the mode to be used for redirecting inbound connections
  • -I is a list of IP ranges in CIDR block destinations of outbound connections that need to be redirected to Envoy
  • -x is a list of CIDR block destinations of outbound connections that need to be exempted from being redirected to Envoy
  • -b is a list of inbound ports for which traffic needs to be redirected to Envoy
  • -d is a list of inbound ports that need to be excluded from being redirected to Envoy

To summarize the preceding arguments: the istio-init container executes the istio-iptables script, which creates iptables rules at the Pod level, that is, rules applied to all containers within the Pod. The script configures iptables rules that do the following:

  • All traffic should be redirected to port 15001
  • Any traffic to the Pod should be redirected to port 15006
  • This rule doesn’t apply to UID 1337
  • The mode for redirection to be used is REDIRECT
  • All outbound connections to any destination (*) should be redirected to 15001
  • No outbound destination is exempt from this rule
  • The redirection needs to happen for all inbound connections coming from any IP address, except when the destination ports are 15090, 15021, or 15020

We will dig deeper into this in Chapter 3, but for now, remember that the init container basically sets up iptables rules at the Pod level, which will redirect all traffic arriving at the productpage container on port 9080 to port 15006, while all traffic going out of the productpage container will be redirected to port 15001. Both ports 15001 and 15006 are exposed by the istio-proxy container, which is created from docker.io/istio/proxyv2:1.13.1 and runs alongside the productpage container. Along with 15001 and 15006, it also exposes ports 15090, 15021, and 15020.
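You don’t have to take the iptables setup on faith; the istio-init container prints the rules it applies, so you can review them in its logs (substitute your own Pod name, as the suffix will differ):

$ kubectl logs pod/productpage-v1-65b75f6885-57vnb -c istio-init -n bookinfons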

The istio-iptables.sh script can be found here: https://github.com/istio/cni/blob/master/tools/packaging/common/istio-iptables.sh.

You will also notice that both istio-init and istio-proxy are spun from the same Docker image, docker.io/istio/proxyv2:1.13.1. Inspect the Dockerfile here: https://hub.docker.com/layers/proxyv2/istio/proxyv2/1.13.4/images/sha256-1245211d2fdc0f86cc374449e8be25166b9d06f1d0e4315deaaca4d81520215e?context=explore. The Dockerfile gives more insight into how the image is constructed:

# BASE_DISTRIBUTION is used to switch between the old base distribution and distroless base images
..
ENTRYPOINT ["/usr/local/bin/pilot-agent"]

The entry point is an Istio utility called pilot-agent, which bootstraps Envoy to run as a sidecar when the proxy sidecar arguments are passed in the istio-proxy container. pilot-agent also sets up iptables during initialization when the istio-iptables argument is passed in the istio-init container.

More information on pilot-agent

You can find more details about pilot-agent by executing it (via kubectl exec) in any Pod that has an istio-proxy container. In the following command, we use the Ingress gateway Pod in the istio-system namespace:

$ kubectl exec -it po/istio-ingressgateway-569d7bfb4-8bzww -n istio-system -c istio-proxy -- /usr/local/bin/pilot-agent proxy router --help

As in the earlier section, you can still access the product page from your browser using kubectl port-forward:

$ kubectl port-forward svc/productpage 9080:9080 -n bookinfons
Forwarding from 127.0.0.1:9080 -> 9080
Forwarding from [::1]:9080 -> 9080
Handling connection for 9080

So far, we have looked at sidecar injection and what effects it has on Kubernetes resource deployments. In the following section, we will read about how Istio manages the Ingress and Egress of traffic.

Istio gateways

Rather than using port-forward, we can also make use of the Istio Ingress gateway to expose the application. Gateways manage inbound and outbound traffic to and from the mesh, giving you control over what enters and leaves it. Go ahead and run the following commands again to list the Pods in the istio-system namespace and discover the gateways that were installed during the Istio installation:

$ kubectl get pod -n istio-system
NAME                        READY   STATUS    RESTARTS   AGE
istio-egressgateway-76c96658fd-pgfbn   1/1     Running   0          5d18h
istio-ingressgateway-569d7bfb4-8bzww   1/1     Running   0          5d18h
istiod-74c64d89cb-m44ks                1/1     Running   0          5d18h
$ kubectl get po/istio-ingressgateway-569d7bfb4-8bzww -n istio-system -o json  | jq '.spec.containers[].image'
"docker.io/istio/proxyv2:1.13.1"
$ kubectl get po/istio-egressgateway-76c96658fd-pgfbn -n istio-system -o json  | jq '.spec.containers[].image'
"docker.io/istio/proxyv2:1.13.1"

You can see that the gateways are just another set of Envoy proxies running in the mesh. They are similar to the Envoy proxies deployed as sidecars in the Pods, but in a gateway, Envoy runs as a standalone container in the Pod, bootstrapped via pilot-agent with the proxy router arguments. Let’s investigate the Kubernetes descriptor of the Egress gateway:

$ kubectl get po/istio-egressgateway-76c96658fd-pgfbn -n istio-system -o json  | jq '.spec.containers[].args'
[
  "proxy",
  "router",
  "--domain",
  "$(POD_NAMESPACE).svc.cluster.local",
  "--proxyLogLevel=warning",
  "--proxyComponentLogLevel=misc:error",
  "--log_output_level=default:info"
]

Let’s look at the Gateway Services next:

$ kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                       AGE
istio-egressgateway    ClusterIP      10.97.150.168    <none>        80/TCP,443/TCP                                                                5d18h
istio-ingressgateway   LoadBalancer   10.100.113.119   <pending>     15021:31391/TCP,80:32295/TCP,443:31860/TCP,31400:31503/TCP,15443:31574/TCP   5d18h
istiod                 ClusterIP      10.110.59.167    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                         5d18h

Now, let’s try to make sense of ports for the Ingress gateway using the following command:

$ kubectl get svc/istio-ingressgateway -n istio-system -o json | jq '.spec.ports'
[
…
  {
    "name": "http2",
    "nodePort": 32295,
    "port": 80,
    "protocol": "TCP",
    "targetPort": 8080
  },
  {
    "name": "https",
    "nodePort": 31860,
    "port": 443,
    "protocol": "TCP",
    "targetPort": 8443
  },
  ….

You can see that the Ingress gateway Service accepts http2 and https traffic on node ports 32295 and 31860 from outside the cluster, and on ports 80 and 443 from inside the cluster. The http2 and https traffic is then forwarded to ports 8080 and 8443 on the underlying Ingress gateway Pods.

Let’s enable the Ingress gateway for the bookinfo service:

$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml -n bookinfons
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

Let’s look at the bookinfo virtual service definition:

$ kubectl describe virtualservice/bookinfo -n bookinfons
Name:         bookinfo
..
API Version:  networking.istio.io/v1beta1
Kind:         VirtualService
...
Spec:
  Gateways:
    bookinfo-gateway
  Hosts:
    *
  Http:
    Match:
      Uri:
        Exact:  /productpage
      Uri:
        Prefix:  /static
      Uri:
        Exact:  /login
      Uri:
        Exact:  /logout
      Uri:
        Prefix:  /api/v1/products
    Route:
      Destination:
        Host:  productpage
        Port:
          Number:  9080

The virtual service is not restricted to any particular hostname. It routes /productpage, /login, and /logout, as well as any URI with the /api/v1/products or /static prefix, to the productpage Service on port 9080. If you remember, 9080 is also the port exposed by the productpage Service. The spec.gateways field implies that this virtual service config applies to bookinfo-gateway, which we will investigate next:

$ kubectl describe gateway/bookinfo-gateway -n bookinfons
Name:         bookinfo-gateway
..
API Version:  networking.istio.io/v1beta1
Kind:         Gateway
..
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
..

The Gateway resource describes a load balancer receiving incoming and outgoing connections to and from the mesh. The preceding example first defines that the configuration should be applied to Pods with the istio: ingressgateway label (the Ingress gateway Pods in the istio-system namespace). The config is not bound to any hostnames, and it accepts connections on port 80 for HTTP traffic.
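For reference, the manifest we applied (samples/bookinfo/networking/bookinfo-gateway.yaml) contains roughly the following Gateway and VirtualService definitions. This is a trimmed sketch, so check the file in your download for the exact contents:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway    # apply this config to the Ingress gateway Pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway         # attach these routes to the gateway above
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /api/v1/products
    # ... the /static, /login, and /logout matches are trimmed here ...
    route:
    - destination:
        host: productpage
        port:
          number: 9080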

So, to summarize, you have a load balancer configuration defined in the form of a gateway along with routing configuration to backend in the form of virtual services. These configs are applied to a proxy Pod, which in this case is istio-ingressgateway-569d7bfb4-8bzww.

Go ahead and check the logs of the proxy Pod while opening the product page in the browser.

First, find the IP and the port (the HTTP2 port in the Ingress gateway service):

$ echo $(minikube ip)
192.168.64.6
$ echo $(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
32295

Fetch the products via the following URL: http://192.168.64.6:32295/api/v1/products. You can do this either in the browser or through curl.

Stream the log of the istio-ingressgateway Pod to stdout:

$ kubectl logs -f pod/istio-ingressgateway-569d7bfb4-8bzww -n istio-system
"GET /api/v1/products HTTP/1.1" 200 - via_upstream - "-" 0 395 18 16 "172.17.0.1" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36" "cfc414b7-10c8-9ff9-afa4-a360b5ad53b8" "192.168.64.6:32295" "172.17.0.10:9080" outbound|9080||productpage.bookinfons.svc.cluster.local 172.17.0.5:56948 172.17.0.5:8080 172.17.0.1:15370 - -

From the logs, you can infer that an inbound request GET /api/v1/products HTTP/1.1 arrived at 192.168.64.6:32295, which was then routed to 172.17.0.10:9080. This is the endpoint – that is, the IP address of the productpage Pod.

The following diagram illustrates the composition of the bookinfo Pods with injected istio-proxy sidecars and the Istio Ingress gateway.

Figure 2.3 – The BookInfo app with sidecar injection and the Istio Ingress gateway for traffic Ingress

Tip

If you are getting TLS errors, such as an expired certificate or any other OpenSSL error, then try restarting the BookInfo application and the Istio components using the following commands:

$ kubectl rollout restart deployment --namespace bookinfons

$ kubectl rollout restart deployment --namespace istio-system

I hope that by now you are familiar with the basic concepts of Istio and its installation on your workstation. In the next section, we will continue with the installation of add-on components in Istio.

Observability tools

Istio produces various metrics that can be fed into telemetry applications. The out-of-the-box installation ships with add-ons that include Kiali, Jaeger, Prometheus, and Grafana. Let’s take a look at them in the following sections.

Kiali

The first component we will look at is Kiali, the default management UI for Istio. We’ll start by enabling all the telemetry add-ons by running the following command:

$ kubectl apply -f samples/addons
serviceaccount/grafana created
…….
$ kubectl rollout status deployment/kiali -n istio-system
Waiting for deployment "kiali" rollout to finish: 0 of 1 updated replicas are available...
deployment "kiali" successfully rolled out

Once all the resources have been created and Kiali has successfully deployed, you can then open the dashboard of Kiali by using the following command:

$ istioctl dashboard kiali
http://localhost:20001/kiali

Kiali is very handy when you want to visualize or troubleshoot the mesh topology as well as underlying mesh traffic. Let’s take a quick look at some of the visualizations.

The Overview page provides an overview of all the namespaces in the cluster.

Figure 2.4 – The Kiali dashboard Overview section

You can click on the three dots in the top-right corner to dive further into that namespace and also to change the configuration for it.

Figure 2.5 – Istio configuration for a namespace on the Kiali dashboard

You can also check out individual applications, Pods, Services, and so on. One of the most interesting visualizations is Graph, which represents the flow of traffic in the mesh for a specified period.

Figure 2.6 – A versioned app graph on the Kiali dashboard

The preceding screenshot is of a versioned app graph, where multiple versions of an application are grouped together; in this case, it is a reviews app. We will look into this in much more detail in Chapter 8.

Jaeger

Another add-on is Jaeger. You can open the Jaeger dashboard with the following command:

$ istioctl dashboard jaeger
http://localhost:16686

The preceding command should open the Jaeger dashboard in your browser. Jaeger is open source, end-to-end, distributed transaction monitoring software. The need for such a tool will become apparent when we build and deploy a hands-on application in Chapter 4.

In the Jaeger dashboard, under Search, select any service whose traffic you are interested in. Once you select the service and click on Find Traces, you should be able to see all traces involving the details app in the bookinfons namespace.

Figure 2.7 – The Jaeger dashboard Search section

You can then click on any of the entries for further details:

Figure 2.8 – The Jaeger dashboard details section

You can see that the overall invocation took 69.91 ms. The details service was called by productpage and took 2.97 ms to return the response. You can then click on any of the services to see a detailed trace.

Prometheus

Next, we will look into Prometheus, which is also an open source monitoring system and time series database. Prometheus is used to capture metrics over time to track the health of the mesh and its constituents.

To start the Prometheus dashboard, use the following command:

$ istioctl dashboard prometheus
http://localhost:9090

This should open the Prometheus dashboard in your browser. With our installation, Prometheus is configured to collect metrics from istiod, the Ingress and Egress gateways, and the istio-proxy.

In the following example, we are checking the total requests handled by Istio for the productpage application.
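The query behind this kind of graph uses Istio’s standard request metric. A minimal sketch, assuming the default metric and label names emitted by Istio’s telemetry, would be something like the following; the exact label set can differ between Istio versions:

istio_requests_total{destination_app="productpage"}
sum(rate(istio_requests_total{destination_app="productpage"}[5m]))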

Figure 2.9 – The Istio total request on the Prometheus dashboard

Another add-on to look at is Grafana, which, like Kiali, is another visualization tool.

Grafana

To start the Grafana dashboard, use the following command:

$ istioctl dashboard grafana
http://localhost:3000

The following is a visualization of the total requests handled by Istio for productpage:

Figure 2.10 – The Grafana dashboard Explore section

The following is another visualization of the Istio performance metrics.

Figure 2.11 – The Grafana Istio Performance Dashboard

Note that by just applying the istio-injection=enabled label, we enabled the Service Mesh for the BookInfo application. Sidecars were injected automatically, and mTLS was enabled by default for communication between the different microservices of the application. Moreover, a plethora of monitoring tools provide information about the BookInfo application and its underlying microservices.
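If you want to confirm the mTLS behavior yourself rather than take it on trust, istioctl includes an experimental describe command that reports, among other things, whether traffic to a Pod’s ports uses mutual TLS. Treat the following as a sketch, as the command and its output differ between Istio releases, and substitute your own Pod name:

$ istioctl experimental describe pod productpage-v1-65b75f6885-57vnb -n bookinfons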

Istio architecture

Now that we have installed Istio, enabled it for the BookInfo application, and also analyzed its operation, it is time to consolidate what we have seen so far with a diagram. The following figure is a representation of the Istio architecture.

Figure 2.12 – Istio architecture

The Istio Service Mesh comprises a data plane and a control plane. The example we followed in this chapter installs both of them on one node; in a production or non-production environment, the Istio control plane would be installed on its own separate set of nodes. The control plane comprises the istiod component as well as a few other Kubernetes resources, which together are responsible for providing service discovery to the data plane, propagating configuration related to security and traffic management, and providing and managing identity and certificates for the data plane components.

The data plane is the part of the Service Mesh that consists of Istio proxies deployed alongside the application containers in the Pods. The Istio proxy is basically Envoy, an application-aware service proxy that mediates all network traffic between microservices based on instructions from the control plane. Envoy also collects various metrics and reports telemetry back to the various add-on tools.

Subsequent chapters will be dedicated to the control plane and data plane, in which we will dive deeper into understanding their functions and behavior.

Summary

In this chapter, we prepared a local environment and installed Istio using istioctl, the Istio command-line utility. We then enabled sidecar injection by applying the istio-injection=enabled label to the namespace that hosts the microservices.

We briefly looked at Kubernetes admission controllers and how mutating admission webhooks inject sidecars to the deployment API calls to the Kubernetes API server. We also read about gateways and looked at the sample Ingress and Egress gateways that are installed with Istio. The gateway is a standalone istio-proxy, aka an Envoy proxy, and is used to manage Ingress and Egress traffic to and from the mesh. Following this, we looked at how various ports are configured to be exposed on the Ingress gateway and how traffic is routed to upstream services.

Istio provides integration with various telemetry and observability tools. The first tool we looked at was Kiali, the visualization tool providing insight into traffic flows. It is also the management console for the Istio Service Mesh. Using Kiali, you can also perform Istio management functions such as checking/modifying various configurations and checking infrastructure status. After Kiali, we looked at Jaeger, Prometheus, and Grafana, all of which are open source and can be integrated easily with Istio.

The content of this chapter sets the foundations for and prepares you to deep dive into Istio in the upcoming chapters. In the next chapter, we will be reading about Istio’s control and data planes, taking a deep dive into their various components.
