12

Summarizing What We Have Learned and the Next Steps

Throughout this book, you have learned about and practiced various concepts of Service Mesh and how to apply them using Istio. It is strongly recommended that you practice the hands-on examples in each chapter. Don't limit yourself to the scenarios presented in this book; explore, tweak, and extend the examples, and apply them to the real-world problems you face in your organization.

In this chapter, we will revise the concepts discussed in this book by implementing Istio for the Online Boutique application. It is a good idea to look at the scenarios presented in this chapter and try to implement them yourself before looking at the code examples. I hope reading this last chapter gives you more confidence in using Istio. We will go through the following topics in this chapter:

  • Enforcing best practices using OPA Gatekeeper
  • Applying the learnings of this book to a sample Online Boutique application
  • Istio roadmap, vision, and documentation, and how to engage with the community
  • Certification, learning resources, and various pathways to learning
  • The Extended Berkeley Packet Filter

Technical requirements

The technical requirements in this chapter are similar to those of Chapter 4. We will be using AWS EKS to deploy a website for an online boutique store, which is an open source application available under Apache License 2.0 at https://github.com/GoogleCloudPlatform/microservices-demo.

Please check Chapter 4’s Technical requirements section to set up the infrastructure in AWS using Terraform, set up kubectl, and install Istio, including the observability add-ons. To deploy the Online Boutique store application, please use the deployment artifacts in the Chapter12/online-boutique-orig folder on GitHub.

You can deploy the Online Boutique store application using the following commands:

$ kubectl apply -f Chapter12/online-boutique-orig/00-online-boutique-shop-ns.yaml
namespace/online-boutique created
$ kubectl apply -f Chapter12/online-boutique-orig

The last command should deploy the Online Boutique application. After some time, you should be able to see all the Pods running:

$ kubectl get po -n online-boutique
NAME                                     READY   STATUS    RESTARTS   AGE
adservice-8587b48c5f-v7nzq               1/1     Running   0          48s
cartservice-5c65c67f5d-ghpq2             1/1     Running   0          60s
checkoutservice-54c9f7f49f-9qgv5         1/1     Running   0          73s
currencyservice-5877b8dbcc-jtgcg         1/1     Running   0          57s
emailservice-5c5448b7bc-kpgsh            1/1     Running   0          76s
frontend-67f6fdc769-r8c5n                1/1     Running   0          68s
paymentservice-7bc7f76c67-r7njd          1/1     Running   0          65s
productcatalogservice-67fff7c687-jrwcp   1/1     Running   0          62s
recommendationservice-b49f757f-9b78s     1/1     Running   0          70s
redis-cart-58648d854-jc2nv               1/1     Running   0          51s
shippingservice-76b9bc7465-qwnvz         1/1     Running   0          55s

The names of the workloads also reflect their roles in the Online Boutique application; you can find out more about this freely available open source application at https://github.com/GoogleCloudPlatform/microservices-demo.

For now, you can access the application via the following command:

$ kubectl port-forward svc/frontend 8080:80 -n online-boutique
Forwarding from 127.0.0.1:8080 -> 8079
Forwarding from [::1]:8080 -> 8079

You can then open it on the browser using http://localhost:8080. You should see something like the following:

Figure 12.1 – Online Boutique application by Google

This completes the technical setup required for code examples in this chapter. Let’s get into the main topics of the chapter. We will begin with setting up the OPA Gatekeeper to enforce Istio deployment best practices.

Enforcing workload deployment best practices using OPA Gatekeeper

In this section, we will deploy OPA Gatekeeper using our knowledge from Chapter 11. We will then configure OPA policies to enforce that every deployment has app and version as labels, and all port names have protocol names as a prefix:

  1. Install OPA Gatekeeper by following the instructions in Chapter 11, in the Automating best practices using OPA Gatekeeper section:
    $ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
  2. After deploying OPA Gatekeeper, you need to configure it to sync namespaces, Pods, services, and the Istio custom resources (gateways, virtual services, destination rules, policies, and service role bindings) into its cache. We will make use of the configuration file we created in Chapter 11:
    $ kubectl apply -f Chapter11/05-GatekeeperConfig.yaml
    config.config.gatekeeper.sh/config created
  3. Configure OPA Gatekeeper to apply the constraints. In Chapter 11, we configured constraints to enforce that Pods should have app and version numbers as labels (defined in Chapter11/gatekeeper/01-istiopodlabelconstraint_template.yaml and Chapter11/gatekeeper/01-istiopodlabelconstraint.yaml), and all port names should have a protocol name as a prefix (defined in Chapter11/gatekeeper/02-istioportconstraints_template.yaml and Chapter11/gatekeeper/02-istioportconstraints.yaml). Apply the constraints using the following commands:
    $ kubectl apply -f Chapter11/gatekeeper/01-istiopodlabelconstraint_template.yaml
    constrainttemplate.templates.gatekeeper.sh/istiorequiredlabels created
    $ kubectl apply -f Chapter11/gatekeeper/01-istiopodlabelconstraint.yaml
    istiorequiredlabels.constraints.gatekeeper.sh/mesh-pods-must-have-app-and-version created
    $ kubectl apply -f Chapter11/gatekeeper/02-istioportconstraints_template.yaml
    constrainttemplate.templates.gatekeeper.sh/allowedistioserviceportname created
    $ kubectl apply -f Chapter11/gatekeeper/02-istioportconstraints.yaml
    allowedistioserviceportname.constraints.gatekeeper.sh/port-name-constraint created
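To give a flavor of what such a constraint template contains, the following is a rough sketch of how the port-name check could be expressed as a Gatekeeper ConstraintTemplate with embedded Rego. This is an illustration only; the actual files in the Chapter11 folder may differ in names and details:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: allowedistioserviceportname
spec:
  crd:
    spec:
      names:
        kind: AllowedIstioServicePortName
      validation:
        # Parameters the constraint can pass in, such as the allowed prefixes
        openAPIV3Schema:
          type: object
          properties:
            prefixes:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package allowedistioserviceportname
        # Flag any service port whose name does not start with an allowed prefix
        violation[{"msg": msg}] {
          port := input.review.object.spec.ports[_]
          not allowed(port.name)
          msg := sprintf("port name %v must start with a protocol prefix", [port.name])
        }
        allowed(name) {
          prefix := input.parameters.prefixes[_]
          startswith(name, prefix)
        }
```

The corresponding constraint (of kind AllowedIstioServicePortName) would then supply the prefixes parameter with values such as http-, grpc-, and tcp-.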

This completes the deployment and configuration of OPA Gatekeeper. You should extend the constraints with any other checks you would like to include to ensure good hygiene of the workloads' deployment descriptors.

In the next section, we will redeploy the Online Boutique application with Istio sidecar injection enabled, discover the configurations that violate the OPA constraints, and resolve them one by one.

Applying our learnings to a sample application

In this section, we will apply the learnings of the book – specifically, the knowledge from Chapters 4 to 6 – to our Online Boutique application. Let’s dive right in!

Enabling Service Mesh for the sample application

Now that OPA Gatekeeper is in place with all the constraints we want it to enforce on deployments, it’s time to deploy the sample application. We will first undeploy the online-boutique application and then redeploy it with istio-injection enabled at the namespace level.

Undeploy the Online Boutique application by deleting the online-boutique namespace:

$ kubectl delete ns online-boutique
namespace "online-boutique" deleted

Once undeployed, let’s modify the namespace by adding the istio-injection: enabled label and redeploy the application. The updated namespace configuration will be as follows:

apiVersion: v1
kind: Namespace
metadata:
  name: online-boutique
  labels:
    istio-injection: enabled

The sample file is available at Chapter12/OPAGatekeeper/automaticsidecarinjection/00-online-boutique-shop-ns.yaml on GitHub.

With automatic sidecar injection enabled, let’s try to deploy the application using the following commands:

$ kubectl apply -f Chapter12/OPAGatekeeper/automaticsidecarinjection
namespace/online-boutique created
$ kubectl apply -f Chapter12/OPAGatekeeper/automaticsidecarinjection
Error from server (Forbidden): error when creating "Chapter12/OPAGatekeeper/automaticsidecarinjection/02-carts-svc.yml": admission webhook "validation.gatekeeper.sh" denied the request: [port-name-constraint] All services declaration must have port name with one of following  prefix http-, http2-,https-,grpc-,grpc-web-,mongo-,redis-,mysql-,tcp-,tls-

There will be errors caused by the constraint violations flagged by OPA Gatekeeper. The output in the preceding example is truncated to avoid repetition, but from the output in your terminal, you will notice that all deployments are in violation and hence no resources are deployed to the online-boutique namespace.

Try to fix the constraint violations by applying the correct labels and naming the ports correctly, as suggested by Istio best practices.

You need to apply app and version labels to all deployments. The following is an example for a frontend deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: online-boutique
spec:
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        version: v1

Similarly, you need to add a name to all port definitions in the service declarations. The following is an example for the frontend service:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: online-boutique
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
  - name: http-frontend
    port: 80
    targetPort: 8080

For your convenience, the updated files are available in Chapter12/OPAGatekeeper/automaticsidecarinjection. Deploy the Online Boutique application using the following command:

$ kubectl apply -f Chapter12/OPAGatekeeper/automaticsidecarinjection

With that, we have practiced the deployment of the Online Boutique application in the Service Mesh. You should now have the Online Boutique application, with automatic sidecar injection, deployed in your cluster. The application is part of the Service Mesh but is not yet fully ready for it. In the next section, we will apply the learnings of Chapter 4 on managing application traffic.

Configuring Istio to manage application traffic

In this section, using the learnings from Chapter 4, we will configure the Service Mesh to manage application traffic for the Online Boutique application. We will first start with configuring the Istio Ingress gateway to allow traffic inside the mesh.

Configuring Istio Ingress Gateway

In Chapter 4, we read that a gateway is like a load balancer on the edge of the mesh that accepts incoming traffic that is then routed to underlying workloads.

In the following source code block, we have defined the gateway configuration:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: online-boutique-ingress-gateway
  namespace: online-boutique
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "onlineboutique.com"

The file is also available in Chapter12/trafficmanagement/01-gateway.yaml on GitHub. Apply the configuration using the following command:

$ kubectl apply -f Chapter12/trafficmanagement/01-gateway.yaml
gateway.networking.istio.io/online-boutique-ingress-gateway created

Next, we need to configure VirtualService to route traffic for the onlineboutique.com host to the corresponding frontend service.

Configuring VirtualService

VirtualService is used to define routing rules for each host specified in the gateway configuration. A VirtualService is associated with a gateway and with the hostnames managed by that gateway. In a VirtualService, you can define rules on how traffic is matched and, if matched, where it should be routed.

The following source code block defines a VirtualService that matches any traffic handled by online-boutique-ingress-gateway with the hostname onlineboutique.com. If matched, the traffic is routed to subset v1 of the destination service named frontend:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: onlineboutique-frontend-vs
  namespace: online-boutique
spec:
  hosts:
  - "onlineboutique.com"
  gateways:
  - online-boutique-ingress-gateway
  http:
  - route:
    - destination:
        host: frontend
        subset: v1

The configuration is available in Chapter12/trafficmanagement/02-virtualservice-frontend.yaml on GitHub.

Next, we will configure DestinationRule, which defines how the request will be handled by the destination.

Configuring DestinationRule

Though it might appear unnecessary when there is only one version of a workload, DestinationRule is used for defining traffic policies such as load balancing policies, connection pool policies, outlier detection policies, and so on, and it becomes essential when you have more than one version of a workload. The following code block configures DestinationRule for the frontend service:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend
  namespace: online-boutique
spec:
  host: frontend
  subsets:
  - name: v1
    labels:
      app: frontend

The configuration is available along with the VirtualService configuration in Chapter12/trafficmanagement/02-virtualservice-frontend.yaml on GitHub.

Next, let’s create VirtualService and DestinationRule by using the following commands:

$ kubectl apply -f Chapter12/trafficmanagement/02-virtualservice-frontend.yaml
virtualservice.networking.istio.io/onlineboutique-frontend-vs created
destinationrule.networking.istio.io/frontend created

You should now be able to access the Online Boutique store site from the web browser. You need to find the public IP of the AWS load balancer exposing the Ingress gateway service – do not forget to add a Host header using the ModHeader extension to Chrome, as discussed in Chapter 4 and as seen in the following screenshot:

Figure 12.2 – ModHeader extension with Host header

Once the correct Host header is added, you can access the Online Boutique from Chrome using the AWS load balancer public DNS:

Figure 12.3 – Online Boutique landing page

So far, we have created only one virtual service, to route traffic from the Ingress gateway to the frontend service in the mesh. By default, Istio will send traffic to all respective microservices in the mesh, but as we discussed in the previous chapter, the best practice is to define routes via VirtualService and to specify how requests should be handled via destination rules. Following this best practice, we need to define VirtualService and DestinationRule for the remaining microservices; having them in place helps you manage traffic when there is more than one version of the underlying workloads.
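As an illustration, a minimal VirtualService and DestinationRule pair for one of the remaining services (here, cartservice) might look like the following sketch; the names and structure follow the patterns used earlier in this chapter, and the actual files on GitHub may differ in detail:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cartservice
  namespace: online-boutique
spec:
  hosts:
  - cartservice
  http:
  - route:
    - destination:
        host: cartservice
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: cartservice
  namespace: online-boutique
spec:
  host: cartservice
  subsets:
  - name: v1
    labels:
      version: v1
```

With this in place, adding a v2 of cartservice later only requires a new subset and a weighted route, with no change to the calling services.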

For your convenience, VirtualService and DestinationRule are already defined in the Chapter12/trafficmanagement/03-virtualservicesanddr-otherservices.yaml file on GitHub. You can apply the configuration using the following command:

$ kubectl apply -f Chapter12/trafficmanagement/03-virtualservicesanddr-otherservices.yaml

After applying the configuration and generating some traffic, check out the Kiali dashboard:

Figure 12.4 – Versioned app graph for the Online Boutique shop

In the Kiali dashboard, you can observe the Ingress gateway, all virtual services, and underlying workloads.

Configuring access to external services

Next, let’s quickly revise the concepts of routing traffic to destinations outside the mesh. In Chapter 4, we learned about ServiceEntry, which enables us to add additional entries to Istio’s internal service registry so that services in the mesh can route traffic to endpoints that are not part of the Istio service registry. The following is an example of a ServiceEntry adding xyz.com to the Istio service registry:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: allow-egress-to-xyz.com
spec:
  hosts:
  - "xyz.com"
  ports:
  - number: 80
    protocol: HTTP
    name: http
  - number: 443
    protocol: HTTPS
    name: https

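ServiceEntry is especially relevant when the mesh is configured to block outbound traffic to unknown hosts. As a sketch, assuming Istio was installed via the IstioOperator API, the outbound traffic policy can be restricted so that only hosts in the service registry (including those added via ServiceEntry) are reachable:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    # Block egress to any destination that is not in Istio's service registry
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
```

With the default ALLOW_ANY mode, traffic to unknown external hosts is simply passed through.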
This concludes the section on managing application traffic, in which we exposed onlineboutique.com via Istio Ingress Gateway and defined VirtualService and DestinationRule for routing and handling traffic in the mesh.

Configuring Istio to manage application resiliency

Istio provides various capabilities to manage application resiliency, and we discussed them in great detail in Chapter 5. We will apply some of the concepts from that chapter to the Online Boutique application.

Let’s start with timeouts and retries!

Configuring timeouts and retries

Let’s assume that the email service suffers from intermittent failures and that it is prudent to time out after 5 seconds if a response is not received from the email service, and then retry sending the email a few times rather than aborting. We will configure retries and a timeout for the email service to revise the application resiliency concepts.

Istio provides a provision to configure timeouts, which is the amount of time that an Istio-proxy sidecar should wait for replies from a given service. In the following configuration, we have applied a timeout of 5 seconds for the email service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  namespace: online-boutique
  name: emailvirtualservice
spec:
  hosts:
  - emailservice
  http:
  - timeout: 5s
    route:
    - destination:
        host: emailservice
        subset: v1

Istio also provides automated retries, which are configured as part of the VirtualService configuration. In the following source code block, we have configured Istio to retry requests to the email service twice, with each retry timing out after 2 seconds, and with retries happening only if one of the 5xx,gateway-error,reset,connect-failure,refused-stream,retriable-4xx errors is returned from the upstream service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  namespace: online-boutique
  name: emailvirtualservice
spec:
  hosts:
  - emailservice
  http:
  - timeout: 5s
    route:
    - destination:
        host: emailservice
        subset: v1
    retries:
      attempts: 2
      perTryTimeout: 2s
      retryOn: 5xx,gateway-error,reset,connect-failure,refused-stream,retriable-4xx

We have configured the timeout and retries via the VirtualService configuration. Continuing with the assumption that the email service is fragile and suffers intermittent failures, let’s try to mitigate any potential issues caused by a traffic surge or spike.

Configuring rate limiting

Istio provides controls to handle a surge of traffic from consumers, as well as to control the traffic to match consumers’ capability to handle the traffic.

In the following destination rule, we define rate-limiting controls for the email service: the maximum number of active requests to the email service will be 1 (as per http2MaxRequests), there will be only 1 request per connection (as defined in maxRequestsPerConnection), and there will be 0 requests queued while waiting for a connection from the connection pool (as defined in http1MaxPendingRequests):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  namespace: online-boutique
  name: emaildr
spec:
  host: emailservice
  trafficPolicy:
      connectionPool:
        http:
          http2MaxRequests: 1
          maxRequestsPerConnection: 1
          http1MaxPendingRequests: 0
  subsets:
  - name: v1
    labels:
      version: v1
      app: emailservice

Let’s make some more assumptions and assume that there are two versions of the email service, with v1 being more error-prone than the other, v2. In such scenarios, we need to apply outlier detection policies to perform circuit breaking. Istio provides good controls for outlier detection. The following code block describes the config you need to add under trafficPolicy in the corresponding destination rule for the email service:

      outlierDetection:
        baseEjectionTime: 5m
        consecutive5xxErrors: 1
        interval: 90s
        maxEjectionPercent: 50

In the outlier detection, we have defined baseEjectionTime with a value of 5 minutes, which is the minimum duration per ejection; it is multiplied by the number of times an email service instance is found to be unhealthy. For example, if the v1 email service is found to be an outlier 5 times, it will be ejected from the connection pool for baseEjectionTime*5. Next, we have defined consecutive5xxErrors with a value of 1, which is the number of 5xx errors that need to occur for the upstream to qualify as an outlier. Then, we have defined interval with a value of 90s, which is the time between the checks in which Istio scans the upstream hosts for their health status. Finally, we have defined maxEjectionPercent with a value of 50%, which is the maximum percentage of hosts in the load balancing pool that can be ejected.
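Putting the two fragments together, and adding a v2 subset for the assumed second version of the email service, the complete destination rule would look roughly like this (the v2 subset is an assumption for illustration):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  namespace: online-boutique
  name: emaildr
spec:
  host: emailservice
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 1
        maxRequestsPerConnection: 1
        http1MaxPendingRequests: 0
    # Eject unhealthy instances from the pool to act as a circuit breaker
    outlierDetection:
      baseEjectionTime: 5m
      consecutive5xxErrors: 1
      interval: 90s
      maxEjectionPercent: 50
  subsets:
  - name: v1
    labels:
      version: v1
      app: emailservice
  - name: v2
    labels:
      version: v2
      app: emailservice
```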

With that, we revised and applied various controls for managing application resiliency for the Online Boutique application. Istio provides various controls for managing application resiliency without needing to modify or build anything specific in your application. In the next section, we will apply the learning of Chapter 6 to our Online Boutique application.

Configuring Istio to manage application security

Now that we have created Ingress via Istio Gateway, routing rules via Istio VirtualService, and DestinationRules to handle how traffic will be routed to the end destination, we can move on to the next step of securing the traffic in the mesh. The following policy enforces that all traffic in the mesh should strictly happen over mTLS:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: strictmtls-online-boutique
  namespace: online-boutique
spec:
  mtls:
    mode: STRICT

The configuration is available in the Chapter12/security/strictMTLS.yaml file on GitHub. Without this configuration, all the traffic in the mesh happens in PERMISSIVE mode, which means that the traffic can flow over mTLS as well as in plain text. You can validate this by deploying a curl Pod and making an HTTP call to any of the microservices in the mesh. Once you apply the policy, Istio will enforce STRICT mode, which means mTLS will be strictly enforced for all traffic. Apply the configuration using the following command:

$ kubectl apply -f Chapter12/security/strictMTLS.yaml
peerauthentication.security.istio.io/strictmtls-online-boutique created
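To verify the enforcement yourself, you can deploy a throwaway curl Pod in a namespace that does not have sidecar injection enabled (the Pod name and image here are illustrative) and make a plain-text call to one of the services:

```yaml
# Runs outside the mesh because the default namespace has no istio-injection label
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  namespace: default
spec:
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sleep", "3600"]
```

Before the STRICT policy is applied, kubectl exec curl-test -- curl -s http://frontend.online-boutique should succeed; after applying it, the same plain-text request should be rejected.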

You can check in Kiali that all traffic in the mesh is happening over mTLS:

Figure 12.5 – App graph showing mTLS communication between services

Next, we will secure the Ingress traffic using HTTPS. This step is important to revise, but its outcome creates a problem in accessing the application, so we will perform the steps to revise the concepts and then revert them so that we can continue accessing the application.

We will use the learning from Chapter 4’s Exposing Ingress over HTTPS section. The steps are much easier if you have a Certificate Authority (CA) and a registered DNS name, but if not, simply follow these steps to create a certificate to be used for the onlineboutique.com domain:

  1. Create a CA. Here, we are creating a CA with a Common Name (CN) as onlineboutique.inc:
    $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=Online Boutique./CN=onlineboutique.inc' -keyout onlineboutique.inc.key -out onlineboutique.inc.crt
    Generating a 2048 bit RSA private key
    writing new private key to 'onlineboutique.inc.key'
  2. Generate a Certificate Signing Request (CSR) for the Online Boutique. Here, we are generating a CSR for onlineboutique.com, which also generates a private key:
    $ openssl req -out onlineboutique.com.csr -newkey rsa:2048 -nodes -keyout onlineboutique.com.key -subj "/CN=onlineboutique.com/O=onlineboutique.inc"
    Generating a 2048 bit RSA private key
    ...........................................................................+++
    .........+++
    writing new private key to 'onlineboutique.com.key'
  3. Sign the CSR using the CA using the following command:
    $ openssl x509 -req -sha256 -days 365 -CA onlineboutique.inc.crt -CAkey onlineboutique.inc.key -set_serial 0 -in onlineboutique.com.csr -out onlineboutique.com.crt
    Signature ok
    subject=/CN=onlineboutique.com/O=onlineboutique.inc
  4. Load the certificate and private key as a Kubernetes Secret:
    $ kubectl create -n istio-system secret tls onlineboutique-credential --key=onlineboutique.com.key --cert=onlineboutique.com.crt
    secret/onlineboutique-credential created

We have created the certificate and stored it as a Kubernetes Secret. In the next steps, we will modify the Istio Gateway configuration to expose the traffic over HTTPS using the certificate.

  1. Update the Gateway configuration as described in the following code block:
    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: online-boutique-ingress-gateway
      namespace: online-boutique
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 443
          name: https
          protocol: HTTPS
        tls:
          mode: SIMPLE
          credentialName: onlineboutique-credential
        hosts:
        - "onlineboutique.com"

Apply the following configuration:

$ kubectl apply -f Chapter12/security/01-istio-gateway.yaml

You can access and check the certificate using the following commands. Please note that the output is truncated to highlight relevant sections only:

$ curl -v -HHost:onlineboutique.com --connect-to "onlineboutique.com:443:aced3fea1ffaa468fa0f2ea6fbd3f612-390497785.us-east-1.elb.amazonaws.com" --cacert onlineboutique.inc.crt --head  https://onlineboutique.com:443/
..
* Connected to aced3fea1ffaa468fa0f2ea6fbd3f612-390497785.us-east-1.elb.amazonaws.com (52.207.198.166) port 443 (#0)
--
* Server certificate:
*  subject: CN=onlineboutique.com; O=onlineboutique.inc
*  start date: Feb 14 23:21:40 2023 GMT
*  expire date: Feb 14 23:21:40 2024 GMT
*  common name: onlineboutique.com (matched)
*  issuer: O=Online Boutique.; CN=onlineboutique.inc
*  SSL certificate verify ok.
..

The configuration secures the Ingress traffic to the Online Boutique store, but it also means that you will not be able to access the store from the browser because of the mismatch between the FQDN used in the browser and the CN configured in the certificate. You can alternatively register a DNS name against the AWS load balancer, but for now, you may find it easier to remove the HTTPS configuration and revert to using the Chapter12/trafficmanagement/01-gateway.yaml file on GitHub.

Let’s dive deeper into security and perform request authentication and authorization for the Online Boutique store. In Chapter 6, we did an elaborate exercise of building authentication and authorization using Auth0. Along the same lines, we will build an authentication and authorization policy for the frontend service, but this time, we will use a dummy JWKS endpoint that ships with Istio.

We will start with creating a RequestAuthentication policy to define the authentication method supported by the frontend service:

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: frontend
  namespace: online-boutique
spec:
  selector:
    matchLabels:
      app: frontend
  jwtRules:
  - issuer: "[email protected]"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.17/security/tools/jwt/samples/jwks.json"

We are making use of dummy jwksUri, which comes along with Istio for testing purposes. Apply the RequestAuthentication policy using the following:

$ kubectl apply -f Chapter12/security/requestAuthentication.yaml
requestauthentication.security.istio.io/frontend created

After applying the RequestAuthentication policy, you can test that by providing a dummy token to the frontend service:

  1. Fetch the dummy token and set it as an environment variable to be used in requests later:
    TOKEN=$(curl -k https://raw.githubusercontent.com/istio/istio/release-1.17/security/tools/jwt/samples/demo.jwt -s); echo $TOKEN
    eyJhbGciOiJSUzI1NiIsImtpZCI6IkRIRmJwb0lVcXJZOHQyenBBMnFYZkNtcjVWTzVaRXI0UnpIVV8tZW52dlEiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjQ2ODU5ODk3MDAsImZvbyI6ImJhciIsImlhdCI6MTUzMjM4OTcwMCwiaXNzIjoidGVzdGluZ0BzZWN1cmUuaXN0aW8uaW8iLCJzdWIiOiJ0ZXN0aW5nQHNlY3VyZS5pc3Rpby5pbyJ9.CfNnxWP2tcnR9q0vxyxweaF3ovQYHYZl82hAUsn21bwQd9zP7c-LS9qd_vpdLG4Tn1A15NxfCjp5f7QNBUo-KC9PJqYpgGbaXhaGx7bEdFWjcwv3nZzvc7M__ZpaCERdwU7igUmJqYGBYQ51vr2njU9ZimyKkfDe3axcyiBZde7G6dabliUosJvvKOPcKIWPccCgefSj_GNfwIip3-SsFdlR7BtbVUcqR-yv-XOxJ3Uc1MI0tz3uMiiZcyPV7sNCU4KRnemRIMHVOfuvHsU60_GhGbiSFzgPTAa9WTltbnarTbxudb_YEOx12JiwYToeX0DCPb43W1tzIBxgm8NxUg
  2. Test using curl:
    $ curl -HHost:onlineboutique.com http://aced3fea1ffaa468fa0f2ea6fbd3f612-390497785.us-east-1.elb.amazonaws.com/ -o /dev/null --header "Authorization: Bearer $TOKEN" -s -w '%{http_code}
    '
    200

Notice that you received a 200 response.

  3. Now try testing with an invalid token:
    $ curl -HHost:onlineboutique.com http://aced3fea1ffaa468fa0f2ea6fbd3f612-390497785.us-east-1.elb.amazonaws.com/ -o /dev/null --header "Authorization: Bearer BLABLAHTOKEN" -s -w '%{http_code}
    '
    401

The RequestAuthentication policy kicked in and denied the request.

  4. Test without any token:
    $ curl -HHost:onlineboutique.com http://aced3fea1ffaa468fa0f2ea6fbd3f612-390497785.us-east-1.elb.amazonaws.com/ -o /dev/null  -s -w '%{http_code}
    '
    200

The outcome of the request is not desired but is expected, because the RequestAuthentication policy is only responsible for validating a token if one is passed. If there is no Authorization header in the request, the RequestAuthentication policy will not be invoked. We can solve this problem using AuthorizationPolicy, which enforces an access control policy for workloads in the mesh.

Let’s build AuthorizationPolicy, which enforces that a principal must be present in the request:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: online-boutique
spec:
  selector:
    matchLabels:
      app: frontend
  action: ALLOW
  rules:
  - from:
    - source:
       requestPrincipals: ["[email protected]/[email protected]"]

The configuration is available in the Chapter12/security/requestAuthorizationPolicy.yaml file on GitHub. Apply the configuration using the following command:

$ kubectl apply -f Chapter12/security/requestAuthorizationPolicy.yaml
authorizationpolicy.security.istio.io/require-jwt created

After applying the configuration, repeat Steps 1 to 4, which we performed after applying the RequestAuthentication policy. You will notice that all steps work as expected, except for Step 4, where we now get the following:

$ curl -HHost:onlineboutique.com http://aced3fea1ffaa468fa0f2ea6fbd3f612-390497785.us-east-1.elb.amazonaws.com/ -o /dev/null  -s -w '%{http_code}
'
403

That is because the authorization policy enforces the presence of a JWT with the ["[email protected]/[email protected]"] principal.

This concludes the security configuration for our Online Boutique application. In the next section, we will read about various resources that will help you become an expert and certified in using and operating Istio.

Certification and learning resources for Istio

The primary resource for learning Istio is the Istio website (https://istio.io/latest/). There is elaborate documentation covering everything from basic setups to multi-cluster installations. There are resources for beginners and advanced users, along with various exercises on traffic management, security, observability, extensibility, and policy enforcement. Outside of the Istio documentation, another organization providing a lot of supportive content on Istio is Tetrate (https://tetrate.io/), which also provides labs and certification courses. One such certification, provided by Tetrate Academy, is Certified Istio Administrator; details about the course and exam are available at https://academy.tetrate.io/courses/certified-istio-administrator. Tetrate Academy also provides a free course on Istio fundamentals; you can find the details at https://academy.tetrate.io/courses/istio-fundamentals. Similarly, there is a course from Solo.io named Get Started with Istio; you can find the details at https://academy.solo.io/get-started-with-istio. Another good course, from The Linux Foundation, is named Introduction to Istio; you can find the details at https://training.linuxfoundation.org/training/introduction-to-istio-lfs144x/.

I personally enjoy the learning resources available at https://istiobyexample.dev/; the site explains various use cases of Istio (such as canary deployment, managing Ingress, managing gRPC traffic, and so on) in great detail, along with configuration examples. For any technical questions, you can always head to Stack Overflow at https://stackoverflow.com/questions/tagged/istio. There is an energetic and enthusiastic community of Istio users and builders discussing various topics about Istio at https://discuss.istio.io/; feel free to sign up for the discussion board.

Tetrate Academy also provides a free course on Envoy fundamentals; the course is very helpful for understanding the fundamentals of Envoy and, in turn, the Istio data plane. You can find the details of this course at https://academy.tetrate.io/courses/envoy-fundamentals. The course is full of practical labs and quizzes that will help you master your Envoy skills.

The Istio website has compiled a list of helpful resources to keep you updated with Istio and engage with the Istio community; you can find the list at https://istio.io/latest/get-involved/. The list also provides you with details on how to report bugs and issues.

To summarize, although dedicated Istio resources are limited to a few books and websites, you will find answers to most of your questions at https://istio.io/latest/docs/. It is also a great idea to follow IstioCon, the Istio community conference, which happens on a yearly cadence. You can find the sessions of IstioCon 2022 at https://events.istio.io/istiocon-2022/sessions/ and those of 2021 at https://events.istio.io/istiocon-2021/sessions/.

Understanding eBPF

As we are at the end of this book, it is important to also look at other technologies that are relevant to Service Mesh. One such technology is the Extended Berkeley Packet Filter (eBPF). In this section, we will read about eBPF and its role in Service Mesh evolution.

eBPF is a framework that allows users to run custom programs within the kernel of the operating system without needing to change kernel source code or load kernel modules. These custom programs, called eBPF programs, add capabilities to the operating system at runtime. eBPF programs are safe and efficient; like kernel modules, they run in a privileged context within the operating system, but they do so inside a lightweight, sandboxed virtual machine.

eBPF programs are triggered by events happening at the kernel level, which is achieved by attaching them to hook points. Hooks are predefined in the kernel and include system calls, network events, function entry and exit, and so on. In scenarios where an appropriate hook doesn't exist, users can make use of kernel probes, also called kprobes. A kprobe is inserted into a kernel routine; an eBPF program is registered as the kprobe's handler and is executed whenever that breakpoint is hit in the kernel. Similarly, eBPF programs can also be attached to uprobes, which are probes at the user-space level tied to events in user applications. eBPF programs can thus be executed at any level, from the kernel to the user application.

When executing programs at the kernel level, the biggest concern is safety, which in eBPF is assured by the BPF libraries. These libraries handle the system call that loads eBPF programs in two steps. The first step is verification, during which the eBPF program is validated to ensure that it will run to completion and not lock up the kernel, that the process loading it has the correct privileges, and that it will not harm the kernel in any way. The second step is Just-In-Time (JIT) compilation, which translates the program's generic bytecode into machine-specific instructions and optimizes it for maximum execution speed. This makes eBPF programs run as efficiently as natively compiled kernel code or code loaded as a kernel module. Once both steps are complete, the eBPF program is loaded into the kernel, waiting for its hook or kprobe to trigger execution.
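To make the hook and kprobe mechanism concrete, the following is a small illustrative sketch using the bpftrace and bpftool utilities (not tools used elsewhere in this book; both must be installed separately and run as root, and the kprobe name shown exists in recent Linux kernels but varies by kernel version):

```shell
# Attach a tiny eBPF program to a kprobe on the kernel's file-open routine;
# the program prints the name of each process that opens a file.
sudo bpftrace -e 'kprobe:do_sys_openat2 { printf("%s opened a file\n", comm); }'

# In another terminal, list the eBPF programs currently loaded in the kernel.
sudo bpftool prog show
```

The one-liner goes through exactly the two loading steps described above: bpftrace compiles the program to BPF bytecode, the kernel verifies it, and the JIT compiles it to native instructions before the kprobe starts triggering it.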

eBPF has been widely used as an add-on to the kernel, mostly at the network level and in the observability space. It has been used to provide visibility into system calls at the packet and socket levels, which is then used to build security solutions that operate with low-level context from the kernel. eBPF programs are also used for introspection of user applications along with the parts of the kernel running them, providing consolidated insight for troubleshooting application performance issues.

You might be wondering why we are discussing eBPF in the context of Service Mesh. The programmability and plugin model of eBPF is particularly useful in networking: eBPF can perform IP routing, packet filtering, monitoring, and so on at the native speed of kernel modules. One of the drawbacks of the Istio architecture is its model of deploying a sidecar with every workload, as we discussed in Chapter 2. The sidecar works by intercepting network traffic, making use of iptables to configure the kernel's netfilter packet filtering functionality. The drawback of this approach is suboptimal performance, as the data path created for service traffic is much longer than it would be if the workload ran by itself without any sidecar traffic interception. With eBPF socket-related program types, you can filter socket data, redirect socket data, and monitor socket events. These programs have the potential to replace iptables-based traffic interception; using eBPF, there are options to intercept and manage network traffic without incurring any negative impact on data path performance.
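To illustrate the interception being replaced, the following is a simplified sketch of the kind of NAT rules that Istio's init container programs into the Pod's network namespace. The real rules use dedicated ISTIO_* chains with exclusions for Envoy's own traffic, so this is a conceptual fragment rather than something to apply as-is; ports 15001 and 15006 are Istio's default outbound and inbound Envoy listener ports:

```shell
# Conceptual sketch of sidecar traffic interception (do not apply as-is):
# every TCP packet leaving or entering the Pod is detoured through Envoy.
iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-ports 15001      # outbound -> Envoy
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-ports 15006  # inbound  -> Envoy
```

Every packet traversing these netfilter rules pays the cost of the longer data path; eBPF socket programs can short-circuit this detour by redirecting data between sockets directly.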

Isovalent (at https://isovalent.com/) is one such organization that is revolutionizing the architecture of API Gateways and Service Mesh. Cilium is a product from Isovalent, and it provides a variety of functionality, including API Gateway functions, Service Mesh, observability, and networking. Cilium is built with eBPF as its core technology: it injects eBPF programs at various points in the Linux kernel to achieve application networking, security, and observability functions. Cilium is being adopted in Kubernetes networking to address the performance degradation caused by packets traversing the same network stack multiple times between the host and the Pod. Cilium solves this problem by bypassing iptables in the networking stack, avoiding netfilter and other overheads associated with iptables, which has led to significant gains in network performance. You can read more about the Cilium product stack at https://isovalent.com/blog/post/cilium-release-113/; you will be amazed to see how eBPF is revolutionizing the application networking space.

There is also an open source project called Merbridge, which replaces iptables with eBPF programs to transport data directly between the inbound and outbound sockets of sidecar containers and application containers, shortening the overall data path. Merbridge is in its early days but has produced some promising results; you can find the open source project at https://github.com/merbridge/merbridge.

With eBPF and products like Cilium, it is highly likely that there will be an advancement in how network proxy-based products will be designed and operated in the future. eBPF is being actively explored by various Service Mesh technologies, including Istio, on how it can be used to overcome drawbacks and improve the overall performance and experience of using Istio. eBPF is a very promising technology and is already being used for doing awesome things with products such as Cilium and Calico.

Summary

I hope this book has provided you with good insight into Istio. Chapters 1 to 3 set the context for why Service Mesh is needed and explained how the Istio control and data planes operate. The information in these three chapters is important for appreciating Istio and building an understanding of its architecture. Chapters 4 to 6 then provided details on how to use Istio to build the application network that we discussed in the earlier chapters.

Then, in Chapter 7, you learned about observability and how Istio integrates with various observability tools; as a next step, you should explore integration with other observability and monitoring tools such as Datadog. Following that, Chapter 8 showed how to deploy Istio across multiple Kubernetes clusters, which should have given you confidence in installing Istio in production environments. Chapter 9 then provided details on how Istio can be extended using WebAssembly and its applications, while Chapter 10 discussed how Istio helps bridge the old world of virtual machines with the new world of Kubernetes by showing how the Service Mesh can be extended to include workloads deployed on virtual machines. Lastly, Chapter 11 covered the best practices for operating Istio and how tools such as OPA Gatekeeper can be used to automate some of them.

In this chapter, we revised the concepts of Chapters 4 to 6 by deploying and configuring another open source demo application, which should have given you the confidence and experience to apply the learnings from this book to real-life applications and to take advantage of the application networking and security provided by Istio.

You also read about eBPF and what a game-changing technology it is, making it possible to write code at the kernel level without needing to understand or experience the horrors of kernel development. eBPF will likely bring many changes to how Service Mesh, API Gateway, and networking solutions in general operate. In the Appendix of this book, you will find information about other Service Mesh technologies: Consul Connect, Kuma Mesh, Gloo Mesh, and Linkerd. The Appendix provides a good overview of these technologies and helps you appreciate their strengths and limitations.

I hope you enjoyed learning about Istio. To consolidate your knowledge of Istio, you can also explore taking the Certified Istio Administrator exam provided by Tetrate, along with the other learning avenues described in this chapter. I hope reading this book takes you to the next level in your career and in your experience of building scalable, resilient, and secure applications using Istio.

BEST OF LUCK!
