Chapter 13. Advanced Scenarios

Single-cluster service mesh deployments might be all that is required in some environments. But other environments might call for multiple clusters joined into a single, global service mesh, or for a federation of independent service mesh deployments. Such environments often also need to account for existing monoliths or other external services. And although we’re all excited to see the day when microservices rule the world and monolithic applications are relegated to the dusty pages of history, we’re not there yet. The Istio project understands this and supports a variety of deployment and configuration models.

This chapter reviews a handful of the more common advanced topologies. Advanced topologies are useful in environments where the focus isn’t strictly on geographically proximate microservices, or in environments that distribute service meshes across regions or providers.

Types of Advanced Topologies

Although numerous topology configurations are possible, here we discuss a core few that you can morph into other configurations. Let’s categorize these few into two foundational topologies deployed across a single cluster or multiple clusters.

Single-Cluster Meshes

An advanced single-cluster topology is that of mesh expansion. Mesh expansion is a topology that includes running traditional, nonmicroservice workloads (so, monolithic apps) on bare metal or VMs (or both) on your Istio service mesh. Though these apps don’t receive all of the benefits offered by the service mesh, incorporating them into the service mesh does allow you to begin to gain insight into and control over how these services are communicating with one another. It lays the groundwork for a migration into a cloud native architecture or for divvying workload across Kubernetes and non-Kubernetes nodes.

We dig into that a bit more later in this chapter. The point here is to associate mesh expansion with onboarding brownfield applications onto a mesh where you can observe the traffic, begin breaking up a monolith by “strangling” it by siphoning off traffic through route rules, or test these services when they’re no longer running on the VM or host machine.
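To make this concrete, the following is a minimal sketch of how a traditional workload might be registered with the mesh, assuming a hypothetical legacy billing service running on a VM; the hostname, IP address, and port are placeholders, and a full mesh expansion setup also involves installing the Istio sidecar and node agent on the VM itself:

# Sketch only: registering a hypothetical VM-hosted service with the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: legacy-billing-vm
spec:
  hosts:
  - billing.legacy.internal        # name that services in the mesh will call
  location: MESH_INTERNAL          # treat the VM workload as part of the mesh
  ports:
  - name: http
    number: 8080
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.0.0.12             # the VM's routable IP address (placeholder)
    ports:
      http: 8080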

Multiple-Cluster Meshes

Two other types of advanced topologies are the Istio multicluster and cross-cluster models. We put these two topologies into the same multiple-cluster-mesh (federation) category because, in essence, they intend to solve the same problem of intercluster communication. They provide for communication between disparate Kubernetes clusters running individual service meshes, unifying them either under one control plane or across two control planes. But this is where language becomes a bit tricky and confusion can creep in.

To keep it simple, we’ll summarize these two approaches now and examine them further later in the chapter.

Istio multicluster (single mesh)

Istio multicluster (single mesh) is a centralized approach for connecting multiple clusters into a single service mesh. You do this by selecting one cluster to serve as the master cluster, with the others joining as remote clusters. We use local and remote as the terms to label which cluster’s data plane is local to the centralized control plane and which cluster’s data plane is remote from it.

A single-control-plane Istio deployment can span across multiple clusters as long as there is network connectivity between them and no IP address range overlap. Figure 13-1 illustrates how Istio v1.0 supports this flat-network model across clusters. You can extend Istio v1.0 to nonflat networks, where there is IP address conflict between clusters.

Using Network Address Translation (NAT) and a combination of VPNs, Istio Gateways, or other network services, you can conjoin clusters into the same administrative domain (under the same single control plane). It’s possible to federate multiple control planes in v1.0, but this requires a lot of manual tweaking and configuration within Istio. Irrespective of the approach to networking clusters together, to enable service name resolution and verifiable identities, all namespaces, services, and service accounts need to be identically defined in each cluster.

Figure 13-1. The Istio multicluster approach: a single-control-plane Istio deployment with direct connection (flat networking) across clusters

In Istio v1.1, flat networking is no longer required. Two additional features have emerged that enable different multicluster scenarios:

Split horizon (EDS)

Pilot implements Envoy’s EDS API and uses it to configure service proxies in the data plane. With split-horizon EDS, Pilot presents each connected service proxy with only the endpoints relevant to that proxy’s cluster, enabling Istio to route requests to different endpoints depending on the location of the request’s source. Istio Gateways intercept and parse TLS handshakes and use SNI data to decide destination service endpoints.

SNI-based routing

This uses the SNI TLS extension to make routing decisions for intercluster connectivity and communications.

As you saw in Chapter 5, EDS is part of Envoy’s API. Split horizon is a networking concept in which routing loops are avoided by prohibiting a router from advertising a route back onto the interface from which it was learned. So, in Istio’s case, as Pilot configures the service proxies in the data plane with service and endpoint information, it does so with information about endpoints relevant to the cluster where the connected service proxies run.
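To sketch how the SNI-based routing piece is typically expressed, the intercluster gateway can be configured to pass TLS traffic through based on its SNI value rather than terminating it. The following Gateway is illustrative only (the name is ours, and your installation values may already generate an equivalent resource):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*.local'                  # match in-mesh service hostnames carried in SNI
    port:
      name: tls
      number: 443
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH     # route on SNI without terminating TLS at the gateway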

While naming of services, ServiceAccounts, and so on needs to be consistent across clusters, v1.1 improves Istio’s awareness of clusters and locality. Kubernetes labels and annotations are used to facilitate cluster awareness both on a per-network basis (associating each network with a given cluster) and a geographic basis for more intelligent, locality-based load balancing.

Upon initialization, each cluster is assigned a network label, inherently associating a given service instance with a cluster. Usually, you’d use a different label value for each cluster; however, this is configurable to the extent that multiple clusters might belong to the same logical network, and so should be directly routable with (ideally) low latency. Each cluster will also have an ingress Gateway, which shares the same network label value as other workloads in the cluster. Matching label values associate in-cluster service endpoints with that cluster’s ingress Gateway. Because it’s used only for intercluster communication, this ingress Gateway is ideally kept separate from the cluster’s main ingress and not exposed to end users.
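As a hedged sketch of how this network-to-cluster association can be expressed, Istio v1.1’s mesh configuration includes a meshNetworks stanza (shown here as Helm values); the network names, registry names, and gateway addresses below are placeholders:

# Sketch: associate each cluster's endpoints with a named network and that
# network's intercluster ingress gateway.
global:
  meshNetworks:
    network-a:
      endpoints:
      - fromRegistry: cluster-a      # endpoints discovered from cluster-a's registry
      gateways:
      - address: 192.0.2.10          # cluster-a's intercluster ingress gateway
        port: 443
    network-b:
      endpoints:
      - fromRegistry: cluster-b
      gateways:
      - address: 198.51.100.20
        port: 443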

Multicluster deployments combine multiple clusters into one logical unit, managed by one Istio control plane, which has a single logical view of all the services across the clusters. The implementation of this could be a set of control planes all synchronized with replicated configuration (typically using additional tooling, driven by shared CI/CD pipelines and/or GitOps practices) or one “physical” control plane that operates multiple data planes as one single service mesh. Either way, you have a set of clusters that are part of the same mesh.

Istio needs to know which cluster these workloads belong to. Akin to the use of Kubernetes labels for locality-based load balancing, Istio assigns each cluster a “network” label. These are again used to make Istio cluster-aware in terms of which cluster a given network is associated with. Typically, we would use a different label value for each cluster, but this can be tweaked if you know that multiple clusters are part of the same logical network (e.g., directly routable, low latency).

Istio cross-cluster (mesh federation)

Istio cross-cluster (mesh federation) is a decentralized approach to unifying service meshes. Each cluster runs its own control plane and data plane. You can have two or more clusters participating in the service mesh regardless of their region or cloud provider. Cross-cluster deployments allow for different configurations per service mesh, under different administrative domains, running in different regions. With those individual administrative domains in mind, an advantage of the mesh federation pattern is that you can selectively make connections between clusters and, in turn, selectively expose one cluster’s services to other clusters.

Use Cases

As you can see by the advanced configurations we’ve touched on, there are a slew of use cases. While keeping in mind our mantra that you should be able to secure, observe, control, and connect your services regardless of where they are running or what they are running on (public, private, or hybrid cloud), let’s outline what use cases these advanced models of deployment enable.

HA (cross-region)

With both multicluster and cross-cluster, you can enable a cross-region story. This means that you can have Kubernetes clusters deployed in two separate regions with service traffic being routed, securely, across those regions. It is also possible to fail over between regions with cross-cluster setups, so that when one region drops, your application doesn’t necessarily drop with it.
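As one hedged example of what this looks like in Istio v1.1, locality-aware load balancing can be tuned through the mesh’s Helm values; the region names below are placeholders, and failover also depends on outlier detection being configured for the destination:

# Sketch: prefer local endpoints, shift traffic to another region when the
# local region's endpoints become unhealthy. Region names are illustrative.
global:
  localityLbSetting:
    failover:
    - from: us-east
      to: us-west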

Cross-provider

Expanding on cross-region, both topologies can support multicloud setups between providers; however, the requirements to do this, and also cross-region, differ significantly. We dig into that further in a later section. But, simply put, Istio allows you to achieve multicloud service mesh deployments.

Deployment strategies

With a multicluster setup, you can put a canary online on lower-cost instances at a lower-cost provider. Imagine spinning up a canary at DigitalOcean to which you can pass 1% of traffic from your production environment running on IBM Cloud. This is possible with both topologies. Similar logic follows for strategies like A/B testing and blue/green deployments.
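A minimal sketch of such a split, assuming the canary cluster is reachable through a ServiceEntry host like the ones configured later in this chapter, is a weighted VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-canary-split            # illustrative name
  namespace: default
spec:
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews                   # the production service in the local cluster
      weight: 99
    - destination:
        host: svc.canary-cluster.remote # hypothetical remote canary cluster
        port:
          number: 443
      weight: 1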

Distributed tracing for the monolith

Using service mesh expansion, your monolithic app becomes less opaque. After you’ve expanded your mesh to include traditional applications running on VMs or metal, you can gather tracing data and more.

Migration

Using cross-cluster, you can move your service across regions or across providers, using Istio to control the routing of your service traffic. One of the more interesting migration scenarios is the ability to take brownfield applications and transition them piecemeal into Kubernetes. This gives your brownfield application a bit of cloud native varnish: it can now communicate, if you want it to, with new services that you’re deploying within the cluster.

All of these scenarios will begin to make sense as we dig deeper into each one later in the chapter. After you have finished the exercises, you should have a basic understanding of how to set up each and how each works. We will be using the Bookinfo sample application throughout to illustrate the differences.

Choosing a Topology

Each deployment topology design comes with implementation concessions. The approach you select likely will be, and probably should be, dictated by where your data (or compute) lives. If you’re using only clusters in the public cloud, cross-cluster might make better sense. If you’re running some on-premises clusters alongside a cluster or two in the public cloud, multicluster might make sense. This is not to say that you can’t use cross-cluster between on-premises environments and public cloud providers, especially as on-premises begins to model itself on how the public cloud is delivered (for example, solutions like NetApp HCI, Azure Stack, and GKE On-Prem).

Cross-Cluster or Multicluster?

Let’s dive into the deep end of the pool. As we explained previously, the best way to think about Istio multicluster versus cross-cluster is to think centralized versus decentralized control planes, respectively. Over time, these have become the two dominant approaches to connecting multiple Kubernetes clusters running a service mesh together. Both have their advantages and disadvantages. Let’s walk through those pros and cons, beginning with multicluster.

Each data plane, whether local to or remote from the central control plane, must have connectivity with management components like Pilot and Mixer. Local and remote data planes also need to be able to push telemetry to the central control plane. All clusters participating in the service mesh must have unique network ranges and be routable among themselves. A common approach to facilitating connectivity across providers or across regions is to use private tunnels between clusters. Depending on your environment, this could be a VPN between on-premises cluster(s) and provider or across provider regions. Rancher’s Submariner or Amazon VPC peering are two example technologies that you might use. More and more people are using capabilities inherent to Istio itself by taking advantage of secure, gateway-to-gateway communication.

An Istio v1.1 multicluster environment requires that only one cluster run the majority of the control-plane components, with communication routed to those components from the remote installations. The remote clusters are set up with automatic sidecar injection and Citadel (which must share a root CA). You can extend the networking to support nonflat networks using NAT or a VPN.
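As a hedged illustration of how the shared control plane is made aware of a remote cluster in v1.1, the remote cluster’s credentials are provided to Pilot as a kubeconfig stored in a Secret labeled istio/multiCluster=true in the istio-system namespace of the master cluster; the name and contents below are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: remote-cluster-b              # placeholder name for the remote cluster
  namespace: istio-system
  labels:
    istio/multiCluster: "true"        # Pilot watches Secrets carrying this label
stringData:
  remote-cluster-b: |
    # kubeconfig granting access to the remote cluster's API server goes here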

One use case for this type of deployment would be to bridge an on-premises Istio service mesh into the public cloud via a VPN. This allows developers to verify their service on a cloud provider through things like canaries, A/B testing, or something custom, reducing the need for that traffic to reach the on-premises production environment. It allows for an easier, step-by-step migration into the public cloud, or a more hybrid approach if regulatory requirements bind you to locality. This is one of many stories that open up when you have hybrid connectivity into the public cloud from a private environment.

It also isn’t a requirement that you colocate workloads with your control plane. For some use cases, a centralized control plane dedicated to those components, with remote data planes where the workloads run, might make sense, although this style of topology increases the risk of a partition between control and data planes to the extent that the network between them isn’t highly resilient.

Our second deployment option is Istio cross-cluster, as illustrated in Figure 13-2.

Figure 13-2. Istio v1.1 cross-cluster topology with cluster-aware service routing: each cluster running its own control plane

Cross-cluster deployments allow for a decentralized group of Istio service meshes to be federated using Istio route rules deployed within each service mesh. In this scenario, each Kubernetes cluster is running its own instance of the control plane. Both are being used to run workloads.

Let’s walk through the flow for a cross-cluster call to be made. Understanding what systems are invoked and what steps support this flow aids in understanding cross-cluster behavior:

  1. The client workload resolves the remote service’s name to a network endpoint (i.e., using Kubernetes DNS or another service registry such as Consul). Inherently, this means that, as a prerequisite, the remote service must be registered in the client’s local name server (DNS) or service registry in order for the client workload to successfully resolve its name to an endpoint.

  2. With the network endpoint resolved, the client calls the remote service. These requests are intercepted by the local service proxy. The request is then mapped to an upstream and a specific endpoint and then routed. Depending on the topology and security configuration, the client service proxy might connect directly to the remote endpoint, or it might connect via an egress and/or ingress gateway.

  3. The remote service proxy accepts the connection and validates the identities using an mTLS exchange (herein lies the implicit requirement that each cluster’s service certificates share a common root of trust, whether signed by the same or different Citadels).

  4. If an authorization policy is to be consulted, a check might need to be sent. Both the client and remote service identities (from each of the different clusters) are sent to Mixer for evaluation.

From an operator’s point of view, the requirements are simpler for cross-cluster than they are for multicluster, given that you don’t need to set up a direct connection, VPN, VPC peering, or something similar. That said, you do need some type of ingress endpoint that each cluster can reach, with the chosen port open for communication across that link.

Each cluster must be able to communicate with each destination cluster on the port you’ve chosen; for example, 80 or 443. On a public cloud provider this would translate to a public ingress on each side. For example, you would need to ensure your source cluster can communicate via elastic load balancing to the target cluster, and vice versa. The ingress is used as a ServiceEntry within Istio.

Let’s pause here and recap the definition of a ServiceEntry from Chapter 8. A ServiceEntry is defined by a variety of properties: host, address, port, protocol, and endpoint. A ServiceEntry is used to inform Istio about a service that Istio hasn’t already autodiscovered. ServiceEntries can identify services that live outside of the mesh or services internal to the mesh.
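For reference, a minimal ServiceEntry describing a hypothetical external HTTPS API might look like this:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-payments-api          # hypothetical external dependency
spec:
  hosts:
  - payments.example.com               # name that services in the mesh will call
  location: MESH_EXTERNAL              # the service lives outside the mesh
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS                      # resolve endpoints via DNS at request time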

A cross-cluster topology can benefit you by removing the need for a VPN to connect each cluster together as you must do with multicluster (unless you’re using v1.1’s cluster-aware routing using Istio Gateways). It also protects you from having a single point of failure. The downside will be policies, which, for now, remain unique to each environment. If you want to apply the same policy globally, you’ll need to do so individually in each cluster. Solutions such as Galley will hopefully provide configuration management services for Istio.

Additionally, since these are disparate Kubernetes clusters you still need to find a solution for replicating objects that you need to be in both environments. In Kubernetes, you can solve this through Kubernetes Federation, which would allow you to ensure that most objects are federated and created on each cluster participating as members in the federation.

Note

Given the early state of Kubernetes Federation v2, it might be prudent to utilize alternative GitOps-based approaches as v2 matures.

Another consideration to address is that of cloud load balancing if you’ve chosen to go down the path of running your services across multiple cloud providers and want true redundancy.

Configuring Cross-Cluster

Let’s walk through an example of deploying two or more clusters using a cross-cluster topology. In this exercise, we do not need to ensure that unique networks exist across all clusters participating in each mesh; their Gateways must simply be able to route to one another without issue. With a cross-cluster topology, service proxies communicate with their local control plane for management, authorization, telemetry, and so on. We assume that you are familiar with how to install and build a Kubernetes cluster. If you don’t want to go through the hassle, you can opt to deploy two or more clusters on a managed Kubernetes service to speed up this exercise (that choice is about convenience here, not a statement about how to run Kubernetes at large).

A few prerequisites are needed for this exercise:

  1. You’ll need ClusterAdmin access on each Kubernetes cluster, as well as kubectl access to both clusters. You won’t need shell access.

  2. Gateways in each cluster provide cluster-to-cluster connectivity needed for cross-cluster service communication, via TLS. The istio-ingressgateway service’s IP address in each cluster must be accessible from all other clusters.

  3. The aforementioned cross-cluster communication requires the use of mTLS between services, which in turn requires a shared root CA. To satisfy this requirement, each cluster’s Citadel needs to be configured with intermediate CA credentials generated by a shared root CA.

  4. Each Kubernetes cluster should be running the same Kubernetes version (1.12 or higher) with Istio 1.0 or higher installed. You can use a managed Kubernetes service like NetApp Kubernetes Services (NKS), Google Kubernetes Engine (GKE), or Azure Kubernetes Service (AKS), which might shave some time off the exercise. Of course, you don’t need to use those services; any CNCF-conformant Kubernetes distribution will suffice.

Handily, Helm charts exist that automate much of this setup. Let’s walk through what each step does to see what’s happening under the hood:

  1. Deploy two or more Kubernetes clusters. Following the installation process outlined in Chapter 4 will work.

  2. As shown in the Istio documentation, generate a multicluster-gateways Istio configuration file using Helm’s templating feature. Ensure that helm is installed locally, then run the following command from the Istio package directory:

    $ helm template install/kubernetes/helm/istio --name istio \
        --namespace istio-system \
        -f install/kubernetes/helm/istio/example-values/values-istio-multicluster-gateways.yaml \
        > $HOME/istio.yaml

    Implicit in this setup is the need for certificates to share a common root of trust, even when signed by different Citadels. To have mTLS working correctly across clusters, we must use a shared root CA. So long as these same certificates exist in each cluster and Citadel can issue and provide identities to those service proxies, cross-cluster communication can be secured with mTLS. We describe Citadel in-depth in Chapter 6.

  3. Install Istio’s CRDs on each cluster:

    $ kubectl apply -f install/kubernetes/helm/istio-init/files/crd/
  4. You will need to create a control plane in each cluster, and each cluster’s control plane needs to be identically configured. Begin by creating the istio-system namespace manually on each cluster:

    $ kubectl create ns istio-system
  5. On each cluster, instantiate the secrets by running the following:

    $ kubectl create secret generic cacerts -n istio-system \
        --from-file=samples/certs/ca-cert.pem \
        --from-file=samples/certs/ca-key.pem \
        --from-file=samples/certs/root-cert.pem \
        --from-file=samples/certs/cert-chain.pem
  6. Next, apply the Helm template output you generated earlier:

    $ kubectl apply -f $HOME/istio.yaml
  7. You’ll want to ensure that the clusters all have automatic sidecar injection enabled for the default namespace (which might already be the case):

    $ kubectl label namespace default istio-injection=enabled

Configure DNS and Deploy Bookinfo

In our example, we’ll be deploying Bookinfo, the default Istio demo app, to both clusters. Ensure that CoreDNS is configured for cross-cluster name resolution by applying the ConfigMap update shown in Example 13-1 on each cluster (see istiocoredns.yaml in this book’s GitHub repository). Note that the $(kubectl ...) command substitution in the global stanza is shell syntax: it must be expanded before the manifest is applied (for example, by piping the manifest through a shell heredoc or substituting the istiocoredns service’s cluster IP yourself), because kubectl apply will not evaluate it.

Example 13-1. Cross-cluster CoreDNS ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loadbalance
        loop
        reload

    }
    global:53 {
        errors
        cache 30
        proxy . $(kubectl -n istio-system get svc istiocoredns -o jsonpath={.spec.clusterIP})
    }

Let’s set up and configure Istio rules for Gateways, ServiceEntries, and VirtualServices. Refer to Chapter 8 for a detailed explanation of these core networking constructs in Istio. First, you need to create your Gateway on each cluster, Cluster A and Cluster B, as shown in Example 13-2. Apply the Gateway configuration to both clusters (see ingress-gw.yaml in this book’s GitHub repository).

Example 13-2. Creating a Gateway per cluster
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: "default"
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - '*'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      caCertificates: /etc/istio/ingressgateway-ca-certs/ca-chain.cert.pem
      mode: MUTUAL
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt

Using a public cloud provider as the infrastructure for this example might (or might not) make it quicker to expose a public ingress gateway for both clusters than an on-premises deployment would. Cross-cluster requires an external IP so that service traffic can transit the public internet. It also illustrates how Istio can help you elegantly solve multicloud cluster communication.

With the external IP address in hand, set your context to Cluster A and apply the following again with kubectl. You will need to populate the endpoints entry with Cluster B’s ingress and the hosts entry with the remote cluster service name, as shown in Example 13-3 (see egress-serviceentry-a.yaml in this book’s GitHub repository).

Example 13-3. Egress ServiceEntry pointing to Cluster B
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: egress-service-entry
  namespace: "default"
spec:
  endpoints:
  - address: # <external IP address here>
  hosts:
  - svc.cluster-b.remote
  location: MESH_EXTERNAL
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS

Now, switch your context to Cluster B and create the ServiceEntry in Example 13-4 on the other cluster. Before you do this, ensure you’ve changed the endpoints and hosts entry to point to Cluster A. Remember, you must be able to reach the endpoints from within the cluster (see egress-serviceentry-b.yaml in this book’s GitHub repository).

Example 13-4. Egress ServiceEntry pointing to Cluster A
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: egress-service-entry
  namespace: "default"
spec:
  endpoints:
  - address: # <external IP address here>
  hosts:
  - svc.cluster-a.remote
  location: MESH_EXTERNAL
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS

At this point, you’re ready to split traffic across both clusters. In this example, we’re going to split traffic from Cluster A to Cluster B. You do this by using a DestinationRule. We are using Bookinfo’s Reviews service as our example. In this setup, the service is running on both clusters (which isn’t strictly necessary, considering that we updated the CoreDNS configuration earlier for cross-cluster name resolution).

A DestinationRule tells Istio where to send the traffic. These rules can specify various configuration options. In the following example we’re creating a DestinationRule that allows for mTLS origination for egress traffic on port 443.

You will want to switch to the context of Cluster A and then apply the rule shown in Example 13-5. Our host attribute defines our remote Cluster B. Our traffic will route across 443 to the external cluster, too (see reviews-destinationrule.yaml in this book’s GitHub repository).

Example 13-5. DestinationRule routing Reviews traffic to Cluster B
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-tls-origination
  namespace: "default"
spec:
  host: svc.cluster-b.remote
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        caCertificates: /etc/certs/cert-chain.pem
        clientCertificate: /etc/certs/cert-chain.pem
        mode: MUTUAL
        privateKey: /etc/certs/key.pem

We then create a VirtualService on Cluster A, too. Remember, we’re looking to split traffic from A to B. In Example 13-6, we route 50% of our traffic destined to our Reviews service to Cluster B while leaving the remaining 50% going to the service running locally in Cluster A (see reviews-virtualservice.yaml in this book’s GitHub repository).

Example 13-6. VirtualService for Reviews traffic splitting between clusters
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-egress-splitter-virtual-service
  namespace: "default"
spec:
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - rewrite:
      authority: reviews.default.svc.cluster-b.remote
    route:
    - destination:
        host: svc.cluster-b.remote
        port:
          number: 443
      weight: 50
    - destination:
        host: reviews
      weight: 50

We then need to add some additional VirtualServices so that ingress traffic, in other words users browsing the app from the public internet, can actually reach the application and, through it, the Reviews service. Example 13-7 creates the first of these and should be applied on Cluster A. We reference the Gateway we created earlier in the chapter as well as the URI prefix for the service (see bookinfo-vs.yaml in this book’s GitHub repository).

Example 13-7. A VirtualService on Cluster A to map inbound traffic to ProductPage
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo-vs
  namespace: "default"
spec:
  gateways:
  - ingress-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /productpage
    route:
    - destination:
        host: productpage

Finally, on Cluster B, we create another VirtualService to allow traffic to hit the service directly. In Example 13-8, we’re defining the remote service in our hosts attribute and the route destination (see reviews-ingress-virtual-service.yaml in this book’s GitHub repository).

Example 13-8. A VirtualService on Cluster B to map traffic to Reviews
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-ingress-virtual-service
  namespace: "default"
spec:
  gateways:
  - ingress-gateway
  hosts:
  - reviews.default.svc.cluster-b.remote
  http:
  - route:
    - destination:
        host: reviews

By this point, you should be able to see 50% of your traffic hitting the service on A and 50% hitting the service on B. And we’ve unified the service mesh across two different clusters.

Using this chapter’s example, you can play around with various scenarios beyond splitting traffic equally. Cross-cluster enables other use cases—most prominent that of failover—potentially including circuit breaking across providers or regions, performing canaries on lower-cost clusters elsewhere, and more.
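For instance, circuit breaking toward the remote cluster could be sketched by adding outlier detection to the trafficPolicy of the DestinationRule from Example 13-5; the thresholds below are arbitrary examples, not recommendations:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-tls-origination
  namespace: "default"
spec:
  host: svc.cluster-b.remote
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5            # eject a remote endpoint after five consecutive errors
      interval: 30s                   # how often endpoints are scanned
      baseEjectionTime: 60s           # minimum time an ejected endpoint stays out of rotation
      maxEjectionPercent: 100
    portLevelSettings:
    - port:
        number: 443
      tls:
        caCertificates: /etc/certs/cert-chain.pem
        clientCertificate: /etc/certs/cert-chain.pem
        mode: MUTUAL
        privateKey: /etc/certs/key.pem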

Networking comes to the forefront of concerns when planning advanced topologies. It’s worth noting that Istio multicluster does not equal multicloud out of the box. Remember, the requirement is that all Kubernetes clusters must be able to route traffic to one another with no network overlap. If this isn’t planned properly, operations teams could run into headaches when they choose to move toward unifying their infrastructure under a single service mesh like Istio. So, before you begin to look at adopting Istio multicluster, first inspect your current network setup and topology. Were you using the same network address space each time you built a new cluster? If so, you might need to re-IP your clusters. Can your service traffic reach all services in both clusters? If not, you need to ensure that you can route that traffic.
