Pilot is responsible for programming the data plane, ingress and egress gateways, and service proxies in an Istio deployment. Pilot models the environment of a deployment by combining the Istio configuration from Galley and service information from a service registry such as the Kubernetes API server or Consul. Pilot uses this model to generate a configuration for the data plane and pushes that new configuration to the fleet of service proxies connected to it.
To better understand all aspects of the mesh that concern Pilot, let’s explore the surface area of Pilot’s configuration. As we digest this, understand that Pilot’s dependency on Galley for underlying platform and environment information will continue to increase as the Istio project advances in releases. Pilot has three main sources of configuration:
A set of configurations global to the service mesh
The configuration for ServiceEntrys, DestinationRules, VirtualServices, Gateways, and service proxies
The location and metadata information from registries about the catalog of services resident in one or more underlying platforms
Mesh configuration is a set of global configurations that is static for the installation of the mesh. Mesh configuration is split over three API objects:
MeshConfig (mesh.istio.io/v1alpha1.MeshConfig)
MeshConfig covers configuring how Istio components communicate with one another, where configuration sources are located, and so on.
ProxyConfig (mesh.istio.io/v1alpha1.ProxyConfig)
ProxyConfig covers options associated with initializing Envoy: where its bootstrap configuration is located, which ports to bind to, and so on.
MeshNetworks (mesh.istio.io/v1alpha1.MeshNetworks)
MeshNetworks describes a set of networks that the mesh is deployed across, with the addresses of the ingress gateways of each network.
MeshConfig is primarily used to configure whether policy and/or telemetry are enabled, where to load configuration, and locality-based load-balancing settings. MeshConfig contains the following exhaustive set of concerns:
How to use Mixer:
Whether policy checks are enabled at runtime
Whether to fail open or closed when Mixer Policy is inaccessible or returns an error
Whether to perform policy checks on the client side
Whether to use session affinity to target the same Mixer Telemetry instance. Session affinity is always enabled for Mixer Policy (performance of the system depends on it!).
How to configure service proxies for listening:
The ports to bind to accept traffic (i.e., the port to which iptables redirects traffic) and to accept HTTP PROXY requests
TCP connection timeout and keepalive settings
Access log format, output file, and encoding (JSON or text)
Whether to allow all outbound traffic, or restrict outbound traffic to only services that Pilot knows about
Where to listen for secrets from Citadel (the SDS API), and how to bootstrap trust (in environments with local machine tokens)
The set of configuration sources for all Istio components (e.g., the local filesystem, or Galley) and how to communicate with them (the address, whether to use Transport Layer Security [TLS], which secrets, etc.)
Locality-based load-balancing settings—configuration about failover and traffic splits between zones and regions (more on that in Chapter 8)
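To make these concerns concrete, here is a hedged sketch of how a few of them appear as MeshConfig YAML. The field names follow Istio 1.x mesh configuration as we understand it, and the values are purely illustrative; exact names and their location vary by release, so verify against your version's reference documentation:

```yaml
# Hedged MeshConfig sketch (Istio 1.x field names; values illustrative)
disablePolicyChecks: false     # Mixer policy checks at runtime
policyCheckFailOpen: false     # fail closed when Mixer Policy is unreachable
proxyListenPort: 15001         # the port iptables redirects traffic to
connectTimeout: 10s            # TCP connection timeout for upstream connections
accessLogFile: /dev/stdout     # access log output
accessLogEncoding: TEXT        # or JSON
outboundTrafficPolicy:
  mode: REGISTRY_ONLY          # restrict outbound traffic to services Pilot knows about
```

Each of these maps to one of the bullets above; for instance, flipping outboundTrafficPolicy.mode to ALLOW_ANY permits all outbound traffic rather than only registry-known services.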
ProxyConfig is primarily used to provide a custom bootstrap configuration for Envoy. ProxyConfig contains the following exhaustive set of concerns:
The location of the file with Envoy’s bootstrap configuration as well as the location of the Envoy binary itself
Envoy’s service cluster, meaning the name of the service for which this Envoy is sidecar
Shutdown settings (both connection draining and hot restart)
The location of Envoy’s xDS server (Pilot) and how to communicate with it
Which ports should host the proxy’s admin server and statsd listener
Envoy’s concurrency (number of worker threads)
How Envoy binds the socket to intercept traffic (either via iptables REDIRECT or TPROXY)
The location of the trace collector (i.e., where to send trace data)
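As one illustration, a hedged ProxyConfig sketch (typically supplied as the defaultConfig block of the mesh configuration). Field names follow Istio 1.x as we understand it, and every value below is illustrative:

```yaml
# Hedged ProxyConfig sketch (Istio 1.x field names; values illustrative)
configPath: /etc/istio/proxy          # where Envoy's bootstrap config is written
binaryPath: /usr/local/bin/envoy      # location of the Envoy binary
serviceCluster: bar                   # the service this Envoy is sidecar for
drainDuration: 45s                    # connection-draining window on shutdown
parentShutdownDuration: 60s           # hot-restart shutdown window
discoveryAddress: istio-pilot.istio-system:15010  # Pilot's xDS server
proxyAdminPort: 15000                 # Envoy's admin server
concurrency: 2                        # number of Envoy worker threads
interceptionMode: REDIRECT            # iptables REDIRECT, or TPROXY
tracing:
  zipkin:
    address: zipkin.istio-system:9411 # where to send trace data
```

Note how each field lines up with one of the concerns listed above, from the bootstrap location down to the trace collector.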
MeshNetworks defines a set of named networks, the way to send traffic into each network (its ingress), and that network’s locality. Each network is either a Classless Inter-Domain Routing (CIDR) range or a set of endpoints returned by a service registry (e.g., the Kubernetes API server). ServiceEntry, the API object used to define services in Istio, has a set of endpoints. Each endpoint can be labeled with a network so that a ServiceEntry can describe a service deployed across several networks (or clusters). We discuss this momentarily in “Service Discovery”.
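A hedged sketch of what such a MeshNetworks configuration might look like with two named networks; the network names, gateway addresses, and CIDR range here are all made up:

```yaml
# Hedged MeshNetworks sketch (mesh.istio.io/v1alpha1; values illustrative)
networks:
  us-east-cluster:
    endpoints:
    - fromRegistry: Kubernetes     # endpoints discovered from this registry
    gateways:
    - address: 203.0.113.10        # ingress used to reach this network from outside it
      port: 443
  legacy-vms:
    endpoints:
    - fromCidr: 10.10.0.0/16       # or endpoints identified by a CIDR range
    gateways:
    - address: 203.0.113.20
      port: 443
```

Traffic destined for an endpoint labeled with a given network is forwarded to that network’s gateway address when the client sits in a different network.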
Most values in MeshConfig cannot be updated dynamically; you must restart the control plane for them to take effect. Similarly, updates to values in ProxyConfig take effect only when you redeploy Envoy (e.g., in Kubernetes, when the pod is rescheduled). MeshNetworks can be updated dynamically at runtime without restarting any control-plane components.
On Kubernetes, most of the configuration in MeshConfig and ProxyConfig is hidden behind options in the Helm installation, although not all of it is exposed via Helm. To fully control the installation, you’ll need to postprocess the file output by Helm.
Networking configuration is Istio’s bread and butter—it’s the configuration used to manage how traffic flows through the mesh. We cover each object of the API in depth in Chapter 8 and discuss how these constructs are used together to affect how traffic flows through the mesh. Here we introduce each object but only at a high level so that you can relate Istio’s configuration to Envoy’s xDS APIs (discussed in Chapter 5) to help you understand Pilot’s configuration server and to enable you to debug the system (we talk about both in subsequent sections).
ServiceEntry is the centerpiece of Istio’s networking APIs. ServiceEntry defines a service by its names—the set of hostnames that clients use to call the service. We cover this in more detail in the next section. DestinationRules configure how clients communicate with a service: what load-balancing, outlier-detection, circuit-breaking, and connection-pooling strategies to use; which TLS settings to use; and so on. VirtualServices configure how traffic flows to a service: L7 and L4 routing, traffic shaping, retries, timeouts, and so forth. Gateways configure how services are exposed outside of the mesh: what hostnames are routed to which services, how to serve certificates for those hostnames, and more. Service proxies configure how services are exposed inside of the mesh: which services are available to which clients.
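For instance, a hedged DestinationRule sketch that exercises several of these client-side knobs; the hostname and thresholds are illustrative, not taken from this book's examples:

```yaml
# Hedged DestinationRule sketch (networking.istio.io/v1alpha3; values illustrative)
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bar-client-policy
spec:
  host: bar.foo.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN         # load-balancing strategy
    connectionPool:
      tcp:
        maxConnections: 100      # connection pooling
    outlierDetection:
      consecutiveErrors: 5       # outlier detection / circuit breaking
      interval: 30s
      baseEjectionTime: 30s
    tls:
      mode: ISTIO_MUTUAL         # TLS settings used when calling the service
```

All of these settings shape the client side of a connection; none of them change how the destination service itself routes traffic.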
Pilot integrates with various service discovery systems, like the Kubernetes API server, Consul, and Eureka, to discover service and endpoint information about the local environment. Adapters in Pilot work by ingesting service discovery information from their source and synthesizing ServiceEntry objects from that data. For example, the integration with Kubernetes uses the Kubernetes SDK to watch the API server for service creation and service endpoint update events. Using this data, Pilot’s registry adapter synthesizes a ServiceEntry object. That ServiceEntry is used to update Pilot’s internal model and generate updated configuration for the data plane.
Historically, Pilot registry adapters were implemented in-process in Pilot using Golang. With the introduction of Galley, you can now separate these adapters from Pilot. A service discovery adapter can run as a separate job (or an offline process executed by a CI system, for example) that reads an existing service registry and produces a set of ServiceEntry objects from it. You can then feed those ServiceEntrys to Galley by providing them as files, by pushing them into the Kubernetes API server, or by implementing a Mesh Config Protocol server yourself and feeding the ServiceEntrys to Galley. The Mesh Config Protocol, and configuration ingestion in general, is covered in Chapter 11. For largely static environments (e.g., legacy VM-based deployments with rarely changing IP addresses), generating static ServiceEntrys can be an effective way to enable Istio.
ServiceEntrys create a service by tying a set of hostnames together with a set of endpoints. Those endpoints can be IP addresses or DNS names. Each endpoint can be individually labeled and tagged with a network, locality, and weight. This allows ServiceEntrys to describe complex network topologies. For example, a service deployed across separate clusters (with different networks) that are geographically disparate (have different localities) can be created and have traffic split among its members by percentage (weights)—or in fact by nearly any feature of the request (see Chapter 8). Because Istio knows the ingress points of remote networks, when selecting a service endpoint in a remote network, the service proxy will forward traffic to the remote network’s ingress. We can even write policies to prefer local endpoints over endpoints in other localities, but automatically fail over to other localities if local endpoints are unhealthy. We talk about locality-based load balancing a bit more in Chapter 13.
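As a hedged illustration of such a topology, here is a ServiceEntry whose endpoints carry network, locality, and weight labels. Every name and address below is made up for illustration:

```yaml
# Hedged multi-network ServiceEntry sketch (values illustrative)
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: echo-multi-network
spec:
  hosts:
  - echo.internal                 # hypothetical hostname clients call
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.0.0.10
    network: us-east-cluster      # clients elsewhere reach this via the network's ingress
    locality: us-east1/us-east1-b
    weight: 80                    # receives the larger share of traffic
  - address: 10.1.0.10
    network: eu-west-cluster
    locality: eu-west1/eu-west1-a
    weight: 20
```

The network labels here would correspond to networks named in MeshNetworks, which is how Pilot knows which ingress gateway fronts each endpoint.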
From these three configuration sources—mesh configuration, networking configuration, and service discovery—Pilot creates a model of the environment and state of a deployment. Asynchronously, as service proxy instances are deployed into the cluster, they connect to Pilot. Pilot groups the service proxies together based on their labels and the service to which the service proxy is sidecarred. Using this model, Pilot generates Discovery Service (xDS) responses for each group of connected service proxies (more on the Discovery Service APIs shortly). When a service proxy connects, Pilot sends it the current configuration reflecting the state of the environment. Given the generally dynamic nature of the underlying platform(s), the model is updated with some frequency. Updates to the model require an update of the current set of xDS configurations. When the xDS configuration changes, Pilot computes the groups of affected service proxies and pushes the updated configuration to them.
Chapter 5 examines the xDS APIs, but let’s take a moment to recap and introduce the concepts at a high level so that we can describe how Istio networking configuration manifests as xDS. We can divide service proxy (Envoy) configuration into two main groups:
Listeners and routes
Clusters and endpoints
Listeners configure a set of filters (e.g., Envoy’s HTTP functionality is delivered by an HTTP filter) and how Envoy attaches those filters to a port. They come in two flavors: physical and virtual. A physical listener is one where Envoy binds to the specified port. A virtual listener accepts traffic from a physical listener, but does not bind to a port (instead, some physical listener must direct traffic to it). Routes go alongside listeners and configure how that listener directs traffic to a specific cluster (e.g., by matching on HTTP path or Server Name Indication, or SNI). A cluster is a group of endpoints along with information about how to contact these endpoints (TLS settings, load-balancing strategy, connection-pool settings, etc.). A cluster is analogous to a “service” (as an example, one Kubernetes service might manifest as a single cluster). Finally, endpoints are individual network hosts (IP addresses or DNS names) to which Envoy will forward traffic.
Within this configuration, the elements refer to each other by name. So, a listener directs traffic to a named route, a route directs traffic to a named cluster, and the cluster directs traffic to a set of endpoints. Pilot does the bookkeeping to keep these names consistent throughout. We see how these names are useful for debugging the system in the next section.
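The chain of names can be sketched in an Envoy-style static configuration. This is a hedged, heavily trimmed illustration: field names follow Envoy's v2 API as we recall it, and all names and values are made up:

```yaml
# Hedged Envoy config sketch showing name-chaining (values illustrative)
static_resources:
  listeners:
  - name: 0.0.0.0_80                  # a physical listener bound to port 80
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          rds:
            route_config_name: http.80   # listener -> route, referenced by name
            config_source: { ads: {} }
          http_filters:
          - name: envoy.router
  clusters:
  - name: outbound|80||bar.foo.svc.cluster.local  # the route "http.80" targets this cluster by name
    type: EDS                                     # cluster -> endpoints, fetched via EDS
    eds_cluster_config:
      eds_config: { ads: {} }
```

The route configuration itself arrives via RDS and names the cluster; the cluster in turn names the EDS resource from which its endpoints come.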
A Note on “x”
We refer to the Envoy APIs as the xDS APIs because each configuration primitive—listener, route, cluster, endpoint—has its own Discovery Service named after it. Each Discovery Service allows for the updating of its resource. Rather than referring individually to the LDS, RDS, CDS, and EDS, we group them together as the xDS APIs.
Istio’s networking configuration maps to Envoy’s API nearly directly:
Gateways configure physical listeners.
VirtualServices configure both virtual listeners (hostname matches are encoded as separate listeners, and protocol processing is configured via listeners with specific filters per protocol) and routes (HTTP/TLS match conditions, retry and timeout configuration, etc.).
DestinationRules configure how to communicate with clusters (secrets, load-balancing strategy, circuit breaking and connection pooling, etc.), and create new clusters when they’re used to define subsets.
The final piece of Istio networking configuration is the sidecar. It doesn’t relate directly to an Envoy configuration primitive itself; instead, Istio uses it to filter what configuration is sent to each group of Envoys.
With this mapping in hand, let’s consider how a commonplace Istio configuration manifests as an Envoy xDS configuration and lay out some tips for debugging Istio network configuration.
This section focuses on troubleshooting Pilot, as a complement to Chapter 11, which is dedicated to debugging. Istio is a complex system with a lot of moving parts. Until you develop a deep understanding of Istio, it can be difficult to understand why the system behaves in a certain way. (What exacerbates this issue is that often the system is behaving by not serving any traffic!) Fortunately, there’s an ever-growing set of tools to help you understand and debug system state. In this section, we give an overview of some tools that are particularly useful for understanding and troubleshooting networking in Istio.
istioctl has a slew of useful tools for understanding the state of an Istio deployment, including istioctl authn for inspecting the state of mTLS in the mesh, commands for retrieving per-pod metrics, and commands for inspecting Pilot and Envoy configuration. The last two, istioctl proxy-config and istioctl proxy-status, are invaluable for understanding the state of network configuration in a deployment.
Unfortunately, many of the following tools (specifically, proxy-config and proxy-status) are Kubernetes-specific because they currently rely on Kubernetes for their implementation. For example, istioctl proxy-config works by using kubectl exec to retrieve data from the remote machine.
In the future, equivalent tools will be built for other platforms. Where possible, we describe how each tool is implemented to make it possible for those on non-Kubernetes platforms to follow along. See Chapter 11 for a deeper dive on how istioctl proxy-config interacts with Kubernetes.
In support of other platforms (and other service meshes), we also point out where other tools can fill these gaps. Meshery is one example: it presents the same istioctl proxy-config and proxy-status information graphically (for Istio and other service meshes) so that you can see the status of your mesh. Meshery can validate the current state against the planned state of your Istio configuration, making deployment dry runs easier to manage, and can verify that your configuration changes will have the desired effect.
istioctl proxy-config <bootstrap | listener | route | cluster> <kubernetes pod>
Connects to the specified pod and queries the service proxy’s administrative interface to retrieve the current state of the service proxy’s configuration. We can retrieve the service proxy’s bootstrap configuration (which typically just configures it to talk to Pilot), its listeners, routes, and clusters. proxy-config supports an output flag (--output or just -o), which you can use to print the full body of Envoy’s configuration in JSON. In “Tracing Configuration”, we use this to understand how an Istio configuration shows up in the service proxy.
istioctl proxy-status <Istio service>
Connects to Pilot’s debug interface and retrieves the xDS status of each connected service proxy instance (if a service name is provided, just the service proxies for that service). This shows whether each service proxy’s configuration is up to date with the latest configuration in Pilot, and if not, how far behind the proxy is. This is particularly useful for identifying a configuration that affects only a subset of proxies as the culprit for a problem when troubleshooting.
Pilot exposes a variety of endpoints for understanding its state of the world. Unfortunately, they’re woefully underdocumented; as of this writing, there are no public docs describing them. These endpoints, all exposed on Pilot with the prefix /debug/, return JSON blobs of the various configurations that Pilot holds.
To examine the state of service proxies connected to Pilot, see these endpoints:
/debug/edsz
Prints Pilot’s set of precomputed EDS responses (i.e., the endpoints it sends to each connected service proxy).
/debug/adsz
Prints the set of listeners, routes, and clusters pushed to each service proxy connected to Pilot.
/debug/cdsz
Prints the set of clusters pushed to each service proxy connected to Pilot.
/debug/synz
Prints the status of the ADS, CDS, and EDS connections of all service proxies connected to Pilot. In particular, this shows the last nonce Pilot is working with versus the last nonce Envoy has ACK’d, revealing which Envoys are not accepting configuration updates.
To examine Pilot’s understanding of the state of the world (its service registries), see these endpoints:
/debug/registryz
Prints the set of services that Pilot knows about across all registries.
/debug/endpointz[?brief=1]
Prints the endpoints for every service that Pilot knows about, including their ports, protocols, service accounts, labels, and so on. If you provide the brief flag, the output will be a human-readable table (as opposed to the JSON blob of the normal version). This is a legacy endpoint, and /debug/endpointShardz provides strictly more information.
/debug/endpointShardz
Prints the endpoints for every service that Pilot knows about, grouped by the registry that provided the endpoint (the “shard,” from Pilot’s point of view). For example, if the same service exists in both Consul and Kubernetes, endpoints for the service will be grouped into two shards, one each for Consul and Kubernetes. This endpoint provides everything from /debug/endpointz and more, including data like the endpoint’s network, locality, load-balancer weight, and representation in the Envoy xDS configuration.
/debug/workloadz
Prints the set of endpoints (“workloads”) connected to Pilot, and their metadata (like labels).
/debug/configz
Prints the entire set of Istio configuration Pilot knows about. Only validated configurations that Pilot is using to construct its model will be returned. This is useful for understanding situations in which Pilot is not processing a new configuration itself.
You can also find miscellaneous endpoints with higher-level debug information by wading through these endpoints:
/debug/authenticationz[?proxyID=pod_name.namespace]
Prints the Istio authentication policy status of the target proxy for each host and port that it’s serving, including the name of the authentication policy affecting it; the name of the DestinationRule affecting it; whether the port expects mTLS, standard TLS, or plain text; and whether settings across the configuration cause a conflict for this port (a common cause of 500 errors in new Istio deployments).
/debug/config_dump[?proxyID=pod_name.namespace]
Prints the listeners, routes, and clusters for the given node; this can be diff’d directly against the output of istioctl proxy-config.
/debug/push_status
Prints the status of each connected endpoint as of Pilot’s last push period; includes the status of each connected proxy, when the push period began (and ended), and the identities assigned to each port of each host.
Each Istio control-plane component exposes an administrative interface, ControlZ, that you can use to configure fine-grained logging, see information about the process and environment, and view metrics about that instance. Istio components use a common logging system with a notion of scopes; as an example, Pilot defines scopes for logging about Envoy API connections: one scope for ADS connections, another for EDS connections, and a third for CDS. Most often used for adjusting log levels, ControlZ allows you to independently and dynamically modify the logging level for each scope at runtime. For more on ControlZ, see “Introspecting Istio Components” in Chapter 11.
Pilot, along with the other Istio control-plane components, hosts a Prometheus endpoint with detailed metrics about their internal state. Istio’s default Grafana deployment includes dashboards that use these metrics to chart the state of each Istio control-plane component. You can use these metrics to help debug Pilot’s internal state. By default, Pilot hosts its Prometheus endpoint on port 8080 at /metrics
(e.g., kubectl exec -it PILOT_POD -n istio-system -c discovery -- curl localhost:8080/metrics).
It can be a difficult task to trace the steps involved in the creation and dispersal of configuration, starting with Pilot and mapping to service proxies. Pilot’s debug endpoints (previously described), together with istioctl, are at-hand tools for understanding Pilot and any changes within it. In this section, we use these tools to understand the before-and-after of Istio configuration and the resultant xDS configuration pushed to service proxies.
There are far too many permutations of configuration for us to show how they all manifest. Instead, for each major type of configuration we show you some Istio configuration and the Envoy configuration it results in, highlight the main similarities, and outline how other changes to the same Istio configuration will manifest in Envoy so that you can test and see for yourself and use this knowledge to diagnose and solve the majority of Istio issues that you’ll come across.
Gateways and VirtualServices result in listeners for Envoy. Gateways result in physical listeners (listeners that bind to a port on the network), whereas VirtualServices result in virtual listeners (listeners that do not bind to a port, but instead receive traffic from physical listeners). Examples 7-1 and 7-2 demonstrate how the Istio configuration manifests into an xDS configuration by creating a Gateway (see foo-gw.yaml in this book’s GitHub repository).
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-com-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*.foo.com"
    port:
      number: 80
      name: http
      protocol: HTTP
Creation of this Istio Gateway results in a single HTTP listener on port 80 on our ingress gateway (see Example 7-2).
$ istioctl proxy-config listener istio-ingressgateway_PODNAME -o json -n istio-system
[
    {
        "name": "0.0.0.0_80",
        "address": {
            "socketAddress": {
                "address": "0.0.0.0",
                "portValue": 80
            }
        },
        "filterChains": [
            {
                "filters": [
                    {
                        "name": "envoy.http_connection_manager",
...
                        "rds": {
                            "config_source": {
                                "ads": {}
                            },
                            "route_config_name": "http.80"
                        },
...
Notice that the newly created filter is listening on address 0.0.0.0. This is the listener used for all HTTP traffic on port 80, no matter what host it’s addressed to. If we set up TLS termination for this Gateway, we would then see a new listener created just for the hosts for which we’re terminating TLS, whereas the rest would fall into this catchall listener. Let’s bind a VirtualService to this Gateway, as demonstrated in Example 7-3 (see foo-vs.yaml in this book’s GitHub repository).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-default
spec:
  hosts:
  - bar.foo.com
  gateways:
  - foo-com-gateway
  http:
  - route:
    - destination:
        host: bar.foo.svc.cluster.local
To see how it manifests as virtual listeners, see Example 7-4:
$ istioctl proxy-config listener istio-ingressgateway_PODNAME -o json
[
    {
        "name": "0.0.0.0_80",
        "address": {
            "socketAddress": {
                "address": "0.0.0.0",
                "portValue": 80
            }
        },
        "filterChains": [
            {
                "filters": [
                    {
                        "name": "envoy.http_connection_manager",
...
                        "rds": {
                            "config_source": {
                                "ads": {}
                            },
                            "route_config_name": "http.80"
                        },
...
Looking at the configuration in Example 7-4, we don’t see any change in the listener. That’s because the listener on IP 0.0.0.0 is a catchall—it accepts all HTTP traffic on port 80. That’s not how TLS is configured in the listener, however. If we instead created a Gateway that configures TLS, we’d see a new listener created for just the hosts in the section with TLS; the rest would fall through to the default listener. For HTTP, all of the action happens in routes instead. Other protocols—for example, TCP—push more of the logic into the listener. Experiment by defining a few Gateways with different protocols to see how they manifest as listeners. For ideas and examples, see this book’s GitHub repository.
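As a starting point for such experiments, here is a hedged sketch of a TLS-terminating server; the hostname and certificate paths are illustrative, not files from this book's repository. Per the preceding discussion, this should surface as a listener scoped to just these hosts rather than the catchall:

```yaml
# Hedged TLS Gateway sketch (hostname and cert paths illustrative)
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-com-gateway-tls
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "secure.foo.com"          # only these hosts get the dedicated listener
    port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE              # terminate TLS at the gateway
      serverCertificate: /etc/certs/server.pem
      privateKey: /etc/certs/key.pem
```

Comparing istioctl proxy-config listener output before and after applying something like this is a quick way to see the per-host listener appear.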
You should also notice the Mixer configuration in the listeners. The Mixer configuration in Envoy appears both in listeners (where we set source attributes) and in routes (where we set destination attributes). Using the MeshConfig to disable Mixer checks will result in a slightly different configuration, as will disabling Mixer reports. If you disable both checks and reports, you’ll see the Mixer configuration disappear entirely from Envoy.
We also recommend that you try different protocols for the ports (or list a single Gateway with many ports with various protocols) to see how this results in different filters. Configuring different TLS settings within the Gateway also results in changes to the generated listener configuration. You’ll always see a protocol-specific filter configured in the listener for each protocol you use (for HTTP, this is the http_connection_manager and its router; for MongoDB, it’s another; for TCP, yet another; etc.). We also recommend trying different combinations of hosts in the Gateway and VirtualService to see how they interact. We cover at length how the two work together—how you bind VirtualServices to Gateways—in Chapter 8.
We’ve seen how VirtualServices result in the creation of listeners (or don’t, as in our example!). Most of the configuration you specify in VirtualServices actually manifests as routes in Envoy. Routes come in different flavors, with a set of routes per protocol that Envoy supports.
We can list the routes Envoy currently has by using our existing VirtualService from Example 7-3. This route is pretty simple, because our VirtualService just forwards traffic to a single destination service, as shown in Example 7-5. The example shows the default retry policy and the embedded Mixer configuration (which is used for reporting telemetry back to Mixer).
$ istioctl proxy-config route istio-ingressgateway_PODNAME -o json
[
    {
        "name": "0.0.0.0_80",
        "virtualHosts": [
            {
                "name": "bar.foo.com:80",
                "domains": [
                    "bar.foo.com",
                    "bar.foo.com:80"
                ],
                "routes": [
                    {
                        "match": {
                            "prefix": "/"
                        },
                        "route": {
                            "cluster": "outbound|8000||bar.foo.svc.cluster.local",
                            "timeout": "0s",
                            "retryPolicy": {
                                "retryOn": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
                                "numRetries": 2,
                                "retryHostPredicate": [
                                    {
                                        "name": "envoy.retry_host_predicates.previous_hosts"
                                    }
                                ],
                                "hostSelectionRetryMaxAttempts": "3",
                                "retriableStatusCodes": [
                                    503
                                ]
                            },
...
We can update our route to include some match conditions to see how this results in different routes for Envoy, as shown in Example 7-6 (see foo-routes.yaml in this book’s GitHub repository).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-default
spec:
  hosts:
  - bar.foo.com
  gateways:
  - foo-com-gateway
  http:
  - match:
    - uri:
        prefix: /whiz
    route:
    - destination:
        host: whiz.foo.svc.cluster.local
  - route:
    - destination:
        host: bar.foo.svc.cluster.local
Similarly, we can add retries, split traffic among several destinations, inject faults, and more. All of these options in VirtualServices manifest as routes in Envoy (see Example 7-7).
$ istioctl proxy-config route istio-ingressgateway_PODNAME -o json
[
    {
        "name": "http.80",
        "virtualHosts": [
            {
                "name": "bar.foo.com:80",
                "domains": [
                    "bar.foo.com",
                    "bar.foo.com:80"
                ],
                "routes": [
                    {
                        "match": {
                            "prefix": "/whiz"
                        },
                        "route": {
                            "cluster": "outbound|80||whiz.foo.svc.cluster.local",
...
                    {
                        "match": {
                            "prefix": "/"
                        },
                        "route": {
                            "cluster": "outbound|80||bar.foo.svc.cluster.local",
...
Now we see how our URI match manifests as a route with a prefix match. The route for “/” that we had before remains as well, but it comes after our new match. Matches in Envoy are performed in order, and that order matches the order in your VirtualService.
If we use istioctl to look at clusters as well, we can see that Istio generates a cluster for each service and port in the mesh. We can create a new ServiceEntry like the one in Example 7-8 to see a new cluster appear in Envoy, as shown in Example 7-9 (see some-domain-se.yaml in the GitHub repository for this book).
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: http-server
spec:
  hosts:
  - some.domain.com
  ports:
  - number: 80
    name: http
    protocol: http
  resolution: STATIC
  endpoints:
  - address: 2.2.2.2
$ istioctl proxy-config cluster istio-ingressgateway_PODNAME -o json
[
...
    {
        "name": "outbound|80||some.domain.com",
        "type": "EDS",
        "edsClusterConfig": {
            "edsConfig": {
                "ads": {}
            },
            "serviceName": "outbound|80||some.domain.com"
        },
        "connectTimeout": "10s",
        "circuitBreakers": {
            "thresholds": [
                {
                    "maxRetries": 1024
                }
            ]
        }
    },
...
This results in a single cluster, outbound|80||some.domain.com. Notice how Istio encodes inbound versus outbound in the cluster name, as well as the port.
We can add new ports (with different protocols) to the ServiceEntry to see how this results in new clusters being generated. The other tool that generates and updates clusters in Istio is the DestinationRule. By creating subsets, we generate new clusters (see Examples 7-10 and 7-11), and by updating load-balancing and TLS settings, we affect the configuration within the cluster itself (see some-domain-dest.yaml in this book’s GitHub repository).
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: some-domain-com
spec:
  host: some.domain.com
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
$ istioctl proxy-config cluster istio-ingressgateway_PODNAME -o json
[
...
    {
        "name": "outbound|80||some.domain.com",
        ...
    },
    {
        "name": "outbound|80|v1|some.domain.com",
        ...
        "metadata": {
            "filterMetadata": {
                "istio": {
                    "config": "/apis/networking/v1alpha3/namespaces/default/destination-rule/some-domain-com"
                }
            }
        }
    },
    {
        "name": "outbound|80|v2|some.domain.com",
        ...
    },
...
Notice that we still have our original cluster, outbound|80||some.domain.com, but that we also got a new cluster for each subset we defined. Istio annotates the Envoy configuration with the rule that resulted in its creation, to help with debugging.
In this chapter, we covered Pilot: its basic model, the sources of configuration it consumes to produce a model of the mesh, how it uses that model of the mesh to push configuration to Envoys, how to debug it, and finally how to understand the transformation Pilot performs from Istio configuration to Envoy’s. With this information in hand you should be equipped to debug and resolve the vast majority of issues new and intermediate Istio users face.