Among the core capabilities of all service meshes is traffic management, and as such, it’s generally a deep functional area. This is certainly the case for Istio. With traffic management as our topic of study in this chapter, we begin our exploration of Istio’s capabilities in the context of how requests flow through the system, becoming familiar with Istio’s networking APIs as we go. We look at how you can use those APIs to configure traffic flow, enabling you to do things like canary new deployments, set timeout and retry policies that are consistent across all of your services, and, finally, test your application’s failure modes with controllable, repeatable fault injection.
To understand how Istio’s networking APIs work, it’s important to understand how requests actually flow through Istio. Pilot, as we learned in the previous chapter, understands the topology of the service mesh, and uses this knowledge, along with additional Istio networking configurations that you provide, to configure the mesh’s service proxies. See Chapter 7 for more on the kind of configuration that Pilot pushes to service proxies.
As the data-plane service proxy, Envoy intercepts all incoming and outgoing requests at runtime (as traffic flows through the service mesh). This interception is done transparently via iptables rules or a Berkeley Packet Filter (BPF) program that routes all network traffic, in and out through Envoy. Envoy inspects the request and uses the request’s hostname, SNI, or service virtual IP address to determine the request’s target (the service to which the client is intending to send a request). Envoy applies that target’s routing rules to determine the request’s destination (the service to which the service proxy is actually going to send the request). Having determined the destination, Envoy applies the destination’s rules. Destination rules include load-balancing strategy, which is used to pick an endpoint (the endpoint is the address of a worker supporting the destination service). Services generally have more than one worker available to process requests. Requests can be balanced across those workers. Finally, Envoy forwards the intercepted request to the endpoint.
A number of items of note are worth further illumination. First, it’s desirable to have your applications speak cleartext (communicate without encryption) to the sidecarred service proxy and let the service proxy handle transport security. For example, your application can speak HTTP to the sidecar and let the sidecar handle the upgrade to HTTPS. This allows the service proxy to gather L7 metadata about requests, which allows Istio to generate L7 metrics and manipulate traffic based on L7 policy. Without the service proxy performing TLS termination, Istio can generate metrics for and apply policy on only the L4 segment of the request, restricting policy to contents of the IP packet and TCP header (essentially, a source and destination address and port number). Second, we get to perform client-side load balancing rather than relying on traditional load balancing via reverse proxies. Client-side load balancing means that we can establish network connections directly from clients to servers while still maintaining a resilient, well-behaved system. That in turn enables more efficient network topologies with fewer hops than traditional systems that depend on reverse proxies.
Typically, Pilot has detailed endpoint information about services in the registry, which it pushes directly to the service proxies. So, unless you configure the service proxy to do otherwise, at runtime it selects an endpoint from a static set of endpoints pushed to it by Pilot and does not perform dynamic address resolution (e.g., via DNS) at runtime. Therefore, the only things Istio can route traffic to are hostnames in Istio’s service registry. There is an installation option in newer versions of Istio (set to “off” by default in 1.1) that changes this behavior and allows Envoy to forward traffic to unknown services that are not modeled in Istio, so long as the application provides an IP address.
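In Helm-based installs, that behavior is controlled by the mesh's outbound traffic policy. As a sketch (option names vary by Istio version, so check the documentation for your release), the installation value looks something like this:

```yaml
# Helm-style installation values (names vary by Istio version)
global:
  outboundTrafficPolicy:
    mode: ALLOW_ANY   # forward traffic to destinations not in the registry;
                      # REGISTRY_ONLY restricts traffic to known services
```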
In the next section, we discuss hostnames, which are the core of Istio’s networking model, and how Istio’s networking APIs allow you to create hostnames to describe workloads and control how traffic flows to them.
Applications address services by name (e.g., by hostname resolved via DNS) to avoid the fragility of addressing services by IP address (an address that might not be initially known, that might change at any time, is difficult to remember, and might modulate between v4 and v6 addresses depending on its environment). Consequently, Istio’s network configuration has adopted a name-centric model, in which:
- Gateways expose names.
- VirtualServices configure and route names.
- DestinationRules describe how to communicate with the workloads behind a name.
- ServiceEntrys enable the creation of new names.
Application requests are initiated by calling the service’s name, as shown in Figure 8-1.
ServiceEntrys are how you manually add and remove service listings in Istio’s service registry. Entries in the service registry can receive traffic by name and be targeted by other Istio configurations. At their simplest, you can use them to give a name to an IP address, as demonstrated in Example 8-1 (see static-se.yaml in the GitHub repository for this book).
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: http-server
spec:
  hosts:
  - some.domain.com
  ports:
  - number: 80
    name: http
    protocol: http
  resolution: STATIC
  endpoints:
  - address: 2.2.2.2
```
Given the ServiceEntry in Example 8-1, service proxies in the mesh will forward requests to some.domain.com to the IP address 2.2.2.2. As Example 8-2 shows, you can use ServiceEntrys to elevate a name that’s addressable via DNS into a name addressable in Istio (see dns-se.yaml in the GitHub repository for this book).
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc-dns
spec:
  hosts:
  - foo.bar.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTP
  resolution: DNS
  endpoints:
  - address: baz.com
```
The ServiceEntry defined in Example 8-2 causes service proxies to forward requests addressed to foo.bar.com to baz.com, using DNS to resolve baz.com to endpoints. In this example, because we declare that the service is outside the mesh (location: MESH_EXTERNAL), service proxies won’t attempt to use mTLS to communicate with it.
All service registries with which Istio integrates (Kubernetes, Consul, Eureka, etc.) work by transforming their data into ServiceEntrys. For example, a Kubernetes service with one pod (and therefore one endpoint) maps directly into a ServiceEntry with a host and an IP address endpoint, as illustrated in Example 8-3 (see svc-endpoint.yaml in this book’s GitHub repository).
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 1.2.3.4
  ports:
  - port: 80
```
This becomes the ServiceEntry shown in Example 8-4 (see k8s-se.yaml in the GitHub repository for this book).
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: k8s-my-service
spec:
  hosts:
  # The names an application can use in k8s to target this service
  - my-service
  - my-service.default
  - my-service.default.svc.cluster.local
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 1.2.3.4
```
ServiceEntrys created by platform adapters don’t appear directly in Istio’s configuration (i.e., you cannot istioctl get them). Rather, you can only istioctl get ServiceEntrys that you have created.

Note that Istio does not populate DNS entries based on ServiceEntrys. This means that Example 8-1, which gives the address 2.2.2.2 the name some.domain.com, will not allow an application to resolve some.domain.com to 2.2.2.2 via DNS. This is a departure from systems like Kubernetes, for which declaring a service also creates DNS entries for that service that an application can use at runtime. There is a CoreDNS plug-in for Istio that generates DNS records from Istio ServiceEntrys, which you can use to populate DNS for Istio services in environments outside of Kubernetes, or when you want to model things that are not Kubernetes services.
Finally, as Example 8-5 demonstrates, you can use ServiceEntrys to create virtual IP addresses (VIPs), mapping an IP address to a name that you can configure via Istio’s other networking APIs.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: http-server
spec:
  hosts:
  - my-tcp-service.internal
  addresses:
  - 1.2.3.4
  ports:
  - number: 975
    name: tcp
    protocol: TCP
  resolution: DNS
  endpoints:
  - address: foo.com
```
This declares 1.2.3.4 as a VIP with the name my-tcp-service.internal. All traffic to that VIP on port 975 will be forwarded to an IP address for foo.com resolved via DNS. Of course, we can configure the endpoints for a VIP just like any other ServiceEntry, deferring to DNS or configuring a set of addresses explicitly. Other Istio configurations can use the name my-tcp-service.internal to describe traffic for this service. Again, though, understand that Istio will not set up DNS entries external to the service mesh (or, in the case of Kubernetes, external to the cluster), so that my-tcp-service.internal resolves to the address 1.2.3.4 for applications. You must configure DNS to do that, or the application must address 1.2.3.4 directly itself.
DestinationRules, a little counterintuitively, are really all about configuring clients. They allow a service operator to describe how a client in the mesh should call their service, including the following:

- Subsets of the service (e.g., v1 and v2)
- The load-balancing strategy the client should use
- The conditions to use to mark endpoints of the service as unhealthy
- L4 and L7 connection pool settings
- TLS settings for the server
We cover client-side load balancing, load-balancing strategy, and outlier detection in detail in the section “Resiliency” later in this chapter.
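As a preview of the unhealthy-endpoint behavior in that list, the following sketch shows outlier detection in a DestinationRule; the host and thresholds are illustrative, and the field names are from Istio’s v1alpha3 API:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-outlier-detection
spec:
  host: foo.default.svc.cluster.local
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5    # eject an endpoint after 5 consecutive errors
      interval: 10s           # how often endpoints are scanned
      baseEjectionTime: 30s   # minimum time an endpoint stays ejected
      maxEjectionPercent: 50  # never eject more than half the endpoints
```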
With DestinationRules, we can configure low-level connection pool settings, like the number of TCP connections allowed to each destination host; the maximum number of outstanding HTTP/1.1, HTTP/2, or gRPC requests allowed to each destination host; and the maximum number of retries that can be outstanding across all of the destination’s endpoints. Example 8-6 shows a DestinationRule that allows a maximum of four TCP connections per destination endpoint and a maximum of 1,000 concurrent HTTP/2 requests over those four TCP connections.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-default
spec:
  host: foo.default.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 4
      http:
        http2MaxRequests: 1000
```
DestinationRules can describe how a sidecar should secure the connection with a destination endpoint. Four modes are supported:

DISABLED
    Disables TLS for the TCP connection
SIMPLE
    Originates a TLS connection to the destination endpoint
MUTUAL
    Establishes an mTLS connection to the destination endpoint using client certificates you provide
ISTIO_MUTUAL
    Establishes an mTLS connection to the destination endpoint using Istio-provisioned certificates
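For instance, a minimal sketch of the last mode (the hostname is illustrative) looks like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-istio-mtls
spec:
  host: foo.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL  # mTLS with certificates provisioned by Istio
```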
Enabling mTLS across the mesh via Istio’s mesh configuration is shorthand for setting Istio mTLS as the mode for all destinations in the mesh. We can also use a DestinationRule to allow connecting to an HTTPS website outside the mesh, as shown in Example 8-7 (see egress-destrule.yaml in the GitHub repository for this book).
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: google.com
spec:
  host: "*.google.com"
  trafficPolicy:
    tls:
      mode: SIMPLE
```
Or, we can describe connecting to another server with mTLS, as illustrated in Example 8-8.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: remote-a-ingress
spec:
  host: ingress.a.remote.cluster
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/remote-cluster-a.pem
      privateKey: /etc/certs/client_private_key_cluster_a.pem
      caCertificates: /etc/certs/rootcacerts.pem
```
You can use a DestinationRule like the one in Example 8-8 together with a ServiceEntry for ingress.a.remote.cluster to route traffic across trust domains (e.g., separate clusters) over the internet, securely, with no VPN or other overlay networks. We cover zero-VPN networking and other topics in Chapter 13.
Finally, DestinationRules allow you to split a single service into subsets based on labels, and to configure each subset separately with all of the features we’ve described so far. For example, we could split a service into two subsets by version and use a VirtualService to perform a canary release of the new version, gradually shifting all of the traffic to it. As presented in Example 8-9, foo has two versions, v1 and v2, and each version of the foo service has its own distinctly defined load-balancing policy.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-default
spec:
  host: foo.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: LEAST_CONN
```
We cover VirtualServices in more detail in the next section.

A VirtualService describes how traffic addressed to a name flows to a set of destinations, as shown in Example 8-10.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-identity
spec:
  hosts:
  - foo.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
```
The VirtualService in Example 8-10 forwards traffic addressed to foo.default.svc.cluster.local to the destination foo.default.svc.cluster.local. Pilot implicitly generates a VirtualService like this one to pair with every service’s ServiceEntry.
Of course, we can do many more interesting things with VirtualServices than that. For example, we can define HTTP endpoints for a service and have Envoy return 404 errors (on the client side) for invalid paths without calling the remote server, as demonstrated in Example 8-11.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-apiserver
spec:
  hosts:
  - foo.default.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/api"
    route:
    - destination:
        host: apiserver.foo.svc.cluster.local
```
Clients calling foo.default.svc.cluster.local/api/… are directed to a set of API servers at the destination apiserver.foo.svc.cluster.local, and any other URI results in Envoy finding no destination for the request and responding to the application with a 404 error. (This is why Pilot creates an implicit VirtualService for every ServiceEntry.) So even though a catch-all destination isn’t explicitly defined, any path the VirtualService does not match results in a 404, forming an implicit catch-all.
You can use VirtualServices to target very specific segments of traffic and direct them to different destinations. For example, a VirtualService can match requests by header values, the port a caller is attempting to connect to, or the labels on the client’s workload (e.g., labels on the client’s pod in Kubernetes) and send matching traffic to a different destination (e.g., a new version of a service) than all of the unmatched traffic. We cover these use cases in detail in the section “Traffic Steering and Routing” later in this chapter. A simple example is sending a fraction of traffic to the new version of a service (see Example 8-12), allowing a quick rollback in the case of a bad deployment.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-apiserver
spec:
  hosts:
  - foo.default.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/api"
    route:
    - destination:
        host: apiserver.foo.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: apiserver.foo.svc.cluster.local
        subset: v2
      weight: 10
```
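As a sketch of the label-based matching mentioned earlier (the hostname, label, and subsets here are illustrative; sourceLabels is a field of the v1alpha3 HTTP match API), a VirtualService can also route based on who the caller is:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-by-caller
spec:
  hosts:
  - foo.default.svc.cluster.local
  http:
  - match:
    - sourceLabels:   # match requests from client pods labeled env=staging
        env: staging
    route:
    - destination:
        host: foo.default.svc.cluster.local
        subset: v2
  - route:            # default route for all other callers
    - destination:
        host: foo.default.svc.cluster.local
        subset: v1
```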
It’s important to note that within a VirtualService, the match conditions are checked at runtime in the order in which they appear. This means that the most specific match clauses should appear first and less-specific clauses later. For safety, provide a “default” route with no match conditions, because, again, a request that does not match any condition of a VirtualService results in a 404 for the sender (or a “connection refused” error for non-HTTP protocols).
We say that a VirtualService claims a name: a hostname can appear in at most one VirtualService, though a VirtualService can claim many hostnames. This can cause problems when a single name, like apis.foo.com, is used to host many services that route by path (for example, apis.foo.com/bars and apis.foo.com/bazs), because many teams must edit the single VirtualService for apis.foo.com. One solution to this problem is to use a set of tiered VirtualServices. The top-level VirtualService splits up requests into logical services by path prefix and is a resource shared by every team (similar to a Kubernetes Ingress resource). Then a VirtualService for each of the logical services in the top-level VirtualService can describe traffic for that block of requests. You can repeatedly apply this pattern to delegate management of smaller and smaller segments of traffic.
For example, consider a shared VirtualService with business logic for multiple teams, like the one in Example 8-13.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-apiserver
spec:
  hosts:
  - apis.foo.com
  http:
  - match:
    - uri:
        prefix: "/bars/newMethod"
    route:
    - destination:
        host: bar.foo.svc.cluster.local
        subset: v2
  - match:
    - uri:
        prefix: "/bars"
    route:
    - destination:
        host: bar.foo.svc.cluster.local
        subset: v1
  - match:
    - uri:
        prefix: "/bazs/legacy/rest/path"
    route:
    - destination:
        host: monolith.legacy.svc.cluster.remote
    retries:
      attempts: 3
      perTryTimeout: 2s
  - match:
    - uri:
        prefix: "/bazs"
    route:
    - destination:
        host: baz.foo.svc.cluster.local
```
This VirtualService definition can be decomposed into separate VirtualServices (shown in Example 8-14) owned by the appropriate teams.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-svc-shared
spec:
  hosts:
  - apis.foo.com
  http:
  - match:
    - uri:
        prefix: "/bars"
    route:
    - destination:
        host: bar.foo.svc.cluster.local
  - match:
    - uri:
        prefix: "/bazs"
    route:
    - destination:
        host: baz.foo.svc.cluster.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-bars-svc
spec:
  hosts:
  - bar.foo.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/bars/newMethod"
    route:
    - destination:
        host: bar.foo.svc.cluster.local
        subset: v2
  - route:
    - destination:
        host: bar.foo.svc.cluster.local
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-bazs-svc
spec:
  hosts:
  - baz.foo.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/bazs/legacy/rest/path"
    route:
    - destination:
        host: monolith.legacy.svc.cluster.remote
    retries:
      attempts: 3
      perTryTimeout: 2s
  - route:
    - destination:
        host: baz.foo.svc.cluster.local
```
As described in “Decoupling at Layer 5” in Chapter 1, service meshes dramatically facilitate the practice of decoupling service teams (developers, operators, etc.), and as such they are a key way to improve the speed at which teams can move, reduce the scope of risk teams face when managing changes, clarify responsibility between roles, and facilitate accountability over specific aspects of service delivery.
Example 8-14 is a specific example of how you can conscientiously approach clarifying lines of responsibility and thoroughly decoupling service teams within your service mesh configuration at L5.
Finally, VirtualServices can claim a set of hostnames described by a wildcard pattern. In other words, a VirtualService can claim a host like *.com. When choosing the configuration to apply, the most specific host always wins: for a request to baz.foo.com, the VirtualService for baz.foo.com applies, and the VirtualServices for *.foo.com and *.com are ignored. Note, though, that no VirtualService can claim “*” (the wildcard host).
Gateways are concerned with exposing names over trust boundaries. Suppose that you have a webserver.foo.svc.cluster.local service deployed in your mesh that serves your website, foo.com. You can expose that webserver to the public internet by using a Gateway to map from your internal name, webserver.foo.svc.cluster.local, to your public name, foo.com. You also need to declare the port on which to expose the public name, and the protocol with which to expose it, as shown in Example 8-15.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-com-gateway
spec:
  selector:
    app: gateway-workloads
  servers:
  - hosts:
    - foo.com
    port:
      number: 80
      name: http
      protocol: HTTP
```
For secure transmission, just mapping between the names isn’t enough, though: a Gateway must also be able to prove to callers that it owns the name. You can do this by configuring the Gateway to serve a certificate for foo.com, as shown in Example 8-16 (see gw-https.yaml in this book’s GitHub repository).
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-com-gateway
spec:
  selector:
    app: gateway-workloads
  servers:
  - hosts:
    - foo.com
    port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE  # Enables HTTPS on this port
      serverCertificate: /etc/certs/foo-com-public.pem
      privateKey: /etc/certs/foo-com-privatekey.pem
```
Both foo-com-public.pem and foo-com-privatekey.pem in Example 8-16 are long-lived certificates for foo.com, such as you would get from a CA like Let’s Encrypt. Unfortunately, Istio doesn’t handle these types of certificates today, so you need to mount any certificates that a Gateway must serve into the workload’s filesystem. Also, note that we updated both the port and protocol to match. We could keep serving foo.com over HTTP on port 80 in addition to HTTPS on port 443, as shown in Example 8-17, if we wanted to.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-com-gateway
spec:
  selector:
    app: gateway-workloads
  servers:
  - hosts:
    - foo.com
    port:
      number: 80
      name: http
      protocol: HTTP
  - hosts:
    - foo.com
    port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE  # Enables HTTPS on this port
      serverCertificate: /etc/certs/foo-com-public.pem
      privateKey: /etc/certs/foo-com-privatekey.pem
```
But based on security best practices, we’re better off configuring our Gateway to perform an HTTPS upgrade, as shown in Example 8-18 (see gw-https-upgrade.yaml in the GitHub repository for this book).
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-com-gateway
spec:
  selector:
    app: gateway-workloads
  servers:
  - hosts:
    - foo.com
    port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true  # Sends 301 redirect for http requests
  - hosts:
    - foo.com
    port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE  # Enables HTTPS on this port
      serverCertificate: /etc/certs/foo-com-public.pem
      privateKey: /etc/certs/foo-com-privatekey.pem
```
Our examples demonstrate the commonly used HTTP(S) ports 80 and 443; however, Gateways can expose any protocol over any port. When Istio controls the Gateway implementation, the Gateway will listen on all ports listed in its configuration.
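For instance, a sketch of a Gateway exposing a raw TCP service on an arbitrary port (the port number and workload labels here are illustrative) might look like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-echo-gateway
spec:
  selector:
    app: gateway-workloads
  servers:
  - hosts:
    - "*"             # raw TCP has no hostname to match on
    port:
      number: 31400   # an arbitrary non-HTTP port
      name: tcp
      protocol: TCP
```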
So far, none of these Gateways maps foo.com to any service in our mesh! For that, we need to bind a VirtualService to our Gateway, as shown in Example 8-19 (see foo-vs.yaml in this book’s GitHub repository).
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-com-virtual-service
spec:
  hosts:
  - foo.com
  gateways:
  - foo-com-gateway
  http:
  - route:
    - destination:
        host: webserver.foo.svc.cluster.local
```
We cover the rules for binding VirtualServices to Gateways in the section “Binding VirtualServices to Gateways”, but this raises an important point: Gateways configure L4 behavior, not L7 behavior. That is, a Gateway describes the ports to bind to, the protocols to expose on those ports, and the names (and proof of ownership of those names, via certificates) to serve on those ports. VirtualServices, by contrast, describe L7 behavior: how to map from a name (e.g., foo.com) to different applications and workloads.
Decoupling L4 from L7 behavior was a design goal for Istio. This allows patterns like providing a single Gateway that many teams can reuse, as shown in Example 8-20 (see gw-to-vses.yaml in the GitHub repository for this book).
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-com-gateway
spec:
  selector:
    app: gateway-workloads
  servers:
  - hosts:
    - "*.foo.com"
    port:
      number: 80
      name: http
      protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api-foo-com-virtual-service
spec:
  hosts:
  - api.foo.com
  gateways:
  - foo-com-gateway
  http:
  - route:
    - destination:
        host: api.foo.svc.cluster.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: www-foo-com-virtual-service
spec:
  hosts:
  - www.foo.com
  gateways:
  - foo-com-gateway
  http:
  - route:
    - destination:
        host: webserver.foo.svc.cluster.local
```
More important, this decoupling of L4 and L7 behavior means that you can use a Gateway to model network interfaces in Istio (e.g., network appliances or nonflat L3 networks). Finally, you can use Gateways to build mTLS tunnels between parts of a mesh deployed on separate L3 networks. For example, you can use them to establish secure connections between Istio deployments in separate cloud provider availability zones, over the public internet, without the need for a VPN.
You can also use Gateways to model arbitrary network interfaces, regardless of whether the interface is under Istio’s control. So even though a network interface might be represented in Istio, the behavior of the network service behind the Gateway representing that interface might or might not be under Istio’s control. For example, if you’re using a Gateway to model an externally provided load balancer (perhaps in your cloud deployment), Istio configuration cannot affect the decisions made by that load balancer. The workloads that belong to a Gateway are described by the selector field on the Gateway object: workloads with labels matching the selector are treated as Gateways in Istio. When Istio controls the Gateway implementation (i.e., when the Gateway is an Envoy), we can bind VirtualServices to the Gateway to take advantage of VirtualService features at the ingress and egress points of our cluster.
We say a VirtualService binds to a Gateway if the following are true:

- The VirtualService lists the Gateway’s name in its gateways field.
- At least one host claimed by the VirtualService is exposed by the Gateway.
The hosts in a Gateway’s configuration are similar to those in a VirtualService, with a few subtle differences. Distinctively, Gateways do not claim hostnames the way VirtualServices do. Instead, a Gateway exposes a name, allowing a VirtualService to configure traffic for that name by binding to that Gateway. For example, any number of Gateways can expose the name foo.com, but a single VirtualService must configure traffic for that name across all of those Gateways. The host field of a Gateway accepts wildcard hostnames in the same way a VirtualService does, but Gateways do allow the wildcard hostname “*”.
Let’s explore a bit, first looking at two Gateways differing in their hosts configuration: foo-gateway and wildcard-gateway (see gw-examples.yaml in this book’s GitHub repository). First, the foo-gateway example:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-gateway
spec:
  selector:
    app: my-gateway-impl
  servers:
  - hosts:
    - foo.com
    port:
      number: 80
      name: http
      protocol: HTTP
```
And here is the wildcard-gateway example:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: wildcard-gateway
spec:
  selector:
    app: my-gateway-impl
  servers:
  - hosts:
    - "*.com"
    port:
      number: 80
      name: http
      protocol: HTTP
```
Now let’s look at how the following VirtualServices bind (or don’t bind, as the case may be) to these Gateways (see vs-examples.yaml in the GitHub repository for this book).

The following example binds to foo-gateway because the VirtualService lists that Gateway by name and claims the host foo.com, which foo-gateway exposes. Requests to foo.com received on port 80 by this Gateway will be routed to port 7777 of the foo service in the default namespace. It does not bind to wildcard-gateway: the hosts match, but the VirtualService does not list wildcard-gateway as a target:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-default
spec:
  hosts:
  - foo.com
  gateways:
  - foo-gateway
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
        port:
          number: 7777
```
The next example also binds to foo-gateway, for the same reasons: the Gateway name matches, and the VirtualService claims the host foo.com, which foo-gateway exposes. Only the name foo.com is visible to callers of the Gateway, even though the VirtualService claims the name foo.super.secret.internal.name too. It does not bind to wildcard-gateway: the hosts match, but the VirtualService does not list wildcard-gateway as a target:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-default
spec:
  hosts:
  - foo.com
  - foo.super.secret.internal.name
  gateways:
  - foo-gateway
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
```
The following example doesn’t bind to either Gateway: although the VirtualService lists both Gateways, the hostname it claims, foo.super.secret.internal.name, is not exposed by either Gateway, so neither will accept requests for that name:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-internal
spec:
  hosts:
  - foo.super.secret.internal.name
  gateways:
  - foo-gateway
  - wildcard-gateway
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
```
The final example binds to foo-gateway, because the Gateway name matches and the VirtualService claims the host foo.com, which foo-gateway exposes. It also binds to wildcard-gateway, because that Gateway is listed and foo.com matches the wildcard *.com that wildcard-gateway exposes:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-internal
spec:
  hosts:
  - foo.com
  gateways:
  - foo-gateway
  - wildcard-gateway
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
```
There’s a special, implicit Gateway in every Istio deployment called the mesh Gateway. This Gateway has workloads that are represented by every service proxy in the mesh, and it exposes the wildcard host on every port. When a VirtualService doesn’t list any Gateways, it automatically applies to the mesh Gateway; that is, to all of the sidecars in the mesh. A VirtualService always binds to either the mesh gateway or the gateways listed in its gateways field. A common tripping hazard with VirtualServices is updating a VirtualService already in use within the mesh to bind to a specific Gateway, displacing the mesh Gateway. On pushing that resource, its configuration no longer applies to sidecars, which causes errors. For this kind of update, include the “mesh” gateway explicitly in the list of Gateways to bind to.
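For example (the resource name, hostname, and gateway name are illustrative), a VirtualService that should apply both at the edge and to the mesh’s sidecars lists the reserved “mesh” gateway alongside the named Gateway:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-everywhere
spec:
  hosts:
  - foo.com
  gateways:
  - mesh          # keep applying this configuration to sidecars in the mesh
  - foo-gateway   # also apply it to traffic entering via foo-gateway
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
```

Omitting “mesh” from the list is exactly the displacement described above: the rules would apply only at the Gateway, and sidecar traffic would lose them.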
We can use the APIs described earlier in many different ways to affect traffic flow in our deployment. In this section, we cover some of the most common use cases, like using VirtualServices to make routing decisions based on the following:
Request attributes like the URI
Headers
The request’s scheme
The request’s target port
Or, you can use VirtualServices to implement canary and blue/green deployment strategies between services.
One of Istio’s most powerful features is its ability to perform traffic routing based on request metadata like the request’s URI, its headers, the source or destination IP addresses, and other metadata about the request. The one key limitation is that Istio will not perform routing based on the body of the request.
The section “VirtualService”, earlier in the chapter, covers routing based on URI prefixes extensively. You can perform similar routing on exact URI matches and regexes, as shown in Example 8-21.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-bars-svc
spec:
  hosts:
  - bar.foo.svc.cluster.local
  http:
  - match:
    - uri:
        exact: "/assets/static/style.css"
    route:
    - destination:
        host: webserver.frontend.svc.cluster.local
  - match:
    - uri:
        # Match requests like "/foo/132:myCustomMethod"
        regex: '/foo/\d+:myCustomMethod'
    route:
    - destination:
        host: bar.foo.svc.cluster.local
        subset: v3
  - route:
    - destination:
        host: bar.foo.svc.cluster.local
        subset: v2
We can also route based on headers or cookie values, as shown in Example 8-22.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dev-webserver
spec:
  hosts:
  - webserver.company.com
  http:
  - match:
    - headers:
        cookie:
          exact: "environment=dev"
    route:
    - destination:
        host: webserver.dev.svc.cluster.local
  - route:
    - destination:
        host: webserver.prod.svc.cluster.local
Of course, Istio supports routing requests for TCP services as well, using L4 request metadata like destination subnet and target port (see Example 8-23). For TLS TCP services, you can use the SNI to perform routing just like the host header in HTTP.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dev-api-server
spec:
  hosts:
  - api.company.com
  tcp:
  - match:
    - port: 9090
      destinationSubnets:
      - 10.128.0.0/16
    route:
    - destination:
        host: database.test.svc.cluster.local
  - match:
    - port: 9090
    route:
    - destination:
        host: database.prod.svc.cluster.local
  tls:
  - match:
    - sniHosts:
      - example.api.company.com
    route:
    - destination:
        host: example.prod.svc.cluster.local
See Istio’s website for a full reference on all available match conditions and their syntax.
In a blue/green deployment, two versions, old and new, of an application are deployed side by side, and user traffic is flipped from the old set to the new. This allows for a quick fallback to the previously working version if something goes wrong, because all that’s required is reverting user traffic back to the old set from the new (as opposed to a deployment strategy such as a rolling update, in which to roll back to the previous version we must first redeploy the previous version’s binary).
Istio’s networking APIs make it pretty easy to do blue/green deployments. We declare two subsets for our service using a DestinationRule, and then we use a VirtualService to direct traffic to one subset or the other, as shown in Example 8-24.
Rather than use “blue/green” in our DestinationRule, we refer to subsets by the version of the application they represent. This is both easier for developers to understand (because it talks about parts of their service in terms of versions they control) and less prone to errors (avoiding “Hey, before I deploy, is blue or green the active set?”-type outages). This phrasing also makes it easier to transition to other deployment strategies like canary deployments.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-default
spec:
  host: foo.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Then, we can write a VirtualService that directs all traffic in the cluster targeting our service to a single subset of the service, as demonstrated in Example 8-25.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-blue-green-virtual-service
spec:
  hosts:
  - foo.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
        subset: v1
To flip to the other set, you simply update the VirtualService to target subset v2, as shown in Example 8-26.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-blue-green-virtual-service
spec:
  hosts:
  - foo.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
        subset: v2
Of course, you can combine this with Gateways to perform blue/green deployments for users consuming your service via a Gateway, in addition to the services in your mesh.
A canary deployment is the practice of sending a small portion of traffic to newly deployed workloads, gradually ramping up until all traffic flows to the new workloads. The goal is to verify that a new workload is healthy (up, running, and not returning errors) before sending all traffic to it. It’s similar to a blue/green deployment in that it allows a fast fallback to known-healthy workloads, but improves on that method by sending only a portion of traffic, rather than all of it, to the new workloads. Overall, this reduces the amount of error budget (a metric allocating a specific amount of service interruption that is allowed within a given time period) that you might spend performing a deployment.
Canary-based deployments also tend to require less spare resource capacity for updates. A true blue/green deployment requires double the resource capacity of a standard deployment (to run both a full blue and a full green deployment). Canaries can be combined with in-place binary rollout strategies to get the rollback safety of a blue/green deployment while requiring only a constant amount of additional resources (spare capacity to schedule just a small number of additional workloads).
A new workload can be canaried in a variety of ways. You can use the full set of matches outlined in the section “Routing with request metadata” to send small portions of traffic to a new backend. However, the simplest canary deployment is a percentage-based traffic split. We can start by sending 5% of traffic to the new version, then gradually push new VirtualService configurations, ramping traffic up to 100% on the new version, as shown in Example 8-27 (see canary-shift.yaml in the GitHub repository for this book).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-canary-virtual-service
spec:
  hosts:
  - foo.default.svc.cluster.local
  tcp:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
        subset: v2
      weight: 5
    - destination:
        host: foo.default.svc.cluster.local
        subset: v1
      weight: 95
Another common pattern is to canary a new deployment to a set of trusted test users like the service team itself or a set of customers who have opted into experimental features. You can use Istio to set a “trusted-tester” cookie, for example, which at routing time can divert requests in that specific session to different workloads as opposed to workloads serviced by requests without this cookie, as shown in Example 8-28 (see canary-cookie.yaml in the GitHub repository for this book).
Of course, take care when using caller-supplied values (like cookies) to perform routing: ideally all services in your cluster should perform authentication and authorization on all requests. This ensures that even if a caller fakes data to trigger routing behavior, they cannot access data they wouldn’t be able to otherwise (and in fact, implementing authentication and authorization via Istio is a powerful way to ensure that all services in your cluster do this correctly).
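As a sketch of that trusted-tester pattern (the cookie value, hostnames, and subset names here are illustrative, not taken from the book’s canary-cookie.yaml), sessions carrying the cookie are routed to the canary subset while everyone else stays on the stable subset:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-trusted-tester
spec:
  hosts:
  - foo.default.svc.cluster.local
  http:
  - match:
    - headers:
        cookie:
          # Match any cookie string containing the trusted-tester flag
          regex: ".*trusted-tester=true.*"
    route:
    - destination:
        host: foo.default.svc.cluster.local
        subset: v2   # canary workloads
  - route:
    - destination:
        host: foo.default.svc.cluster.local
        subset: v1   # stable workloads for all other traffic
```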
A resilient system is one that can maintain good performance for its users (i.e., staying within its SLOs) while coping with failures in the downstream systems on which it depends. Istio provides a lot of features to help build more resilient applications, the most important being client-side load balancing, circuit breaking via outlier detection, automatic retries, and request timeouts. Istio also provides tools to inject faults into applications, allowing you to build programmatic, reproducible tests of your system’s resiliency.
Client-side load balancing is an incredibly valuable tool for building resilient systems. By allowing clients to communicate directly with servers without going through reverse proxies, we remove points of failure while still keeping a well-behaved system. Further, it allows clients to adjust their behavior dynamically based on responses from servers; for example, to stop sending requests to endpoints that return more errors than other endpoints of the same service (we cover this feature, outlier detection, more in the next section). DestinationRules allow you to define the load-balancing strategy clients use to select backends to call. As Example 8-29 shows, we can configure clients to use a simple round-robin load-balancing strategy (see round-robin.yaml in this book’s GitHub repository).
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-default
spec:
  host: foo.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
This DestinationRule sends traffic round-robin across the endpoints of the service foo.default.svc.cluster.local. A ServiceEntry defines what those endpoints are (or how to discover them at runtime; e.g., via DNS). It’s important to note that a DestinationRule applies only to hosts in Istio’s service registry. If a ServiceEntry does not exist for a host, the DestinationRule is ignored.
More complex load-balancing strategies, such as consistent hash-based load balancing, are also supported. The following DestinationRule configures load balancing based on a hash of the caller’s IP address (you can also use HTTP headers and cookies with consistent hash load balancing), as shown in Example 8-30.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-default
spec:
  host: foo.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        useSourceIp: true
Circuit breaking is a pattern of protecting calls (e.g., network calls to a remote service) behind a “circuit breaker.” If the protected call returns too many errors, we “trip” the circuit breaker and return errors to the caller without executing the protected call. This can be used to mitigate several classes of failure, including cascading failures. In load balancing, to “lame-duck” an endpoint is to remove it from the “active” load-balancing set so that no traffic is sent to it for some period of time. Lame-ducking is one method that we can use to implement the circuit-breaker pattern.
Outlier detection is a means of triggering lame-ducking of endpoints that are sending bad responses. We can detect when an individual endpoint is an outlier compared to the rest of the endpoints in our “active” load-balancing set (i.e., returning more errors than other endpoints of the service) and remove the bad endpoint from our “active” load-balancing set, as demonstrated here:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-default
spec:
  host: foo.default.svc.cluster.local
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5
      interval: 1m
      baseEjectionTime: 3m
The DestinationRule here configures the sidecar to eject any endpoint that has had five consecutive errors from the load-balancing set for at least three minutes. The sidecar scans the set of all endpoints each minute to decide whether any endpoints should be ejected or whether ejected endpoints can be returned to the load-balancing set. Remember that outlier detection is per client, because any server could return bad results to only a specific client (e.g., if there’s a network partition between them, but not between the server and its other clients).
Every system has transient failures: network buffers overflow, a server shutting down drops a request, a downstream system fails, and so on. We use retries—sending the same request to a different endpoint of the same service—to mitigate the impact of transient failures. However, poor retry policies are a frequent secondary cause of outages: “Something went wrong, and client retries made it worse” is a common refrain. Often this is because retries are hardcoded into applications (e.g., as a for loop around the network call) and therefore are difficult to change. Istio gives you the ability to configure retries globally for all services in your mesh. More significant, it allows you to control those retry strategies at runtime via configuration, so you can change client behavior on the fly, as shown in the following:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-default
spec:
  hosts:
  - foo.com
  gateways:
  - foo-gateway
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
    retries:
      attempts: 3
      perTryTimeout: 500ms
The retry policy defined in a VirtualService works in concert with the connection pool settings defined in the destination’s DestinationRule to control the total number of concurrent outstanding retries to the destination. You can read more about that in the section “DestinationRule” earlier in the chapter.
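As a sketch of those connection pool settings (the values are illustrative, not recommendations), a DestinationRule can cap the number of outstanding retries to a destination like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-default
spec:
  host: foo.default.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        # Maximum number of retries that can be outstanding to all
        # endpoints of this host at any given time.
        maxRetries: 3
```

Pairing a cap like this with the VirtualService retry policy keeps a burst of failures from turning into a retry storm against the destination.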
Timeouts are important for building systems with consistent behavior. By attaching deadlines to requests, we’re able to abandon requests that take too long and free up server resources. We’re also able to control our tail latency much more finely, because we know the longest we’ll wait for any particular request when computing our response for a client. You can attach a timeout to any HTTP route in a VirtualService, as shown in Example 8-31 (see timeout.yaml in the GitHub repository for this book).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-default
spec:
  hosts:
  - foo.com
  gateways:
  - foo-gateway
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
    timeout: 1s
When used with a retry, the timeout represents the total time that the client will spend waiting for a server to return a result. Example 8-32 demonstrates the configuration of a per-try-timeout, which controls the timeout of each individual attempt (see per-try-timeout.yaml in the GitHub repository for this book).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-default
spec:
  hosts:
  - foo.com
  gateways:
  - foo-gateway
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
    timeout: 2s
    retries:
      attempts: 3
      perTryTimeout: 500ms
The VirtualService in Example 8-32 configures our client to wait at most two seconds overall, retrying three times with a 500-ms timeout on each attempt. The extra 500 ms of slack in the overall timeout allows for randomized waits between retries.
Fault injection is an incredibly powerful way to test and build reliable distributed applications. Companies like Netflix have taken this to the extreme, coining the term “chaos engineering” to describe the practice of injecting faults into running production systems to ensure that the systems are built to be reliable and tolerant of environmental failures.
Istio allows you to configure faults for HTTP traffic, injecting arbitrary delays or returning specific response codes (e.g., 500) for some percentage of traffic:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-default
spec:
  hosts:
  - foo.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
    fault:
      delay:
        fixedDelay: 5s
        percentage:
          value: 100
The VirtualService in the preceding example injects a five-second delay for all traffic calling the foo service. This is a great way to reliably test things like how a UI behaves on a bad network when its backends are far away. It’s also valuable for testing that applications set timeouts on their requests.
Replying to clients with specific response codes, like a 429 or a 500, is also great for testing. For example, it can be challenging to programmatically test how your application behaves when a third-party service that it depends on begins to fail. Using Istio, you can write a set of reliable end-to-end tests of your application’s behavior in the presence of failures of its dependencies, such as the following:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-default
spec:
  hosts:
  - foo.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: foo.default.svc.cluster.local
    fault:
      abort:
        httpStatus: 500
        percentage:
          value: 10
For example, we can simulate 10% of requests to some backend failing at runtime with a 500 response code.
Gateways represent network trust boundaries in a deployment. In other words, we typically use Gateways to model proxies on the edge of the network that control ingress and egress of traffic into and out of the network (in an environment like Kubernetes, which provides a flat network to pods, the network spans the entire cluster). Together, Gateways and VirtualServices can precisely control how traffic enters and exits the mesh. Even better, when Istio is deployed with Policy enabled, you can apply policy to traffic as it enters or leaves the mesh.
The section “Gateway” earlier in the chapter covers how hostnames are “exposed” over a Gateway by binding a VirtualService to that Gateway. After a VirtualService is bound to a Gateway, all of the normal VirtualService functionality described in the previous sections, such as retries, fault injection, or traffic steering, can be applied to traffic at ingress. In many ways, the ingress Gateway acts as the “external-to-cluster, client-side service proxy.”
One thing Istio can’t control, though, is how client traffic gets to the ingress proxies. A common pattern in Kubernetes environments is to model Istio’s ingress proxies as a NodePort service and then let the platform handle provisioning public IP addresses, DNS records, and so on.
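As an illustrative sketch of that pattern (the names and the node port number are assumptions, not Istio defaults you should rely on), the ingress gateway pods can be exposed with a NodePort Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: NodePort
  selector:
    istio: ingressgateway   # matches the ingress gateway pods
  ports:
  - name: http2
    port: 80
    nodePort: 31380   # the platform routes external traffic to this port on each node
```

From there, the platform (or an external load balancer) is responsible for getting client traffic to the nodes; Istio takes over once traffic reaches the ingress proxies.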
In the same way that we think about ingress proxies as a sort of “external-to-cluster, client-side service proxy,” egress proxies act as a sort of “internal-to-cluster, server-side service proxy.” With a combination of ServiceEntries, DestinationRules, VirtualServices, and Gateways, we can trap outbound traffic and redirect it to egress proxies, where we’re free to apply policy.
Let’s walk through an egress Gateway example. Here, we assume that Istio has been deployed with istio-egressgateway.istio-system.svc.cluster.local as the egress proxy. With that in place, we start by modeling the external destination we’re trying to reach. Example 8-33 models https://wikipedia.org as a ServiceEntry (see se-egress-gw.yaml in this book’s GitHub repository).
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: https-wikipedia-org
spec:
  hosts:
  - wikipedia.org
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS
  endpoints:
  - address: istio-egressgateway.istio-system.svc.cluster.local
    ports:
      https: 443
As shown in Example 8-34, next we configure an egress Gateway to accept traffic for wikipedia.org (see egress-gw-wiki.yaml in the GitHub repository for this book).
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: https-wikipedia-org-egress
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: https-wikipedia-org-egress-443
      protocol: TLS   # Mark as TLS because we are passing HTTPS through.
    hosts:
    - wikipedia.org
    tls:
      mode: PASSTHROUGH
We have a problem, though. We want our egress Gateway to use DNS to get an address for wikipedia.org and forward the request, but we’ve configured all of the proxies in the mesh to resolve wikipedia.org to the egress Gateway (so the proxy would forward the message back to itself, or drop it). To fix this, we take advantage of the fact that we can bind VirtualServices to Gateways, and route traffic going to wikipedia.org to a fake name we create; for example, egress-wikipedia-org:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: egress-wikipedia-org
spec:
  hosts:
  - wikipedia.org
  gateways:
  - https-wikipedia-org-egress
  tls:
  - match:
    - port: 443
      sniHosts:
      - wikipedia.org
    route:
    - destination:
        host: egress-wikipedia-org
Then, we use a ServiceEntry to resolve egress-wikipedia-org via DNS as wikipedia.org:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: egress-https-wikipedia-org
spec:
  hosts:
  - egress-wikipedia-org
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS
  endpoints:
  - address: wikipedia.org
    ports:
      https: 443
With this in place, we force traffic to an external site through a dedicated egress Gateway deployment. By default, Istio allows routing of traffic to destinations that do not have a ServiceEntry. As a security best practice, this default setting should be inverted, and services outside of the mesh should be explicitly whitelisted by creating service entries for them. To enable egress to an external service without going through an egress proxy, just create an identity ServiceEntry for it:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: https-wikipedia-org-direct
spec:
  hosts:
  - wikipedia.org
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS
  endpoints:
  - address: wikipedia.org
    ports:
      https: 443
We’ve seen the full power of Istio’s networking APIs, and there are a ton of features—overwhelmingly so. The important thing to remember is that you can approach things incrementally. Pick one feature that’s valuable to you today. Apply a small configuration to your service and get comfortable with it. Then, reach for the next feature that solves your next problem.