Chapter 5. Service Proxy

You’re likely familiar with the differentiation between forward and reverse proxies. As a refresher, forward proxies focus on outbound traffic, aiming to improve performance and filter requests, and are typically deployed as the interface between users on private networks and their internet requests. Forward proxies commonly improve performance because they can cache static web content, and they provide a level of security by preventing users from accessing specific categories of websites. If you work in a large organization, you might have a forward proxy between your local machine and the internet, filtering protocols and websites in accordance with your organization’s network use policy.

Conversely, reverse proxies focus on inbound traffic coming from the internet to private networks. They are commonly used to secure and filter HTTP requests, providing load balancing across real (backend) servers. To the extent that forward proxies typically represent user traffic to external servers, reverse proxies are commonly used to represent real servers to users (clients).

As illustrated in Figure 5-1, reverse proxies represent themselves as the servers. Depending on the type and configuration, to the client there’s little to no difference between the reverse proxy server and the service it’s making requests of. Reverse proxies forward requests to one or more real servers, which handle the requests. The response from the proxy server is returned as if it came directly from the real server, leaving the client with no knowledge of the real server(s).

Figure 5-1. Reverse proxies act as intermediates between client and server requests.

This concept isn’t that different from how highly resilient three-tiered applications are designed. You want indirection, HA, and load balancing between each of the three tiers. Though these tiers might conventionally be vertically scaled, proxies increase application resiliency since they’re inserted in client/server communication to provide additional network services like load balancing, as illustrated in Figure 5-2.

Figure 5-2. Three-tiered application with reverse proxy/load balancer between tiers

Proxies provide a placeholder for a service and control access to it, introducing an additional level of indirection.

What Is a Service Proxy?

Akin to reverse proxies, a service proxy is the client-side intermediary transiting requests on behalf of a service. The service proxy enables applications to send and receive messages over a channel as method calls. Service proxy connections can be created as needed or used to maintain open connections to facilitate pooling. Service proxies are transparently inserted, and as applications make service-to-service calls, they’re unaware of the data plane’s existence. Data planes are responsible for intracluster communication as well as inbound (ingress) and outbound (egress) cluster network traffic. Whether traffic is entering the mesh (ingressing) or leaving the mesh (egressing), application service traffic is directed first to the service proxy for handling. In Istio’s case, traffic is transparently intercepted using iptables rules and redirected to the service proxy.

An iptables Primer

iptables is a user-space CLI for managing host-based firewalling and packet manipulation in Linux. Netfilter is the Linux kernel module comprising tables, chains, and rules. Commonly, a given iptables environment will contain multiple tables: Filter, NAT, Mangle, and Raw. You can define your own tables; if you don’t, the Filter table is used by default, as shown in Figure 5-3.

Figure 5-3. iptables tables, chains, and rules

Tables can contain multiple chains. Chains can be built in or user defined, and they can contain multiple rules. Rules match and map packets. You can review the iptables chains Istio uses to redirect traffic to Envoy. iptables chains are network namespaced, so changes made within a pod don’t affect other pods or the node on which the pod is running.
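
To make tables, chains, and rules concrete, the following is an illustrative sketch (not Istio’s actual rule set; the chain name here is made up) of creating a user-defined chain in the NAT table and using it to redirect outbound TCP traffic to a local port, which is the same basic mechanism Istio uses to steer traffic to Envoy on port 15001:

# Create a user-defined chain in the NAT table (illustrative only)
iptables -t nat -N EXAMPLE_REDIRECT
# Add a rule to the chain: redirect matched TCP traffic to local port 15001
iptables -t nat -A EXAMPLE_REDIRECT -p tcp -j REDIRECT --to-ports 15001
# Send all locally generated outbound TCP traffic through the new chain
iptables -t nat -A OUTPUT -p tcp -j EXAMPLE_REDIRECT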

You can explore and even update the iptables rules Istio creates. You can see these chains in action and verify whether your application and sidecar containers carry the NET_ADMIN capability when you exec into one of your pod’s containers, as shown in Example 5-1.

Example 5-1. Sample output from a container with sidecarred Envoy proxy
# iptables -t nat --list
Chain ISTIO_REDIRECT (2 references)
target     prot opt source               destination
REDIRECT   tcp  --  anywhere             anywhere             redir ports 15001

Recall that traffic policy is configured by Pilot and implemented by service proxies. The collection of service proxies is referred to as the data plane. Service proxies intercept every packet in the request and are responsible for health checking, routing, load balancing, authentication, authorization, and generation of observable signals. Proxies offer indirection so that clients can point to the same location (e.g., proxy.example.com) while the service can move from location to location; thus, proxies represent a permanent reference. They add resilience to distributed systems.

Envoy Proxy Overview

Living up to its tagline as the universal data-plane API, the versatile and performant Envoy has emerged as an open source, application-level service proxy. Envoy was developed at Lyft, where large distributed systems problems needed to be overcome. It has since enjoyed broad reuse and integration within the cloud native ecosystem; the project’s community page highlights its more prominent uses.

Why Envoy?

Why not use NGINX, a pervasively used and battle-tested proxy? Or Linkerd v1, Conduit, HAProxy, or Traefik? At the time, Envoy was little known and not necessarily the obvious selection. The Linkerd v1 Java Virtual Machine (JVM)–based service proxy, with its resource utilization characteristics, was well suited for node agent deployments but not sidecar deployments (Linkerd v2 has since addressed this by moving to a Rust-based service proxy). Envoy was not originally intended as an edge proxy, but was designed to be deployed as a sidecar; over time at Lyft, Envoy was migrated from its initial edge deployment to the sidecar pattern.

Deployment model aside, the concept of hot reloads versus hot restarts was central to the decision that Istio would use Envoy as opposed to NGINX (which was the original proxy under consideration). From its beginning, Envoy’s runtime configuration has been API driven, capable of draining and hot reloading its own process with a new process and new configuration (displacing itself). Envoy achieves hot reloading of its processes using shared memory and communication over a Unix Domain Socket (UDS), an approach that bears similarities to GitHub’s tool for zero downtime HAProxy reloads.

Additionally, and uniquely, Envoy offers an Aggregated Discovery Service (ADS) for delivering the data for each xDS API (more on these APIs later).

HTTP/2 and gRPC

Envoy’s early support for HTTP/2 and gRPC set it apart from other proxies at the time. HTTP/2 significantly improves on HTTP/1.1 in that it enables request multiplexing over a single TCP connection. Proxies that support HTTP/2 can significantly reduce overhead by collapsing what might be many separate connections into one. HTTP/2 also allows clients to send multiple parallel requests and to load resources preemptively using server push.

Envoy is HTTP/1.1- and HTTP/2-compatible, with proxying capability for each protocol on both the downstream and upstream side. This means that Envoy can accept incoming HTTP/2 connections and proxy them to upstream HTTP/2 clusters, but also that it can accept HTTP/1.1 connections and proxy them to HTTP/2 (and vice versa).

gRPC is an RPC protocol that uses protocol buffers (protobufs) on top of HTTP/2. Envoy natively supports gRPC (over HTTP/2) and also enables the bridging of an HTTP/1.1 client to gRPC. More than this, Envoy is capable of operating as a gRPC-JSON transcoder: a client can send HTTP/1.1 requests with a JSON payload to Envoy, which translates the request into the corresponding gRPC call and, subsequently, translates the response message back into JSON. These are powerful features (and difficult to get right in an implementation) that made Envoy stand out from other service proxies.

Envoy in Istio

As an out-of-process proxy, Envoy transparently forms the base unit of the mesh. Akin to proxies in other service meshes, it is the workhorse of Istio, which deploys Envoy sidecarred to application services, as illustrated in Figure 5-4.

Figure 5-4. Envoy as the Istio service proxy in Istio deployments

Identified as istio-proxy in deployment files, Envoy does not require root privileges to run, but runs as user 1337 (nonroot).

Sidecar Injection

Adding a service proxy consists of two things: sidecar injection and network capture. Sidecar injection—or “sidecarring”—is the method of adding a proxy to a given application. Network capture is the method of directing inbound traffic to the proxy (instead of the application) and outbound traffic to the proxy (instead of directly back to the client or directly to subsequent upstream application services).

Manual Sidecar Injection

You can use istioctl as a tool to manually inject the Envoy sidecar definition into Kubernetes manifests. To do so, use istioctl’s kube-inject capability to manually inject the sidecar into deployment manifests by manipulating the YAML file:

$ istioctl kube-inject -f samples/sleep/sleep.yaml | kubectl apply -f -

This updates Kubernetes specifications on the fly at the time you apply them to Kubernetes for scheduling. Alternatively, you can achieve the same result with istioctl kube-inject by using process substitution, like so:

$ kubectl apply -f <(istioctl kube-inject -f <resource.yaml>)

If you don’t have the source manifests available, you can update an existing Kubernetes deployment to bring its services onto the mesh:

$ kubectl get deployment <deployment_name> -o yaml | istioctl kube-inject -f -
     | kubectl apply -f -

Let’s walk through an example of onboarding an existing application onto the mesh. Let’s take a freshly installed copy of Bookinfo, Istio’s sample application, as an example of an application already running in Kubernetes but not deployed on the service mesh. We begin by looking at Bookinfo’s pods in Example 5-2.

Example 5-2. Bookinfo running off the service mesh
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-69658dcf78-nghss       1/1     Running   0          43m
productpage-v1-6b6798cb84-nzfhd   1/1     Running   0          43m
ratings-v1-6f97d68b6-v6wj6        1/1     Running   0          43m
reviews-v1-7c98dcd6dc-b974c       1/1     Running   0          43m
reviews-v2-6677766d47-2qz2g       1/1     Running   0          43m
reviews-v3-79f9bcc54c-sjndp       1/1     Running   0          43m

In Kubernetes, the atomic unit of deployment is an object called a pod. A pod is a collection of one or more containers deployed atomically together. Looking over Bookinfo’s pods in Example 5-2, we see only one container running per pod. When istioctl kube-inject is run against Bookinfo’s manifests, it adds the sidecar container to the Pod specification; however, it does not actually deploy anything yet. istioctl kube-inject supports modification of Pod-based Kubernetes objects (Job, DaemonSet, ReplicaSet, Pod, and Deployment), which can be embedded in long YAML files containing other Kubernetes objects. Unsupported resources are passed through unmodified, so it is safe to run kube-inject over a single file that contains Service, ConfigMap, Deployment, and other definitions for a complex application. It is best to do this when the resource is initially created.
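
If you’d like to see what injection will add before deploying anything, you can write the injected manifest to a file and inspect it (the output path here is just an example):

$ istioctl kube-inject -f samples/sleep/sleep.yaml > /tmp/sleep-injected.yaml
$ grep "image:" /tmp/sleep-injected.yaml

Alongside the application’s image, you should see the images for the injected init container and the istio-proxy sidecar.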

You can take the YAML file created by the kube-inject command and deploy that directly. To onboard this existing application, we can execute istioctl kube-inject against each Deployment and have a rolling update of that Deployment initiated by Kubernetes, as shown in Example 5-3. Let’s begin with the productpage service.

Example 5-3. Bookinfo’s productpage deployment updated to include injection of Istio’s sidecar
$ kubectl get deployment productpage-v1 -o yaml | istioctl
     kube-inject -f - | kubectl apply -f -
deployment.extensions/productpage-v1 configured

Reviewing the Bookinfo pods, we now see that the productpage pod has grown to two containers. Istio’s sidecar has been successfully injected. The rest of Bookinfo’s application services need to be onboarded for Bookinfo as an application to work, as shown in Example 5-4.

Example 5-4. Bookinfo’s productpage running on the service mesh
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-69658dcf78-nghss       1/1     Running   0          45m
productpage-v1-64647d4c5f-z95dl   2/2     Running   0          64s
ratings-v1-6f97d68b6-v6wj6        1/1     Running   0          45m
reviews-v1-7c98dcd6dc-b974c       1/1     Running   0          45m
reviews-v2-6677766d47-2qz2g       1/1     Running   0          45m
reviews-v3-79f9bcc54c-sjndp       1/1     Running   0          45m

Instead of ad hoc onboarding of a running application, you might prefer to perform this manual injection operation once and save the new manifest file with istio-proxy (Envoy) inserted. You can create a persistent version of the sidecar-injected deployment by outputting the results of istioctl kube-inject to a file:

$ istioctl kube-inject -f deployment.yaml -o deployment-injected.yaml

Or, like so:

$ istioctl kube-inject -f deployment.yaml > deployment-injected.yaml

As Istio evolves, the default sidecar configuration is subject to change (potentially unannounced or buried in detailed release notes that you might overlook).

Warning

istioctl kube-inject is not idempotent

You cannot repeat the istioctl kube-inject operation on the output from a previous kube-inject. The kube-inject operation is not idempotent. For upgrade purposes, if you’re using manual injection, we recommend that you keep the original noninjected YAML file so that the data-plane sidecars can be updated.

The --injectConfigFile and --injectConfigMapName parameters can override the sidecar injection template built into istioctl. When used, either of these options overrides any other default template configuration parameters (e.g., --hub and --tag). You would typically use these options with the file/configmap created with a new Istio release:

# Create a persistent version of the deployment with the Envoy sidecar
# injected, using injection configuration from the Kubernetes configmap
# 'istio-inject'
$ istioctl kube-inject -f deployment.yaml -o deployment-injected.yaml \
     --injectConfigMapName istio-inject

Ad Hoc Sidecarring

Sidecar injection is responsible for configuring network capture. You can selectively apply injection and network capture to enable incremental adoption of Istio. Using the Bookinfo sample application as an example, let’s take the productpage service as the external-facing service and selectively remove this service (and only this service out of the set of four) from the service mesh. First, let’s quickly confirm the presence of its sidecarred service proxy:

$ kubectl get pods productpage-8459b4f9cf-tfblj
     -o jsonpath="{.spec.containers[*].image}"
layer5/istio-bookinfo-productpage:v1 docker.io/istio/proxyv2:1.0.5

As you can see, the productpage container is our application container, whereas istio/proxyv2 is the service proxy (Envoy) that Istio injected into the pod. To manually onboard and offboard a deployment onto and off of the service mesh, you can manipulate the sidecar.istio.io/inject annotation within its Kubernetes Deployment specification, as shown in Example 5-5.

Example 5-5. Manual removal of a deployment from the mesh
$ kubectl patch deployment productpage-v1 --type=json --patch='[{"op": "add", "path":
     "/spec/template/metadata/annotations", "value":
     {"sidecar.istio.io/inject": "false"}}]'
deployment.extensions/productpage-v1 patched

Open your browser to the productpage application; you’ll find that it is still served through Istio’s ingress gateway but that its pods no longer have sidecars. Hence, the productpage app has been removed from the mesh:

UNAVAILABLE:upstream connect error or disconnect/reset before headers
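
To bring the deployment back onto the mesh, flip the same annotation to "true" (assuming the namespace is labeled for automatic injection, as described in the next section) and Kubernetes will roll the deployment again; this follows the pattern of Example 5-5:

$ kubectl patch deployment productpage-v1 --type=json --patch='[{"op": "add", "path":
     "/spec/template/metadata/annotations", "value":
     {"sidecar.istio.io/inject": "true"}}]'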

Automatic Sidecar Injection

Automatic sidecar injection is the magical feeling you get as you go to onramp your services. Automatic sidecar injection means that not only do you not need to change your code, but you don’t need to change your Kubernetes manifests either. Depending on your application’s configuration, you might or might not need to change any aspect of your application. Automatic sidecar injection in Kubernetes relies on mutating admission webhooks. The istio-sidecar-injector is added as a mutating webhook configuration resource when Istio is installed on Kubernetes, as demonstrated in Examples 5-6 and 5-7.

Example 5-6. Kubernetes cluster with Istio and Linkerd mutating webhooks registered for each respective service mesh’s sidecar injector
$ kubectl get mutatingwebhookconfigurations
NAME                                    CREATED AT
istio-sidecar-injector                  2019-04-18T16:35:03Z
linkerd-proxy-injector-webhook-config   2019-04-18T16:48:49Z
Example 5-7. The istio-sidecar-injector mutating webhook configuration
$ kubectl get mutatingwebhookconfigurations istio-sidecar-injector -o yaml

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  creationTimestamp: "2019-04-18T16:35:03Z"
  generation: 2
  labels:
    app: sidecarInjectorWebhook
    chart: sidecarInjectorWebhook
    heritage: Tiller
    release: istio
  name: istio-sidecar-injector
  resourceVersion: "192908"
  selfLink: /apis/admissionregistration.k8s.io/v1beta1/
             mutatingwebhookconfigurations/istio-sidecar-injector
  uid: eaa85688-61f7-11e9-a968-00505698ee31
webhooks:
- admissionReviewVersions:
  - v1beta1
  clientConfig:
    caBundle: <redacted>
    service:
      name: istio-sidecar-injector
      namespace: istio-system
      path: /inject
  failurePolicy: Fail
  name: sidecar-injector.istio.io
  namespaceSelector:
    matchLabels:
      istio-injection: enabled
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods
    scope: '*'
  sideEffects: Unknown
  timeoutSeconds: 30

Having this mutating webhook registered configures Kubernetes to send all pod-creation events to the istio-sidecar-injector service (in the istio-system namespace) if the namespace carries the istio-injection=enabled label. The injector service then modifies the PodSpec to include two additional containers: an init container that configures traffic interception rules, and the istio-proxy (Envoy) container that performs the proxying. The sidecar injector service adds these two containers via a template located in the istio-sidecar-injector configmap.
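
You can inspect that template yourself. Assuming a default installation, it lives under the config key of the istio-sidecar-injector configmap in the istio-system namespace:

$ kubectl -n istio-system get configmap istio-sidecar-injector \
     -o jsonpath='{.data.config}' | head -20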

The Kubernetes life cycle allows resources to be customized before they’re committed to the etcd store, the “source of truth” for Kubernetes configuration. When an individual pod is created (either via kubectl or a Deployment resource), it goes through this same life cycle, hitting mutating admission webhooks that modify it before it’s applied.

Kubernetes labels

Automatic sidecar injection relies on labels to identify which pods to inject Istio’s service proxy into and initialize as pods on the data plane. Kubernetes objects such as pods and namespaces can have user-defined labels attached to them. Labels are essentially key/value pairs, like the tags you find in other systems. The webhook admission controller relies on labels to select the namespaces to which it applies; istio-injection is the specific label that Istio uses. You can familiarize yourself with automatic sidecar injection by labeling the default namespace with istio-injection=enabled:

$ kubectl label namespace default istio-injection=enabled

Example 5-8 demonstrates confirmation as to which namespaces have the istio-injection label.

Example 5-8. Which Kubernetes namespaces carry the istio-injection label?
$ kubectl get namespace -L istio-injection
NAME           STATUS    AGE       ISTIO-INJECTION
default        Active    1h        enabled
Docker         Active    1h        enabled
istio-system   Active    1h        disabled
kube-public    Active    1h
kube-system    Active    1h

Notice that the default and Docker namespaces carry the istio-injection label with its value set to enabled, whereas the istio-system namespace carries it with the value disabled. With the istio-injection label set to disabled, the istio-system namespace will not have service proxies automatically injected into its pods upon deployment. This doesn’t mean, however, that pods in this namespace can’t have service proxies; it just means that service proxies won’t be automatically injected.
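
Within a namespace labeled for injection, you can opt individual workloads out of (or explicitly into) injection by setting the sidecar.istio.io/inject annotation on the pod template. The following abridged Deployment sketch uses a hypothetical image name purely for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
spec:
  selector:
    matchLabels:
      app: ratings
  template:
    metadata:
      labels:
        app: ratings
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: ratings
        image: example/ratings:v1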

One caveat: when using the namespaceSelector, make sure the namespace(s) you select really do have the label you’re using. Keep in mind that built-in namespaces like default and kube-system don’t have labels out of the box. By contrast, the namespace in the metadata section is the actual name of the namespace, not a label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
...

Kubernetes Init Containers

Similar to cloud-init for those familiar with VM provisioning, init containers in Kubernetes allow you to run temporary containers to perform a task before engaging your primary container(s). Init containers are often used to perform provisioning tasks like bundling assets, performing database migration, or cloning a Git repository into a volume. In Istio’s case, init containers are used to set up network filters—iptables—to control the flow of traffic.
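
For reference, here is an abridged sketch of the injected init container. Image names, versions, and argument values vary by Istio release, so treat this as illustrative rather than exact:

initContainers:
- name: istio-init
  image: docker.io/istio/proxy_init:1.1.0  # version-specific; yours will differ
  args:
  - "-p"
  - "15001"     # port to which inbound and outbound TCP traffic is redirected
  - "-u"
  - "1337"      # UID of the istio-proxy container, exempted from redirection
  - "-m"
  - "REDIRECT"  # iptables mode used for traffic capture
  - "-i"
  - "*"         # IP ranges whose outbound traffic is captured
  securityContext:
    capabilities:
      add:
      - NET_ADMIN  # required to program iptables rules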

Sidecar Resourcing

Istio v1.1 defined default resource limits for its sidecars. Defining resource limits is essential to being able to autoscale the sidecar. Upon examining the container YAML for a sidecar, you’ll notice that the certificate volume is mounted whether you’re using mTLS or not, as shown in Example 5-9.

Example 5-9. Sidecar specification found in a Kubernetes pod
...
     --controlPlaneAuthPolicy
      MUTUAL_TLS
...
    Mounts:
      /etc/certs/ from istio-certs (ro)
      /etc/istio/proxy from istio-envoy (rw)
...
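
You can check the CPU and memory requests and limits applied to the sidecar in your own cluster with a JSONPath query (substitute one of your pod names):

$ kubectl get pod productpage-v1-64647d4c5f-z95dl \
     -o jsonpath='{.spec.containers[?(@.name=="istio-proxy")].resources}'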

Envoy’s Functionality

Like other service proxies, Envoy uses network listeners to ingest traffic. The terms upstream and downstream describe the direction of a chain of dependent service requests (see Figure 5-5). Which way is which?

Figure 5-5. Clients are downstream of servers; servers are upstream of clients.
Downstream

A downstream service initiates requests and receives responses.

Upstream

An upstream service receives requests and returns responses.

Core Constructs

A listener is a named network location (e.g., a port or Unix domain socket) that can accept connections from downstream clients. Envoy exposes one or more listeners, which in many cases are externally exposed ports with which external clients can establish a connection. Physical listeners bind to a port; virtual listeners do not bind to a port and are instead used for forwarding. Listeners can also be optionally configured with a chain of listener filters, each of which can be used to manipulate connection metadata or enable better systems integration without having to incorporate changes into Envoy’s core. Figure 5-6 illustrates how listeners relate to Envoy’s other core constructs: routes, clusters, and endpoints.

Figure 5-6. Relationships among Envoy’s core constructs

You can configure listeners, routes, clusters, and endpoints with static files or dynamically through their respective APIs: listener discovery service (LDS), route discovery service (RDS), cluster discovery service (CDS), and endpoint discovery service (EDS). Static configuration files can be either JSON or YAML formatted. A collective set of discovery services for Envoy’s APIs is referred to as xDS. The configuration file specifies listeners, routes, clusters, and endpoints as well as server-specific settings like whether to enable the Admin API, where access logs should go, and tracing engine configuration.
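
To make these constructs concrete, here is a minimal static bootstrap sketch in the v2 configuration format (abridged; consult Envoy’s reference documentation for the exact schema of your version). It wires one listener through a route to a cluster with a single endpoint and enables the Admin API on port 15000:

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 15000 }
static_resources:
  listeners:
  - name: ingress_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_service }
          http_filters:
          - name: envoy.router
  clusters:
  - name: local_service
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }

A file like this, saved as envoy/envoy.yaml, also works with the Docker example in Example 5-13 later in this chapter.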

Note

It’s important to note that Envoy’s reference documentation explicitly distinguishes between v1 and v2 docs.

There are different versions of the Envoy configuration. The initial version (v1) has been deprecated in favor of v2. Envoy’s v1 API and its integration with Istio required that Envoy poll Pilot to receive configuration updates. With v2, Envoy holds a long-running gRPC streaming connection to Pilot, and Pilot pushes updates as it sees fit over the open stream. Envoy has retained some backward compatibility with the v1 configuration API; however, considering that v1 will be removed at some point, it’s best to focus on v2.

Istio Pilot uses Envoy’s ADS for dynamic configuration, centralizing route tables, cluster definitions, and listener definitions. Pilot can apply the same rules to multiple service proxies, easing the propagation of service proxy configuration updates across your cluster. At runtime, Pilot uses these APIs to push configuration, efficiently computing one configuration per service. By default, it batches configuration changes and pushes them within at most 10 seconds; you can configure this window using PILOT_DEBOUNCE_MAX.
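
If you need to tune that debounce window, one approach is to set the environment variable on Pilot’s container; the deployment and container names below assume a default Istio installation:

$ kubectl -n istio-system set env deployment/istio-pilot -c discovery \
     PILOT_DEBOUNCE_MAX=5s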

Envoy’s xDS APIs used to be poll based, but they are now push based, in order to scale better and, critically, to be able to deliver configuration to Envoy in a specific order. Using gRPC, Envoy establishes a long-lived connection to Pilot, and Pilot pushes data as it computes changes. Envoy’s ADS guarantees order of delivery, allowing you to sequence updates to service proxies. This is a key property of what makes the service mesh resilient.

Certificates and Protecting Traffic

So, what is your default security posture? Although recently made configurable, the default setting remains that Pilot disallows egress traffic to undefined endpoints. In other words, by default, Pilot needs to be informed of which endpoints external to the cluster are acceptable destinations for traffic. As soon as Pilot becomes aware of a topology or environment change, it needs to reconfigure each affected service proxy in the data plane.

Depending on the type of configuration change being made, Envoy listeners might or might not need to be closed (connections might or might not be dropped). An example of Istio closing connections intentionally is when a service identity credential (a certificate) is rotated. While this isn’t strictly required, Istio will terminate connections on reload of a service’s certificate. Envoy’s Secret Discovery Service (SDS) provides the mechanism by which secrets (certificates) can be pushed to each service proxy. Chapter 6 has more details on SDS.

The pilot-agent (shown in Examples 5-10 and 5-11) handles restarting Envoy when certificates are rotated (once per hour). Although an existing open connection will reuse an expired certificate, Istio will intentionally close the connection.

Example 5-10. istio-proxy is a multiprocess container with pilot-agent running alongside Envoy
$ kubectl exec ratings-v1-7665579b75-2qcsb -c istio-proxy ps
  PID TTY          TIME CMD
    1 ?        00:00:10 pilot-agent
   18 ?        00:00:32 envoy
   70 ?        00:00:00 ps
Example 5-11. Verifying that productpage’s certificate is valid
$ kubectl exec -it $(kubectl get pod | grep productpage | awk '{ print $1 }') -c
     istio-proxy -- cat /etc/certs/cert-chain.pem |
     openssl x509 -text -noout

Envoy’s connection-handling behavior can be customized in this regard, and you can examine its configuration, as shown in Example 5-12.

Example 5-12. Showing the filename of the Envoy configuration file within the istio-proxy container
$ kubectl exec ratings-v1-7665579b75-2qcsb -c istio-proxy ls /etc/istio/proxy
Envoy-rev0.json

mTLS connections are established between service proxies, with certificates used to establish mTLS communication. Service mesh deployments with sidecarred service proxies like Istio typically establish pod-local, unencrypted TCP connections between the application service and the sidecar proxy. This means that your service (application container) and Envoy communicate over pod-local networking (on the loopback interface). Understanding this traffic flow, it follows that Kubernetes network policy and sidecar-to-app redirection are compatible (they don’t overlap), and that applying Kubernetes network policy between the app and its sidecar is not possible. Only when an application’s network traffic exits the pod does it encounter Kubernetes network policy.

Administration console

Envoy provides an administration view, allowing you to view configuration, stats, logs, and other internal Envoy data. To gain access to a given service proxy’s administrative console while running within the data plane of an Istio deployment, follow the instructions in Chapter 11. If you’d like to play around with Envoy’s administrative console outside of an Istio service mesh deployment, the simplest way to do so might be to use Docker, as demonstrated in Example 5-13.

Example 5-13. Running Envoy in a Docker container
$ docker run --name=proxy -d \
  -p 80:10000 \
  -p 15000:15000 \
  -v $(pwd)/envoy/envoy.yaml:/etc/envoy/envoy.yaml \
  envoyproxy/envoy:latest

After opening your browser to http://localhost:15000 (assuming the admin interface in your envoy.yaml is configured to listen on port 15000 and that the port is published, as in Example 5-13), you will be presented with a list of endpoints to explore, like the following:

/certs

Certificates within the Envoy instance

/clusters

Clusters with which Envoy is configured

/config_dump

Dumps the actual Envoy configuration

/listeners

Listeners with which Envoy is configured

/logging

View and change logging settings

/stats

Envoy statistics

/stats/prometheus

Envoy statistics as Prometheus records
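
You can hit the same endpoints on a sidecar inside the mesh by execing into the istio-proxy container, assuming curl is available in the proxy image:

$ kubectl exec $(kubectl get pod | grep productpage | awk '{ print $1 }') \
     -c istio-proxy -- curl -s localhost:15000/clusters | head -5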

On the list of certificates used by productpage pod’s service proxy, you should see three files (see Example 5-14). One of them should be productpage’s private key (key.pem).

Example 5-14. Verifying that the key and certificate are correctly mounted in productpage’s service proxy
$ kubectl exec -it $(kubectl get pod | grep productpage | awk '{ print $1 }')
     -c istio-proxy -- ls /etc/certs
cert-chain.pem key.pem  root-cert.pem
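
To confirm the identity encoded in that certificate, you can extend the command from Example 5-11 to look at the certificate’s subject alternative name, which Istio populates with the workload’s SPIFFE identity (derived from its namespace and service account):

$ kubectl exec -it $(kubectl get pod | grep productpage | awk '{ print $1 }') \
     -c istio-proxy -- cat /etc/certs/cert-chain.pem |
     openssl x509 -text -noout | grep spiffe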

In theory, you could feed Envoy its view of the state of the world yourself; in practice, however, Pilot is responsible for configuring the Envoy service proxies, translating Istio configuration and service discovery data into Envoy configuration.
