Chapter 9. Mixer and Policies in the Mesh

Mixer's responsibilities divide into two categories: telemetry and policy enforcement. This split is reflected directly in the APIs Mixer exposes, check (for precondition tests) and report (for collecting telemetry), and in the fact that by default Istio deployments run two Mixer pods in the control plane: one Mixer pod for telemetry and another for policy enforcement.
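
You can see this split directly on a running cluster. As a sketch, assuming a stock Helm-based Istio 1.x install (deployment names and replica counts will vary with your profile), listing the control-plane deployments shows the two Mixer instances:

$ kubectl -n istio-system get deployments | grep -E 'istio-(policy|telemetry)'
istio-policy      1/1     1            1           25h
istio-telemetry   1/1     1            1           25h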

Given its role as an aggregation point for telemetry, Mixer is often described as an attribute-processing engine: it ingests telemetric attributes from service proxies, then transforms and funnels them to external systems (through adapters). Considering its role as a policy evaluator, Mixer is also described as a (second-level) cache, in that it responds to requests to check traffic policy and caches the evaluation results. Mixer ingests configuration from different sources and merges it together.

Architecture

Residing in the control plane, Mixer liaises between the data plane and the management plane. Contrary to how Mixer appears in Figure 9-1, it is not a single point of failure, because Istio's default configuration runs a set of pod replicas for high availability (managed by a HorizontalPodAutoscaler). Mixer is a stateless component that uses caching and buffering techniques along with a hardened design intended to deliver 99.999% availability.

Figure 9-1. Mixer architecture overview

Mixer is referred to as a single entity because, even though its API surface is split by responsibility, both functions run the same binary from the same Docker image; instances are simply provisioned to behave differently depending on which function they handle: policy or telemetry. Separating them into multiple deployments allows each area of responsibility to scale independently and keeps one from affecting the performance of the other. The load characteristics of applying policy differ from those of generating telemetry, so optimizing their runtimes separately is helpful. In this way, not only can you scale them independently, you can also track how much resource usage is dedicated to telemetry versus policy. Not exactly neighbors, these siblings can still be noisy to each other, so it's up to you whether to combine them and deploy them as a single unit if that better suits the load your environment places on Mixer.
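
To verify that the two functions scale independently, you can inspect their autoscalers. A sketch, assuming the stock Helm chart (which creates a HorizontalPodAutoscaler per Mixer deployment; the targets and bounds shown are illustrative):

$ kubectl -n istio-system get hpa
NAME              REFERENCE                    TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
istio-policy      Deployment/istio-policy      10%/80%   1         5         1          25h
istio-telemetry   Deployment/istio-telemetry   10%/80%   1         5         1          25h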

Mixer acts as the central point for telemetry processing, policy evaluation, and extensibility. Mixer achieves high extensibility by having a general-purpose plug-in model. Mixer plug-ins are known as adapters. Any number of adapters can be running in an Istio deployment. Adapters expand Mixer’s two areas of responsibility:

Policy evaluation (checks)

Adapters can add precondition checking (e.g., ACLs, authentication) and quota management (e.g., rate limits).

Telemetry collection (reports)

Adapters can add metrics (e.g., request traffic statistics), logs, and traces (i.e., performance or other context carried across services).

Service proxies interact with Mixer through a client library. Depending on whether Mixer receives request attributes on its check or report API, it either decides whether a request is authorized to proceed (a precondition check) or treats the request attributes as telemetry to be routed for post-request analysis.

Enforcing Policy

The check API exposed by istio-policy handles policies of different types, such as authentication and quota policies. Performance and availability of the check API are important when you consider that it is consulted inline (synchronously) as the service proxy processes each request. Based on the request attributes presented, the check API validates whether a given request complies with the active policies configured in Mixer. Ultimately, Mixer adapters determine whether a specific policy's conditions are met. Some adapters validate policy conditions against backend systems, whereas others process checks within the adapter itself (e.g., blacklists, quotas).

Tip

Quotas can be keyed on arbitrary dimensions of a request. So, for example, they might enforce a quota by way of rate limiting based on an API token or IP address.
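
As a sketch of what such a dimensioned quota might look like (adapted loosely from Istio's rate-limiting examples; the header name and dimensions here are illustrative), a quota instance keys its counts on arbitrary request attributes:

apiVersion: config.istio.io/v1alpha2
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    # count separately per API token, falling back to "anonymous"
    api_token: request.headers["x-api-token"] | "anonymous"
    # and per destination service
    destination: destination.service.name | "unknown"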

As an attribute-processing engine, Mixer transforms attributes into requests to specific backends via adapters, which massage the attributes into a format specific to a backend system for evaluation. The backend systems that adapters interface with might be a policy engine or an API management system, for example. These systems evaluate the check request and respond affirmatively or negatively based on various conditions. There is a growing list of third-party adapters for Mixer, often contributed by the vendors of the specific backend systems they represent.

Let’s look again at Mixer’s architecture, this time in the context of how istio-policy goes about receiving check requests, evaluating policy, and responding with a result. istio-policy exposes a check API with a fixed set of parameters, as shown in Figure 9-2.

Figure 9-2. Mixer istio-policy architecture overview

Service proxies call Mixer before each request to perform precondition checks and after each request to report telemetry. Check results from Mixer are cached in the service proxies; acting as first-level caches, the proxies are able to answer a relatively large percentage of precondition checks from their local cache. Mixer, in turn, acts as a cache itself: the role it plays as a second-level cache for policy results is critical to mitigating the volume of policy-check traffic reaching Mixer and the evaluation overhead within it.

Well-designed caches are key to maintaining rigorous security practices in distributed systems. Ideally, requests between services are authenticated and authorized at every single hop through the chain of upstream services (one request implicating any number of requests to other services in the process of responding to it). Traditionally, authentication and authorization are performed at the service edge by something like an API gateway. The common pattern is that after a request is authenticated and authorized at the edge, the requests it spawns to other services in the chain are assumed safe and not subsequently verified.

Ideally, distributed systems have policy applied at every point in the service chain, not just at the edge. This is how we attain consistent security throughout a distributed system. It presents a real scaling problem for Mixer, however: if every single service calls Mixer, then one request that implicates eight services, for example, means that eight separate authorization requests are sent to Mixer for consideration. To cope, Mixer and the service proxies need to effectively operate a distributed cache.

Understanding How Mixer Policies Work

You might ask: is policy enforcement enabled? To eliminate unnecessary overhead, the default installation profile for Istio v1.1 and later disables policy enforcement. Policy is controlled in two places:

  • mixer.policy.enabled, which controls whether the Mixer policy service is deployed. By default, this is disabled. Only when it's enabled does the second configuration item take effect.

  • global.disablePolicyChecks, which controls whether Mixer policy checks are performed. Setting this item to true disables Mixer policy checks. Pilot needs to be restarted for changes to this item to take effect.

To install Istio with policy enforcement on, use the --set global.disablePolicyChecks=false Helm install option. If you have already deployed your Istio service mesh, you might first want to confirm whether policy enforcement is enabled or disabled:

$ kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" |
     grep disablePolicyChecks
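
On a default v1.1 installation, you would expect output resembling the following (the value shown is an assumption; it reflects whatever your install profile set):

disablePolicyChecks: true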

Mixer's configuration describes which adapters are in use and how they operate. Different adapters map request attributes to adapter inputs based on their specific use cases and backend integrations, and each adapter is called with its specific inputs.

Reporting Telemetry

Telemetry is generated as requests (network traffic) are received by the data plane's service proxies. For each request received, there is an array of metadata that can be captured. This metadata provides context and an accountable record of each request and is captured in the form of attributes. Mixer continually receives and processes these request attributes (telemetry). As shown in Figure 9-3, istio-telemetry exposes a report API with a potentially long list of attributes that varies by adapter.

Reports are generated as Envoy processes requests and are sent asynchronously to Mixer's report API (exposed by istio-telemetry), out of band of the request. Envoy buffers outgoing telemetry so that Mixer is called only after many requests have been processed (the buffer size is configurable). It is within istio-telemetry that Mixer synthesizes attributes and pushes the results to one or more telemetry backends via adapters.

Figure 9-3. Mixer istio-telemetry architecture overview: every individual service proxy has a report buffer where it batches telemetry that it flushes periodically to Mixer's report API.
Tip

In Istio’s v1.x architecture, adapters are built into the Mixer binary. Whether they are enabled, though, is configurable.

As noted, telemetry reports are generated as a service proxy processes requests. Requests take the form of client–server interactions: a request is created by a client and sent to a service, which in turn can initiate a request (as a client) to another service (a server). Both the client and server sides of a given request are capable of sending telemetry reports about it. The default configuration of Istio releases after v1.1 is to have only the server send the telemetry report, so that just one report is sent per hop. This is configurable, however, if both client-side and server-side request reports are desired.

Attributes

Attributes are a key concept in Mixer: essentially, they are collections of typed name/value tuples. They provide a flexible and extensible mechanism for transferring information from service proxies to Mixer. Attributes describe request traffic and its context, giving a mesh operator granular control over which bits of request information are involved in policy checks and distilled into collected telemetry. Attributes are fundamental to how operators experience Istio, showing up in configuration and logs.

Attribute values are many and varied, depending on which adapters are present and enabled. Values can be strings, ints, floats, Bools, timestamps, durations, IP addresses, raw bytes, string maps, and so on. There is an extensible vocabulary of known attributes; Table 9-1 highlights a small sample of the attributes sent to the Mixer engine at runtime. The complete set of attributes that Istio knows about is fixed at the time of deployment but versioned from Istio release to release. For an exhaustive list, visit Istio's Attribute Vocabulary page.

Table 9-1. Examples of attributes sent to Mixer

source.uid (string)
    Platform-specific unique identifier for the source workload instance.
    Kubernetes example: kubernetes://redis-master-2353460263-1ecey.my-namespace

source.ip (ip_address)
    Source workload instance IP address.
    Kubernetes example: 10.0.0.117

source.labels (map[string, string])
    A map of key/value pairs attached to the source instance.
    Kubernetes example: version => v1

source.name (string)
    Source workload instance name.
    Kubernetes example: redis-master-2353460263-1ecey

source.namespace (string)
    Source workload instance namespace.
    Kubernetes example: my-namespace

source.principal (string)
    Authority under which the source workload instance is running.
    Kubernetes example: service-account-foo

source.owner (string)
    Reference to the workload controlling the source workload instance.
    Kubernetes example: kubernetes://apis/extensions/v1beta1/namespaces/istio-system/deployments/istio-policy

Sending Reports

Service proxies send information to Mixer about requests and responses in the form of attributes (typed key/value pairs). Mixer transforms attribute sets into structured values according to operator-supplied configuration and dispatches the derived values to a set of adapters, again per that configuration. Adapters publish telemetry data to backend systems, making it available for further consumption and analysis. Attributes are primarily generated by service proxies; however, Mixer adapters can also produce attributes.

Checking Caches

After Mixer makes an initial policy verdict, the attribute protocol comes into play. Envoy and Mixer use well-known attributes to describe the policy used to evaluate service requests. When Mixer delivers a verdict on a request, it computes a hash key over these attributes; Envoy uses the same attributes to build its cache key, reducing request latencies since both Mixer and Envoy operate as caches once verdicts are delivered. A goal of Envoy's configuration is to strike a balance between a high cache-hit rate and an unbounded proliferation of hash keys.

Check caches include a TTL value indicating the maximum amount of time a cached check result should be trusted. Caches need to be refreshed because Mixer's configuration is bound to change over time, or because a backend that Mixer consults when deciding a check result may just as likely change, causing service proxies to need their check caches refreshed. Again, in this way, the service proxies function as first-level caches and Mixer functions as a second-level shared cache.

Adapters

In Istio's v1.x architecture, adapters are built into the Mixer binary; whether they are enabled, though, is configurable. Istio Mixer configuration activates adapters, and the only penalty paid for an inactive adapter is the footprint of its bits in the binary. Multiple adapters of the same or different types can run simultaneously, and adapters can be chained so that policy can be defined across them. You can compile your own Mixer with your own adapters. Currently, the Mixer architecture accounts for in-process adapters, although a gRPC interface for out-of-process adapters was released in alpha in v1.0.

Adapters enable Mixer to expose a single consistent API, independent of the infrastructure backends in use. Most adapters consult external, remote backends, whereas others are self-contained, providing functionality entirely within Mixer (these are also known as baby backends). Baby backends are configured in Istio via adapter-level configuration. As an example, the list adapter provides simple whitelist or blacklist checks. You can configure list adapters directly with the list to check, or you can provide them with a URL (which could be a filepath) from which the list should be fetched. Lists are checked at runtime, matching strings, IP addresses, or regular expression (regex) patterns inclusively or exclusively.
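
As a sketch of a self-contained list check (adapted from Istio's denials and white/black listing task; the app and version names are illustrative), the following handler, instance, and rule admit traffic to the ratings service only from workloads labeled v1 or v2:

apiVersion: config.istio.io/v1alpha2
kind: listchecker
metadata:
  name: whitelist
spec:
  overrides: ["v1", "v2"]  # the list to check against
  blacklist: false         # list entries are allowed; everything else is denied
---
apiVersion: config.istio.io/v1alpha2
kind: listentry
metadata:
  name: appversion
spec:
  value: source.labels["version"]  # the attribute value to look up in the list
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkversion
spec:
  match: destination.labels["app"] == "ratings"
  actions:
  - handler: whitelist.listchecker
    instances:
    - appversion.listentry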

In-Process Adapters

In-process adapters are written in Go and compiled into the Mixer process. Again, whether an adapter is invoked depends on whether it has been configured through a handler, an instance, and one or more rules. Each of the adapters that follows is built into the Mixer binary and hence ships with each Istio release:

Precondition checks

denier, listchecker, memquota, opa, rbac, redisquota

Telemetry

circonus, cloudwatch, dogstatsd, fluentd, prometheus, solarwinds, stackdriver, statsd, stdio

Attribute generation

kubernetesenv
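
Because each built-in adapter (and the instance kinds it consumes) is registered as a Kubernetes custom resource, you can enumerate them on a running mesh. A sketch, with abbreviated output, assuming a standard Helm-based install:

$ kubectl get crd | grep config.istio.io | awk '{print $1}'
adapters.config.istio.io
deniers.config.istio.io
listcheckers.config.istio.io
memquotas.config.istio.io
prometheuses.config.istio.io
rules.config.istio.io
...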

To create a new in-process adapter, vendors implement a set of Golang interfaces and submit their adapter for review and consideration for inclusion in the Istio project.

Out-of-Process Adapters

Adapters were initially built into the main Mixer binary as in-process adapters but are moving to an out-of-process model in which adapter code isn't kept in the Istio project, but is kept and managed separately by the vendor. Out-of-process adapters interface via a gRPC service that implements a template's infrastructure backend protocol and runs outside the Mixer process (as either a coprocess adapter or a backend service). In moving to an out-of-process adapter model, Istio eliminates the need for custom Mixer builds to include or exclude specific adapters, and adapter authors are empowered to write their adapters in the language of their choice, given the abstraction through gRPC. No longer will Mixer share a common fate with its adapters, which will run independently in their own process(es).

Creating a Mixer Policy and Using Adapters

As an operator, the sequence of steps undertaken to apply policy to Istio flows like so:

  • Apply the policy to Kubernetes. Policies go into the kube-apiserver.

  • Galley pulls them and either:

    • Pushes them to Pilot to realize them as Envoy configuration, or

    • Pushes them to Mixer to prepare its dynamic dispatch, in preparation for service proxies calling in to retrieve and enforce these policies.

Istio configuration lives in the Kubernetes API server. From Mixer's perspective, the Kubernetes API server is the configuration database; from Pilot's perspective, it's the service discovery mechanism. That the same source of truth is referenced for both configuration and service discovery is merely an artifact of Kubernetes, not an Istio requirement that these be one and the same source. Istio can just as well use Consul for service discovery and the Kubernetes API server as its configuration database.

Mixer Configuration

Service operators control all operational and policy aspects of a Mixer deployment by manipulating configuration resources. Configuring Mixer includes describing which adapters are to be used, how they should operate, which request attributes to map to which adapter’s inputs, and when a particular adapter is invoked with specific inputs. Mixer configuration is manipulated and represented through Kubernetes custom resources—rules, handlers, instances, and adapters.

Adapters encapsulate the logic necessary to interface with backends. An adapter's configuration schema is specified by its adapter package, and the configuration contains the operational parameters the adapter code needs to do its work. A handler is an instantiation of an adapter: a configured adapter. Handlers can receive data (attributes). An instance is an object full of request data: instances are sets of structured request attributes with well-known fields, and they map request attributes to values that are passed to adapter code. Attribute mapping is controlled via attribute expressions.
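
To make the trio concrete, here is a sketch of a handler: a prometheus adapter instantiated with its operational parameters (abridged and adapted from a default install; the exact metric names and bucket bounds are assumptions and vary by release):

apiVersion: config.istio.io/v1alpha2
kind: prometheus
metadata:
  name: handler
  namespace: istio-system
spec:
  metrics:
  - name: request_duration_seconds       # exported name in Prometheus
    instance_name: requestduration.metric.istio-system
    kind: DISTRIBUTION
    label_names:                         # must match the instance's dimensions
    - destination_app
    - request_protocol
    - response_code
    buckets:
      explicit_buckets:
        bounds: [0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10]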

As an example of an attribute expression, consider the Prometheus instance for requestduration in Example 9-1. In this example, requestduration is configured to report a response.code of 200 in the absence of a response.code attribute. If destination.service isn't present, the report will simply fail (an expression without a | fallback, unlike the dimensions shown here, causes the report to fail when its attribute is missing).

Example 9-1. Prometheus instance example
$ kubectl -n istio-system get metrics requestduration -o yaml
apiVersion: config.istio.io/v1alpha2
kind: metric
metadata:
  name: requestduration
  namespace: istio-system
spec:
  dimensions:
    destination_app: destination.labels["app"] | "unknown"
    destination_principal: destination.principal | "unknown"
    destination_service: destination.service.host | "unknown"
    destination_service_name: destination.service.name | "unknown"
    request_protocol: api.protocol | context.protocol | "unknown"
    response_code: response.code | 200
...

Rules specify when a particular handler is invoked with a specific instance; they map handlers and instances to each other. Rules essentially state that when a given condition is true, a particular handler is given specific instances (request attributes).

Rules contain a match predicate (an attribute expression) and a list of actions to perform if the predicate evaluates to true. If a match isn't specified, the rule evaluates to true for all requests. This behavior is valuable when accounting for quota enforcement, as demonstrated in Example 9-2, a rule snippet for a memquota adapter (a precondition-check adapter). In the following example, a rule with no match condition evaluates to true each time it is evaluated and therefore increments requestcount.quota:

Example 9-2. A rule with no match condition
...
spec:
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
...
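
By contrast, a rule meant to fire for only a subset of traffic carries a match predicate written in the same attribute-expression language. A sketch (the namespace, header, and handler names are illustrative):

...
spec:
  match: destination.namespace == "default" &&
         (request.headers["x-canary"] | "") == "true"
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
...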

Open Policy Agent Adapter

The Open Policy Agent (OPA) is a general-purpose policy engine used to offload authorization decisions. OPA uses Rego as its declarative policy language. It is implemented in Go and can be deployed as a library or as a daemon.

Mixer's OPA adapter is a check-type adapter. The Mixer security model for check-type adapters is to fail closed, ensuring that security is maintained when a check cannot be completed. Any policy that you can express with OPA, you can enforce with the OPA Mixer adapter: the adapter bundles the OPA runtime, so attributes are fed to the Rego engine for processing by what is a full OPA instance inside the adapter.

Consider this example set of configurations for an OPA adapter’s rule, handler, and instance, as presented in Examples 9-3 through 9-5.

Example 9-3. Sample OPA rule configuration
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: authz
spec:
  actions:
  - handler: opa-handler
    instances:
    - authz-instance
Example 9-4. Sample OPA instance configuration
apiVersion: config.istio.io/v1alpha2
kind: authz
metadata:
  name: authz-instance
spec:
  subject:
    user: source.uid | ""
  action:
    namespace: target.namespace | "default"
    service: target.service | ""
    path: target.path | ""
    method: request.method | ""
Example 9-5. Sample OPA handler configuration
apiVersion: config.istio.io/v1alpha2
kind: opa
metadata:
  name: opa-handler
spec:
  checkMethod: authz.allow
  policy: |
    package authz
    default allow = false
    allow { is_read }
    is_read { input.action.method = "GET" }
Note

Which Policies Come from Pilot and Which Go Through Mixer?

Policies that affect traffic are defined in Pilot. Policies that call for enforcement of authentication and authorization on the request go through Mixer. If a policy needs to consult an external system to make a decision, it goes through Mixer.

Prometheus Adapter

The Prometheus adapter is built into the Mixer binary and is enabled by default, with a metrics expiration duration of 10 minutes. The Prometheus adapter defines a custom resource, metric, instances of which are shown in Example 9-6.

Example 9-6. List of metrics tracked on Istio components made available to Prometheus
$ kubectl -n istio-system get metrics
NAME                   AGE
requestcount           25h
requestduration        25h
requestsize            25h
responsesize           25h
tcpbytereceived        25h
tcpbytesent            25h
tcpconnectionsclosed   25h
tcpconnectionsopened   25h
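
If the default 10-minute expiration doesn't suit your environment (for example, if Prometheus scrapes infrequently), it is tunable at install time. A sketch, assuming your Istio release's Helm chart exposes the mixer.adapters.prometheus.metricsExpiryDuration value:

$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set mixer.adapters.prometheus.metricsExpiryDuration=30m | kubectl apply -f -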

The Prometheus handler needs to know the specific dimensions and type of each metric. This configuration object is called an instance in general, but a few special instances have a proper name; metric is one of those. Typically, an adapter is built to expect certain instances. In Example 9-7, we can inspect one of the instances that the Prometheus adapter consumes.

Example 9-7. Example set of instances that the Prometheus adapter consumes
$ kubectl -n istio-system get metrics tcpbytereceived -o yaml

apiVersion: config.istio.io/v1alpha2
kind: metric
metadata:
  labels:
    app: mixer
    release: istio
  name: tcpbytereceived
  namespace: istio-system
spec:
  dimensions:
    connection_security_policy: conditional((context.reporter.kind | "inbound") ==
      "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls",
      "none"))
    destination_app: destination.labels["app"] | "unknown"
    destination_principal: destination.principal | "unknown"
    destination_service: destination.service.host | "unknown"
    destination_service_name: destination.service.name | "unknown"
    destination_service_namespace: destination.service.namespace | "unknown"
    destination_version: destination.labels["version"] | "unknown"
    destination_workload: destination.workload.name | "unknown"
    destination_workload_namespace: destination.workload.namespace | "unknown"
    reporter: conditional((context.reporter.kind | "inbound") == "outbound",
      "source", "destination")
    response_flags: context.proxy_error_code | "-"
    source_app: source.labels["app"] | "unknown"
    source_principal: source.principal | "unknown"
    source_version: source.labels["version"] | "unknown"
    source_workload: source.workload.name | "unknown"
    source_workload_namespace: source.workload.namespace | "unknown"
  monitored_resource_type: '"UNSPECIFIED"'
  value: connection.received.bytes | 0
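
Once flushed through the adapter, this instance surfaces in Prometheus as a counter with the dimensions above as labels (assumed here to be named istio_tcp_received_bytes_total, per Istio's standard metric naming). For example, from a host that can reach the Prometheus service:

$ curl -s 'http://prometheus.istio-system:9090/api/v1/query' \
    --data-urlencode 'query=sum(rate(istio_tcp_received_bytes_total[5m])) by (destination_app)'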

Mixer needs to know when to generate this metric data and send it to Prometheus. This is defined as a rule. Every rule has a match condition that is evaluated; if the match is true, the rule is triggered. For example, we could use the match to select only HTTP data or only TCP data. The Prometheus adapter configuration does exactly this, as shown in Example 9-8, defining a rule for each set of protocols for which it has metric descriptions.

Example 9-8. List of Mixer rules
$ kubectl -n istio-system get rules
NAME                      AGE
kubeattrgenrulerule       25h
promhttp                  25h
promtcp                   25h
promtcpconnectionclosed   25h
promtcpconnectionopen     25h
stdio                     25h
stdiotcp                  25h
tcpkubeattrgenrulerule    25h

And, again, let’s inspect one to see what it looks like:

$ kubectl -n istio-system get rules promtcpconnectionopen -o yaml

apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  annotations:
  ...
  generation: 1
  name: promtcpconnectionopen
  namespace: istio-system
spec:
  actions:
  - handler: prometheus
    instances:
    - tcpconnectionsopened.metric
  match: context.protocol == "tcp" && ((connection.event | "na") == "open")

In Chapter 10, all of these metrics are exposed to backends for analysis, visualization, alerting, and so on.

As our multiple-personality control-plane component, Mixer enables service-operator control over policy decisions and telemetry dispatch based on configuration, and it acts as the point of integration between Istio and infrastructure backends. Through two services (istio-policy and istio-telemetry), Mixer provides the following core features:

  • Precondition checking (ACLs, authentication)

  • Quota management (rate limits)

  • Telemetry reporting (metrics, logs, traces)

Mixer's performance overhead can be significant, depending on how you configure your Istio deployment. It provides aggressive caching to reduce observed latencies, and it offers a mediation layer through which service operators control policy enforcement and telemetry collection. With its backend abstractions, Mixer reduces systemic complexity, and its adapter model enables backend mobility.
