Chapter 6. Security and Identity

Application and system security has long been focused on the network. Historically, we’ve built hard outer shells (firewalls, VPNs, etc.) to fend off attacks, but once that outer shell is penetrated, an attacker can easily access many systems. So we’ve built defense in depth, applying network isolation concepts within our own trust domain: security administrators punch holes in the network and set things up just so, with these network identities (IP addresses) assigned here with that access there, funneled through these ports, and so on, just so that our applications can communicate with one another. This approach to security works well when the rate of system change is low; when change happens over the course of days, it’s easy enough to take manual steps or to automate the setup and maintenance of the network.

When it comes to container-based systems, however, the rate of change isn’t numbered in days, but rather by the second. In highly dynamic environments, traditional network security models break down. The key problem is that traditional network security puts the emphasis on the only identity available to the network: an IP address. An IP address is not a strong indication of the application, and because dynamic environments like Kubernetes can freely reuse IP addresses for different workloads over time, they’re not sufficient to use for policy or security.

To address this problem, one of Istio’s key features is the ability to issue an identity to every workload in the service mesh. These identities are tied to the workload, not to some particular host (or some particular network identity). This means that you can write policy about service-to-service communication that’s robust to changes in deployment and the topology of the system, not bound to the network.

In this chapter, we’ll explore the concepts of identity, authorization, and authentication as they relate to service-to-service communication. We also dig into how Istio provides identity to workloads, performs authentication of those identities at runtime, and uses them to authorize service-to-service communication. Let’s start with access control.

Access Control

The fundamental question that access control systems answer is: “can entity perform action on object?” We call this entity the “principal.” Action is some operation the system defines. The object is the thing being acted upon by the principal. Using the Unix filesystem as an example, the actions “read,” “write,” or “execute” can be taken on the file object by the user principal.

Authentication

Authentication is all about the principal. Authentication is the process of taking some credential (e.g., a certificate), verifying that the credential is valid (i.e., that the credential is authentic), and ensuring that the identity it presents in fact represents the entity in the access control question. Authorization, on the other hand, is all about what actions the entity can and cannot perform on an object.

Authentication (abbreviated authn and pronounced “auth-in”) is the act of taking a credential from a request and ensuring that it’s authentic. In Istio’s case, the credential that services use when they communicate with one another is an X.509 certificate. Service proxies authenticate the identity of the calling service (and the client mutually authenticates the server) by validating the X.509 certificate provided by the other party using the normal certificate validation process. If the certificate is valid, the identity encoded within the certificate is considered authenticated. Once we’ve performed authentication, we say that the principal is authenticated, calling it an authenticated principal to differentiate it from a principal, which might or might not be authenticated.

Authorization

Authorization (abbreviated authz and pronounced “auth-zee”) is the act of answering the question “is entity allowed to perform action on object?” For example, to run a shell script on Unix, the system checks that the current user (an authenticated principal) has the execute permission on the script file. In Istio, authorization of service-to-service communication is configured with RBAC policies, which we cover in detail later in the chapter.

Note

We also use the abbreviation “auth” to refer to both authn and authz.

When we think about the access control question, “can entity perform action on object?”, it becomes clear that both authentication and authorization are required, and one without the other is useless. If we only authenticate credentials, any user can perform any action on any object. All we’ve done is assert that this user is in fact who they present themselves to be while they do it! Similarly, if we only authorize requests, any user can pretend to be any other user and perform actions on that user’s objects; all we’ve done is make sure someone has permission to perform the action in question, not necessarily the caller. One final thing to note is that the question Istio auth answers is a little more specific than “can entity perform action on object?” More specifically, Istio answers the question: “Can Service A perform action on Service B?” In other words, both entity and object are identities of services in the mesh.

With the concepts of authentication and authorization in hand, natural next questions are: What are the identities and who are the principals that Istio gives to services? How does the service mesh manage these identities at runtime? How do you write policy about the actions one service can perform on another and how does Istio enforce those policies at runtime? The remainder of this chapter steps through answers to each of these questions.

Identity

Understanding that service meshes span clusters—that services on and off of a service mesh are able to communicate with one another—where does the service mesh begin and end? What would you say is the boundary of a service mesh? Typically, the answers revolve around the concept of administrative domain, with the administrative domain being either all things that are configured by one service operator or all things that can communicate with one another as part of the same mesh. Both are popular answers. In our opinion, these types of answers fall short. For example, multiple teams can administer different segments of a mesh and different parts of a mesh might not in fact be allowed to communicate. Instead, we believe the best answer is “a service mesh is a single identity domain.” In other words, a single namespace from which every service in the system is allocated an identity.

Identity forms the boundary of a service mesh. Identity is a fundamental function of a service mesh in that all communication stems from identity. Traffic steering and telemetry functions of the service mesh rely on an understanding of how to identify services. Without knowing what you’re metering, metrics are useless data.

SPIFFE

Istio implements the Secure Production Identity Framework for Everyone (SPIFFE) specification to issue identities. In short, Istio creates an X.509 certificate and sets the certificate’s subject alternative name (SAN) to a uniform resource identifier (URI) that describes the service. Istio defers to the platform for identity attributes. In a Kubernetes deployment, Istio uses a pod’s service account as its identity, encoding it into a URI: spiffe://ClusterName/ns/Namespace/sa/ServiceAccountName.

Note

In Kubernetes, a pod will use the “default” service account for the namespace it’s deployed in if the serviceAccountName field is not set in the pod specification. This means that all services in the same namespace will share a single identity if service accounts aren’t already set up for each service.
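For example, a minimal sketch of provisioning a distinct identity for a workload (the foo names here are hypothetical) might look like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: foo
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: default
spec:
  # Without serviceAccountName, this pod would run as "default" and
  # share its identity with every other such pod in the namespace.
  serviceAccountName: foo
  containers:
  - name: foo
    image: example/foo

With this in place, Istio issues the pod an identity of the form spiffe://cluster.local/ns/default/sa/foo rather than the namespace-wide default identity.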

SPIFFE is a specification for a framework that can bootstrap and issue identities. SPIRE (the SPIFFE Runtime Environment) is the SPIFFE community’s reference implementation, and Citadel (formerly Istio Auth) is a second implementation. The SPIFFE specification describes three concepts:

  • An Identity, as a URI, used by services to communicate

  • A standard encoding of that Identity into a SPIFFE Verifiable Identity Document (SVID)

  • An API for issuing and retrieving SVIDs (the Workload API)

SPIFFE requires that a service’s identity be encoded as a URI with the scheme spiffe, like: spiffe://trust-domain/path. The trust domain is the root of trust of the identity (e.g., an organization, an environment, or a team). The trust domain is the URI’s authority field (specifically the host section of the authority). The specification allows the path section of the URI to be anything—a universally unique identifier (UUID), a trust hierarchy, or nothing. On Kubernetes, Istio encodes a service’s ServiceAccount using the local cluster’s name as the trust domain, and creates a path using the ServiceAccount name and namespace. For example, the default ServiceAccount is encoded as spiffe://cluster.local/ns/default/sa/default (“ns” for namespace, “sa” for service account).

SPIFFE also describes how to encode this identity into an X.509 SVID. An X.509 certificate can be verified to prove identity. The specification stipulates that the Identity URI be encoded as the certificate’s SAN field. There are three verifications to perform when validating an SVID:

  1. Perform normal X.509 validation.

  2. Confirm that the certificate is not a signing certificate. Signing certificates cannot be used for identification according to the specification.

  3. Verify that there is exactly one SAN in the certificate with the SPIFFE scheme.

SPIFFE defines a Workload API, which is an API for issuing and retrieving SVIDs; however, this is where Istio diverges from SPIFFE. Istio instead implements certificate provisioning using a custom protocol, the CA service. The Citadel node agent sends a CSR via that API when a new workload is scheduled; Citadel validates the request and returns an SVID for the workload. Both the SPIFFE Workload API and the CA service accomplish a similar goal: prove some information about the workload in order to receive an identity for it.

Finally, while the SPIFFE specification does not require it, both SPIRE and Istio issue X.509 SVIDs that are short lived—they expire on the order of an hour after issuance. This is in contrast with traditional usage of X.509 certificates, which tend to be used for HTTPS TLS termination and commonly expire a year or more after issuance.

The benefit of short-lived certificates is that attacks are bounded within that expiry time without requiring certificate revocation (and making revocation easy if you do choose to use it). Suppose that an attacker compromises a workload and steals the workload’s SVID; it’s only valid across the rest of the trust domain for a short time. If their attack requires an extended period to execute, they must continually extract a valid credential from the workload. As soon as you become aware of the attack, you can use policy to prohibit that identity from accessing other services, stop reissuing certificates for that identity, and even put that certificate into a revocation list. Because the certificates are ephemeral, managing that revocation list is easy—it’s a standard practice to remove expired certificates from a certificate revocation list. The list stays small as the certificates expire quickly.

This use of short-lived certificates comes with one big disadvantage, though: it’s difficult to issue and rotate certificates on every workload across the fleet on a short interval. We talk about how Istio solves this problem in the next section.

Key Management Architecture

Three components make up the key management architecture, all participating in issuing and rotating SVIDs across an Istio deployment: Citadel, node agents, and Envoy (see also Figure 6-1):

Citadel

Citadel issues identities to workloads across the deployment by acting as a CA, signing certificate requests that form X.509 SVIDs.

Node agent

A trusted agent deployed on each node, this agent acts as a broker between Citadel and the Envoy sidecars deployed on the node.

Envoy (service proxy)

Envoy speaks to the node agent locally to retrieve an identity and presents that identity to other parties at runtime.

Figure 6-1. Istio’s key management architecture and component interactions (see Chapter 5 for more on Envoy xDS APIs)

Citadel

Citadel is responsible for accepting requests for identity, authenticating them, authorizing them, and ultimately issuing a certificate for the identity in question. Citadel itself is composed of several logical components, as shown in Figure 6-2.

Figure 6-2. Citadel’s architecture and internal component interactions

Walking through the CA service certificate provisioning flow from bottom to top, left to right in Figure 6-2, we see the following:

  1. Citadel exposes the CA service as its public API to identity requesters. To request an identity, a caller interfaces with the CA service, ultimately sending a CSR to Citadel, which Citadel signs, transforming the CSR into a certificate (an X.509 SVID).

  2. When the request is received, it’s then fed to an authenticator, which verifies the request. The authentication method depends on how Citadel is deployed. For example, in Kubernetes, Pilot is trusted to provide each workload with its service name.

  3. After the request is authenticated, the authorizer determines whether the requested identity is valid for the authenticated principal to receive. To perform this authorization, the authorizer consults an Identity Registry, which maps workloads, via their authenticated principals, to the identities they’re allowed to receive.

  4. Once a workload is authorized to receive an identity, we need to actually issue it by signing a certificate. The authorizer calls the issuer to generate a certificate and make it available to the requestor. Issuers in Citadel today include an in-memory CA as well as HashiCorp Vault.

Node Agents

Node agents are deployed on every node that hosts workloads to which Citadel will issue identities. The node agent has two responsibilities. First, it acts as a simple protocol adapter between Envoy and Citadel. Envoy consumes the secret discovery service (SDS) API, which configures the secrets Envoy will serve at runtime. This API is great for issuing key material (the certificates themselves) but does not support verification. In other words, Citadel cannot use the SDS API to authenticate ownership of an identity. Instead, Citadel uses its bespoke CA service API to authenticate requests, as we described in the previous section. Node agents bridge SDS and the CA service on behalf of the workloads deployed on their nodes. This brings us to the second key responsibility of node agents: to be trusted agents on the nodes, able to validate workload environments for Citadel on behalf of the workloads and to distribute keys locally to those workloads.

The node agent keeps in memory the secrets that it’s retrieved from Citadel, and when they near expiration (e.g., have 25% of their time-to-live [TTL] left) the agent will contact Citadel and attempt to refresh the certificate. We leave a little wiggle room in case Citadel is temporarily inaccessible. When the node agent dies and is restarted by the container orchestrator, it will attempt to retrieve fresh credentials for all workloads on the node. In this way, the node agent remains stateless. Upon receiving a fresh SVID from Citadel, the node agent pushes the certificate to Envoy via SDS. As described in Chapter 5, this triggers Envoy to establish new connections to destination workloads. Envoy will drain the current connections into the new connections and terminate the connections that were using the old (and now potentially expired) certificates.1

Relying on the node agent in this way is justified in Istio’s security model since the agent is already handling all secrets on the node to begin with; it must operate at a higher level of trust than other components on the node for exactly that reason. Therefore, extending that trust to additionally force the agent to validate the execution environment is reasonable. Even if we had Envoy communicate directly with Citadel, we could not trust environmental answers that Envoy provides because it runs in the same trust domain as the application itself (so an attacker compromising the application could just as easily confuse Envoy into giving different or wrong answers to environmental attestation challenges).

Finally, the architectural decision to have workloads communicate with the local node agent and have that node agent communicate with Citadel is important for keeping Citadel scalable. This design binds the number of connections to Citadel instances in the system with the number of nodes in the deployment (generally, a smaller number), not with the number of workloads (generally, a larger number). Often in environments like Kubernetes, there is a substantially larger number of workloads than nodes, so this design provides real benefits.

Envoy

The final participant in this whole dance is the service proxy, Envoy. Envoy is configured to communicate with the local node agent as its source for the SDS API. The location of the SDS server can be served to Envoy by Pilot dynamically; however, Istio deployments typically configure this information statically because doing so is less error prone at runtime. Envoy can communicate with the SDS server (the node agent) by resolving some address to the local node or, more commonly, by communicating locally over Unix domain sockets (UDSs).

Note

Envoy won’t overwrite static configuration with configuration from the API, so you can’t accidentally push a configuration to Envoy that makes it unable to communicate with the configuration server.

Envoy uses the SVID certificate when it initiates connections to other services in the mesh. Two workloads in the mesh will optionally perform mTLS when they communicate. Doing so assures both the client and server of the identity of the other party, allows both to perform authentication and authorization of the communication being initiated, and provides for encryption in transit. mTLS alone isn’t enough, though, because we still need something to perform authorization of communication between two identities. We cover both mTLS and authorization of communication later in the chapter.

Pilot

Pilot also plays a minor role in key management. When Pilot pushes configuration to Envoy, including configuration about destination services and how to receive traffic, it needs to reference certificates. It references these certificates by name; therefore, it must coordinate with SDS, which is provided by the node agent. It would not be desirable to force the node agents to communicate with Pilot in addition to Citadel. Instead, Istio components plan on a common naming scheme for secrets ahead of time so that Pilot can unambiguously refer to secrets provisioned by SDS. Further, the primary certificate on all Envoys, the identity SVID, resides at a well-known location (in /etc/certs/, as described in Chapter 5).

mTLS

With all of the identity certificates (SVIDs) distributed to workloads across the system, how do we actually use them to verify the identity of the servers with which we’re communicating and perform authentication and authorization? This is where mTLS comes into play. First, though, a bit of background.

When we think of TLS or SSL (TLS is the successor to SSL), the use case that typically comes to mind is HTTPS. A user wants to use their browser to connect to some web server (e.g., https://wikipedia.org). Their browser (or the OS) performs a DNS lookup to determine the site’s IP address. The browser fires off an HTTPS request to that address and waits for a response from the server (site). When the browser (client) attempts to connect, the server responds by presenting a certificate with its identity (e.g., wikipedia.org) signed by some root of trust that the client trusts. The client validates the certificate, authenticating the identity of the server and allowing the connection to be established. Then, a set of keys is generated for the connection to enable encryption of the data sent by both the client and the server. In other words, TLS is how the client knows it can trust the server, that the server really is controlled by wikipedia.org, and that no one is eavesdropping on or otherwise tampering with the data sent by the server.

mTLS is TLS in which both parties, client and server, present certificates to each other. This allows the client to verify the identity of the server, like normal TLS, but it also allows the server to verify the identity of the client attempting to establish the connection. We use mTLS in Istio, where both parties present their SVID to each other. This allows both parties to authenticate the SVID provided by the other and to perform authorization on the connection. In practice, in Istio we perform authorization only on the server side. This makes sense given how you write authorization policy, which we cover in the next section.

Configuring Istio Auth Policies

Istio splits authentication and authorization policy into two sets of configurations. The first, authentication policy, controls how proxies in the mesh communicate with one another (whether or not they require an SVID). The second, authorization policy, requires authentication policy first and configures which identities are allowed to communicate.

Authentication Policy: Configuring mTLS

Adopting mTLS into an existing deployment is challenging because you need to make sure both client and server are provisioned with certificates at the same time (traditional TLS is far simpler given that it requires coordinating only the server deployment). As a result, Istio provides a few knobs to take a deployment not using mTLS and gradually enable it without causing outages for clients.
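Authentication policy governs the server side of this handshake; client-side proxies are typically told to originate mTLS through a DestinationRule’s TLS settings (part of Istio’s networking API). A sketch, assuming a hypothetical service foo in the default namespace:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-use-mtls
  namespace: default
spec:
  host: foo.default.svc.cluster.local
  trafficPolicy:
    tls:
      # Clients present their Istio-issued SVIDs when calling foo.
      mode: ISTIO_MUTUAL

Pairing a PERMISSIVE authentication policy with a rule like this lets you migrate clients over one at a time before tightening the server side to STRICT.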

Authentication policy (authentication.istio.io/v1alpha1.Policy) is the primary CRD we use to configure how services in the mesh communicate with one another. Authentication policy allows us to require, make optional, or disable mTLS on a service-by-service or namespace-by-namespace basis. A cluster-scoped variant, MeshPolicy, applies a default policy to every namespace and service in the mesh.

To enable mTLS for a single service, we create a policy in that service’s namespace with that service as the target, requiring mTLS like so:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: foo-require-mtls
  namespace: default
spec:
  targets:
  - name: foo.default.svc.cluster.local
  peers:
  - mtls:
      mode: STRICT

This policy applies to the default namespace and marks mTLS as required for talking to the service foo. Note that because the default mTLS mode is STRICT, we can simplify this configuration a bit by omitting the redundant mode field:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: foo-require-mtls
  namespace: default
spec:
  targets:
  - name: foo.default.svc.cluster.local
  peers:
  - mtls: {}

Many policy examples on istio.io take this form, leaving the mTLS object empty because the default behavior is to require STRICT mode.

Of course, just creating this configuration in a cluster could cause outages if the clients don’t already have certificates with which to perform mTLS. That’s why Istio includes a PERMISSIVE mTLS mode, which allows clients to connect in either clear text or via mTLS. The following configuration allows clients to contact service bar using both mTLS and clear text:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: bar-optional-mtls
  namespace: default
spec:
  targets:
  - name: bar.default.svc.cluster.local
  peers:
  - mtls:
      mode: PERMISSIVE

Similarly, we could make mTLS optional across an entire namespace by omitting the targets field:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default-namespace-optional-mtls
  namespace: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE

This configuration allows workloads in the mesh to contact any service in the default namespace using either mTLS or clear text. We can also enable or disable mTLS per port on a service. An example of where per-port policy is valuable is the health checks performed by kubelet in Kubernetes deployments. It can be burdensome to provision separate certificates for mTLS connections with kubelets. By writing two policy objects, we can exclude the health check port from mTLS while requiring mTLS for all other ports, making integration with existing systems easier, like so:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: bar-require-mtls-no-port-81
  namespace: default
spec:
  targets:
  - name: bar.default.svc.cluster.local
  peers:
  - mtls:
      mode: STRICT
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: bar-require-mtls-no-port-81
  namespace: default
spec:
  targets:
  - name: bar.default.svc.cluster.local
    port:
      name: http-healthcheck
  peers:
  - mtls:
      mode: PERMISSIVE

Using this same approach, operators can exclude mTLS as a requirement for connections to the http-healthcheck port across the entire namespace by omitting specific service names from the targets field.

To apply the same configuration across all namespaces, we use the MeshPolicy resource. This is identical to the policy resource in schema but exists at the cluster level. Also, note that the default MeshPolicy must be named “default” or Istio will not recognize it correctly.

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE

And, of course, we can make mTLS required across the mesh by setting mode: STRICT or, equivalently, by omitting the mode field:

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}

Istio also supports performing end-user authentication via JSON Web Tokens (JWTs). Istio’s authentication policy supports setting a rich set of restrictions about the data in the JWT, allowing you to validate nearly all of the fields of the JWT. The following policy configures Envoy to require mTLS, but it also requires end-user credentials stored as a JWT in the "x-goog-iap-jwt-assertion" header, issued by Google ("https://securetoken.google.com"), verified against Google’s public keys ("https://www.googleapis.com/oauth2/v1/certs"), for the audience “bar”:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: end-user-auth
  namespace: default
spec:
  targets:
  - name: bar
  peers:
  - mtls: {}
  origins:
  - jwt:
      issuer: "https://securetoken.google.com"
      audiences:
      - "bar"
      jwksUri: "https://www.googleapis.com/oauth2/v1/certs"
      jwt_headers:
      - "x-goog-iap-jwt-assertion"
  principalBinding: USE_ORIGIN

Authorization Policy: Configuring Who Can Talk to Whom

With authentication policy in place, we want to use the identities across the system to control which services can communicate. In other words, we want to describe a service-to-service communication policy. Istio’s authorization policy is described using an RBAC system. Like most RBAC systems, it defines two objects that are used together to write policy:

ServiceRole

Describes a set of actions that can be performed on a set of services by any principal with the role.

ServiceRoleBinding

Assigns roles to a set of principals. In this context, the principals are the service identities Istio issues. Recall that in Kubernetes deployments, these identities are Kubernetes ServiceAccounts.

First, we need to create a ClusterRBACConfig (formerly RBACConfig prior to v1.1) object, which turns on RBAC in Istio:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRBACConfig
metadata:
  name: default
  namespace: istio-system
spec:
  mode: ON

This configuration enables RBAC of service-to-service communication across the entire mesh. Like enabling mTLS, this is potentially dangerous to do in a live system, so Istio supports enabling RBAC for service-to-service communication incrementally by changing the ClusterRBACConfig’s mode. Istio supports four modes:

OFF

No RBAC required for communication. If no ClusterRBACConfig object exists, this is the default behavior of the system.

ON

RBAC policies are required for communication, and communication not allowed by a policy is forbidden.

ON_WITH_INCLUSION

RBAC policies are required for communicating with any service in the set of namespaces listed in the policy.

ON_WITH_EXCLUSION

RBAC policies are required for communicating with any service in the mesh, except for services in the set of namespaces listed in the policy.

To roll out RBAC incrementally across the system, first enable RBAC in ON_WITH_INCLUSION mode. As you define policies for each service or namespace, add that service or namespace to the inclusion list. This allows you to enable RBAC service by service (or namespace by namespace), as shown in Example 6-1.

Example 6-1. Stepwise rollout of RBAC policy
apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRBACConfig
metadata:
  name: default
  namespace: istio-system
spec:
  mode: ON_WITH_INCLUSION
  inclusion:
    services:
    - bar.bar.svc.cluster.local
    namespaces:
    - default

The policy in Example 6-1 requires RBAC policies for communication with any service in the default namespace, as well as with the bar service in the bar namespace; communication with all other services is unaffected. Once more namespaces and services in our system have RBAC policies than don’t, we can swap to an ON_WITH_EXCLUSION policy.
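That swap might look like the following sketch, where a hypothetical legacy namespace is the only one still without RBAC policies:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRBACConfig
metadata:
  name: default
  namespace: istio-system
spec:
  mode: ON_WITH_EXCLUSION
  exclusion:
    namespaces:
    # Only the legacy namespace remains exempt from RBAC enforcement.
    - legacy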

With RBAC enabled for the bar service, we need to write policies. We begin by picking a namespace or service and describing the roles that exist for that service. For our example, we create a ServiceRole that allows read access (HTTP GET requests) to the bar service:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: bar-viewer
  namespace: default
spec:
  rules:
  - services:
    - bar.default.svc.cluster.local
    methods:
    - GET

We then can use a ServiceRoleBinding to assign that role to the service account that the bar service in the bar namespace runs as, allowing workloads with that identity to perform GET requests against the bar service in the default namespace:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bar-bar-viewer-binding
  namespace: default
spec:
  subjects:
  - properties:
      # the SPIFFE ID of the bar service account in the bar namespace
      source.principal: "cluster.local/ns/bar/sa/bar"
  roleRef:
    kind: ServiceRole
    name: "bar-viewer"
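Subjects aren’t limited to a single service account. As a sketch, a binding that grants the same role to every caller (using the "*" wildcard, which matches all users) would look like this:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bar-viewer-binding-all
  namespace: default
spec:
  subjects:
  # "*" matches every user, so reserve this for genuinely public services.
  - user: "*"
  roleRef:
    kind: ServiceRole
    name: "bar-viewer"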

Unlike RBAC in applications, which permits or denies users specific operations, Istio’s RBAC is service-to-service focused, specifying which services can connect and communicate with one another. To achieve this, Istio includes a key management system, Citadel, which provides an identity for each service in the mesh and allows each service to authenticate itself.

Identity forms the boundary of our mesh. With Istio’s service proxies carrying individual identities and handling all traffic to and from services, you can use mutual trusted certificates to secure connections and authorize these connections. Istio facilitates incremental adoption of service-to-service mTLS and RBAC.

1 We don’t need to reestablish connections using the new credentials immediately—after all, the TLS session remains valid so long as it was initiated before the certificate expires. An established connection will happily continue to use an expired certificate. We choose to reestablish connections to mitigate certain types of credential hijacking attacks. Plus, a little jitter tends to be good for a system!
