Application Security
As always, security is an important consideration. This chapter describes security requirements within the context of a cloud native microservices architecture. The following topics are covered:
6.1 Securing microservice architectures
The dynamic nature of microservice architectures changes how security should be approached. This consideration is specifically relevant to how application or service boundaries are defined. Figure 6-1 shows simplified representations of how requests flow in monolithic and microservice architectures. Connections between microservices and persistent storage services (RDBMS, NoSQL) have been hidden for simplicity, but represent yet another dimension of network traffic.
Figure 6-1 Simplified topology for monolithic and microservice architectures
The left side of Figure 6-1 shows a simplification of a traditional monolithic architecture. Requests come through the gateway, go through a load balancer, to business logic in the middleware, and then down to the data tier.
On the right side of Figure 6-1, things are less organized. Collections of microservices are grouped as applications, and surface external APIs as endpoints through a gateway. One obvious difference is the number of moving parts. The other is how many more requests there are.
The biggest difference is that the composition of services shown on the right side are under constant change. Individual service instances come and go based on load. Microservices are continuously updated independently. Overall, there can be hundreds of updates a day across services. Securing the perimeter is not going to be enough. The question is how to secure a rapidly changing infrastructure in an organic way that still allows the individual services to change and grow without requiring central coordination.
6.1.1 Network segmentation
Figure 6-1 on page 68 shows an isolated subnet in both systems. Using an extra firewall or gateway to guard resources that require more levels of protection is a good idea in either environment.
Something else to observe in the microservices approach shown in Figure 6-1 on page 68 is that the externally exposed APIs are connected to two different microservices-based applications. Defined application boundaries provide a reasonable amount of isolation between independently varying systems, and are a good way to maintain a reasonable security posture in a dynamic environment.
Network segmentation happens naturally in hosted environments. For example, each microservices application that is shown in Figure 6-1 on page 68 could be a separate tenant in a multi-tenant environment. Where it does not happen naturally, you might want to encourage it. However, describing the best practices for managing networks in microservice architectures is well outside the scope of this publication.
6.1.2 Ensuring data privacy
Different kinds of data require different levels of protection, and that can influence how data is accessed, how it is transmitted, and how it is stored.
When dealing with data, take into account these considerations:
Do not transmit plain text passwords.
Protect private keys.
Use known data encryption technologies rather than inventing your own.
Securely store passwords by using salted hashes. For more information about salted hashing, see the following website:
Further, sensitive data should be encrypted as early as possible and decrypted as late as possible. If sensitive data must flow between services, it should do so only while encrypted, and it should not be decrypted until it needs to be used. This process can help prevent accidental exposure in logs, for example.
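As an illustrative sketch of encrypting early and decrypting late, the following uses the standard Java Cryptography Architecture with AES-GCM. The class and method names are assumptions for illustration, not part of any framework discussed in this chapter:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class FieldCrypto {
    private static final int IV_BYTES = 12;  // 96-bit IV, as recommended for GCM
    private static final int TAG_BITS = 128; // authentication tag length

    public static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Encrypt a sensitive value as early as possible; the result can then
    // flow between services or be written to backing stores.
    public static String encrypt(SecretKey key, String plaintext) {
        try {
            byte[] iv = new byte[IV_BYTES];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
            byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
            // Prepend the IV so the receiver can decrypt; the IV is not secret
            byte[] out = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
            return Base64.getEncoder().encodeToString(out);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Decrypt as late as possible, only where the value is actually used
    public static String decrypt(SecretKey key, String encoded) {
        try {
            byte[] in = Base64.getDecoder().decode(encoded);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, in, 0, IV_BYTES));
            byte[] plaintext = cipher.doFinal(in, IV_BYTES, in.length - IV_BYTES);
            return new String(plaintext, StandardCharsets.UTF_8);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because the GCM mode is authenticated, any tampering with the encrypted value in transit causes decryption to fail rather than silently producing garbage.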
Backing services
As mentioned earlier, connections to backing services are additional sources of network traffic that need to be secured. In many cloud environments, backing services are provided by the platform, relying on multi-tenancy and API keys or access tokens to provide data isolation. It is important to understand the characteristics of backing services, especially how they store data at rest, to ensure regulatory requirements (HIPAA, PCI, and so on) are satisfied.
Log data
Log data is a trade-off between what is required to diagnose problems and what must be protected for regulatory and privacy reasons. When writing to logs, take full advantage of log levels to control how much data is written to logs. For user-provided data, consider whether it belongs in logs at all. For example, do you really need the value of the attribute written to the log, or do you really only care about whether it was null?
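As a sketch of this idea (using `java.util.logging`; the attribute names are illustrative), a service can record whether a user-provided value was present without writing the value itself to the log:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SafeLogging {
    private static final Logger LOG = Logger.getLogger(SafeLogging.class.getName());

    // Log what is needed for diagnosis (was the value present?) without
    // writing the user-provided value itself into the log.
    public static String describe(String attributeName, String userValue) {
        String message = attributeName + " was " + (userValue == null ? "null" : "present");
        // Fine-grained detail goes to FINE so production logs stay clean
        LOG.log(Level.FINE, message);
        return message;
    }
}
```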
6.1.3 Automation
As mentioned in Chapter 1, “Overview” on page 1, as much as possible should be automated in microservice environments, including general operations. Use repeatable, automated processes to apply security policies and credentials, and to manage SSL certificates and keys across segmented environments, which helps avoid human error.
Credentials, certificates, and keys must be stored somewhere for automation to use. Remember the following considerations:
Do not store your credentials alongside your applications.
Do not store your credentials in a public repository.
Only store encrypted values.
Spring Cloud Config, as an example, uses Git to back its configuration. It provides APIs to automatically encrypt configuration values on the way into the repository, and to decrypt them on the way out. For more information about Spring Cloud Config, see this website:
Hashicorp’s Vault is similar, but adds extra capabilities to allow use of dynamic keys with expiry. For more information about Vault, see the following website:
Other key/value stores, such as etcd, ZooKeeper, or Consul, can also be used, but take care with how credentials are stored in what are otherwise plain text repositories.
6.2 Identity and Trust
A highly distributed, dynamic environment like a microservices architecture places some strain on the usual patterns for establishing identity. You must establish and maintain the identity of users without introducing extra latency and contention through frequent calls to a centralized service.
Establishing and maintaining trust throughout this environment is not easy either. It is inherently unsafe to assume a secure private network. End-to-end SSL brings some benefit in that bytes are not flowing around in plain text, but it does not establish a trusted environment on its own, and it requires key management.
The following sections outline techniques for performing authentication, authorization, and identity propagation to establish and maintain trust for inter-service communications.
6.2.1 Authentication and authorization
There are new considerations for managing authentication and authorization in microservice environments. In a monolithic application, it is common to have fine-grained roles, or at least role-associated groups, in a central user repository. With the emphasis on independent lifecycles for microservices, however, this dependency is an anti-pattern: development of an independent microservice becomes constrained by and coupled with updates to the centralized resource.
It is common to have authentication (establishing the user’s identity) performed by a dedicated, centralized service or even an API gateway. This central service can then further delegate user authentication to a third party.
When working with authorization (establishing a user’s authority or permission to access a secured resource) in a microservices environment, keep group or role definitions coarse-grained in common, cross-cutting services, and allow individual services to maintain their own fine-grained controls. The guiding principle, again, is independence. A balance must be found between what can be defined in a common authorization service to meet requirements for the application as a whole, and what authorization requirements are implementation details of a particular service.
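To make this split concrete, the following sketch (the role and permission names are invented for illustration) shows an individual service mapping a coarse-grained role asserted by the common service to its own fine-grained permissions:

```java
import java.util.Map;
import java.util.Set;

public class LocalAuthorization {
    // The common, cross-cutting service asserts only a coarse-grained role
    // (for example, via a claim in a token). This service owns the mapping
    // from that role to its own fine-grained permissions, so the mapping can
    // evolve without touching the central service.
    private static final Map<String, Set<String>> PERMISSIONS = Map.of(
            "player",   Set.of("room.view", "room.chat"),
            "operator", Set.of("room.view", "room.chat", "room.create", "room.delete"));

    public static boolean isAllowed(String coarseRole, String permission) {
        return PERMISSIONS.getOrDefault(coarseRole, Set.of()).contains(permission);
    }
}
```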
Java EE security
Traditional enterprise access control tokens also have less traction in a microservice environment. The presence of other programming languages and the inability to rely on long-lived sessions undercut the operating assumptions of traditional Java EE security mechanisms.
For Java EE security, each Java EE server must be trusted by all of the other services. The security token (in WebSphere Application Server, this is an LTPA Token) that the user receives after authentication must be valid on the other servers. Establishing trust this way means that either (a) every server needs the certificates of every other server in its keystore, or (b) all of the services need to use the same key. A user authenticating to the system must also be in the security realm defined for your Java EE server. Key management, specifically keystore management for Java applications, in a dynamically provisioned environment is a non-trivial problem to solve. For more information about Java EE Security, see this website:
If you compare the effort that is required to achieve and maintain this environment to the benefit you get from Java EE security, then it is difficult to recommend using Java EE security.
6.2.2 Delegated Authorization with OAuth 2.0
OAuth provides an open framework for delegating authorization to a third party. Although the framework does define interaction patterns for authorization, it is not a strictly defined programming interface, with some variations between OAuth providers. For more information about the OAuth Authorization Framework, see the following website:
OpenID Connect (OIDC) provides an identity layer on top of OAuth 2.0, making it possible to also delegate authentication to a third party. It defines an interoperable REST-based mechanism for verifying identity based on the authentication steps performed by the OAuth provider, and for obtaining basic profile information about the authorized user. For more information about OIDC, see the following website:
Game On! uses three-legged authentication defined by OAuth and OIDC to delegate to social sign-on providers, specifically Facebook, GitHub, Google, and Twitter. As shown in Figure 6-2, the game has a dedicated Auth service that acts as a proxy for the various social login providers. This Auth service triggers a request to the target OAuth provider including information required to identify the Game On! application.
Figure 6-2 OAuth invocation sequence diagram for Game On!
The service interaction goes through these steps:
1. The front end (web client or mobile application) makes a request to the game’s Auth service on behalf of the user.
2. The Auth service returns a forwarding response that is automatically handled by the browser to forward the user to the selected OAuth provider with the application identifier.
3. After the user has authenticated and granted permission, the OAuth provider returns an authorization code in another redirect response. The browser automatically handles the redirect to invoke a callback on the Auth service with the authorization code.
4. The Auth service then contacts the OAuth provider to exchange the authorization code for an access token.
5. The Auth service then converts data from that token into a signed JSON Web Token (JWT), which allows the identity of the user to be verified over subsequent inter-service calls without going back to the OAuth provider.
Some application servers, like WebSphere Liberty, have OpenID Connect features to facilitate communication with OAuth providers. As of this writing, those features do not provide the access token that was returned from the OAuth provider. Also, minute protocol differences can make using individual provider libraries easier than more generic solutions.
A simpler approach might be to perform authentication with an OAuth provider by using a gateway. If you take this approach, it is still a good idea to convert provider-specific tokens into an internal facing token that contains the information that you need.
6.2.3 JSON Web Tokens
Identity propagation is another challenge in a microservices environment. After the user (human or service) has been authenticated, that identity needs to be propagated to the next service in a trusted way. Frequent calls back to a central authentication service to verify identity are inefficient, especially given that direct service-to-service communication is preferred to routing through central gateways whenever possible to minimize latency.
JWTs can be used to carry along a representation of information about the user. In essence, you want to be able to accomplish these tasks:
Know that the request was initiated from a user request
Know the identity that the request was made on behalf of
Know that this request is not a malicious replay of a previous request
JWTs are compact and URL friendly. As you might guess from the name, a JWT (pronounced “jot”) contains a JSON structure. The structure contains some standard attributes (called claims), such as issuer, subject (the user’s identity), and expiration time, which have clear mappings to other established security mechanisms. Room is also available for custom claims, allowing additional information to be passed along.
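Because a JWT is simply three dot-separated Base64URL segments (header, payload, signature), its claims can be inspected with nothing more than the JDK. The following sketch (the class name is illustrative) decodes the payload segment. Note that it performs no signature verification, so the claims must never be trusted on this basis alone:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPeek {
    // A JWT has the form header.payload.signature, each segment Base64URL
    // encoded. Decoding the payload reveals the JSON claims; this does NOT
    // verify the signature, so it is useful for debugging only.
    public static String decodePayload(String jwt) {
        String[] segments = jwt.split("\\.");
        if (segments.length != 3) {
            throw new IllegalArgumentException("Expected 3 segments, found " + segments.length);
        }
        byte[] json = Base64.getUrlDecoder().decode(segments[1]);
        return new String(json, StandardCharsets.UTF_8);
    }
}
```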
While building your application, you will likely be told that your JWT is not valid for reasons that are not apparent. For security and privacy reasons, it is not a good idea to expose (to the front end) exactly what is wrong with the login. Doing so can leak implementation or user details that could be used maliciously. JWT.IO provides some nice web utilities for working with JWTs, including a quick way to verify whether the encoded JWT that scrolled by in your browser console or your trace log is valid or not. For more information about JWT.IO, see the following website:
Dealing with time
One of the nice attributes of JWTs is that they allow identity to be propagated, but not indefinitely. JWTs expire, which triggers revalidation of the identity that results in a new JWT. JWTs have three fields that relate to time and expiry, all of which are optional. Generally, include these fields and validate them when they are present as follows:
The issued-at time (iat) is before the current time.
The “not before” time (nbf) is before the current time.
The expiration time (exp) is after the current time.
All of these times are expressed as UNIX epoch time stamps.
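The checks above can be sketched in plain Java (the class and method names are illustrative; a real service would rely on its JWT library’s built-in validation). Each claim is optional, so a null value is treated as absent and skipped:

```java
import java.time.Instant;

public class JwtTimeCheck {
    // Validate the optional time claims (iat, nbf, exp), all expressed as
    // UNIX epoch seconds. A null claim is treated as absent and skipped.
    public static boolean timesValid(Long iat, Long nbf, Long exp, long nowEpochSeconds) {
        if (iat != null && iat > nowEpochSeconds) {
            return false; // claims to have been issued in the future
        }
        if (nbf != null && nbf > nowEpochSeconds) {
            return false; // not yet valid
        }
        if (exp != null && exp <= nowEpochSeconds) {
            return false; // expired
        }
        return true;
    }

    public static boolean timesValid(Long iat, Long nbf, Long exp) {
        return timesValid(iat, nbf, exp, Instant.now().getEpochSecond());
    }
}
```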
Signed JWTs
Signing a JWT helps establish trust between services, as the receiver can then verify the identity of the signer, and that the contents of the JWT have not been modified in transit.
JWTs can be signed by using shared secrets, or a public/private key pair (SSL certificates work well with a well known public key). Common JWT libraries all support signing. For more information about JWT libraries, see the following website:
Working with JWTs
Common JWT libraries make working with JWTs easy. For example, with JJWT, creating a new JWT is straightforward, as shown in Example 6-1.
Example 6-1 Creating and validating a signed JWT using JJWT
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Date;

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jws;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public class JwtExample {

    public String createJwt(String secret) throws Exception {
        // Create and sign the JWT, including a hint
        // for the key used to sign the request (kid)
        String newJwt = Jwts.builder()
                .setHeaderParam("kid", "meaningfulName")
                .setSubject("user-12345")
                .setAudience("user")
                .setIssuedAt(Date.from(Instant.now()))
                .setExpiration(Date.from(Instant.now().plus(15, ChronoUnit.MINUTES)))
                .signWith(SignatureAlgorithm.HS512, secret)
                .compact();
        return newJwt;
    }

    public void validateJwt(String jwtParameter, String secret) throws Exception {
        // Validate the signed JWT.
        // Exceptions are thrown if it is not valid.
        Jws<Claims> jwt = Jwts.parser()
                .setSigningKey(secret)
                .parseClaimsJws(jwtParameter);

        // Inspect the claims, for example to build a new JWT
        // (a signing key is needed for that)
        Claims jwtClaims = jwt.getBody();
        System.out.println(jwtClaims.getAudience());
        System.out.println(jwtClaims.getIssuer());
        System.out.println(jwtClaims.getSubject());
        System.out.println(jwtClaims.getExpiration());
        System.out.println(jwtClaims.getIssuedAt());
        System.out.println(jwtClaims.getNotBefore());
    }
}
Example 6-1 is an altered version of what the authentication service in Game On! does to create the JWT from the returned OAuth token. JWTs should be validated in a central place, such as a JAX-RS request filter.
A consideration is the longevity of the JWT. A long-lived JWT significantly reduces the traffic required to revalidate JWTs, but the system might then lag in reacting to access changes.
In the case of Game On!, the user is connected by using long-running WebSocket connections to enable two-way asynchronous communication between third parties. The JWT is passed as part of the query string when the connection is established. Expiring that JWT leads to an unpleasant user experience, so at the time of writing we chose to use a relatively long JWT expiry period.
6.2.4 Hash-Based Message Authentication Code
Authentication by using Hash-Based Message Authentication Code (HMAC) signatures is superior to HTTP Basic authentication. When using HMAC, request attributes are hashed and signed with a shared secret to create a signature that is then included in the sent request. When the request is received, the request attributes are rehashed by using the shared secret to ensure that the values still match. This process authenticates the user, and verifies that the content of the message has not been manipulated in transit. The hashed signature can also be used to prevent replay attacks.
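As a minimal sketch of the HMAC flow (the choice of attributes and the class and method names here are illustrative, not a standard), both sides build the same string-to-sign from agreed request attributes and hash it with the shared secret:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacSigner {

    // Build the string-to-sign from the request attributes both sides agree
    // on; minimally, an identifying string and the request date.
    public static String stringToSign(String userId, String dateHeader, String method, String path) {
        return String.join("\n", method, path, userId, dateHeader);
    }

    // The consumer hashes the attributes with the shared secret and sends
    // the resulting signature along with the request.
    public static String sign(String sharedSecret, String stringToSign) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] hash = mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(hash);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // The provider recomputes the signature from the received attributes and
    // compares; MessageDigest.isEqual avoids a timing side channel.
    public static boolean verify(String sharedSecret, String stringToSign, String signature) {
        return MessageDigest.isEqual(
                sign(sharedSecret, stringToSign).getBytes(StandardCharsets.UTF_8),
                signature.getBytes(StandardCharsets.UTF_8));
    }
}
```

Including the date in the string-to-sign is what allows the provider to reject stale signatures and so defend against replayed requests.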
HMAC validation can be performed by an API Gateway. In this case, the Gateway authenticates the user, and then can either serve cached responses, or produce a JWT or other custom headers to include in the request before forwarding on to backend/internal services. Using a JWT has the advantage of allowing downstream services to consistently use JWTs to work with the caller’s identity, independent of the original authentication method (such as OAuth when using a UI, versus HMAC for programmatic invocation).
HMAC request signing is not yet a complete standard; API providers (either gateways or individual services) often require different request attributes (HTTP headers or a hash of the message body) in the generated signature. Minimally, an identifying string and the date are used.
Using shared libraries is encouraged when working with HMAC signatures because HMAC calculations must be performed in the same way by both the API consumer and the provider. Consider providing client-side libraries to help consumers sign their requests correctly.
Game On! does not use an API gateway at the time of writing. We provide a shared library to ensure that a common HMAC signing algorithm is used for both REST and WebSocket requests. The shared library provides JAX-RS filters for inbound and outbound requests, and a core utility that can be invoked around the WebSocket handshake between the Mediator and third-party Room services. See Example 6-2.
For example, the Mediator uses the shared library to sign outbound requests to the Map service.
Example 6-2 JAX-RS 2.0 client using SignedClientRequestFilter from shared library
Client client = ClientBuilder.newClient()
        .register(JsonProvider.class);

// A filter is added to the JAX-RS client;
// the filter will sign the request on the way out
SignedClientRequestFilter apikey =
        new SignedClientRequestFilter(userid, secret);
client.register(apikey);
The shared library allows the use of simple annotations to flag server-side endpoints that should have the inbound signature verified. JAX-RS filters and interceptors are provided to verify both request headers and the message body as shown in Example 6-3.
Example 6-3 Map service JAX-RS endpoint: Annotation from shared library to enable signing
@POST
@SignedRequest
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createRoom(RoomInfo newRoom) {
    Site mappedRoom = mapRepository.connectRoom(…);
    return Response.created(…).entity(mappedRoom).build();
}
Using the shared library in this way keeps the business logic clean. The configuration ensures that the client and server side agree on included attributes and hash calculation as part of producing the library. The shared library simplifies what both the API provider and the consumer need to deal with.
6.2.5 API keys and shared secrets
An API key can be one of a few different things, but the usual usage is to identify the origin of a request. Identifying the origin of a request is important for API management functions like rate limiting and usage tracking. API keys are usually separate from an account’s credentials, enabling them to be created or revoked at will.
An API key is often a single string, but unlike a password, these strings are randomly generated and typically over 40 characters long. Sometimes the string is used directly in the request, as either a bearer token or a query parameter (over HTTPS). An API key is more secure when it is used as a shared secret for signing tokens (JWTs) or for digest methods of request authentication (HMAC).
If your service creates API keys, ensure that API keys are securely generated with high entropy and are stored correctly. Ensure that these keys can be revoked at any time. If your application has distinct parts, consider using different API keys for each part. This configuration can enable more granular usage tracking, and reduces the impact if a key is compromised.
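A sketch of generating such a key with the JDK’s `SecureRandom` (the class name is illustrative): 32 bytes of entropy encode to a 43-character URL-safe string, comfortably over the 40-character mark mentioned above.

```java
import java.security.SecureRandom;
import java.util.Base64;

public class ApiKeyGenerator {
    // SecureRandom provides cryptographic-strength randomness, unlike
    // java.util.Random, which must never be used for keys.
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate a random, URL-safe API key with 256 bits of entropy.
    public static String newApiKey() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```

Store only a hash of the generated key server-side, just as you would a password, so a compromised key store does not expose usable credentials.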
Asymmetric key pairs can also be used (as with SSL), but require a public key infrastructure (PKI) to be maintained. Although more secure, this configuration is also more labor intensive, and can be especially hard to manage when API consumers are third parties.
Consistent with the 12 factors, secrets should be injected as configuration when used by services for inter-service communication. The values should never be hardcoded, because that increases the likelihood that they will be compromised (for example, checked into a repository in plain text). Hardcoding either makes your code sensitive to the environment, or requires different environments to use the same shared secret, neither of which is a good practice.
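A minimal sketch of reading an injected secret, assuming it arrives as an environment variable (the variable and class names are illustrative). Failing fast on a missing secret surfaces configuration errors at startup rather than at the first signed request:

```java
public class SecretConfig {
    // Read a shared secret from injected configuration (an environment
    // variable here) rather than hardcoding it, per the 12 factors.
    public static String requireSecret(String name) {
        String value = System.getenv(name);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("Missing required secret: " + name);
        }
        return value;
    }
}
```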
 
