Chapter 15. Handling failure and latency

This chapter covers

  • Introducing the circuit breaker pattern
  • Handling failure and latency with Hystrix
  • Monitoring circuit breakers
  • Aggregating circuit breaker metrics

15.1. Understanding circuit breakers

The circuit breaker pattern, made popular in Release It!, 2nd edition, by Michael Nygard (Pragmatic Bookshelf, 2018), addresses the reality that the code we write will fail. What's important is that when it fails, it fails gracefully. This powerful pattern is even more significant in the context of microservices, where it's important to avoid letting failures cascade across a distributed call stack.

The idea of the circuit breaker pattern is relatively simple and is quite similar to a real-world electrical circuit breaker from which it gets its name. With an electrical circuit breaker, when the switch is in a closed position, the electricity flows through the circuits in a house, powering lights, televisions, computers, and appliances. But if there’s any fault in the line, such as a power surge, the circuit breaker opens, stopping the flow of electricity before it damages electronics or results in a house fire.

Likewise, a software circuit breaker starts in a closed state, allowing invocations of a method. If, for any reason, that method fails (perhaps exceeding a defined threshold), the circuit opens and invocations are no longer performed against the failing method. Where a software circuit breaker differs, however, is that it provides fallback behavior and is self-correcting.

If the protected method fails within a given threshold of failure, then a fallback method can be called in its place. Once the circuit opens, that fallback method will be called almost exclusively. Every so often, though, a circuit that’s open will enter a half-open state and attempt to invoke the failing method. If it still fails, the circuit resumes in an open state. If it succeeds, then it’s assumed that the problem has been resolved and the circuit returns to a closed state. Figure 15.1 illustrates the flow of a software circuit breaker.

Figure 15.1. The circuit breaker pattern enables graceful failure handling.

It can be helpful to think of circuit breakers as a more powerful form of try/catch. A closed circuit is analogous to the try block, whereas the fallback method is akin to the catch block. Unlike try/catch, however, circuit breakers are intelligent enough to route calls to bypass the intended method, always calling the fallback method when the intended method is failing too frequently.
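To make that flow concrete, here's a minimal, hand-rolled sketch of such a state machine. It's purely illustrative (the class name, thresholds, and single-threaded design are my own assumptions), and it's nowhere near what Hystrix actually does, which adds rolling statistics, thread isolation, and much more:

// A minimal, illustrative circuit breaker. Not thread-safe; not Hystrix.
public class SimpleCircuitBreaker<T> {

  private enum State { CLOSED, OPEN, HALF_OPEN }

  private final int failureThreshold;      // consecutive failures before opening
  private final long openDurationMillis;    // how long to stay open before a trial call

  private State state = State.CLOSED;
  private int consecutiveFailures = 0;
  private long openedAt = 0;

  public SimpleCircuitBreaker(int failureThreshold, long openDurationMillis) {
    this.failureThreshold = failureThreshold;
    this.openDurationMillis = openDurationMillis;
  }

  public T invoke(java.util.function.Supplier<T> protectedCall,
                  java.util.function.Supplier<T> fallback) {
    if (state == State.OPEN) {
      if (System.currentTimeMillis() - openedAt < openDurationMillis) {
        return fallback.get();            // circuit is open: skip the protected call
      }
      state = State.HALF_OPEN;            // sleep window elapsed: allow one trial call
    }
    try {
      T result = protectedCall.get();
      state = State.CLOSED;               // success: close the circuit and reset
      consecutiveFailures = 0;
      return result;
    } catch (RuntimeException e) {
      consecutiveFailures++;
      if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
        state = State.OPEN;               // trip (or re-trip) the circuit
        openedAt = System.currentTimeMillis();
      }
      return fallback.get();              // always answer with the fallback on failure
    }
  }
}

Fortunately, Hystrix spares you from writing anything like this yourself; as you'll see in section 15.2, you simply annotate the method you want to protect.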

As I’ve implied, circuit breakers are applied on methods. There could easily be several dozen (or more) circuit breakers within a given microservice. Deciding where to declare circuit breakers in your code is a matter of identifying methods that are subject to failure. The following categories of methods are certainly candidates for circuit breakers:

  • Methods that make REST calls: These could fail due to the remote service being unavailable or returning HTTP 500 responses.
  • Methods that perform database queries: These could fail if, for some reason, the database becomes unresponsive, or if the schema changes in ways that break the application.
  • Methods that are potentially slow: These won't necessarily fail, but may be considered unhealthy if they're taking too long to do their job.

That last item highlights another benefit of circuit breakers beyond handling failure. Latency is also an important concern in microservices, and it’s crucial that an excessively slow method not drag down the performance of the microservice, resulting in cascading latency to upstream services.

As you can see, the circuit breaker pattern is an incredibly powerful means of gracefully handling failure and latency in code. How can we apply circuit breakers in our code? Fortunately, Netflix open source projects provide an answer with the Hystrix library.

Netflix Hystrix is a Java implementation of the circuit breaker pattern. Put simply, a Hystrix circuit breaker is implemented as an aspect applied to a method that triggers a fallback method should the target method fail. And, to properly implement the circuit breaker pattern, the aspect also tracks how frequently the target method fails and then forwards all requests to the fallback if the failure rate exceeds some threshold.

A point to make about Hystrix’s name

When coming up with the name for their circuit breaker implementation, the developers at Netflix wanted a name that captured the resilience, defense, and fault tolerance that would be provided. They settled on Hystrix, which happens to be the genus of what is known as the Old-World porcupine, an animal characterized by its ability to defend itself with long quills. Also, as explained in the Hystrix FAQ, it’s a cool-sounding name. When we look at the Hystrix dashboard in section 15.3.1, you’ll get to see how a porcupine found a position as the project logo.

Spring Cloud Netflix includes support for Hystrix, providing a simple programming model that should be familiar to Spring and Spring Boot developers. Declaring a circuit breaker on a method is a simple matter of annotating the method with @HystrixCommand and providing a fallback method. Let’s see how to handle failure gracefully with Hystrix by declaring circuit breakers in your Taco Cloud code.

15.2. Declaring circuit breakers

Before you can declare circuit breakers, you’ll need to add the Spring Cloud Netflix Hystrix starter to the build specification of each of the services. In a Maven pom.xml file, the dependency looks like this:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
</dependency>

Because the Hystrix starter is part of the Spring Cloud portfolio, you'll also need to declare dependency management for the Spring Cloud release train in your build. As I write this, the latest release train version is Finchley.SR1. Therefore, the Spring Cloud version should be set as a property, and the following entry should appear in the pom.xml file's <dependencyManagement> block:

<properties>
  ...
  <spring-cloud.version>Finchley.SR1</spring-cloud.version>
</properties>

...

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>${spring-cloud.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
Note

This starter dependency is also available as a check box with the label Hystrix in the Initializr when creating a project. If you use the Initializr to add Hystrix to your project build, then the dependency management block is automatically created for you.

With the Hystrix starter dependency in place, the next thing you’ll need to do is to enable Hystrix. The way to do that is to annotate each application’s main configuration class with @EnableHystrix. For example, to enable Hystrix in the ingredient service, you’d annotate IngredientServiceApplication like this:

@SpringBootApplication
@EnableHystrix
public class IngredientServiceApplication {
    ...
}

At this point, Hystrix is enabled in your application. But that only means that all the pieces are in place for you to declare circuit breakers. You still haven’t declared any circuit breakers on any of the methods. That’s where the @HystrixCommand annotation comes into play.

Any method that's annotated with @HystrixCommand will have a circuit breaker aspect applied to it. For example, consider the following method that uses a load-balanced RestTemplate to fetch a collection of Ingredient objects from the ingredient service:

public Iterable<Ingredient> getAllIngredients() {
  ParameterizedTypeReference<List<Ingredient>> stringList =
      new ParameterizedTypeReference<List<Ingredient>>() {};
  return rest.exchange(
      "http://ingredient-service/ingredients", HttpMethod.GET,
      HttpEntity.EMPTY, stringList).getBody();
}

The call to exchange() is a potential cause for trouble. If there’s no service registered in Eureka as ingredient-service, or if the request fails for any reason, then a RestClientException (an unchecked exception) will be thrown. Because the exception isn’t being handled with a try/catch block, the caller must handle the exception. If the caller doesn’t handle it, then it’ll continue to be thrown upstream in the call stack. If it isn’t handled at all, then the error cascades to any upstream microservices or clients.

Uncaught exceptions are a challenge in any application, but especially so in microservices. When it comes to failures, microservices should apply the Vegas Rule—what happens in a microservice, stays in a microservice. Declaring a circuit breaker on the getAllIngredients() method satisfies that rule.

At a minimum, you need only annotate the method with @HystrixCommand and provide a fallback method. First, let's add @HystrixCommand to the getAllIngredients() method:

@HystrixCommand(fallbackMethod="getDefaultIngredients")
public Iterable<Ingredient> getAllIngredients() {
  ...
}

With a circuit breaker protecting it from failure, getAllIngredients() is fail-safe. If, for any reason, any uncaught exceptions escape from getAllIngredients(), the circuit breaker will catch them and redirect the method call to a method named getDefaultIngredients().

Fallback methods can do anything you want them to do, but the intention is that they offer backup behavior in the event that the originally intended method is unable to perform its duties. The only rule for the fallback method is that it has the same signature (aside from the method name) as the method it’s serving as a backup for.

To meet this requirement, the getDefaultIngredients() method must accept no parameters and return Iterable<Ingredient>. The following implementation of getDefaultIngredients() satisfies that rule and returns a default list of ingredients:

private Iterable<Ingredient> getDefaultIngredients() {
  List<Ingredient> ingredients = new ArrayList<>();
  ingredients.add(new Ingredient(
        "FLTO", "Flour Tortilla", Ingredient.Type.WRAP));
  ingredients.add(new Ingredient(
        "GRBF", "Ground Beef", Ingredient.Type.PROTEIN));
  ingredients.add(new Ingredient(
        "CHED", "Shredded Cheddar", Ingredient.Type.CHEESE));
  return ingredients;
}

Now, if for any reason getAllIngredients() fails, the circuit breaker falls back with a call to getDefaultIngredients(), and the caller will receive a default (albeit limited) list of ingredients.

You might be wondering if a fallback method can itself have a circuit breaker. Although there’s little that could go wrong with getDefaultIngredients() as you’ve written it, it’s possible that a more interesting implementation of getDefaultIngredients() could be a potential point of failure. In that case, you can annotate getDefaultIngredients() with @HystrixCommand and provide yet another fallback method. In fact, you can stack up as many fallback methods as make sense, if necessary. The only restriction is that there must be one method at the bottom of the fallback stack that doesn’t fail and doesn’t require a circuit breaker.
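For example, a stacked fallback chain might look something like the following sketch. Here, ingredientCache is a hypothetical, locally available bean that getDefaultIngredients() reads from (and that could itself fail), and getStandbyIngredients() sits at the bottom of the stack and simply can't fail:

@HystrixCommand(fallbackMethod="getDefaultIngredients")
public Iterable<Ingredient> getAllIngredients() {
  ...
}

@HystrixCommand(fallbackMethod="getStandbyIngredients")
private Iterable<Ingredient> getDefaultIngredients() {
  return ingredientCache.getAll();   // hypothetical local cache; could also fail
}

private Iterable<Ingredient> getStandbyIngredients() {
  // Bottom of the fallback stack: must not fail, so no circuit breaker here
  return Collections.singletonList(
      new Ingredient("FLTO", "Flour Tortilla", Ingredient.Type.WRAP));
}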

15.2.1. Mitigating latency

Circuit breakers can also mitigate latency by timing out if a method is taking too long to return. By default, all methods annotated with @HystrixCommand time out after 1 second, falling back to their declared fallback method. That means that if, for some reason, the ingredient service is sluggish in responding, then the call to getAllIngredients() times out after 1 second, and getDefaultIngredients() will be called instead.

The one-second timeout is a reasonable default and suitable for most use cases. But you can change it to be more or less restrictive by specifying a Hystrix command property. Setting Hystrix command properties can be done through the commandProperties attribute of the @HystrixCommand annotation. The commandProperties attribute is an array of one or more @HystrixProperty annotations that specify a name and a value of the property to be set.[1]

[1] If you're like me, you'll agree that using annotations to set attributes of an annotation is weird. Weird or not, that's still how it's done.

In order to tweak the timeout of a circuit breaker, you need to set the Hystrix command property execution.isolation.thread.timeoutInMilliseconds. For example, to tighten the timeout period on the getAllIngredients() method to a half second, you can set the timeout to 500 as follows:

@HystrixCommand(
    fallbackMethod="getDefaultIngredients",
    commandProperties={
        @HystrixProperty(
            name="execution.isolation.thread.timeoutInMilliseconds",
            value="500")
    })
public Iterable<Ingredient> getAllIngredients() {
  ...
}

The value given is in milliseconds. If you want to loosen up the restriction, you can set it to some higher value. Or, if you don’t think that there should be a timeout imposed, then you can remove the timeout altogether by setting the command property execution.timeout.enabled to false:

@HystrixCommand(
    fallbackMethod="getDefaultIngredients",
    commandProperties={
        @HystrixProperty(
            name="execution.timeout.enabled",
            value="false")
    })
public Iterable<Ingredient> getAllIngredients() {
  ...
}

When the execution.timeout.enabled property is set to false, there’s no latency protection. In this case, whether the getAllIngredients() method takes 1 second, 10 seconds, or 30 minutes, it won’t time out. This could cause a cascading latency effect, so care should be taken when disabling execution timeouts.

15.2.2. Managing circuit breaker thresholds

By default, if a circuit breaker protected method is invoked over 20 times, and more than 50% of those invocations fail over a period of 10 seconds, the circuit will be thrown into an open state. All subsequent calls will be handled by the fallback method. After 5 seconds, the circuit will enter a half-open state, and the original method will be attempted again.

You can tweak the failure and retry thresholds by setting the Hystrix command properties. The following command properties influence the conditions that result in a circuit breaker being thrown:

  • circuitBreaker.requestVolumeThreshold: The minimum number of times a method must be called within a given time period before the failure percentage is considered
  • circuitBreaker.errorThresholdPercentage: The percentage of failed method invocations, within a given time period, above which the circuit should open
  • metrics.rollingStats.timeInMilliseconds: The rolling time period over which the request volume and error percentage are considered
  • circuitBreaker.sleepWindowInMilliseconds: How long an open circuit remains open before entering a half-open state, at which point the original failing method is attempted again

If both circuitBreaker.requestVolumeThreshold and circuitBreaker.errorThresholdPercentage are exceeded within the time specified by metrics.rollingStats.timeInMilliseconds, then the circuit breaker enters an open state. It remains open for as long as specified by circuitBreaker.sleepWindowInMilliseconds, at which point it becomes half open, and the original failing method will be attempted again.

For example, suppose that you want to adjust the failure settings such that the method must be invoked more than 30 times and fail more than 25% of the time within 20 seconds. For that, you’ll need to set the following Hystrix command properties:

@HystrixCommand(
    fallbackMethod="getDefaultIngredients",
    commandProperties={
        @HystrixProperty(
            name="circuitBreaker.requestVolumeThreshold",
            value="30"),
        @HystrixProperty(
            name="circuitBreaker.errorThresholdPercentage",
            value="25"),
        @HystrixProperty(
            name="metrics.rollingStats.timeInMilliseconds",
            value="20000")
    })
public Iterable<Ingredient> getAllIngredients() {
  // ...
}

Additionally, should you decide that, once thrown, the circuit breaker must remain open for up to 1 full minute before becoming half open, then you can also set the circuitBreaker.sleepWindowInMilliseconds command property:

@HystrixCommand(
    fallbackMethod="getDefaultIngredients",
    commandProperties={
        ...
        @HystrixProperty(
            name="circuitBreaker.sleepWindowInMilliseconds",
            value="60000")
    })

Aside from gracefully handling method failures and latency, Hystrix also publishes a stream of metrics for each circuit breaker in an application. Next up, let’s take a look at how to monitor the health of a Hystrix-enabled application by way of the Hystrix stream.

15.3. Monitoring failures

Every time a circuit breaker protected method is invoked, several pieces of data are collected about the invocation and published in an HTTP stream that can be used to monitor the health of the running application in real time. Among the data collected for each circuit breaker, the Hystrix stream includes the following:

  • How many times the method is called
  • How many times it’s called successfully
  • How many times the fallback method is called
  • How many times the method times out

The Hystrix stream is provided by an Actuator endpoint. We’ll talk more about Actuator in chapter 16. But, for now, the Actuator dependency needs to be added to the build for all the services to enable the Hystrix stream. In a Maven pom.xml file, the following starter dependency adds Actuator to a project:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

The Hystrix stream endpoint is exposed at the path /actuator/hystrix.stream. By default, most of the Actuator endpoints are disabled, but you can enable the Hystrix stream endpoint with the following configuration in each application's application.yml file:

management:
  endpoints:
    web:
      exposure:
        include: hystrix.stream

Optionally, the management.endpoints.web.exposure.include property can be made global for all of your services by placing it in the application.yml configuration properties that are served by the Config Server.

Once the application is running, the Hystrix stream is exposed and can be consumed using any REST client you want. But before you set out to write a custom Hystrix stream client, be aware that each entry in the HTTP stream is rich with all kinds of JSON data, and it'll require a lot of client-side work to interpret that data. Although writing your own Hystrix stream presentation client isn't an impossible task, consider using the Hystrix dashboard before expending much effort on a dashboard of your own.

15.3.1. Introducing the Hystrix dashboard

To use the Hystrix dashboard, you first need to create a new Spring Boot application with a dependency on the Hystrix dashboard starter. If you’re using the Spring Boot Initializr to create the project, you’ll select the Hystrix Dashboard check box. Otherwise, you’ll need to add the following <dependency> to your project’s Maven pom.xml file:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-netflix-hystrix-dashboard</artifactId>
</dependency>

Once the project has been initialized, you’ll also need to enable the Hystrix dashboard by annotating the main configuration class with @EnableHystrixDashboard:

@SpringBootApplication
@EnableHystrixDashboard
public class HystrixDashboardApplication {
  public static void main(String[] args) {
    SpringApplication.run(HystrixDashboardApplication.class, args);
  }
}

At development time, you’ll be running the Hystrix dashboard alongside all of your other services, as well as Eureka and Config Server on your local machine. Therefore, to avoid port conflicts, you’ll need to pick a unique port for the Hystrix dashboard. In the dashboard application’s application.yml file, set the server.port property to any unique value you want. I usually set it to 7979, like this:

server:
  port: 7979

Now you’re ready to fire up the Hystrix dashboard and kick the tires on it. Once it’s running, open your web browser to http://localhost:7979/hystrix. You should see the Hystrix dashboard homepage, as shown in figure 15.2.

Figure 15.2. The Hystrix dashboard homepage

The first thing you'll notice about the Hystrix dashboard homepage is the logo, the cartoonish porcupine mascot of the Hystrix project. To start viewing a Hystrix stream, enter the URL of one of the service applications' Hystrix streams into the text box. For example, if the ingredient service is running on localhost and listening on port 59896 (thanks to setting server.port to 0), then you'd enter http://localhost:59896/actuator/hystrix.stream into the text box.

You can also set a delay and a title to display on the Hystrix stream monitor. The delay, which defaults to 2 seconds, is the time between polling cycles, which effectively slows down the stream. The title is merely displayed as a heading on the monitor page. For your needs, though, the defaults are perfectly fine.

Click the Monitor Stream button to be taken to the Hystrix stream monitor. You should see a page that looks something like figure 15.3.

Figure 15.3. The Hystrix stream monitor page shows the metrics from each of an application’s circuit breakers.

Each circuit breaker can be viewed as a graph along with some other useful metrics data. Figure 15.3 shows a single circuit breaker for getAllIngredients(), because that’s the only circuit breaker you’ve declared so far.

If you don’t see any graphs representing each circuit breaker and all you see is the word Loading, that’s probably because none of the circuit breaker methods has been called yet. You must make a request to the service that would trigger a circuit breaker protected method for that method’s circuit breaker metrics to appear in the dashboard. Figure 15.4 takes a closer look at an individual circuit breaker monitor, providing a breakdown of the information presented.

Figure 15.4. Each circuit breaker monitor provides useful information regarding the current state of the circuit breaker.

The most noticeable part of the monitor is the graph in the top-left corner. The line graph represents the traffic for the given method over the past 2 minutes, giving a brief history of how busy the method has been.

The background of the graph has a circle whose size and color fluctuate. The size of the circle indicates the current traffic volume; the bigger the circle grows, the higher the traffic flow. The circle color indicates its health. Green indicates healthy, yellow indicates an occasionally failing circuit breaker, and red indicates a failing circuit breaker.

The top right of the monitor shows various counters presented in three columns. Going top-down in the leftmost column, the first number (in green; see the electronic versions of this book for color) shows how many invocations are currently succeeding, the second number (blue) is the number of short-circuited requests, and the last number (cyan) is the count of bad requests. The middle column shows the number of requests that have timed out (yellow), the number rejected by the thread pool (purple), and the number of failing requests (red). The third column shows the percentage of errors in the past 10 seconds.

Below the counters are two numbers representing the number of requests per second for the host and for the cluster. Below those two request rates is the status of the circuit. The bottom of the monitor shows median and mean latency, as well as the latency for the 90th, 99th, and 99.5th percentiles.

15.3.2. Understanding Hystrix thread pools

Imagine that a method is taking an excessive amount of time to do its job. Perhaps that method is making an HTTP request to another service, and the service is sluggish in responding. Until the service responds, Hystrix blocks the thread, waiting for a response.

If the method is executing in the context of the same thread as the caller of the method, then the caller doesn’t have an opportunity to walk away from the long-running method. Moreover, if the blocked thread is one of a limited set of threads, such as a request-handling thread from Tomcat, and if the problem persists, then scalability can take a hit when all the threads are saturated and waiting for responses.

To avoid this situation, Hystrix assigns a thread pool for each dependency (for example, for each Spring bean with one or more Hystrix command methods). When one of the Hystrix command methods is called, it’ll be executed in a thread from the Hystrix-managed thread pool, isolating it from the calling thread. This allows the calling thread to give up and walk away from the call if it’s taking too long, and isolates any potential thread saturation to the Hystrix-managed thread pool.
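If the defaults don't suit a particular command, the thread pool it uses can be tuned through the threadPoolKey and threadPoolProperties attributes of @HystrixCommand. The following is only a sketch; the pool name ingredientsPool and the sizes shown are illustrative values, not recommendations:

@HystrixCommand(
    fallbackMethod="getDefaultIngredients",
    threadPoolKey="ingredientsPool",
    threadPoolProperties={
        @HystrixProperty(name="coreSize", value="10"),
        @HystrixProperty(name="maxQueueSize", value="-1")
    })
public Iterable<Ingredient> getAllIngredients() {
  ...
}

Here, coreSize sets the number of threads in the pool, and a maxQueueSize of -1 means a synchronous hand-off is used, so excess requests are rejected (and fall back) rather than queued.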

You may have noticed that in addition to the circuit breaker monitor, figure 15.3 also showed another monitor near the bottom of the page, under a header titled Thread Pools. This section includes a monitor for each Hystrix-managed thread pool. Figure 15.5 shows an individual thread pool monitor, annotated to describe the data it presents.

Figure 15.5. Thread pool monitors show vital statistics about each of the Hystrix-managed thread pools.

Much like the circuit breaker monitor, each thread pool monitor includes a circle in its upper-left corner. The size and color of this circle indicate how active the thread pool is currently, as well as its health. Unlike the circuit breaker monitor, however, thread pool monitors don’t display a line graph showing thread pool activity over the past few minutes.

The thread pool’s name is displayed in the upper-right corner, above the statistics showing the number of requests per second being handled by the threads in the thread pool. The lower-left corner of the thread pool monitor displays the following information:

  • Active thread count: The current number of active threads.
  • Queued thread count: How many threads are currently queued. By default, queuing is disabled, so this value is always 0.
  • Pool size: How many threads are in the thread pool.

Meanwhile, the lower-right corner displays this information about the thread pool:

  • Maximum active thread count: The maximum number of active threads over the current sampling period.
  • Execution count: The number of times that threads in the thread pool have been called on to handle executions of Hystrix commands.
  • Queue size: The size of the thread pool queue. Thread queueing is disabled by default, so this value has little meaning.

It’s worth noting that as an alternative to Hystrix thread pooling, you can choose to use semaphore isolation. Semaphore isolation, however, is a more advanced usage of Hystrix and thus outside of the scope of this chapter. Refer to the Hystrix documentation for more information.
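That said, for reference, switching a command over to semaphore isolation boils down to setting a couple of command properties, along the lines of the following sketch (the maxConcurrentRequests value here is just an illustrative number):

@HystrixCommand(
    fallbackMethod="getDefaultIngredients",
    commandProperties={
        @HystrixProperty(
            name="execution.isolation.strategy",
            value="SEMAPHORE"),
        @HystrixProperty(
            name="execution.isolation.semaphore.maxConcurrentRequests",
            value="10")
    })
public Iterable<Ingredient> getAllIngredients() {
  ...
}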

Now that you’ve seen the Hystrix dashboard in action, let’s consider how to deal with multiple streams of circuit breaker data and how to aggregate them into a single stream to be viewed in the Hystrix dashboard.

15.4. Aggregating multiple Hystrix streams

The Hystrix dashboard is only capable of monitoring a single stream at a time. Because each instance of each microservice publishes its own Hystrix stream, it’s almost impossible to get a holistic view of an application’s health.

Fortunately, another Netflix project, Turbine, offers a way to aggregate all of the Hystrix streams from all the microservices into a single stream that the Hystrix dashboard can monitor. Spring Cloud Netflix supports creating a Turbine service using an approach similar to creating other Spring Cloud services. To create a Turbine service, create a new Spring Boot project and include the Turbine starter dependency in the build:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-netflix-turbine</artifactId>
</dependency>
Note

Because this is a new project, it's easiest to simply check the Turbine check box in the Initializr when creating the new Spring Boot project.

Once you create the project, you’ll need to enable Turbine. To do that, add the @EnableTurbine annotation to the application’s main configuration class:

@SpringBootApplication
@EnableTurbine
public class TurbineServerApplication {
  public static void main(String[] args) {
    SpringApplication.run(TurbineServerApplication.class, args);
  }
}

For development purposes, you’ll run Turbine locally, alongside the other services in the Taco Cloud application. To avoid port conflicts, you’ll need to select a unique port for Turbine so that it doesn’t conflict with any of the other services. You can pick any port you like, but I tend to choose port 8989:

server:
  port: 8989

Turbine works by consuming the streams from multiple microservices and merging the circuit breaker metrics into a single stream. It acts as a client of Eureka, discovering the services whose streams it’ll aggregate into its own stream. But Turbine doesn’t assume that it should aggregate the streams of all services registered in Eureka; you must configure Turbine to tell it which services it should work with.

The turbine.app-config property accepts a comma-delimited list of service names to look up in Eureka, whose Hystrix streams Turbine should aggregate. For Taco Cloud, you'll need Turbine to aggregate the streams for the four services registered in Eureka as ingredient-service, taco-service, order-service, and user-service. The following entry in application.yml shows how to set turbine.app-config:

turbine:
  app-config: ingredient-service,taco-service,order-service,user-service
  cluster-name-expression: "'default'"

Notice that in addition to turbine.app-config, you also set turbine.cluster-name-expression to 'default'. This indicates that Turbine should collect all of the aggregated streams under a cluster whose name is default. It’s important to set this cluster name or else the Turbine stream won’t contain any stream data aggregated from the specified applications.
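Putting the pieces together, the Turbine application's complete application.yml might look something like the following sketch. The eureka.client entry is an assumption on my part (it presumes a Eureka server running locally on port 8761, as set up in an earlier chapter); adjust it to match your own registry:

server:
  port: 8989

eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka/

turbine:
  app-config: ingredient-service,taco-service,order-service,user-service
  cluster-name-expression: "'default'"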

Now you can fire up the Turbine server application and point your Hystrix dashboard at the stream at http://localhost:8989/turbine.stream. All the circuit breakers from all of the specified applications will be displayed in the circuit breaker dashboard. Figure 15.6 shows how this might look.

Figure 15.6. The Hystrix dashboard shows all circuit breakers from all services when pointed at an aggregated Turbine stream.

Now that the Hystrix dashboard is displaying health information for all the circuit breakers in all of your microservices—thanks to Turbine—you get a one-stop shop for monitoring the health of the circuit breakers in the Taco Cloud application.

Summary

  • The circuit breaker pattern enables graceful failure handling.
  • Hystrix implements the circuit breaker pattern, enabling fallback behavior when a method fails or is too slow.
  • Each circuit breaker provided by Hystrix publishes metrics in a stream of data for purposes of monitoring the health of an application.
  • The Hystrix stream can be consumed by the Hystrix dashboard, a web application that visualizes circuit breaker metrics.
  • Turbine aggregates multiple Hystrix streams from multiple applications into a single stream that can be visualized together in the Hystrix dashboard.