Best practices

When deploying ElastiCache clusters, the best practices will always depend on the type of data that needs to be stored. The best approach to choosing and scaling your caching solution is to ask yourself the following questions:

  • What type of data am I storing?
  • What is the volume, frequency, and concurrency of data coming in/out of the cache?
  • Do I need support for transactions on the cache?
  • Do I need high availability and high resilience of the caching cluster?

When choosing your cache, the first question is the most important. The data type determines which of the two services you can use: are we storing simple values, or complex datasets? We also need to ask what the data update volume is: do we need very high parallel performance, or is the application single-threaded? Is there any requirement to support transactions on the caching cluster, and should it be highly available and resilient? These questions determine which caching engine to use and which best practices to apply to it. Are we storing simple datasets and simple values? Then go with Memcached. Is the application multithreaded and able to drive the cache across many cores? Memcached. No need for transactions and no need for cross-Availability Zone clusters? Again, Memcached.
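The decision logic above can be sketched as a small helper. The field names and the structure of `CacheRequirements` are illustrative assumptions for this sketch, not anything AWS provides:

```python
# Hypothetical helper encoding the engine-selection questions above.
# Any Redis-specific requirement rules Memcached out; otherwise Memcached
# is a good fit for simple values and multithreaded workloads.

from dataclasses import dataclass


@dataclass
class CacheRequirements:
    complex_datasets: bool         # hashes, lists, sorted sets rather than plain values
    needs_transactions: bool       # transactional operations on the cache
    needs_high_availability: bool  # cross-AZ replication and failover
    multithreaded_app: bool        # can exploit Memcached's multithreaded engine


def choose_engine(req: CacheRequirements) -> str:
    """Return 'memcached' only when no Redis-specific feature is required."""
    if req.complex_datasets or req.needs_transactions or req.needs_high_availability:
        return "redis"
    return "memcached"


# Simple values, no transactions, no cross-AZ resilience -> Memcached:
print(choose_engine(CacheRequirements(False, False, False, True)))   # memcached
# Complex datasets and high availability -> Redis:
print(choose_engine(CacheRequirements(True, False, True, False)))    # redis
```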

In pretty much all other cases, Redis should be considered, as it can deliver high availability and read replicas, and supports complex datasets and the other features mentioned before. However, being quite a beast of a service, Redis has some specifics we need to look out for. Specifically, any commands requiring administrative privileges, such as CONFIG, will not be available on an ElastiCache Redis cluster.
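Because administrative commands are blocked, engine parameters are read and changed through the ElastiCache API rather than through the Redis CLI. A sketch using boto3, assuming a parameter group named `my-redis-params` already exists (the group name and the chosen parameter value are illustrative):

```python
# Sketch: managing Redis settings via the ElastiCache API instead of CONFIG.
# Requires AWS credentials and an existing cache parameter group.
import boto3

elasticache = boto3.client("elasticache")

# Roughly what CONFIG GET would show: the parameters in the group.
params = elasticache.describe_cache_parameters(
    CacheParameterGroupName="my-redis-params"
)
for p in params["Parameters"][:5]:
    print(p["ParameterName"], p.get("ParameterValue"))

# Roughly what CONFIG SET would do: modify a parameter in the group.
elasticache.modify_cache_parameter_group(
    CacheParameterGroupName="my-redis-params",
    ParameterNameValues=[
        {"ParameterName": "maxmemory-policy", "ParameterValue": "volatile-lru"}
    ],
)
```

Note that parameter-group changes apply to every cluster using the group, which is the intended replacement for per-node administrative commands.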

Being an in-memory database, Redis is tricky to manage when we need to persist its data. We can take a snapshot of the Redis database state at any time, but snapshotting consumes additional memory, so we need to make sure the instance has enough memory for the active dataset being snapshotted, plus any data coming into the cluster while the snapshot is taking place. We also assign reserved memory to the Redis cluster; this is used for non-data operations such as replication and failover, and we will need to scale our reserved memory accordingly. Configuring too little reserved memory can cause failover and replication to fail. When you run out of memory in a Redis cluster, you can always resize the environment quite easily:

  1. Log in to the management console and select your existing Redis cluster, then choose a new size for the instances that are running Redis. In the console, select the cluster and click Modify:

  2. In the Modify Cluster dialog, simply select the new size; here, we have chosen to increase our cache.t2.small to any available larger size. Next, click Modify:

  3. After you have completed the process, the cluster will remain in the modifying state until the instances are replaced with bigger ones. When you have read replicas in your cluster, this operation will be performed without downtime:
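The reserved-memory headroom discussed earlier can be sanity-checked with a quick calculation before resizing. This is an illustrative sizing sketch, not an AWS API; the node size and percentages are example numbers:

```python
# Illustrative check: with reserved memory carved out, the active dataset plus
# any writes arriving during a snapshot must still fit on the node.

def usable_data_memory_gib(node_memory_gib: float,
                           reserved_memory_percent: float) -> float:
    """Memory left for data after the reserved-memory percentage is set aside."""
    return node_memory_gib * (1 - reserved_memory_percent / 100)


def snapshot_fits(node_memory_gib: float, reserved_memory_percent: float,
                  dataset_gib: float, inflow_during_snapshot_gib: float) -> bool:
    """True when the dataset plus snapshot-time inflow fits in the data region."""
    return (dataset_gib + inflow_during_snapshot_gib
            <= usable_data_memory_gib(node_memory_gib, reserved_memory_percent))


# A 6 GiB node with 25% reserved leaves 4.5 GiB for data:
print(usable_data_memory_gib(6, 25))   # 4.5
print(snapshot_fits(6, 25, 4.0, 0.3))  # True
print(snapshot_fits(6, 25, 4.0, 0.8))  # False: time to resize
```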
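The console steps above can also be performed programmatically. A sketch using boto3, assuming a cluster named `my-redis-cluster` and a target node type of `cache.t2.medium` (both illustrative):

```python
# Sketch: resizing an ElastiCache cluster via the API.
# Requires AWS credentials and an existing cluster.
import boto3

elasticache = boto3.client("elasticache")

# Request the new node size, applying the change immediately rather than
# waiting for the next maintenance window.
elasticache.modify_cache_cluster(
    CacheClusterId="my-redis-cluster",
    CacheNodeType="cache.t2.medium",
    ApplyImmediately=True,
)

# Block until the cluster leaves the "modifying" state.
waiter = elasticache.get_waiter("cache_cluster_available")
waiter.wait(CacheClusterId="my-redis-cluster")
```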

We should also consider which caching strategy to use by determining whether we are storing the data as is or need to cache the responses of complex queries against the database. We will also need to determine the TTL for the data so that the strategy best suits our needs.
