
Amazon ElastiCache

Database · In-Memory Caching

Amazon ElastiCache is a managed in-memory caching service that keeps frequently read data in memory and returns it quickly. It reduces both latency and database load by absorbing repetitive lookups in front of the primary data store.

Architecture Diagram


Why do you need it?

If every request rereads the same product or session data from the primary database, latency rises and the database is the first component to feel the pressure. When the read pattern is repetitive but every request still hits the source of truth, bottlenecks appear early as traffic grows.

Why did this approach emerge?

Applications initially ran caches in process memory or on self-managed Redis, but scaling and failover were cumbersome to operate by hand. ElastiCache emerged to treat the cache as a managed service, operated much like a managed database.

How does it work inside?

ElastiCache provides Redis and Memcached as managed services, where the application checks the cache first and only queries the primary store on a miss. Redis can extend beyond caching into queue-like and broker-like roles, and serverless options reduce capacity-planning work through multi-AZ redundancy and automatic scaling.
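The check-the-cache-first flow described above is the cache-aside pattern. A minimal sketch follows; real code would use a client such as redis-py against an ElastiCache endpoint, so the `FakeCache` class and the `load_product_from_db` helper here are stand-ins, not ElastiCache APIs:

```python
import time

# Minimal in-memory stand-in for a cache client. Entries expire after a
# TTL, analogous to Redis SET with EX. (Assumption: a real deployment
# would talk to an ElastiCache endpoint via redis-py or pymemcache.)
class FakeCache:
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

DB_READS = 0  # counts trips to the "primary store"

def load_product_from_db(product_id):
    # Hypothetical primary-store lookup; stands in for a SQL/DynamoDB query.
    global DB_READS
    DB_READS += 1
    return {"id": product_id, "name": f"product-{product_id}"}

def get_product(cache, product_id):
    # Cache-aside: check the cache first, fall back to the database on a
    # miss, then populate the cache so later reads are served from memory.
    key = f"product:{product_id}"
    value = cache.get(key)
    if value is None:
        value = load_product_from_db(product_id)
        cache.set(key, value, ttl_seconds=60)
    return value

cache = FakeCache()
get_product(cache, 42)  # miss: reads the database, fills the cache
get_product(cache, 42)  # hit: served from memory, no database trip
print(DB_READS)         # -> 1
```

The TTL is the main tuning knob in this pattern: a longer TTL absorbs more reads but serves staler data after the primary store changes.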

What is it often confused with?

ElastiCache and DynamoDB are both used for fast lookups, but they serve different roles. ElastiCache is a cache in front of primary data, while DynamoDB is a database that acts as the durable primary store itself. If the goal is to reduce load on an existing data store and lower response times, look at ElastiCache; if the problem is designing the durable store itself, look at DynamoDB.

When should you use it?

Well-suited for session storage, frequently queried keys, rankings, low-latency read optimization, and very fast temporary state. It should be seen as a layer that complements rather than replaces the primary database.
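The leaderboard use case maps naturally onto Redis sorted sets. As a rough sketch of the semantics (not the redis-py API), the `Leaderboard` class below mimics the behavior of the ZADD, ZINCRBY, and ZREVRANGE commands that a Redis-compatible ElastiCache cache exposes:

```python
# Tiny leaderboard mirroring Redis sorted-set semantics: each member has a
# score, and ranking queries return members ordered by score.
class Leaderboard:
    def __init__(self):
        self._scores = {}

    def zadd(self, member, score):
        # Set a member's score, like ZADD key score member.
        self._scores[member] = score

    def zincrby(self, member, delta):
        # Atomically bump a score, like ZINCRBY key delta member.
        self._scores[member] = self._scores.get(member, 0) + delta
        return self._scores[member]

    def zrevrange(self, start, stop):
        # Highest scores first with inclusive bounds, like
        # ZREVRANGE key start stop WITHSCORES.
        ranked = sorted(self._scores.items(), key=lambda kv: kv[1], reverse=True)
        return ranked[start:stop + 1]

board = Leaderboard()
board.zadd("alice", 120)
board.zadd("bob", 95)
board.zincrby("bob", 40)      # bob's score becomes 135
print(board.zrevrange(0, 1))  # -> [('bob', 135), ('alice', 120)]
```

Because the sorted structure lives in memory, rank queries stay fast even as scores update continuously, which is why this workload is a poor fit for repeated ORDER BY queries against the primary database.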

Database caching · Session management · Real-time leaderboards · Pub/Sub messaging