Detailed comparison between Redis Cache and Memcached Cache


1. Quick Overview

Criterion | Redis | Memcached
Type | In‑memory datastore with rich data types | Simple in‑memory key–value store
Threading model | Single‑threaded | Multi‑threaded
Data types | Strings, Lists, Sets, Hashes, Sorted Sets, … | Only key–value (binary‑safe)
Persistence | Yes (RDB/AOF), configurable | No (RAM only)
Replication/Cluster | Built‑in replication, clustering, sharding | Only manual sharding
Pub/Sub, Lua | Yes | No

2. Performance

  • Memcached
    • Multi‑threaded, leverages multiple CPU cores for extremely high throughput on simple GET/SET.
    • Typical latency: around 0.1–0.5 ms on standard workloads.
  • Redis
    • Single‑threaded but highly optimized; pipelining and I/O multiplexing can reach >100 000 ops/sec on commodity hardware.
    • Supports pipelining to reduce round trips and Lua scripting to batch operations atomically.

Head‑to‑head, Memcached may edge out Redis by a few percent in throughput on pure GET/SET workloads; once pipelining, batching, or advanced data structures come into play, however, Redis often outperforms it.
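
For a rough feel of what pipelining changes, redis-benchmark (shipped with Redis) can be run with and without it. Absolute numbers depend entirely on hardware and configuration, so treat this only as an illustration:

# One command per round trip
redis-benchmark -t set,get -n 100000 -q

# 16 commands per round trip via pipelining (-P), usually a large throughput gain
redis-benchmark -t set,get -n 100000 -P 16 -q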


3. Advantages & Disadvantages

Redis

Advantages

  1. Rich data types: lists, sets, hashes, sorted sets, streams, etc., ideal for caching, leaderboards, queues, and more (see the example after this list).
  2. Persistence: RDB snapshots and AOF logs allow durable storage and recovery.
  3. High‑availability: supports replicas, Sentinel for auto‑failover, and clustering for sharding.
  4. Pub/Sub & Streams: native support for messaging and event streaming.
  5. Lua scripting & transactions: atomic multi‑command operations, fewer network round‑trips.
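
As a quick illustration of points 1 and 5, a sorted set can back a leaderboard directly from redis-cli, and a short Lua script runs server‑side in a single round trip (key and member names below are made up for the example):

redis-cli ZADD leaderboard 120 alice 95 bob 140 carol
# Top three members with their scores
redis-cli ZREVRANGE leaderboard 0 2 WITHSCORES
# Read a single score via a server-side Lua script
redis-cli EVAL "return redis.call('ZSCORE', KEYS[1], ARGV[1])" 1 leaderboard alice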

Disadvantages

  1. Single‑threaded: a single instance uses one CPU core, limiting per‑instance parallelism.
  2. Memory overhead: rich data structures consume more RAM compared to Memcached.
  3. Operational complexity: more features require careful configuration and monitoring.

Memcached

Advantages

  1. Simplicity & lightweight: key–value only, minimal footprint, easy to deploy (see the example after this list).
  2. Multi‑threaded: naturally leverages all available CPU cores for high throughput.
  3. Memory‑efficient: very low per‑item overhead, perfect for pure caching.
  4. Mature ecosystem: many client libraries, plugins, and modules.
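
As an illustration of the first point, the entire text protocol can be exercised with nothing more than nc, assuming a Memcached instance on the default localhost:11211:

# Store a 5-byte value for 900 seconds, then read it back
printf 'set greeting 0 900 5\r\nhello\r\nget greeting\r\nquit\r\n' | nc localhost 11211
# Expected output: STORED, then VALUE greeting 0 5 / hello / END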

Disadvantages

  1. No persistence: cache is lost on restart.
  2. Limited data model: only key–value, no advanced structures.
  3. No built‑in HA: sharding and failover must be managed externally.

4. Basic Deployment Steps

Deploying Memcached

Install

sudo apt-get update
sudo apt-get install memcached

Configure (/etc/memcached.conf)

  • -m: memory in MB
  • -p: port (default 11211)
  • -u: run user
  • -t: number of threads
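
A minimal /etc/memcached.conf using the options above might look like this (values are illustrative, not recommendations):

# Cap cache memory at 256 MB
-m 256
# Listen on the default port
-p 11211
# Run as the memcache user
-u memcache
# Use 4 worker threads
-t 4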

Start & monitor

systemctl enable memcached
systemctl start memcached

Check stats with memcached-tool or by sending the stats command over telnet/nc.
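
For example, either of the following dumps hit/miss counters, evictions, and memory usage, assuming the default port (memcached-tool may live under /usr/share/memcached/scripts/ depending on the distribution):

memcached-tool 127.0.0.1:11211 stats
printf 'stats\r\nquit\r\n' | nc localhost 11211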

Sharding: distribute keys across instances on the client side or through a proxy (Twemproxy, …).

Deploying Redis

Install

sudo apt-get update
sudo apt-get install redis-server

Configure (/etc/redis/redis.conf)

  • maxmemory & maxmemory-policy
  • Enable persistence: save (RDB), appendonly yes (AOF)
  • Replication: replicaof host port
  • Clustering options if needed
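
A minimal sketch of the corresponding /etc/redis/redis.conf entries (values are illustrative; the replicaof line belongs only on a replica node, and the address shown is a placeholder):

# Evict least-recently-used keys once 512 MB is used
maxmemory 512mb
maxmemory-policy allkeys-lru
# RDB snapshot if at least 1 key changed within 900 seconds
save 900 1
# Append-only file for finer-grained durability
appendonly yes
# On a replica only (placeholder address):
# replicaof 10.0.0.5 6379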

Start & monitor

systemctl enable redis-server
systemctl start redis-server

Use redis-cli info to inspect stats, replication, and memory usage.
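
For instance, the following narrow the output to memory and replication details:

redis-cli info memory | grep used_memory_human
redis-cli info replication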

Scaling

  • Replicas: add Redis replicas for read scaling.
  • Sentinel: configure Sentinel for automatic failover (HA).
  • Cluster mode: automatic sharding across multiple nodes.
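
As a sketch, once six nodes are running with cluster-enabled yes (three primaries plus three replicas here; hostnames are placeholders), the cluster can be bootstrapped with:

redis-cli --cluster create node1:6379 node2:6379 node3:6379 node4:6379 node5:6379 node6:6379 --cluster-replicas 1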

5. When to Choose Which

Use Case | Choose Redis | Choose Memcached
Simple key–value caching only | | ✔
Extreme throughput across many cores | | ✔
Need complex data types (queues, sorted sets) | ✔ |
Need persistence & recovery on restart | ✔ |
Need built‑in pub/sub or messaging | ✔ |
Require auto‑failover and high availability | ✔ (Sentinel/Cluster) | ❌ (external management)
Want minimal configuration & footprint | | ✔

Conclusion

  • Use Memcached when your application demands a simple, ultra‑fast in‑memory cache with minimal operational overhead and no need for persistence or complex data types.
  • Use Redis when you require rich data structures, durability, replication/HA, pub/sub, or scripting capabilities, or when you plan to scale out via clustering.