丁久

Posted on • Originally published at dingjiu1989-hue.github.io

Redis vs Memcached: Caching Solution Comparison

This article was originally published on AI Study Room. For the full version with working code examples and related articles, visit the original post.

Introduction

Redis and Memcached are the two most widely used in-memory data stores, but they serve different purposes despite overlapping use cases. Memcached is a purpose-built cache; Redis is a versatile data structure server that happens to excel at caching. Choosing between them requires understanding their architectural differences and the specific requirements of your application.

Data Structure Support

Redis: Rich Data Types

Redis supports a wide range of data structures beyond simple key-value pairs:

import redis.asyncio as redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Strings (basic key-value)
await r.set('user:1000:name', 'Alice')
name = await r.get('user:1000:name')

# Lists (ordered collection, great for queues)
await r.lpush('notifications:queue', 'email_1', 'email_2')
notification = await r.brpop('notifications:queue', timeout=5)

# Sets (unique members, set operations)
await r.sadd('user:1000:roles', 'admin', 'editor', 'viewer')
await r.sadd('user:1001:roles', 'editor', 'viewer')
common_roles = await r.sinter('user:1000:roles', 'user:1001:roles')
# Returns: {'editor', 'viewer'}

# Sorted Sets (leaderboards, rate limiting)
await r.zadd('leaderboard:weekly', {'user:1000': 1500, 'user:1001': 2300})
top_players = await r.zrevrange('leaderboard:weekly', 0, 9, withscores=True)

# Hashes (objects)
await r.hset('product:500', mapping={
    'name': 'Widget',
    'price': 29.99,
    'stock': 100,
})
product = await r.hgetall('product:500')

# Bitmaps (analytics, feature flags)
await r.setbit('active:users:2026-05-12', 1000, 1)  # bit offset = user ID
daily_active = await r.bitcount('active:users:2026-05-12')

# Streams (event log, message queue)
await r.xadd('order:events', {'order_id': '123', 'status': 'created'})
events = await r.xread({'order:events': '0'}, count=10)
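Sorted sets are called out above for rate limiting, and the pattern is worth a concrete sketch. Against a live server it is typically `ZADD` + `ZREMRANGEBYSCORE` + `ZCARD` in a pipeline; the pure-Python model below (no Redis required, all names are my own) mirrors that logic so the idea is clear:

```python
import time

class SlidingWindowLimiter:
    """Models the sorted-set rate-limit pattern: each request is a
    timestamp-scored member; entries older than the window are pruned,
    and the remaining count is compared against the limit."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.hits = []  # stands in for a Redis sorted set of timestamps

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # ZREMRANGEBYSCORE equivalent: drop entries outside the window
        cutoff = now - self.window
        self.hits = [t for t in self.hits if t > cutoff]
        if len(self.hits) >= self.limit:  # ZCARD check against the limit
            return False
        self.hits.append(now)  # ZADD equivalent: record this request
        return True

limiter = SlidingWindowLimiter(limit=3, window_seconds=10)
results = [limiter.allow(now=t) for t in (0, 1, 2, 3)]
# First three requests pass; the fourth falls inside the window and is throttled
```

In production you would keep the three Redis commands in a single pipeline (or a Lua script) so the prune-count-add sequence is atomic.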

Memcached: Simple Key-Value

Memcached provides a minimal key-value API:

from pymemcache.client.base import Client

client = Client(('localhost', 11211))

# Basic get/set
profile_data = '{"name": "Alice", "plan": "pro"}'  # Memcached stores bytes/strings
client.set('user:1000:profile', profile_data, expire=3600)
profile = client.get('user:1000:profile')

# Multi-get for batch operations
users = client.get_multi([
    'user:1000:profile',
    'user:1001:profile',
    'user:1002:profile',
])

# Atomic operations
client.add('lock:payment:123', 'locked', expire=30)   # Succeeds only if key does not exist
updated_profile = '{"name": "Alice", "plan": "enterprise"}'
client.replace('user:1000:profile', updated_profile)  # Succeeds only if key exists
client.append('log:buffer', 'new entry\n')  # Append to existing value
client.prepend('log:buffer', 'header\n')  # Prepend

# Increment/Decrement
client.set('counter:api:day', '0')
client.incr('counter:api:day', 1)

Persistence and Durability

| Feature          | Redis                                 | Memcached        |
| ---------------- | ------------------------------------- | ---------------- |
| Persistence      | RDB snapshots, AOF logs               | None (ephemeral) |
| Recovery         | Automatic on restart                  | All data lost    |
| Replication      | Master-replica, Sentinel, Cluster     | No replication   |
| Durability modes | fsync policies (always, everysec, no) | N/A              |

Redis persistence configuration:

# redis.conf
# RDB snapshot (point-in-time)
save 900 1     # Snapshot after 900s if at least 1 key changed
save 300 10    # Snapshot after 300s if at least 10 keys changed
save 60 10000  # Snapshot after 60s if at least 10000 keys changed

# AOF (append-only log)
appendonly yes
appendfsync everysec  # fsync every second
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# Hybrid persistence (available since Redis 4.0, default since 5.0)
aof-use-rdb-preamble yes  # RDB prefix in the AOF for faster loading
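The `save` rules compose with OR semantics: a background snapshot fires when any rule's change count is reached within its interval. A small illustrative model of that decision (my own sketch, not Redis source code):

```python
# Each rule is (seconds, min_changes), mirroring the redis.conf lines above
SAVE_RULES = [(900, 1), (300, 10), (60, 10_000)]

def should_snapshot(seconds_since_last_save, dirty_keys, rules=SAVE_RULES):
    """A snapshot triggers if ANY rule is satisfied: enough time has
    elapsed AND at least that many keys changed since the last save."""
    return any(seconds_since_last_save >= secs and dirty_keys >= changes
               for secs, changes in rules)

should_snapshot(901, 1)      # True  -> the 'save 900 1' rule fires
should_snapshot(61, 5_000)   # False -> not enough changes for any rule
should_snapshot(61, 10_000)  # True  -> the 'save 60 10000' rule fires
```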

Memory Efficiency and Eviction

Memcached: Slab Allocation

Memcached uses slab allocation to minimize fragmentation:

# Memcached slab configuration
# Memory is divided into slabs of various chunk sizes
# Items are stored in the smallest slab that fits

stats = client.stats()
# STAT slab_reassign_rescues 0
# STAT slab_reassign_evictions_nomem 0
# STAT slab_reassign_inline_reclaim 0
# STAT slab_reassign_busy_items 0

# Eviction: per-slab-class LRU only
# expire=0 means no TTL, but the item can still be evicted under memory pressure
client.set('key', 'value', expire=0, noreply=False)
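The slab mechanics are easy to model: chunk sizes grow geometrically (the default growth factor is 1.25), and an item lands in the smallest class whose chunk fits it, so the gap between item size and chunk size is the per-item internal waste. A rough sketch, where the starting size and factor match common memcached defaults but the exact numbers should be treated as illustrative:

```python
def slab_chunk_sizes(smallest=96, factor=1.25, max_item=1024 * 1024):
    """Generate slab chunk sizes: each class is the previous size times
    the growth factor, rounded up to 8-byte alignment, capped at the
    1MB default maximum item size."""
    sizes, size = [], smallest
    while size < max_item:
        sizes.append(size)
        size = int(size * factor)
        size += (8 - size % 8) % 8  # align up to 8 bytes
    sizes.append(max_item)
    return sizes

def chunk_for(item_size, sizes=None):
    """Smallest slab chunk that fits the item (key + value + item header)."""
    sizes = sizes or slab_chunk_sizes()
    for s in sizes:
        if item_size <= s:
            return s
    raise ValueError("item exceeds max item size")

chunk_for(100)  # a 100-byte item goes into the 120-byte class: 20 bytes wasted
```

This is why Memcached's effective memory use depends on how well your item sizes line up with the slab classes, not just on raw item count.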

Redis: Multiple Eviction Policies

# redis.conf eviction policies
maxmemory 2gb
maxmemory-policy allkeys-lru  # Evict least recently used keys

# Available policies:
#   noeviction:        Return errors on writes when memory full
#   allkeys-lru:       Evict LRU keys (most common)
#   allkeys-lfu:       Evict least frequently used
#   volatile-lru:      Evict LRU among keys with TTL
#   volatile-lfu:      Evict LFU among keys with TTL
#   allkeys-random:    Evict random keys
#   volatile-random:   Evict random keys with TTL
#   volatile-ttl:      Evict keys with shortest TTL
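The practical difference between `allkeys-lru` and `allkeys-lfu` shows up when a hot key goes briefly cold: LRU evicts it, LFU keeps it. A toy model of both policies (illustrative only; real Redis approximates each policy by sampling rather than tracking exactly):

```python
from collections import Counter, OrderedDict

class ToyCache:
    """Evicts one key when capacity is exceeded, by recency or frequency."""

    def __init__(self, capacity, policy="lru"):
        self.capacity, self.policy = capacity, policy
        self.data = OrderedDict()  # access order, oldest first (for LRU)
        self.freq = Counter()      # access counts (for LFU)

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)  # mark as most recently used
            self.freq[key] += 1
        return self.data.get(key)

    def set(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            if self.policy == "lru":
                victim = next(iter(self.data))                       # least recent
            else:
                victim = min(self.data, key=lambda k: self.freq[k])  # least frequent
            del self.data[victim]
            del self.freq[victim]
        self.data[key] = value
        self.data.move_to_end(key)
        self.freq[key] += 1

lru, lfu = ToyCache(2, "lru"), ToyCache(2, "lfu")
for c in (lru, lfu):
    c.set("hot", 1)
    for _ in range(5):
        c.get("hot")       # "hot" is accessed often
    c.set("cold", 2)       # then a cold key arrives and is touched last...
    c.get("cold")
    c.set("new", 3)        # ...and a write forces an eviction
# LRU evicts "hot" (least recently touched); LFU evicts "cold" (least frequent)
```

If your workload has stable hot keys with occasional scan-like traffic, `allkeys-lfu` usually protects the hot set better than `allkeys-lru`.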

Memory overhead comparison:

# Redis: roughly 50 bytes of overhead per key, plus the value itself
# Example: 1M keys with 100-byte values
total_redis = 1_000_000 * (50 + 100)      # ~150 MB

# Memcached: roughly 56 bytes of overhead per key, plus value and slab padding
# Example: 1M keys with 100-byte values
total_memcached = 1_000_000 * (56 + 100)  # ~156 MB + ~10% slab fragmentation

Clustering and High Availability


Read the full article on AI Study Room for complete code examples, comparison tables, and related resources.

Found this useful? Check out more developer guides and tool comparisons on AI Study Room.
