
How Redis Works

The Swiss-army knife of backend infrastructure — cache, queue, lock, session store, and leaderboard in one.


🔑 Key insight: Redis is single-threaded for command processing. No locks needed internally — every command is atomic by nature. This is why Redis achieves >1M ops/sec with microsecond latency.

6 Core Data Structures

Redis isn't just a key-value store. Each data structure is purpose-built for specific access patterns. Choosing the right one is the difference between O(1) and an expensive full scan.

String

Max size: 512 MB · Complexity: O(1)

The simplest type — a binary-safe byte sequence. Can store text, integers, or serialized objects. Redis can increment integers atomically without a lock.

Common Use Cases

  • Session tokens
  • Rate limit counters (INCR)
  • JSON blobs (small objects)
  • Distributed locks (SETNX + EX)

Key Commands

SET key value [EX seconds]
GET key
INCR counter
SETNX key value # only if not exists
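The "distributed lock" use case above is just SETNX plus an expiry. The sketch below simulates that pattern in plain Python, standing in for the Redis keyspace with a dict so it runs without a server; the key name and worker IDs are made up for illustration.

```python
import time

store = {}  # stand-in for the Redis keyspace: key -> (value, expiry_timestamp)

def setnx_ex(key, value, ex):
    """Simulates SET key value NX EX ex: succeeds only if the key is
    absent or its TTL has already expired."""
    entry = store.get(key)
    if entry is not None and entry[1] > time.time():
        return False                          # lock already held
    store[key] = (value, time.time() + ex)    # acquire with an expiry
    return True

# Two workers race for the same lock; only the first acquires it.
print(setnx_ex("lock:order:42", "worker-a", ex=10))  # True
print(setnx_ex("lock:order:42", "worker-b", ex=10))  # False
```

The expiry matters: if a worker crashes while holding the lock, the TTL guarantees another worker can eventually acquire it.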

Eviction Policies (maxmemory-policy)

When Redis hits maxmemory, it must evict keys. Choose the right policy for your workload.

| Policy | What gets evicted | Best for |
| --- | --- | --- |
| noeviction | Nothing — returns an error on write | Databases where data loss is unacceptable |
| allkeys-lru | Least Recently Used key, from all keys | General-purpose cache (most common) |
| volatile-lru | LRU among keys WITH a TTL set | Mix of persistent + cached data |
| allkeys-lfu | Least Frequently Used key (Redis 4+) | Power-law access patterns (few hot keys) |
| volatile-ttl | Key with the shortest remaining TTL | When you want soonest-expiring evicted first |
| allkeys-random | Random key, from all keys | Uniform access patterns |
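The most common policy, allkeys-lru, can be sketched in a few lines. Note that real Redis approximates LRU by sampling a handful of keys per eviction rather than maintaining an exact recency list; the sketch below shows the exact version of the idea, sized by key count instead of bytes for simplicity.

```python
from collections import OrderedDict

class LRUCache:
    """Exact allkeys-lru: evict the least recently used key when full.
    (Real Redis approximates this by sampling keys, and its limit is
    measured in memory, not key count.)"""
    def __init__(self, maxkeys):
        self.maxkeys = maxkeys
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)           # touching a key makes it "recent"
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.maxkeys:
            self.data.popitem(last=False)    # evict the coldest key

cache = LRUCache(maxkeys=2)
cache.set("a", 1); cache.set("b", 2)
cache.get("a")                 # "a" is now the most recently used
cache.set("c", 3)              # over capacity -> "b" is evicted
print(list(cache.data))        # ['a', 'c']
```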

Single-Threaded = Atomic

All commands serialize through one thread. INCR, SETNX, ZADD — all atomic without explicit locks.

TTL Everything

Always set expiry on cached data. Redis lazily deletes expired keys on access, plus a background sweeper runs periodically.
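The lazy half of that expiration strategy is simple enough to sketch. The class below is a toy stand-in for the Redis keyspace (the active background sweeper is omitted); the `now` parameter exists only so the example can fast-forward time.

```python
import time

class TTLStore:
    """Lazy expiration: an expired key is deleted the moment it is touched,
    as Redis does on access. (Redis also runs an active expiry cycle.)"""
    def __init__(self):
        self.data = {}           # key -> (value, expires_at or None)

    def set(self, key, value, ex=None):
        expires_at = time.time() + ex if ex is not None else None
        self.data[key] = (value, expires_at)

    def get(self, key, now=None):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and (now or time.time()) >= expires_at:
            del self.data[key]   # lazy delete on access
            return None
        return value

s = TTLStore()
s.set("session:abc", "alice", ex=30)
print(s.get("session:abc"))                         # 'alice'
print(s.get("session:abc", now=time.time() + 31))   # None: expired and deleted
```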

Pick the Right Type

Sorted Sets for rankings, Hashes for objects, HyperLogLog for cardinality, Bloom Filter (RedisBloom) for membership.

The Engineering of Redis: Why Single-Threaded Wins

Every developer is taught that concurrency requires multiple threads. So why does Redis, one of the fastest mainstream databases available and capable of processing millions of operations per second, run its core execution engine on exactly one thread? Understanding this deliberate architectural limitation is the key to understanding how Redis achieves reliable microsecond latency.


Part 1: The Context Switching Tax

In a multi-threaded database like MySQL, thousands of concurrent queries are served by thousands of dedicated OS threads (PostgreSQL uses a process per connection, but the cost is similar). When Thread A blocks waiting on disk, the OS saves Thread A's register state to RAM, loads Thread B's saved state, and resumes execution there. This is a context switch.

A context switch takes roughly 3 to 5 microseconds. But Redis is entirely in-memory, and reading from RAM takes roughly 100 nanoseconds. If Redis were multi-threaded, the CPU could spend 30 to 50 times longer switching between threads than actually executing the user's command.

By employing an Event Loop (using epoll/kqueue) pegged to a single thread, Redis avoids context switching entirely. It reads a command from the network socket, accesses RAM in nanoseconds, formats the response, and moves to the next command. No locking. No deadlocks. Pure, unadulterated execution speed.
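Python's standard selectors module wraps the same epoll/kqueue machinery, so the event-loop shape can be shown in miniature. The sketch below is a toy, not the Redis implementation: a single thread waits for readable sockets and answers a PING over an in-process socketpair standing in for a client connection.

```python
import selectors
import socket

def handle(conn):
    """Serve one ready connection: read a command, write a reply."""
    data = conn.recv(1024)
    if data.strip() == b"PING":
        conn.sendall(b"+PONG\r\n")   # RESP simple-string reply

sel = selectors.DefaultSelector()          # epoll/kqueue under the hood
client, server_side = socket.socketpair()  # stand-in for a TCP client
sel.register(server_side, selectors.EVENT_READ, handle)

client.sendall(b"PING\r\n")
# One turn of the event loop: dispatch every socket that is ready to read.
for key, _ in sel.select(timeout=1):
    key.data(key.fileobj)

print(client.recv(1024))   # b'+PONG\r\n'
```

Because the loop handles one ready socket at a time on one thread, no command ever observes another command half-applied, which is exactly the property the next section relies on.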

Part 2: Atomicity by Default

The single-threaded execution model has a massive secondary benefit: Every single command is perfectly Atomic.

If two competing web servers send an INCR counter command at the exact same moment, the Redis networking layer queues them. The execution thread cannot process both at once: it processes Command 1, and the counter goes from 0 to 1; then it processes Command 2, and the counter goes from 1 to 2.

This means developers can build complex Distributed Locks, atomic rate-limiters, and highly concurrent inventory systems using Redis without ever writing a BEGIN TRANSACTION or mutex lock, completely eliminating race conditions.
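That serialization guarantee can be demonstrated with a small simulation: many client threads fire INCR commands into a queue, and a single execution thread drains it, mimicking Redis's one-command-at-a-time model. The counter itself needs no lock, because only one thread ever mutates it.

```python
import queue
import threading

commands = queue.Queue()    # stand-in for Redis's request queue
counter = {"hits": 0}

def execution_thread():
    """The single command-processing thread: commands run one at a time."""
    while True:
        cmd = commands.get()
        if cmd is None:
            break                      # shutdown sentinel
        if cmd == "INCR hits":
            counter["hits"] += 1       # no lock: only this thread mutates

worker = threading.Thread(target=execution_thread)
worker.start()

# 50 "web servers" each fire 100 INCRs concurrently; the queue serializes them.
clients = [
    threading.Thread(target=lambda: [commands.put("INCR hits") for _ in range(100)])
    for _ in range(50)
]
for c in clients: c.start()
for c in clients: c.join()
commands.put(None)
worker.join()

print(counter["hits"])   # 5000 — no lost updates, no mutex around the counter
```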

Part 3: The Danger of Ignoring Big-O

The single-threaded architecture creates one fatal, system-crashing flaw: Blocking Operations.

If you execute a command that takes 5 seconds to complete, Redis freezes. It cannot process any other command from any other client for those 5 seconds; to every caller, the entire database is effectively offline.

This is why executing the KEYS * command (an O(N) scan of every key in RAM) in a production environment is notoriously dangerous and can cause an immediate severity-one outage; the non-blocking SCAN command, which iterates the keyspace incrementally, exists precisely for this reason. Developers using Redis must be intimately familiar with the Big-O complexity of every command they run (e.g., heavily preferring O(1) Hash access and O(log N) Sorted Set access).
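The cursor-based alternative to KEYS * is worth seeing in shape, if not in detail. The toy below returns the keyspace in small slices with a resumable cursor; real SCAN cursors encode positions in Redis's internal hash table (with different ordering and duplicate-visit caveats), so this is only the access pattern, not the algorithm.

```python
def scan(keyspace, cursor=0, count=2):
    """Toy SCAN: return (next_cursor, batch) so iteration happens in small,
    non-blocking slices instead of one O(N) sweep like KEYS *."""
    keys = sorted(keyspace)            # real Redis walks hash-table buckets
    batch = keys[cursor:cursor + count]
    next_cursor = cursor + count
    done = next_cursor >= len(keys)
    return (0 if done else next_cursor), batch

keyspace = {f"user:{i}" for i in range(5)}
cursor, seen = 0, []
while True:
    cursor, batch = scan(keyspace, cursor)
    seen.extend(batch)                 # other commands can run between calls
    if cursor == 0:                    # cursor 0 signals the scan is complete
        break

print(sorted(seen))
```

The point is the shape of the loop: each call does a small, bounded amount of work, so the single execution thread is never hostage to one client's full-keyspace walk.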

Part 4: I/O Threads in Modern Redis

As networks upgraded to 10 Gbps and 100 Gbps Ethernet, Redis hit a new bottleneck: the single thread could execute commands faster than it could parse incoming network packets and write outgoing responses.

Redis 6.0 introduced a compromise: I/O threads. Network parsing (reading sockets, framing protocol messages) can be offloaded to secondary threads, opt-in via the io-threads configuration directive, while the execution of the actual database commands remains strictly single-threaded. This allows Redis to saturate a fast network card while retaining the perfect, lock-free atomicity of its single-threaded query engine.

Glossary & Concepts

💾 In-Memory

Data is stored primarily in RAM for extremely fast read/write access (microseconds), as opposed to traditional databases that write to disk (milliseconds).

🧩 Hash Slot

A concept in Redis Cluster used to distribute data. There are 16,384 hash slots, and every key mathematically maps to exactly one of them.
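The mapping is small enough to implement: Redis Cluster computes CRC16 of the key (the XMODEM variant) modulo 16384, and if the key contains a hash tag like {42}, only the tag's contents are hashed so related keys land on the same slot. A minimal sketch:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16,384 Redis Cluster hash slots."""
    # Hash tags: if the key contains a non-empty {...}, only that part is
    # hashed, so user:{42}:cart and user:{42}:orders share a slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_slot("user:{42}:cart") == key_slot("user:{42}:orders"))  # True
```

Co-locating related keys via hash tags is what makes multi-key commands possible in a cluster, since Redis only allows them when every key lives in the same slot.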

🗑️ Eviction Policy

The rule Redis follows to free up memory when the max memory limit is reached. Common policies include LRU (Least Recently Used) and LFU (Least Frequently Used).

🕒 TTL (Time To Live)

An expiration time set on a key. Once the TTL expires, the key is automatically deleted, ideal for caching temporary data like session tokens.