LRU Cache Simulator

Runs in the browser

Visualize how an LRU cache evicts its least recently used entries


How to Use

Use SET to add entries and GET to retrieve them.

You will see:

  • Cache hits, misses, and evictions
  • Key access order (LRU → MRU)
  • Capacity limit enforcement
[Simulator panel: live counters for cache hits, misses, evictions, and hit rate; a five-slot cache display ordered LRU → MRU (empty until you SET entries); and an operation log.]

About LRU Cache

LRU (Least Recently Used) is a widely adopted cache eviction algorithm designed to manage limited memory. Its core idea is temporal locality: data that has been accessed recently is likely to be accessed again soon. Conversely, the data left untouched the longest is the best candidate for eviction when the cache is full.

The O(1) Architecture

Achieving high-performance caching requires both read (GET) and write (SET) operations to complete in O(1) (constant) time. To accomplish this, an LRU cache combines two distinct data structures:

  • Hash Map (Dictionary): Stores key-node pairs. This provides O(1) lookups to instantly check if an item exists and locate it in memory.
  • Doubly-Linked List: Maintains the chronological ordering of accesses. When an item is read or written, its corresponding node is detached from its current position and moved to the Head (Most Recently Used). The Tail always points to the Least Recently Used item.

Because nodes in a doubly-linked list have pointers to their immediate previous and next neighbors, a node can be spliced out of the list in O(1) time without traversing the entire structure.
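The map-plus-list combination described above can be sketched as follows. This is a minimal illustration in Python, not the simulator's actual source; sentinel head/tail nodes are used to avoid edge-case checks when splicing:

```python
class Node:
    """Doubly-linked-list node holding one cache entry."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None


class LRUCache:
    """O(1) LRU cache: dict for lookup + doubly-linked list for recency order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                               # key -> Node
        # Sentinel nodes: head side = MRU, tail side = LRU.
        self.head, self.tail = Node(), Node()
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        # O(1): the node's neighbors simply bypass it.
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        # O(1): insert right after the head sentinel (MRU position).
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None                             # cache miss
        self._unlink(node)                          # refresh recency
        self._push_front(node)
        return node.value

    def set(self, key, value):
        if key in self.map:                         # update in place
            node = self.map[key]
            node.value = value
            self._unlink(node)
        else:
            if len(self.map) >= self.capacity:
                lru = self.tail.prev                # evict the LRU entry
                self._unlink(lru)
                del self.map[lru.key]
            node = Node(key, value)
            self.map[key] = node
        self._push_front(node)
```

Both operations touch only a constant number of pointers, which is where the O(1) guarantee comes from.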

Cache Eviction Mechanics

When a cache reaches its predetermined maximum capacity and a new record must be inserted, the LRU algorithm triggers an eviction:

  1. Identify the node at the Tail of the doubly-linked list (the LRU element).
  2. Remove the node's key from the Hash Map.
  3. Detach the node from the doubly-linked list entirely.
  4. Insert the new record into the Hash Map and push its new node to the Head (MRU) of the list.
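The simulator's hit/miss/eviction counters follow directly from these steps. As a compact sketch (using Python's `OrderedDict` as shorthand for the hash-map + linked-list pair; `simulate` and its op format are illustrative, not the simulator's API):

```python
from collections import OrderedDict

def simulate(ops, capacity):
    """Replay (op, key) pairs against an LRU cache; return (hits, misses, evictions).

    OrderedDict stands in for the map + list combination:
    move_to_end() marks an entry MRU, popitem(last=False) drops the LRU entry.
    """
    cache = OrderedDict()
    hits = misses = evictions = 0
    for op, key in ops:
        if op == "GET":
            if key in cache:
                hits += 1
                cache.move_to_end(key)          # refresh recency
            else:
                misses += 1
        else:  # SET
            if key in cache:
                cache.move_to_end(key)          # update refreshes recency too
            elif len(cache) >= capacity:
                cache.popitem(last=False)       # eviction: drop the LRU entry
                evictions += 1
            cache[key] = True
    return hits, misses, evictions
```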

Real-World Implementations

Redis

Redis provides LRU configurations out-of-the-box (e.g., allkeys-lru). Since strict LRU requires significant memory overhead per key, Redis uses an approximated LRU algorithm that randomly samples a subset of keys and evicts the oldest among the sample.
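The sampling idea can be shown in a few lines. This is a toy sketch of the approach, not Redis's actual implementation (Redis samples from its keyspace and tracks an approximate idle time per key):

```python
import random

def approx_lru_evict(last_access, sample_size=5):
    """Pick an eviction victim by sampled approximate LRU.

    Instead of maintaining a full recency list, randomly sample a few keys
    and evict the one with the oldest recorded access time. last_access maps
    key -> last access timestamp (illustrative bookkeeping, assumed here).
    """
    candidates = random.sample(list(last_access), min(sample_size, len(last_access)))
    return min(candidates, key=last_access.__getitem__)
```

With a sample size of 5 (Redis's default `maxmemory-samples`), the victim is usually close to the true LRU item at a fraction of the bookkeeping cost.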

Memcached

Memcached divides memory into fixed-size pages called "slabs", each carved into chunks sized for a particular item-size class. When a slab class fills up, Memcached performs strict LRU eviction within that class only, preventing a large volume of tiny objects from evicting larger media objects.

PostgreSQL Buffers

Many databases avoid strict LRU because sequential table scans would instantly wipe out the entire cache. Postgres's shared buffer manager instead implements a clock-sweep algorithm, an approximation of LRU that skips frequently used pages.
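The classic clock (second-chance) variant can be sketched as below. Note this is the textbook single-reference-bit form; Postgres actually keeps a small usage counter per buffer that the sweep decrements, but the structure is the same:

```python
def clock_sweep(ref_bits, hand):
    """One clock-sweep pass: find a victim frame on a circular ring.

    Each frame's reference bit is set when the page is accessed. The hand
    clears set bits as it passes and evicts the first frame whose bit is
    already clear, giving recently used pages a "second chance".
    Returns (victim_index, new_hand_position).
    """
    n = len(ref_bits)
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0          # recently used: spare it this pass
            hand = (hand + 1) % n
        else:
            return hand, (hand + 1) % n # bit already clear: evict here
```

A one-pass sequential scan sets each bit only once, so scanned pages lose their second chance quickly while genuinely hot pages keep getting their bits re-set.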

Linux Page Cache

Operating systems cache disk I/O heavily. To prevent cache thrashing during backups or large file reads, Linux employs modified LRU algorithms with multiple lists (Active vs Inactive) preventing one-time sequential reads from destroying the hot working set.
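The two-list idea reduces to a small promotion rule. A toy sketch of the active/inactive scheme (heavily simplified; the real kernel also ages and rebalances the lists):

```python
from collections import OrderedDict

def two_list_touch(active, inactive, page, inactive_cap):
    """Record one page access in a two-list LRU.

    First touch puts a page on the inactive (probationary) list; a second
    touch promotes it to the active list. A one-pass sequential read
    therefore churns only the inactive list and never displaces the hot set.
    Both lists are OrderedDicts ordered LRU -> MRU.
    """
    if page in active:
        active.move_to_end(page)            # already hot: refresh recency
    elif page in inactive:
        del inactive[page]                  # second touch: promote
        active[page] = True
    else:
        if len(inactive) >= inactive_cap:
            inactive.popitem(last=False)    # evict the coldest probationary page
        inactive[page] = True               # first touch: probationary only
```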

LRU Limitations & Advanced Variants

While LRU is intuitive, it suffers badly under sequential scans: one large file scan flushes every genuinely popular object, ruining the cache. Consequently, advanced variants are often preferred in modern production environments:

  • LRU-K: Tracks each item's last K access times and evicts the item whose K-th most recent access is oldest, so one-off accesses don't pollute the cache.
  • ARC (Adaptive Replacement Cache): Uses two LRU lists (one for frequency, one for recency) and automatically resizes them to match the workload pattern. Highly efficient but historically patent-encumbered by IBM.
  • W-TinyLFU: The algorithm behind the popular Java cache library Caffeine. Combines recency with frequency tracking via a Count-Min Sketch.
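A common thread in these variants is separating "seen once" from "seen repeatedly". A toy admission-filtered LRU in that spirit (illustrative only; it is not LRU-K, ARC, or W-TinyLFU verbatim, and the class name and structure are invented for this sketch):

```python
from collections import OrderedDict

class AdmissionLRU:
    """LRU cache that only admits a key after k total requests.

    One-off sequential scans touch each key once, so they never displace
    resident entries. A real W-TinyLFU would track frequencies compactly
    in a Count-Min Sketch; a plain dict is used here for clarity.
    """

    def __init__(self, capacity, k=2):
        self.capacity, self.k = capacity, k
        self.counts = {}                    # key -> request frequency
        self.cache = OrderedDict()          # main LRU segment

    def get(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1
        if key in self.cache:
            self.cache.move_to_end(key)     # refresh recency
            return self.cache[key]
        return None

    def set(self, key, value):
        self.counts[key] = self.counts.get(key, 0) + 1
        if key in self.cache:
            self.cache[key] = value
            self.cache.move_to_end(key)
        elif self.counts[key] >= self.k:    # admitted only on repeat access
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)
            self.cache[key] = value
```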
