Linked List Simulator

Runs in browser

Simulate and manipulate Singly, Doubly, and Circular Linked Lists interactively.

Singly Linked List: Each node contains a value and a single `next` pointer to the subsequent node. Traversal is strictly forward: finding the tail is O(N) unless it is tracked separately, and access by index is O(N).

The Complete Guide to Linked Lists

If arrays are like a box of crayons—fixed in size and packed consecutively—then Linked Lists are like a scavenger hunt. Instead of storing data consecutively in memory, each piece of data (called a Node) can live anywhere in the computer's RAM. To keep track of the sequence, each Node contains a pointer—a memory address—that directs the computer to the exact location of the next Node in the sequence.

Memory Layout: Array vs. Linked List

When you create an array of size 5, the allocator reserves 5 contiguous memory slots. If you want to add a 6th element and the adjacent space is taken, a brand-new, larger block must be allocated, the old data copied over, and the old block freed. This is dynamic array resizing, and each resize costs O(N) (with geometric growth the cost amortizes to O(1) per append, but individual resizes are still expensive).

A Linked List simply asks the allocator for enough memory for one node at a time. Because nodes point to each other, they don't need to be adjacent. Given a pointer to the 5th node (or a tracked tail), adding a 6th element is an O(1) operation: you allocate the new node and point the 5th node's `next` pointer at it.
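The O(1) append above can be sketched as follows. This is a minimal illustration, not the simulator's actual implementation; the class names `Node` and `SinglyLinkedList` are assumptions.

```java
// Sketch: growing a list one node at a time, assuming we track the tail.
class Node<T> {
    T value;
    Node<T> next;
    Node(T value) { this.value = value; }
}

class SinglyLinkedList<T> {
    Node<T> head, tail;
    int size;

    // O(1): allocate one node and rewire a single pointer. No copying.
    void append(T value) {
        Node<T> node = new Node<>(value);
        if (tail == null) {
            head = tail = node;   // first node: it is both head and tail
        } else {
            tail.next = node;     // old tail points at the new node
            tail = node;
        }
        size++;
    }
}
```

Tracking `tail` is what makes the append O(1); without it, each append would first walk the whole list to find the end.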


The Three Pillars of Linked Lists

While the underlying concept of scattered memory connected by pointers remains identical, the number of pointers and their destinations dramatically change what a Linked List can be used for. There are three primary variations you must understand.

1. Singly Linked Lists

The most bare-bones implementation. A Singly Linked List node contains data and exactly one pointer leading forward (`next`). You start at the `Head` of the list and follow the `next` pointers until you hit `null`, indicating the `Tail`.

class Node<T> {
  T value;
  Node<T> next;
}

The Trade-off

Because you only store one pointer, Singly Linked Lists are incredibly memory-efficient. However, traversal is strictly one-way. If you are at node C and realize you needed node B, you cannot go backward. You must start all the way from the `Head` again. Furthermore, deleting a node requires finding its predecessor so you can reroute the predecessor's `next` pointer around the deleted node—an `O(N)` operation if you don't already have the predecessor cached.
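The predecessor problem described above looks like this in code. A minimal sketch: `SNode` and `delete` are illustrative names, and the list is assumed non-circular.

```java
// Sketch: deleting a value from a singly linked list. We must walk from the
// head to find the predecessor before we can reroute its `next` pointer.
class SNode {
    int value;
    SNode next;
    SNode(int value) { this.value = value; }
}

class SinglyDelete {
    // Returns the (possibly new) head. O(N): the predecessor search dominates.
    static SNode delete(SNode head, int target) {
        if (head == null) return null;
        if (head.value == target) return head.next; // deleting the head is O(1)
        SNode pred = head;
        while (pred.next != null && pred.next.value != target) {
            pred = pred.next; // the O(N) walk to find the predecessor
        }
        if (pred.next != null) {
            pred.next = pred.next.next; // route around the deleted node
        }
        return head;
    }
}
```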

Real World Application: Hash Tables

Singly linked lists are used to resolve collisions in hash maps via a technique called chaining. When two keys hash to the same bucket index, rather than overwriting the data, the bucket stores a pointer to a singly linked list, and the new key-value pair is typically prepended at the head — an O(1) insert, since appending to the tail would require an O(N) walk unless the tail were tracked.
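A minimal sketch of chaining, assuming String keys and int values; `ChainedMap`, `Entry`, and the fixed bucket count are illustrative, not any real library's API.

```java
// Separate chaining: each bucket holds the head of a singly linked list.
class Entry {
    String key;
    int value;
    Entry next;
    Entry(String key, int value) { this.key = key; this.value = value; }
}

class ChainedMap {
    Entry[] buckets = new Entry[16]; // fixed size for illustration; real maps resize

    void put(String key, int value) {
        int i = Math.floorMod(key.hashCode(), buckets.length);
        for (Entry e = buckets[i]; e != null; e = e.next) {
            if (e.key.equals(key)) { e.value = value; return; } // update in place
        }
        Entry e = new Entry(key, value);
        e.next = buckets[i];   // prepend at the head: O(1), no tail walk
        buckets[i] = e;
    }

    Integer get(String key) {
        int i = Math.floorMod(key.hashCode(), buckets.length);
        for (Entry e = buckets[i]; e != null; e = e.next) {
            if (e.key.equals(key)) return e.value;
        }
        return null; // key absent
    }
}
```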

2. Doubly Linked Lists

Doubly Linked Lists add a second pointer to every node, pointing backward to the previous node (`prev`). This seemingly small addition removes most of the performance bottlenecks of a Singly Linked List, at the cost of one extra pointer per node — roughly double the pointer overhead.

class Node<T> {
  T value;
  Node<T> next;
  Node<T> prev;
}

The Trade-off

Every node requires more RAM. For large datasets of tiny values (like 32-bit integers), the two pointers (`prev` and `next`, typically 64 bits each on modern systems) consume four times as much memory as the actual data.

However, the performance gains are substantial. If you hold a reference to a node, you can delete it in O(1) time: set `node.prev.next = node.next` and `node.next.prev = node.prev`, bypassing the node completely. You can also traverse backward from the tail.
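The O(1) unlink can be sketched like this. Sentinel head/tail nodes are an assumption here (they avoid null checks at the ends), and `DNode`/`DoublyList` are illustrative names.

```java
// Sketch: O(1) removal given a direct node reference.
class DNode {
    int value;
    DNode prev, next;
    DNode(int value) { this.value = value; }
}

class DoublyList {
    DNode head = new DNode(0), tail = new DNode(0); // sentinels, never removed
    DoublyList() { head.next = tail; tail.prev = head; }

    DNode addFirst(int value) {
        DNode node = new DNode(value);
        node.prev = head; node.next = head.next;
        head.next.prev = node; head.next = node;
        return node; // caller keeps the reference for later O(1) unlinking
    }

    // O(1): no traversal — just bypass the node from both sides.
    void unlink(DNode node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }
}
```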

Real World Application: LRU Caches & Browsers

Your web browser's Forward/Back button is a masterclass in Doubly Linked Lists. The current page is a node. Clicking "Back" navigates via `prev`. Clicking "Forward" navigates via `next`.
More critically, Least Recently Used (LRU) caches (such as those inside Redis) pair a hash table — mapping keys to nodes — with a doubly linked list. When an item is accessed, it is spliced out in `O(1)` time and moved to the `Head` (most recently used). When the cache fills up, the `Tail` is popped off in `O(1)` time.
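The hash-table-plus-doubly-linked-list pairing can be sketched as below. This is a teaching sketch, not Redis's actual implementation; the names `LruCache`, `get`, `put`, and `capacity` are assumptions.

```java
import java.util.HashMap;

// LRU cache: HashMap gives O(1) lookup, the doubly linked list gives O(1)
// reordering and O(1) eviction from the tail.
class LruCache {
    static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    final int capacity;
    final HashMap<Integer, Node> map = new HashMap<>();
    final Node head = new Node(0, 0), tail = new Node(0, 0); // sentinels

    LruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail; tail.prev = head;
    }

    private void unlink(Node n) { n.prev.next = n.next; n.next.prev = n.prev; }

    private void pushFront(Node n) { // most-recently-used position
        n.prev = head; n.next = head.next;
        head.next.prev = n; head.next = n;
    }

    Integer get(int key) {
        Node n = map.get(key);
        if (n == null) return null;
        unlink(n); pushFront(n); // O(1) splice to the front
        return n.value;
    }

    void put(int key, int value) {
        Node n = map.get(key);
        if (n != null) { n.value = value; unlink(n); pushFront(n); return; }
        if (map.size() == capacity) {  // evict the least recently used entry
            Node lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);
        }
        n = new Node(key, value);
        map.put(key, n);
        pushFront(n);
    }
}
```

Every operation touches only a constant number of pointers, which is exactly why this structure shows up wherever eviction order matters.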

3. Circular Linked Lists

A Circular Linked List takes a Singly or Doubly linked list and connects the `Tail` directly back to the `Head`. There is no `null` termination. The list forms an infinite, continuous loop.

The Trade-off

Iteration requires a different stop condition. `while (node != null)` never terminates, because there is no `null` — the loop spins forever. You must remember the starting node and break when you reach it again.
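The corrected stop condition looks like this. A minimal sketch; `CNode` and `sum` are illustrative names.

```java
// Sketch: iterating a circular singly linked list. The loop breaks when the
// walk returns to its starting node, since there is no null terminator.
class CNode {
    int value;
    CNode next;
    CNode(int value) { this.value = value; }
}

class CircularWalk {
    static int sum(CNode start) {
        if (start == null) return 0;
        int total = 0;
        CNode node = start;
        do {
            total += node.value;
            node = node.next;
        } while (node != start); // NOT `node != null` -- that never terminates
        return total;
    }
}
```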

Real World Application: OS Schedulers & Multiplayer Games

Operating system CPU schedulers (like Round Robin) use circular lists to hand out CPU time slices. Process A gets 10 ms, then the pointer moves to Process B. Once the scheduler reaches the end, it seamlessly wraps back to Process A without any bounds checking or index resetting. Turn-based multiplayer games use the same logic to cycle turns continuously between players.


The Ugly Truth: Cache Misses

If Linked Lists offer `O(1)` insertion and deletion (if you have the pointer), why do arrays power the vast majority of software? The answer lies in your CPU's cache hierarchy (L1/L2/L3).

Modern CPUs don't read RAM one byte at a time. When an array element is accessed, the CPU pulls the entire 64-byte cache line containing it — the element plus its neighbors — into ultra-fast L1 cache. This is called cache locality. When iterating an array, most reads are served from the blisteringly fast L1 cache.

Linked Lists, by their nature, scatter data across the heap. Every jump to a `next` pointer risks a fresh fetch from main RAM (hundreds of times slower than L1 cache). This constant stream of cache misses means that iterating over an array is dramatically faster in absolute wall-clock time than iterating over a Linked List, even though both are Big-O `O(N)`. As Bjarne Stroustrup (creator of C++) has famously noted, you should almost always use a `std::vector` (dynamic array) over a `std::list` (linked list) unless you have specific benchmarks proving otherwise.
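The gap can be observed with a crude experiment like the one below. Treat the timings as illustrative only — JIT warm-up, garbage collection, and allocation order all skew the numbers, so this demonstrates the cache-locality argument rather than measuring it rigorously.

```java
import java.util.LinkedList;

class LocalityDemo {
    static long sumArray(int[] data) {
        long total = 0;
        for (int x : data) total += x; // sequential reads: cache-line friendly
        return total;
    }

    static long sumList(LinkedList<Integer> data) {
        long total = 0;
        for (int x : data) total += x; // pointer chasing: frequent cache misses
        return total;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] array = new int[n];
        LinkedList<Integer> list = new LinkedList<>();
        for (int i = 0; i < n; i++) { array[i] = i; list.add(i); }

        long t0 = System.nanoTime();
        long a = sumArray(array);
        long t1 = System.nanoTime();
        long b = sumList(list);
        long t2 = System.nanoTime();

        System.out.printf("array: %d ns, list: %d ns (sums %d == %d)%n",
                t1 - t0, t2 - t1, a, b);
    }
}
```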

Algorithmic Patterns

Linked lists are heavily featured in technical interviews to test pointer manipulation logic. Master these two patterns and you will be equipped for most list-based problems.

1. Fast & Slow Pointers

Also known as Floyd's Cycle Finding Algorithm (the Tortoise and the Hare). To detect whether a list contains a cycle, you create two pointers: the slow pointer moves 1 step per iteration, the fast pointer moves 2. If they ever meet, a cycle exists.

Node<T> slow = head;
Node<T> fast = head;
while (fast != null && fast.next != null) {
  slow = slow.next;       // 1 step
  fast = fast.next.next;  // 2 steps
  if (slow == fast) return true; // the pointers met: cycle detected
}
return false; // fast fell off the end: no cycle

2. In-Place Reversal

To reverse a singly linked list in `O(N)` time and `O(1)` space, you keep track of three pointers: `prev`, `current`, and `next`. You walk down the list explicitly flipping the `current.next` pointer backward.

Node<T> prev = null;
Node<T> curr = head;
while (curr != null) {
  Node<T> next = curr.next; // save the rest of the list
  curr.next = prev;         // flip the pointer backward
  prev = curr;              // slide
  curr = next;              // slide
}
return prev; // prev is the new head