Queue & Stack Simulator


Simulate all queue and stack variants — FIFO, Priority, Deque, Min/Max, Monotonic


Queue & Stack: The Two Pillars of Linear Data Structures

Every non-trivial algorithm that processes data in a specific order relies on one of two fundamental abstractions: the Queue or the Stack. A Queue enforces First In, First Out (FIFO) ordering — the oldest element is always processed next, preserving arrival sequence. A Stack enforces Last In, First Out (LIFO) ordering — the newest element is always processed next, reversing arrival sequence. Despite their simplicity, understanding when to use each — and which variant — separates engineers who reach for the right tool from those who fight their data structures. This guide covers every major type, their implementations, time and space complexity, real-world applications, and the common pitfalls that bite developers in production.

Part 1: Queues — FIFO and Its Variants

What Is a Queue?

A queue is an ordered collection with two access points. New elements join at the back (tail/rear); elements leave from the front (head). Once an element is behind another, it cannot overtake it. This mirrors everyday queues: a ticket counter, a print spooler, a network packet buffer, or a lineup for a roller coaster. The element that has been waiting the longest is always served first.

The three core operations are enqueue (add to back, O(1)), dequeue (remove from front, O(1)), and peek/front (inspect front without removing, O(1)). All three are constant-time in a correctly implemented queue. Searching for an arbitrary element takes O(n). There is no concept of random access — you cannot jump to position 5 without dequeuing positions 0–4 first.

1. Simple Queue (Linear Queue)

The most basic queue. Elements are added to the back and removed from the front. In most languages this is backed by a linked list (constant-time head/tail access) or a dynamic array with a head pointer. A pure array implementation without a head pointer requires O(n) dequeue because every element shifts left — avoid this mistake.

Underlying data structures: Singly or doubly linked list (most memory-efficient for general use), dynamic array with head/tail indices (cache-friendly, lower constant factor), or two stacks (amortized O(1)).

Complexity

Operation | Time | Space
Enqueue | O(1) | O(1)
Dequeue | O(1) | O(1)
Peek | O(1) | O(1)
Search | O(n) | O(1)

Real-World Uses

  • OS CPU scheduler (ready queue): Processes waiting for CPU time are queued in FIFO order in many scheduling policies (Round Robin). The OS enqueues a process when it becomes runnable and dequeues it when the CPU is free.
  • BFS (Breadth-First Search): Graph traversal level by level. Each node's unvisited neighbors are enqueued; the next node to explore is always dequeued from the front. BFS finds shortest paths in unweighted graphs precisely because of FIFO ordering.
  • Web server request buffer: HTTP requests arriving faster than they can be served queue up and are processed in arrival order, ensuring fairness.
  • Print spooler: Print jobs queue in submission order — the first job submitted to the printer is always printed first.
  • Producer-consumer pipelines: The canonical synchronization pattern. Producers enqueue tasks; consumers dequeue them. A shared queue with a mutex decouples their speeds.
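The FIFO discipline behind BFS can be sketched in a few lines of Python; the graph and start node below are illustrative:

```python
from collections import deque

def bfs_order(graph, start):
    """Return nodes in breadth-first (level-by-level) order from start.
    graph: dict mapping node -> list of neighbors."""
    visited = {start}
    order = []
    q = deque([start])
    while q:
        node = q.popleft()            # dequeue from the front (oldest)
        order.append(node)
        for nb in graph.get(node, []):
            if nb not in visited:     # mark on enqueue to avoid duplicates
                visited.add(nb)
                q.append(nb)          # enqueue at the back
    return order

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs_order(g, 'A'))  # ['A', 'B', 'C', 'D']
```

Because neighbors are enqueued in discovery order, the first time a node is dequeued it has been reached along a shortest path in an unweighted graph.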

2. Circular Queue (Ring Buffer)

A circular queue is a fixed-capacity queue implemented on an array where the tail wraps around to index 0 when it reaches the end. Two index pointers — head and tail — advance modulo the capacity. This eliminates the "wasted space" problem of linear array queues: in a linear array, once you've enqueued and dequeued enough elements, the front pointer moves right and you can never reuse those slots without a costly shift. The ring buffer reuses every slot indefinitely.

Empty vs Full detection: When head == tail the queue may be empty or full. The standard trick is to reserve one slot as a sentinel: the queue is full when (tail + 1) % capacity == head, and empty when head == tail. Alternatively, maintain a count variable. Lock-free ring buffers (single producer, single consumer) use this structure with atomic operations — they are the backbone of audio drivers, network I/O rings, and inter-core communication in embedded systems.
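A minimal sketch of the sentinel-slot approach in Python (an illustrative class, not a production lock-free ring):

```python
class RingBuffer:
    """Fixed-capacity FIFO queue; one slot is reserved to distinguish full from empty."""
    def __init__(self, capacity):
        self._buf = [None] * (capacity + 1)  # +1 sentinel slot
        self._head = 0
        self._tail = 0

    def is_empty(self):
        return self._head == self._tail

    def is_full(self):
        return (self._tail + 1) % len(self._buf) == self._head

    def enqueue(self, v):
        if self.is_full():
            raise OverflowError("ring buffer full")
        self._buf[self._tail] = v
        self._tail = (self._tail + 1) % len(self._buf)

    def dequeue(self):
        if self.is_empty():
            raise IndexError("ring buffer empty")
        v = self._buf[self._head]
        self._buf[self._head] = None     # release the slot's reference
        self._head = (self._head + 1) % len(self._buf)
        return v
```

Both indices only ever advance modulo the array length, so every slot is reused indefinitely and no element is ever shifted.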

Real-World Uses

  • Audio/video streaming buffers: A fixed ring buffer holds audio samples between the producer (decoder) and consumer (sound card driver). If the producer falls behind, older frames are overwritten — the "drop oldest" strategy.
  • Linux kernel's io_uring: Uses a pair of ring buffers (submission queue and completion queue) to allow user-space to submit I/O requests and receive completions without syscalls.
  • Network interface receive rings (DPDK, XDP): Packet descriptors circulate through a ring between the NIC DMA engine and the software driver.
  • CPU rolling-window metrics: Store the last N CPU utilization samples in a ring; the oldest is automatically overwritten when the ring is full.

3. Priority Queue (Heap-Based)

A priority queue dequeues elements by priority, not by insertion order. The element with the highest (or lowest) priority is always at the front. While conceptually a queue variant, the internal implementation is almost always a binary heap — a complete binary tree stored in an array where the parent is always ≤ (min-heap) or ≥ (max-heap) its children according to priority.

How heapify works: After enqueue, the new element "bubbles up" (sift up) by swapping with its parent until the heap property is restored — O(log n). After dequeue (removing the root), the last element is placed at the root and "sinks down" (sift down) — O(log n). Building a heap from n unsorted elements takes O(n) via the bottom-up heapify algorithm, not O(n log n) as naive insertion would suggest.
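The sift operations above can be sketched on a plain Python list — a simplified, illustrative version of what `heapq` implements in C:

```python
def sift_up(heap, i):
    # Bubble heap[i] up until its parent is no larger (min-heap property).
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break
        heap[parent], heap[i] = heap[i], heap[parent]
        i = parent

def sift_down(heap, i):
    # Sink heap[i] down, always swapping with the smaller child.
    n = len(heap)
    while True:
        smallest, left, right = i, 2 * i + 1, 2 * i + 2
        if left < n and heap[left] < heap[smallest]:
            smallest = left
        if right < n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:
            break
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

def heap_push(heap, v):
    heap.append(v)
    sift_up(heap, len(heap) - 1)   # O(log n)

def heap_pop(heap):
    heap[0], heap[-1] = heap[-1], heap[0]
    v = heap.pop()                 # remove the old root
    if heap:
        sift_down(heap, 0)         # O(log n)
    return v
```

Pushing 5, 1, 3, 2 and popping four times yields 1, 2, 3, 5 — the extract order is always the current minimum.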

Other implementations: Fibonacci heaps offer O(1) amortized decrease-key (used to speed up Dijkstra) but are complex to implement. Binomial heaps and pairing heaps are alternatives with good practical performance.

Complexity (Binary Heap)

Operation | Time
Insert (enqueue) | O(log n)
Extract-min/max (dequeue) | O(log n)
Peek (min/max) | O(1)
Build from array | O(n)

Real-World Uses

  • Dijkstra's shortest path: Each unvisited node is stored in a min-priority queue keyed by tentative distance. The node with the smallest tentative distance is always expanded next.
  • A* pathfinding: Like Dijkstra but uses f(n) = g(n) + h(n) as the key, where h is a heuristic. Used in game AI, GPS routing, and robotics.
  • OS task scheduler (Linux CFS): The Completely Fair Scheduler uses a red-black tree (a balanced BST with similar properties to a heap) to pick the task with the smallest virtual runtime.
  • Huffman encoding: The character with the lowest frequency is always merged first — a min-heap drives the construction of the optimal prefix-free code tree.
  • Event-driven simulation: Events are stored in a min-heap keyed by timestamp. The next event to process is always the earliest one.
  • Top-K problems: Maintain a heap of size K to find the K largest/smallest elements in a data stream in O(n log k) time.

4. Deque (Double-Ended Queue)

A deque (pronounced "deck") supports O(1) insertion and deletion at both ends. It is a superset of both a queue and a stack. Python's collections.deque, Java's ArrayDeque, and C++'s std::deque all implement this. Because it generalizes both ends, it can simulate a queue (add back, remove front), a stack (add front, remove front), or a steque (stack + queue combined).

The classic implementation is a doubly linked list (O(1) at both ends, but poor cache locality) or a circular buffer. The sliding window maximum algorithm is the deque's defining application: maintain a deque of indices in decreasing order of value; the front always holds the index of the maximum in the current window. When the window slides, dequeue expired indices from the front; when a new element arrives, pop from the back all smaller elements (they can never be the max), then push the new index to the back. This gives O(n) for the entire array.
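The window-maximum procedure just described can be sketched directly:

```python
from collections import deque

def window_max(a, k):
    """Maximum of each length-k window of a, in O(n) total, using a deque of indices."""
    dq, out = deque(), []
    for j, x in enumerate(a):
        if dq and dq[0] <= j - k:       # front index has left the window
            dq.popleft()
        while dq and a[dq[-1]] <= x:    # smaller elements can never be the max
            dq.pop()
        dq.append(j)
        if j >= k - 1:
            out.append(a[dq[0]])        # front always holds the window max
    return out

print(window_max([1, 3, -1, -3, 5, 3, 6, 7], 3))  # [3, 3, 5, 5, 6, 7]
```

Every index enters and leaves the deque at most once, which is why the whole pass is O(n) despite the inner while-loop.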

Real-World Uses

  • Sliding window maximum/minimum: Stock price analysis, image processing convolutions, and time-series anomaly detection.
  • Browser history (back/forward): classically modeled with two stacks, but a deque also lets a capped history drop its oldest entries from the far end.
  • Palindrome checking: Push all characters; repeatedly compare and dequeue from both front and back — simpler than two-pointer on a string.
  • Work-stealing schedulers: Each worker thread has a deque of tasks. It pushes/pops its own tasks from one end (LIFO for cache locality). Idle threads steal tasks from the front of another thread's deque (FIFO to minimize contention). This is how Go's goroutine scheduler and Java ForkJoinPool work.
  • A-Steal algorithm (job scheduling): Enables efficient parallel algorithms with O(T∞ + T₁/P) expected time on P processors.

Part 2: Stacks — LIFO and Its Variants

What Is a Stack?

A stack is a collection where all access happens at a single end — the top. You can only push to the top, pop from the top, or peek at the top. Think of a stack of plates: you can only take the plate on top, and you can only add a plate to the top. The paradox of the stack is that it naturally reverses order: push A, B, C in that sequence, and they come out C, B, A. This reversal property, combined with the "remember context and return to it" semantic, makes stacks the natural choice for recursive processes, expression evaluation, and state backtracking.

Every function call implicitly uses a stack — the call stack. When main calls foo, which calls bar, each call pushes a frame. When bar returns, its frame is popped. Stack overflow occurs when recursion depth exceeds the allocated stack size — generally around 1–8 MB on most platforms.

1. Simple Stack

The foundational LIFO structure. Can be implemented with a dynamic array (append = push, pop from end = pop — O(1) amortized, excellent cache locality) or a linked list (push/pop at head — strictly O(1), but pointer overhead and poor cache locality). In practice, array-based stacks are faster due to cache effects.

Complexity

Operation | Time
Push | O(1) amortized
Pop | O(1)
Peek/Top | O(1)
Search | O(n)

Real-World Uses

  • Function call stack: Every program uses this implicitly. Understanding it explains stack overflow, tail call optimization, and why recursion can be converted to iteration with an explicit stack.
  • Undo/Redo: Each action is pushed onto an undo stack. Ctrl+Z pops the last action and pushes it to a redo stack. Ctrl+Y pops from redo and pushes back to undo.
  • DFS (Depth-First Search): Either recursive (implicit call stack) or iterative (explicit stack). Push the start node; loop: pop a node, process it, push unvisited neighbors.
  • Balanced brackets/parentheses: For every opening bracket, push it. For every closing bracket, check if the top matches. If the stack is empty at the end, the expression is balanced. Used in compilers, XML/HTML parsers, and code linters.
  • Expression evaluation (shunting-yard): Dijkstra's algorithm that converts infix expression (3 + 4 * 2) to postfix RPN using an operator stack, respecting precedence and associativity. All calculators and compiler expression parsers use this principle.
  • Backtracking algorithms: Maze solving, N-Queens, Sudoku: push the current state, try a move, push new state, backtrack (pop) when stuck.
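The bracket-matching idea from the list above fits in a few lines; non-bracket characters are simply ignored:

```python
def balanced(s):
    """Check matched brackets with a stack."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)            # remember the opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False            # wrong type or nothing to close
    return not stack                    # balanced iff nothing is left open

print(balanced("f(a[0], {b: 1})"))  # True
print(balanced("([)]"))             # False
```

Note that the stack must be empty at the end: a string of only openers, like "(((", passes every per-character check but is still unbalanced.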

2. Min/Max Stack

A min/max stack tracks the current minimum (or maximum) in O(1) at all times, even as elements are pushed and popped. The naive approach of scanning the entire stack is O(n) — unacceptable for streaming data.

Implementation: Maintain two stacks: the primary stack and an auxiliary minStack. When pushing value v: push v to primary and push min(v, minStack.top()) to minStack. When popping: pop from both stacks. getMin() = minStack.top(). This is a classic LeetCode problem (#155) but has genuine production applications. A space-optimized variant stores pairs (value, currentMin) in a single stack — same semantics, half the pointer overhead.

Gotcha: If you only push to minStack when the new element is ≤ current min (to "save space"), you must pop from minStack only when the popped element equals the current min. Failing to handle equal values correctly is a common interview bug.

Real-World Uses

  • Stock span problem: Find the number of consecutive days a stock's price was ≤ today's price. Use a stack that tracks running maxima.
  • Online statistics: Track the minimum latency or maximum throughput seen in the last N operations — combine with a circular buffer for a sliding window variant.
  • Database query optimizers: Track the minimum cost plan seen during branch-and-bound search without rescanning.
  • Text editor rope structures: Specialized min-cost tracking for cursor position rollback.

3. Monotonic Stack

A monotonic stack maintains its elements in increasing or decreasing order at all times (strictly or non-strictly, depending on the problem). When you push a new element x: pop all elements from the top that would violate the monotonic property (e.g., those ≥ x for an increasing stack). Each element is pushed and popped at most once, giving amortized O(n) for processing n elements — even though there may be inner loops.

Key insight: The act of popping an element a when pushing x means x is the next greater element (NGE) of a in the original sequence. By recording this relationship during the pop, you solve the "next greater element" problem for every element in one O(n) pass.
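Recording the pop relationship described above solves next-greater-element in one pass:

```python
def next_greater(a):
    """For each index, the next greater element to its right (-1 if none), in O(n)."""
    res = [-1] * len(a)
    stack = []                          # indices whose values are non-increasing
    for j, x in enumerate(a):
        while stack and a[stack[-1]] < x:
            res[stack.pop()] = x        # x is the NGE of every index popped here
        stack.append(j)
    return res

print(next_greater([2, 1, 2, 4, 3]))  # [4, 2, 4, -1, -1]
```

Indices left on the stack at the end have no greater element to their right, so they keep the default -1.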

Largest rectangle in histogram: For each bar, the width of the largest rectangle with that bar as the shortest is determined by the nearest shorter bar to the left and right. A monotonic increasing stack finds exactly these boundaries: when you pop bar h[i] because the current bar h[j] is shorter, j is the right boundary and the stack top after popping is the left boundary.
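A sketch of that boundary-finding pass; appending a sentinel bar of height 0 flushes the stack at the end:

```python
def largest_rectangle(heights):
    """Area of the largest rectangle in a histogram, O(n) with a monotonic stack."""
    stack, best = [], 0                     # stack: indices of increasing bar heights
    for j, h in enumerate(heights + [0]):   # sentinel 0 pops everything at the end
        while stack and heights[stack[-1]] >= h:
            height = heights[stack.pop()]   # bar whose rectangle is now bounded
            left = stack[-1] if stack else -1   # nearest shorter bar on the left
            best = max(best, height * (j - left - 1))
        stack.append(j)
    return best

print(largest_rectangle([2, 1, 5, 6, 2, 3]))  # 10
```

When bar i is popped because the current bar j is shorter, j is its right boundary and the new stack top is its left boundary — exactly the relationship described above.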

Real-World Uses

  • Next greater/smaller element: Stock price alerts, next warmer day (LeetCode 739), next shorter building — all O(n).
  • Largest rectangle in histogram / Maximal rectangle: Image processing (finding largest white region), building skyline problems in computational geometry.
  • Trapping rain water: Find how much water accumulates between elevation bars. A monotonic stack approach processes boundaries in O(n).
  • Task cooldown (LeetCode 621): Scheduling problems where tasks must respect cooldown periods — monotonic reasoning about next-available slots.
  • Sum of subarray minimums: Finding the sum of minimums of all contiguous subarrays in O(n) using left[] and right[] arrays built with monotonic stacks.

Part 3: Queue vs Stack — When to Use Which

Scenario | Use Queue | Use Stack
Graph traversal | BFS — level order, shortest path in unweighted graph | DFS — topological sort, cycle detection, path finding
Processing order | Fairness matters — serve oldest first | Most recent first — undo, backtracking
Reversing data | — | Push all elements, then pop — O(n) reversal
Matched pairs | — | Bracket matching, XML tags, nested structures
Scheduling with priority | Priority Queue — serve highest priority first | —
Fixed-size buffer | Circular Queue / Ring Buffer | —
Sliding window extremes | Deque-based O(n) window max/min | Monotonic stack — NGE, histogram area
Expression parsing | — | Shunting-yard, postfix evaluation
Async message passing | FIFO message queue (RabbitMQ, Kafka, SQS) | —
O(1) min/max tracking | — | Min/Max Stack with auxiliary stack

Implementing a Queue from Two Stacks

One of the classic computer science interview puzzles: implement a FIFO queue using only two LIFO stacks. The trick is to use one stack for enqueuing (inbox) and another for dequeuing (outbox). Enqueue always pushes to inbox. Dequeue pops from outbox; if outbox is empty, pour all elements from inbox to outbox (which reverses them, turning LIFO into FIFO for the next batch). Each element moves from inbox to outbox at most once, giving amortized O(1) dequeue. This pattern is used in lock-free queue implementations (Michael-Scott queue) and in functional languages where lists are immutable.
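A minimal Python sketch of the inbox/outbox construction:

```python
class TwoStackQueue:
    """FIFO queue built from two LIFO stacks; dequeue is amortized O(1)."""
    def __init__(self):
        self._inbox, self._outbox = [], []

    def enqueue(self, v):
        self._inbox.append(v)            # always O(1)

    def dequeue(self):
        if not self._outbox:
            # Pour inbox into outbox, reversing LIFO into FIFO order.
            while self._inbox:
                self._outbox.append(self._inbox.pop())
        if not self._outbox:
            raise IndexError("dequeue from empty queue")
        return self._outbox.pop()
```

Each element crosses from inbox to outbox at most once in its lifetime, so n enqueue+dequeue pairs cost O(n) total — amortized O(1) per operation despite the occasional O(n) pour.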

Implementing a Stack from Two Queues

Less practical but instructive: maintain two queues q1 and q2. To push value x: enqueue x to q2, then enqueue all elements from q1 to q2, then swap names. q1 now has x at the front (LIFO). To pop: dequeue from q1. Push is O(n), pop is O(1). Alternatively, push can be O(1) and pop O(n) — you must dequeue n-1 elements to reach the last one. Either way one operation is linear, so this construction is not used in practice; its value is pedagogical.
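A sketch of the push-heavy variant in Python:

```python
from collections import deque

class TwoQueueStack:
    """LIFO stack built from two FIFO queues: O(n) push, O(1) pop."""
    def __init__(self):
        self._q1, self._q2 = deque(), deque()

    def push(self, v):
        self._q2.append(v)               # new element goes in first...
        while self._q1:                  # ...then the old elements behind it
            self._q2.append(self._q1.popleft())
        self._q1, self._q2 = self._q2, self._q1   # swap names; q2 is empty again

    def pop(self):
        return self._q1.popleft()        # front of q1 is always the newest element
```

After pushing 1, 2, 3, q1 holds 3, 2, 1 front-to-back, so pops come out in LIFO order.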

Part 4: Implementations in Common Languages

Queue — Go (slice-based)

queue := []int{}
// Enqueue — O(1) amortized
queue = append(queue, 42)
// Dequeue — O(1) with head pointer trick
// (simple slice: re-slice, underlying array sticks around)
front := queue[0]
queue = queue[1:]
// Better: use container/ring or a linked list
import "container/list"
q := list.New()
q.PushBack(42)          // enqueue
el := q.Front()         // peek
q.Remove(el)            // dequeue

Stack — Go (slice)

stack := []int{}
// Push — O(1) amortized
stack = append(stack, 42)
// Pop — O(1)
top := stack[len(stack)-1]
stack = stack[:len(stack)-1]
// Peek — O(1)
peek := stack[len(stack)-1]

Queue — Python

from collections import deque
q = deque()
q.append(42)        # enqueue back — O(1)
q.appendleft(1)     # enqueue front (deque only)
front = q[0]        # peek — O(1)
q.popleft()         # dequeue front — O(1)
q.pop()             # dequeue back (deque only)

# Priority Queue
import heapq
pq = []
heapq.heappush(pq, (1, 'task-A'))  # (priority, value)
pri, val = heapq.heappop(pq)       # lowest priority first

Stack — Python / Min Stack

stack = []
stack.append(42)       # push — O(1) amortized
top = stack.pop()      # pop — O(1)
peek = stack[-1]       # peek — O(1)

# Min Stack O(1)
class MinStack:
    def __init__(self):
        self._s = []
        self._min = []
    def push(self, v):
        self._s.append(v)
        m = min(v, self._min[-1]) if self._min else v
        self._min.append(m)
    def pop(self):
        self._s.pop(); self._min.pop()
    def get_min(self):
        return self._min[-1]

Queue & Stack — JavaScript

// Queue (Array — shift is O(n)! Use a class)
class Queue {
  #data = []; #head = 0;
  enqueue(v) { this.#data.push(v); }
  dequeue() {
    if (this.isEmpty()) return undefined;
    const v = this.#data[this.#head++];
    if (this.#head > this.#data.length / 2) {
      this.#data = this.#data.slice(this.#head);
      this.#head = 0;
    }
    return v;
  }
  peek()    { return this.#data[this.#head]; }
  isEmpty() { return this.#head >= this.#data.length; }
}
// Stack
const stack = [];
stack.push(42);         // push
const top = stack.pop(); // pop

Queue & Stack — Java

// Queue — prefer ArrayDeque over LinkedList
Queue<Integer> q = new ArrayDeque<>();
q.offer(42);              // enqueue — ArrayDeque grows, so offer always succeeds
Integer front = q.peek(); // peek — null if empty (avoid unboxing straight to int)
q.poll();                 // dequeue — null if empty

// Stack — prefer ArrayDeque over Stack class
Deque<Integer> stack = new ArrayDeque<>();
stack.push(42);    // push
int top = stack.peek();
stack.pop();

// Priority Queue (min-heap by default)
PriorityQueue<Integer> pq = new PriorityQueue<>();
pq.offer(5); pq.offer(1); pq.offer(3);
pq.poll(); // returns 1

Part 5: Common Pitfalls and Gotchas

⚠️ Python's list.pop(0) is O(n), not O(1)

When using a Python list as a queue, list.pop(0) shifts all n-1 remaining elements left — O(n). This makes a queue of n enqueue+dequeue operations O(n²) total. Always use collections.deque which provides O(1) popleft(). This is one of the most common performance bugs in Python interview submissions.

⚠️ JavaScript Array.shift() is also O(n)

Array.prototype.shift() in JavaScript also shifts all elements — O(n). For a proper queue, implement one with a head index pointer (shown above) or use a linked list via a custom class. This affects all standard JS BFS implementations that naively use arrays as queues.

⚠️ Don't use Java's Stack class — it's synchronized and slower

Java's legacy Stack<E> extends Vector, which is synchronized. Every push/pop acquires a lock even in single-threaded code. The Java documentation itself recommends using ArrayDeque<E> as a stack instead (deque.push(), deque.pop()). Similarly, prefer ArrayDeque over LinkedList for queue operations — it's 2–4x faster in benchmarks due to cache locality.

⚠️ Go slice reuse causes memory leaks in queues

Using queue = queue[1:] to dequeue in Go doesn't release the underlying array memory — the head elements are still referenced by the backing array. For long-running queues, use a proper ring buffer, container/ring, or container/list. Nil out dequeued slots explicitly: queue[0] = nil; queue = queue[1:] if storing pointers.

⚠️ Circular queue off-by-one in full/empty detection

The most common circular queue bug: when head and tail are equal, is the queue full or empty? Without extra care, you can't distinguish these two states with just two indices. Solutions: (1) reserve one slot — full when (tail+1)%cap == head; (2) maintain a count variable; (3) maintain a boolean isFull flag toggled on enqueue/dequeue. Mixing approaches leads to subtle race conditions in concurrent scenarios.

⚠️ Priority queue stability — equal priorities

Binary heaps are not stable — when two elements have equal priority, the heap makes no guarantee about which is dequeued first. If FIFO tie-breaking is needed (common in schedulers), add a sequence number as a secondary key: (priority, seqNo, value). Python's heapq will compare the seqNo when priorities are equal, giving stable FIFO ordering among equal-priority items.
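One way to sketch the sequence-number trick with heapq (StablePQ is an illustrative name):

```python
import heapq
from itertools import count

class StablePQ:
    """Min-priority queue with FIFO ordering among equal-priority items."""
    def __init__(self):
        self._heap = []
        self._seq = count()   # monotonically increasing tie-breaker

    def push(self, priority, item):
        # Tuples compare lexicographically: priority first, then insertion order.
        heapq.heappush(self._heap, (priority, next(self._seq), item))

    def pop(self):
        return heapq.heappop(self._heap)[2]

pq = StablePQ()
pq.push(1, 'first')
pq.push(1, 'second')   # same priority, inserted later
pq.push(0, 'urgent')
print(pq.pop(), pq.pop(), pq.pop())  # urgent first second
```

Because the seqNo is unique, comparison never falls through to the payload — which also avoids TypeError when payloads are not comparable with each other.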

⚠️ Stack overflow vs. heap overflow

A stack overflow (uncaught recursion) is limited by the OS thread stack size, typically 1–8 MB. Convert deep recursion to iterative DFS with an explicit heap-allocated stack ([]Frame) to bypass this limit. In Go, goroutines start with 2KB stacks and grow dynamically, making deep recursion safer. In Python, the default recursion limit is 1000 — increase with sys.setrecursionlimit() for tree algorithms, or refactor to iteration.

Part 6: Complexity Summary — All Types

Structure | Add | Remove | Peek | Space
Simple Queue | O(1) | O(1) | O(1) | O(n)
Circular Queue | O(1) | O(1) | O(1) | O(cap)
Priority Queue | O(log n) | O(log n) | O(1) | O(n)
Deque | O(1) | O(1) | O(1) | O(n)
Simple Stack | O(1)† | O(1) | O(1) | O(n)
Min/Max Stack | O(1) | O(1) | O(1) | O(2n)
Monotonic Stack | O(1)* | O(1) | O(1) | O(n)

† amortized (dynamic-array resizing). * amortized — each element is pushed and popped at most once across the whole pass.

All structures listed have O(n) space complexity for n stored elements. Circular queue uses O(capacity). Min/Max stack uses O(2n) ≈ O(n).