HTTP/2 Stream Visualizer

[Interactive demo, runs in the browser: simulates HTTP/2 multiplexing by interleaving frames from multiple streams over a single TCP connection between client (C) and server (S). The panel tracks active/total stream counts, shows a live frame log, and labels the frame types HEADERS, DATA, WINDOW_UPDATE, RST_STREAM, PRIORITY, and PUSH_PROMISE.]

HTTP/2: The Protocol That Rescued the Web

In the early days of the internet, websites consisted of a single HTML file and perhaps a few small images. HTTP/1.1 was perfectly suited for this era: the browser opened a TCP connection, asked for the HTML, waited for the response, closed the connection, and rendered the text.

But the modern web is a beast. A typical webpage today requires downloading 100+ separate assets (JavaScript bundles, CSS chunks, fonts, analytics scripts, high-res hero images). Relying on HTTP/1.1 to deliver these heavy, modern applications exposed a catastrophic flaw in the protocol's architecture. HTTP/2 was engineered from the ground up to solve this single problem.


1. The Crisis: Head-of-Line Blocking in HTTP/1.1

Under HTTP/1.1, a single TCP connection is effectively synchronous: you send Request A, and you must wait for Response A to fully download before Response B can arrive. (Pipelining technically allowed requests to be sent back-to-back, but responses still had to return in order, and buggy intermediaries led every major browser to disable it.) This queuing is known as Application-Layer Head-of-Line (HoL) Blocking.

If a browser needs 100 images, requesting them one by one over a single connection would take agonizingly long. To hack around this, browsers started opening up to 6 concurrent TCP connections per domain. This helped, but establishing 6 TCP handshakes and 6 TLS handshakes is incredibly slow, especially on mobile networks. Furthermore, developers were forced to resort to awful anti-patterns:

  • Domain Sharding: Creating fake subdomains (img1.mydomain.com, img2.mydomain.com) just to trick the browser into opening 6 more concurrent connections.
  • Image Sprites: Stitching 50 small icons into one massive PNG file, downloading it, and using CSS to display only specific 16x16 pixel blocks of the giant image.
  • Inlining / Concatenation: Ramming 20 separate JavaScript files into one monolithic bundle.js file, destroying the browser's ability to cache individual files efficiently.
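Back-of-the-envelope arithmetic shows why these hacks existed. The numbers below (round-trip time, asset count) are illustrative assumptions, not measurements:

```python
import math

RTT = 0.05            # assumed round-trip time per request (seconds)
ASSETS = 100          # assets on a typical modern page

# HTTP/1.1, one connection: requests are strictly sequential.
sequential = ASSETS * RTT

# Browser workaround: up to 6 parallel connections per domain.
parallel_6 = math.ceil(ASSETS / 6) * RTT

# HTTP/2: all requests multiplexed at once; roughly one round trip
# (ignoring bandwidth and congestion control for this sketch).
multiplexed = 1 * RTT

print(f"sequential: {sequential:.2f}s  "
      f"6 connections: {parallel_6:.2f}s  "
      f"multiplexed: {multiplexed:.2f}s")
```

Even this crude model shows a ~100x gap between strict queuing and full multiplexing, which is why the workarounds above were worth their ugliness.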

2. The HTTP/2 Solution: True Multiplexing over a Single TCP Connection

HTTP/2 is built around a single TCP connection: RFC 7540 directs clients not to open more than one connection per origin, so in practice all traffic between a client and a server shares one pipe.

Instead of sending a text-based request (GET /image.png HTTP/1.1) and waiting for a massive chunk of text back, HTTP/2 introduces the Binary Framing Layer.

Streams, Messages, and Frames:

Streams: A bi-directional flow of frames within the single TCP connection. Every request/response pair is assigned a unique integer ID (e.g., Stream #5); client-initiated streams use odd IDs, while server-initiated streams use even IDs.

Messages: A logical HTTP request or response (like the 200 OK response for /logo.png). A message consists of one or more Frames.

Frames: The smallest unit of communication. A response message is broken into tiny pieces: one HEADERS frame containing the status code, followed by dozens of tiny DATA frames containing the actual image bytes. Every single frame is tagged with its parent Stream ID.
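The framing itself is compact and fixed-size. Below is a minimal sketch of the 9-byte frame header defined in RFC 7540 §4.1 (24-bit payload length, 8-bit type, 8-bit flags, then a reserved bit plus a 31-bit stream ID); the helper names and payload are illustrative:

```python
import struct

FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x2: "PRIORITY",
               0x3: "RST_STREAM", 0x5: "PUSH_PROMISE", 0x8: "WINDOW_UPDATE"}

def encode_frame(frame_type, flags, stream_id, payload):
    """Prefix the payload with the 9-byte HTTP/2 frame header."""
    header = struct.pack(">I", len(payload))[1:]   # 24-bit big-endian length
    header += struct.pack(">BBI", frame_type, flags, stream_id & 0x7FFFFFFF)
    return header + payload

def decode_frame(buf):
    """Parse one frame; returns (type name, stream ID, payload)."""
    length = int.from_bytes(buf[0:3], "big")
    frame_type = buf[3]
    stream_id = int.from_bytes(buf[5:9], "big") & 0x7FFFFFFF
    return FRAME_TYPES.get(frame_type, "?"), stream_id, buf[9:9 + length]

frame = encode_frame(0x0, 0x0, 5, b"image bytes...")
print(decode_frame(frame))   # ('DATA', 5, b'image bytes...')
```

Because every frame carries its stream ID in the header, a receiver can route any frame to the right stream without any surrounding context.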

Interleaving in Action

Because everything is broken into tiny, tagged frames, the server can send them back in any order, perfectly interleaved. It can send one frame of app.js (Stream 1), then two frames of logo.png (Stream 3), then another frame of app.js. The browser receives this chaotic soup of frames, looks at their tags, and seamlessly reassembles them into the original files. Application-layer Head-of-Line blocking is dead.
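On the receiving side, reassembly is conceptually just bucketing by tag. A toy sketch (the wire contents are invented for illustration):

```python
from collections import defaultdict

# Each frame on the wire is (stream_id, chunk); streams arrive interleaved,
# but frames *within* one stream stay in order.
wire = [(1, b"func"), (3, b"\x89PNG"), (3, b"..."), (1, b"tion app()")]

streams = defaultdict(bytearray)
for stream_id, chunk in wire:
    streams[stream_id].extend(chunk)   # append to that stream's buffer

print(bytes(streams[1]))   # b'function app()'  -> app.js reassembled
print(bytes(streams[3]))   # b'\x89PNG...'      -> logo.png reassembled
```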


3. HPACK, Stream Priorities, and Server Push

HPACK Header Compression

In HTTP/1.1, every single request redundantly sends the exact same 1-2 KB of headers (User-Agent, Accept, Cookies). If a user loads 100 assets, that's 200 KB of wasted upstream bandwidth just asking for files! HTTP/2 uses the HPACK algorithm. Both the client and server maintain an indexed lookup table of previously seen headers. If a header hasn't changed since the last request, the client just sends a 1-byte index number (e.g., "Use header #62"). This reduces header sizes by ~85%.
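A toy model of the idea (not the real HPACK wire format, which also ships a 61-entry static table and Huffman coding): both peers grow an identical index, so a repeated header shrinks to a tiny reference:

```python
class HeaderTable:
    """Simplified sketch of HPACK's shared-index idea."""

    def __init__(self):
        self.table = []   # append-only index of headers seen so far

    def encode(self, header):
        if header in self.table:
            return ("index", self.table.index(header))   # tiny reference
        self.table.append(header)
        return ("literal", header)                       # full text, once

    def decode(self, token):
        kind, value = token
        if kind == "index":
            return self.table[value]
        self.table.append(value)   # remember it, mirroring the encoder
        return value

client, server = HeaderTable(), HeaderTable()
h = ("user-agent", "Mozilla/5.0 ...")
first = client.encode(h)    # ('literal', ...): full header sent once
second = client.encode(h)   # ('index', 0): a tiny reference thereafter
```

As long as both sides apply the same updates in the same order, the tables stay in sync and the decoder can expand any index back into the full header.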

Stream Prioritization

When a browser requests 50 files simultaneously over one connection, how does the server know what to send first? Does it send 10% of the CSS and 90% of a heavy image? No. The browser uses PRIORITY frames to assign weights and dependencies to streams. It explicitly tells the server: "Stop sending the image on Stream 5, and give 100% of your bandwidth to the critical CSS on Stream 3."
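Conceptually, weights translate into bandwidth shares among competing streams. A sketch with made-up weights and link speed (real HTTP/2 weights range from 1 to 256, and servers are free to treat them as advisory):

```python
# stream_id -> weight: critical CSS (stream 3) outweighs two images.
streams = {3: 256, 5: 32, 7: 32}
total = sum(streams.values())

bandwidth = 10_000_000  # assumed 10 Mb/s link
share = {sid: bandwidth * w // total for sid, w in streams.items()}
print(share)   # stream 3 gets 8x the bandwidth of streams 5 and 7
```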

Server Push (A Failed Experiment?)

HTTP/2 introduced "Server Push" (via PUSH_PROMISE frames). If a client requested index.html, the server could proactively inject style.css into the client's cache before the client even realized it needed it. While brilliant in theory, it proved incredibly difficult to deploy in reality because servers often pushed files the browser already had cached locally, wasting bandwidth. Today, Server Push is largely deprecated in favor of `<link rel="preload">` tags and the 103 Early Hints status code.


4. The Lingering Flaw & The Birth of HTTP/3 (QUIC)

HTTP/2 solved Application-Layer Head-of-Line blocking, but it accidentally exposed a new flaw: TCP-Layer Head-of-Line blocking.

Because all 100 concurrent HTTP streams are now stuffed into a single TCP connection, what happens if the network drops a single packet? TCP guarantees strict ordering. The operating system kernel will halt the entire connection, refusing to deliver any of the subsequently arriving packets to the browser until the sender notices the drop and retransmits that single missing packet.

A single dropped packet stalls all 100 streams simultaneously. On high-packet-loss networks (like a cell phone on a train), HTTP/2 can actually perform worse than HTTP/1.1.

To solve this, the industry had to abandon TCP entirely. HTTP/3 replaces TCP with QUIC, a protocol built on top of UDP. QUIC implements multiplexing at the transport layer, ensuring that a dropped packet containing data for Stream 5 only stalls Stream 5, while Streams 1-4 and 6-100 continue downloading perfectly.
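The difference can be sketched with a toy packet trace (the packet layout here is an assumption): TCP withholds everything after the gap, while QUIC stalls only the stream whose packet was lost:

```python
# Each packet carries data for one stream: (stream_id, chunk).
packets = [(1, "a"), (3, "b"), (5, "c"), (1, "d"), (3, "e")]
lost_index = 2   # the packet carrying Stream 5's data is dropped

# TCP: strict ordering; nothing past the gap reaches the browser
# until the lost packet is retransmitted.
tcp_delivered = packets[:lost_index]

# QUIC: per-stream ordering; only the lost packet's stream waits.
lost_stream = packets[lost_index][0]
quic_delivered = [p for i, p in enumerate(packets)
                  if i != lost_index
                  and not (p[0] == lost_stream and i > lost_index)]

print(tcp_delivered)    # streams 1 and 3 stall along with stream 5
print(quic_delivered)   # streams 1 and 3 keep flowing
```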
