The Engineering of TCP: Building Reliability over Chaos
The Internet Protocol (IP) underlying the web provides no delivery guarantees at all. It routes packets like postcards tossed into a hurricane: they can arrive duplicated, out of order, corrupted, or never arrive at all. TCP's monumental engineering achievement was layering a reliable, ordered, byte-stream abstraction directly on top of this intrinsically chaotic medium.
Part 1: The Sequence Number Space
To guarantee order, TCP assigns a 32-bit Sequence Number to every byte of data sent. However, it does not start at zero.
During the 3-Way Handshake, both the client and server generate a random Initial Sequence Number (ISN). This serves two purposes. First, it defends against "stray packet" confusion: if a router delays a packet from an old, closed connection between the same endpoints and finally delivers it two minutes later, a fresh random ISN makes it overwhelmingly likely that the stale segment falls outside the new connection's valid window and is silently discarded, rather than inadvertently injecting stale data into your ongoing database query. Second, unpredictable ISNs (RFC 6528) stop off-path attackers from guessing sequence numbers and forging data into a connection they cannot see.
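The scheme standardized in RFC 6528 combines a slowly ticking clock with a keyed hash of the connection's 4-tuple. A minimal sketch, with illustrative constants and a hypothetical per-boot `SECRET` rather than any real kernel's implementation:

```python
import hashlib
import os
import time

# Hypothetical per-boot secret; a real stack generates this once at startup.
SECRET = os.urandom(16)

def initial_seq_num(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """RFC 6528-style ISN: a clock component plus a keyed hash of the 4-tuple."""
    # M: a timer ticking roughly every 4 microseconds, as in the classic scheme.
    m = int(time.monotonic() * 250_000)
    # F: a keyed hash binding the ISN to this specific connection's endpoints.
    material = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}".encode() + SECRET
    f = int.from_bytes(hashlib.sha256(material).digest()[:4], "big")
    return (m + f) % 2**32  # sequence numbers live in a 32-bit space

isn = initial_seq_num("10.0.0.1", 51334, "93.184.216.34", 443)
```

Because the hash is keyed by a secret, two connections to the same server get unrelated ISNs, yet a reopened connection between the same endpoints still advances monotonically with the clock.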
Part 2: Cumulative Acknowledgment
If the server receives bytes 1 through 100, but then receives bytes 201 through 300, it knows bytes 101-200 are missing.
It does NOT send an ACK for 300. TCP uses Cumulative ACKs. The server persistently sends ACK=101 to the client, effectively screaming, "I have everything up to byte 100, and I am halted until you give me 101." This prevents the client from assuming the entire stream is arriving successfully.
When the client receives three duplicate ACKs (ACK=101) in rapid succession, it doesn't wait for its retransmission timer to expire. It triggers a Fast Retransmit, immediately resending the segment starting at byte 101 to repair the hole in the receiver's buffer.
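The receiver-side bookkeeping described above can be sketched as a toy simulation (the class and the byte numbers are illustrative, not a real TCP stack):

```python
class CumulativeAckReceiver:
    """Toy receiver: every ACK names the next byte it still needs."""
    def __init__(self):
        self.next_expected = 1          # first missing byte of the stream
        self.out_of_order = {}          # seq -> length, parked in the buffer

    def on_segment(self, seq: int, length: int) -> int:
        if seq == self.next_expected:
            self.next_expected += length
            # Absorb any parked segments that are now contiguous.
            while self.next_expected in self.out_of_order:
                self.next_expected += self.out_of_order.pop(self.next_expected)
        elif seq > self.next_expected:
            self.out_of_order[seq] = length  # hold it, but never ACK past the hole
        return self.next_expected            # the cumulative ACK

rx = CumulativeAckReceiver()
acks = [rx.on_segment(1, 100),    # bytes 1-100 arrive      -> ACK 101
        rx.on_segment(201, 100),  # 101-200 lost, 201-300   -> ACK 101 (duplicate)
        rx.on_segment(301, 100),  # more data past the hole -> ACK 101 (duplicate)
        rx.on_segment(401, 100),  # third duplicate: sender fast-retransmits
        rx.on_segment(101, 100)]  # retransmission fills the hole -> ACK 501
print(acks)  # [101, 101, 101, 101, 501]
```

Note how the final ACK jumps straight to 501: one retransmitted segment releases every parked segment behind it in a single step.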
Part 3: Flow Control vs Congestion Control
TCP regulates speed using two entirely separate feedback loops.
1. Flow Control (Protecting the Receiver): Inside every TCP header is a "Receive Window" value. This tells the sender exactly how much free space remains in the receiver's socket buffer. If a mobile phone's application cannot drain data fast enough and that buffer fills, it advertises a Window of 0. The mighty Google server halts transmission (sending only occasional window probes to learn when space reopens), preventing the phone from drowning in data it has nowhere to store.
2. Congestion Control (Protecting the Network): The sender maintains a hidden "Congestion Window" (cwnd). When a connection starts, the sender assumes the network is fragile and begins with a small window, typically 10 segments on modern stacks (Slow Start). On successful ACKs, it doubles the window every Round Trip Time (RTT). The moment a router drops a packet due to a full queue, TCP detects the loss, slashes its window roughly in half, and probes upward again just one segment per RTT (Congestion Avoidance). This cooperative yielding prevents the entire global internet from melting down in "Congestion Collapse."
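The two phases can be sketched as a toy Reno-style trace (the helper function and its constants are illustrative; real stacks such as CUBIC use more elaborate growth curves):

```python
def cwnd_trace(rtts: int, loss_at: set, init_cwnd: int = 10) -> list:
    """Toy Reno-style cwnd, in segments, sampled once per RTT."""
    cwnd, ssthresh, trace = init_cwnd, float("inf"), []
    for rtt in range(rtts):
        trace.append(cwnd)
        if rtt in loss_at:                # a router queue overflowed this RTT
            ssthresh = max(cwnd // 2, 2)  # multiplicative decrease
            cwnd = ssthresh               # (fast recovery, greatly simplified)
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: double every RTT
        else:
            cwnd += 1                     # congestion avoidance: +1 segment/RTT
    return trace

print(cwnd_trace(8, loss_at={3}))  # [10, 20, 40, 80, 40, 41, 42, 43]
```

The sawtooth is visible even in this miniature: exponential growth to 80 segments, a halving at the drop, then the patient linear climb that shares the bottleneck fairly with other flows.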
Part 4: Head-of-Line Blocking
TCP's strict in-order delivery mandate creates its greatest architectural flaw: Head-of-Line Blocking.
If you multiplex 5 separate HTTP/2 images over a single TCP connection, and the very first packet of Image 1 is dropped by a router, TCP halts everything. The kernel will withhold perfectly valid, already-arrived packets for Images 2, 3, 4, and 5 in its buffer, because TCP is contractually bound to deliver every byte in order: the application cannot see byte 1000 until byte 1 has arrived.
This flaw is unfixable within TCP itself. This is exactly why Google engineers sidestepped TCP for HTTP/3, building QUIC on top of the connectionless UDP protocol and reimplementing reliability in user space on a per-stream basis, so that a lost packet stalls only its own stream and cross-stream Head-of-Line blocking disappears for good.
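The contrast can be illustrated with a toy reassembly simulation (the stream numbers and payloads are invented for the example): the same five arriving packets yield zero readable bytes under a single shared sequence, but four complete images under per-stream reassembly.

```python
# Five images multiplexed; each tuple is (stream_id, seq_within_stream, data).
arrived = [(2, 0, b"img2"), (3, 0, b"img3"), (1, 1, b"img1-part2"),
           (4, 0, b"img4"), (5, 0, b"img5")]          # (1, 0, ...) was dropped

# TCP view: one shared, strictly ordered sequence in the original send order,
# so nothing behind the hole left by stream 1's first packet is readable.
send_order = [(1, 0), (2, 0), (3, 0), (1, 1), (4, 0), (5, 0)]
arrived_keys = {(s, q) for s, q, _ in arrived}
delivered_tcp = []
for key in send_order:
    if key not in arrived_keys:
        break                      # the hole: everything behind it is withheld
    delivered_tcp.append(key)
print(delivered_tcp)               # [] -- one lost packet stalls all five images

# QUIC-style view: each stream reassembles independently.
streams = {}
for s, q, _ in sorted(arrived):
    streams.setdefault(s, []).append(q)
readable = sorted(s for s, seqs in streams.items() if seqs[0] == 0)
print(readable)                    # [2, 3, 4, 5] -- only stream 1 is stalled
```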