How Reverse Proxy Works

The bodyguard of your backend. It handles the heavy lifting so your apps don't have to.

[Diagram: a client on the public internet sends an HTTPS request to an NGINX reverse proxy, which terminates SSL and forwards traffic to app servers and static files on the private network.]

Step 1 of 4: Public Request

Hitting the Gateway

A client sends an encrypted HTTPS request to your public domain (example.com).

Technical detail: the TLS 1.3 handshake completes with the proxy, not the backend app.

Key Takeaways

Offloading

The proxy handles SSL handshake and static files, letting your app focus on logic.

Security

No one hits your backend directly. The proxy is the hardened "outer shell".

Flexibility

Upgrade backends or change topology without the user ever noticing.

The Engineering of Reverse Proxies: NGINX, HAProxy, and the Demilitarized Zone

If you expose a Node.js or Python application directly to the public internet on port 80/443, you are committing architectural malpractice. Application servers are comparatively fragile: they block on single threads, they leak memory, and their built-in TLS handling is rarely optimized. A Reverse Proxy (like NGINX or HAProxy) sits in front of your application as a hardened, hyper-optimized bodyguard.


Part 1: The Mathematics of SSL Termination

Decrypting HTTPS traffic is a mathematically brutal operation. Establishing a TLS 1.3 connection requires asymmetric Elliptic Curve cryptography (to negotiate a shared secret), followed by continuous symmetric AES-GCM decryption for the entire ensuing conversation.

If you force your Application Server (e.g., a Python Django process) to handle this math, you burn CPU cycles that should have been spent querying your database or rendering HTML.

SSL Termination moves this burden to the Reverse Proxy. NGINX is written in C and hooks directly into the heavily optimized OpenSSL library. It decrypts each record in microseconds and forwards plain, unencrypted HTTP over a trusted internal network (such as a private VPC) to your Python application. Your Application Server no longer needs to possess, or even know about, the SSL certificates.
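A minimal NGINX sketch of this pattern. The domain, certificate paths, and the backend's internal IP are placeholders; adapt them to your environment:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # The proxy owns the certificates; the backend never sees them.
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        # Forward decrypted, plain HTTP over the internal network.
        proxy_pass http://10.0.1.10:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The `X-Forwarded-Proto` header lets the backend know the original request was HTTPS, even though it only ever sees plain HTTP.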

Part 2: The Caching Edge

The fastest HTTP request in the world is the one that never reaches your application.

If 100,000 users ask for the homepage of your blog in the same hour, a raw application server will painfully re-render the HTML template from the database 100,000 times. A Reverse Proxy entirely eliminates this.

When request #1 hits NGINX, it forwards it to your application but keeps a copy of the finalized HTML in its own RAM or on SSD. When requests #2 through #100,000 arrive, NGINX answers them directly from that cache with a standard HTTP 200 OK. Your application server registers effectively zero traffic for those requests.
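A sketch of this caching setup. The zone name, cache path, backend address, and timings are illustrative choices, not requirements (note that `proxy_cache_path` belongs in the `http` context):

```nginx
# Cache zone: 10 MB of keys in RAM, up to 1 GB of responses on disk.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=blog_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache blog_cache;
        proxy_cache_valid 200 10m;            # reuse cached 200s for 10 minutes
        proxy_cache_use_stale error timeout;  # serve a stale copy if the backend is down
        proxy_pass http://127.0.0.1:8000;
    }
}
```

With `proxy_cache_use_stale`, the cache even doubles as a crude availability layer: if the backend crashes, users keep getting the last good copy of the page.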

Part 3: Path-Based Routing (Microservices)

In a modern infrastructure, yourwebsite.com is not a single monolith. It is composed of dozens of specialized services. The Reverse Proxy acts as the master Traffic Cop, seamlessly routing requests based on the URL.

  • Requests to /api/* are forwarded to a cluster of Go microservices.
  • Requests to /blog/* are forwarded to an internal WordPress container.
  • Requests to /static/* never hit a backend at all; NGINX serves the files directly from disk using the kernel's zero-copy sendfile() syscall.

To the user, it feels like a unified, cohesive application.
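The three routing rules above can be sketched as NGINX `location` blocks. The upstream names, ports, and paths are hypothetical placeholders:

```nginx
# A pool of Go API instances; round-robin by default.
upstream go_api_cluster {
    server 10.0.2.10:9000;
    server 10.0.2.11:9000;
}

server {
    listen 443 ssl;
    server_name yourwebsite.com;

    location /api/ {
        proxy_pass http://go_api_cluster;    # Go microservices
    }

    location /blog/ {
        proxy_pass http://127.0.0.1:8080;    # internal WordPress container
    }

    location /static/ {
        root /var/www;       # served straight from disk via sendfile()
        sendfile on;
    }
}
```

NGINX picks the most specific matching `location` per request, so one public hostname can fan out to any number of internal services.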

Part 4: Connection Draining & Slowloris Protection

Malicious users often launch Slowloris attacks: they open a connection but send data agonizingly slowly (one byte every 10 seconds). In a thread-per-connection server like classic Apache, or a single-process server like Node.js with no request timeouts, enough slow clients will exhaust the connection pool and take the application offline.

Reverse proxies like NGINX and HAProxy are largely resistant to this. They use an asynchronous, non-blocking event loop (epoll on Linux) and can hold hundreds of thousands of concurrent, idle connections cheaply. NGINX also acts as a buffer: it waits until the slow client has sent the entire HTTP request before forwarding a clean, fast request to your fragile backend application.
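A sketch of the relevant NGINX knobs. The 10-second timeouts are illustrative; tune them to your traffic:

```nginx
server {
    listen 80;

    # Drop clients that dribble the request in too slowly.
    client_header_timeout 10s;   # max time to receive the full request headers
    client_body_timeout   10s;   # max gap between body reads
    send_timeout          10s;   # max gap between writes to the client

    location / {
        # Buffer the entire request before contacting the backend
        # (this is NGINX's default; shown here for emphasis).
        proxy_request_buffering on;
        proxy_pass http://127.0.0.1:8000;
    }
}
```

With request buffering on, a Slowloris client only ever ties up one cheap epoll slot in NGINX; the backend sees nothing until the request is complete.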

Layer 4 vs. Layer 7 Proxying

Fast Layer 4 (Transport)

Operates purely on IP addresses and TCP/UDP ports. It is completely blind to the actual content of the packets (like HTTP headers or URLs).

  • Incredibly fast and efficient.
  • Cannot terminate SSL; encrypted traffic passes through untouched.
  • Cannot route based on URL path (e.g., `/api` vs `/blog`).

Examples: HAProxy (TCP mode), AWS Network Load Balancer (NLB)

Smart Layer 7 (Application)

Terminates the TCP connection, decrypts the TLS, and parses the actual HTTP request. It makes routing decisions based on Headers, Cookies, or URLs.

  • Slower, requires more CPU.
  • Handles SSL Termination.
  • Allows advanced routing, intelligent caching, and WAF rules.
Examples: NGINX, Envoy, AWS Application Load Balancer (ALB)
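NGINX itself can operate at either layer, which makes the contrast concrete. The two server blocks below are alternatives (they would conflict on port 443 if run together), and all addresses are placeholders:

```nginx
# Layer 4: blind TCP passthrough. TLS stays encrypted end to end;
# NGINX never sees URLs, headers, or cookies.
stream {
    server {
        listen 443;
        proxy_pass 10.0.1.10:443;
    }
}

# Layer 7: terminate TLS, parse the HTTP request, route on the path.
http {
    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/site.crt;
        ssl_certificate_key /etc/nginx/certs/site.key;

        location /api/  { proxy_pass http://10.0.2.10:9000; }
        location /blog/ { proxy_pass http://10.0.3.10:8080; }
    }
}
```

The `stream` block cannot contain `location` directives at all, which is the configuration-level expression of "Layer 4 is blind to content."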

Glossary & Concepts

🛡️ SSL Termination

The process of decrypting HTTPS traffic at the reverse proxy. This offloads CPU-intensive encryption tasks from backend servers.

🔄 Upstream Server

The NGINX term for the internal application or backend server that the reverse proxy forwards requests to.
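In configuration, an `upstream` block names a group of backends that `proxy_pass` can target. The server addresses here are placeholders:

```nginx
upstream app_backend {
    server 10.0.1.10:8000;
    server 10.0.1.11:8000;   # requests rotate across these (round-robin by default)
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}
```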

⚡ Reverse Proxy Cache

Storing copies of responses from backend servers temporarily, so the proxy can serve future identical requests instantly without hitting the backend.

🔀 Path-based Routing

Directing incoming traffic to different backend services depending on the URL path (e.g., `/api/` goes to Node.js, `/blog/` goes to WordPress).