Proxy
A proxy is an arrangement in which clients and servers do not communicate directly but go through an intermediary server. A forward proxy controls outbound access from the client side, while a reverse proxy receives incoming requests on the server side and distributes them to backend servers. Functions like caching, access control, TLS termination, and routing can be handled at this middle layer instead of in each application.
Architecture Diagram
(Diagram: dashed-line animations indicate the flow direction of data or requests.)
A direct client-to-server architecture is simple, but as a service grows it hits several problems. If the server IP is directly exposed, building a security boundary is difficult. If there are multiple backend servers and clients need to know individual addresses, every server change forces a client-side update. Having the origin server repeatedly send the same static response thousands of times is wasteful, and leaving employees free to access anything external without controls is an operational risk. Solving these problems in each application individually creates code duplication and management sprawl, and every policy change means touching every service. A proxy gathers this burden into a single intermediary layer between clients and servers.
In the early web, clients connected directly to servers and that was all. But as corporate networks needed to control employee internet access, forward proxies appeared first. On the server side, the need to serve multiple services behind one domain, hide server architecture from clients, and manage TLS certificates centrally grew, and reverse proxies naturally took their place. As microservice architectures spread, the role of proxies expanded further. Routing dozens of backend services from a single entry point, applying inter-service communication policies centrally, and using sidecar proxies in service meshes to separate each service's networking concerns became common patterns. Proxies evolved from simple relay devices into the hub for traffic control and policy enforcement in modern infrastructure.
A proxy is an intermediary server that sits between clients and servers, exchanging requests and responses on their behalf. There are two main directions. A forward proxy sits on the client side and relays users' outbound requests. It is used to block access to certain sites from a corporate network, hide users' real IPs, or cache repeated requests to save bandwidth. A reverse proxy sits on the server side and receives incoming requests from the outside. Clients only need to know the reverse proxy's address; how many servers are behind it and which path goes where is the proxy's decision. A reverse proxy routes to the appropriate backend based on URL path or host header, manages TLS certificates in one place so internal servers only need to handle plaintext, and caches frequently requested responses to reduce origin load. Because these functions are concentrated in one layer, adding or replacing backend services does not change the entry point that clients see. This is why tools like Nginx, HAProxy, and Envoy are widely used as reverse proxies.
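As a concrete illustration, the path-based routing and TLS termination described above might look like the following Nginx configuration. This is a minimal sketch, not a production setup: the domain, certificate paths, and backend addresses are all hypothetical.

```nginx
# Hypothetical sketch: one domain, TLS terminated at the proxy,
# requests routed to different backends by URL path.
upstream api_servers {
    server 10.0.0.11:8080;    # replicas of the API service,
    server 10.0.0.12:8080;    # speaking plain HTTP internally
}

server {
    listen 443 ssl;
    server_name example.com;                       # hypothetical domain
    ssl_certificate     /etc/ssl/example.com.pem;  # certificates live here,
    ssl_certificate_key /etc/ssl/example.com.key;  # not on each backend

    location /api/ {
        proxy_pass http://api_servers;             # load-balanced upstream group
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    location / {
        proxy_pass http://10.0.0.21:3000;          # e.g. a web frontend
    }
}
```

Clients only ever see example.com on port 443; backends can be added to or removed from the upstream block without changing anything the client knows about.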
Proxies, load balancers, and CDNs all relay traffic between clients and servers, and in practice a reverse proxy often performs load balancing and caching together. But the core problem each one solves is different. The essence of a proxy is relay itself: intercepting, inspecting, transforming, and routing requests as a general-purpose intermediary whose strength is flexible policy enforcement like access control, protocol translation, and header modification. A load balancer specializes in distributing requests evenly across multiple replicas of the same service, with health checks and failover at its core. A CDN focuses on caching content at geographically distributed edges to reduce latency caused by physical distance. For a small service, one reverse proxy handling routing, TLS termination, and simple distribution is natural, but as the service grows, separating each concern into a dedicated layer tends to be better for operations and fault isolation.
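The two decisions discussed above, the proxy's routing decision (which service owns this path?) and the load balancer's distribution decision (which replica of that service?), can be sketched separately. This is an illustrative toy, assuming hypothetical backend addresses, not how any particular proxy implements it.

```python
from itertools import cycle

# Routing table (proxy concern): URL path prefix -> pool of backends.
# Addresses are hypothetical.
ROUTES = {
    "/api":    ["10.0.0.11:8080", "10.0.0.12:8080"],  # replicas of one service
    "/static": ["10.0.0.21:9000"],
}

# One round-robin iterator per pool (load-balancer concern).
_rr = {prefix: cycle(backends) for prefix, backends in ROUTES.items()}

def pick_backend(path: str) -> str:
    """Route by longest matching prefix, then rotate among that pool's replicas."""
    matches = [p for p in ROUTES if path.startswith(p)]
    if not matches:
        raise LookupError(f"no route for {path}")
    prefix = max(matches, key=len)
    return next(_rr[prefix])

print(pick_backend("/api/users"))     # alternates between the two API replicas
print(pick_backend("/static/app.js"))
```

Keeping the two tables separate mirrors the point above: the routing map changes when services are added, while the replica pools change when capacity scales, and the two can evolve independently.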
Commonly Compared Concepts
Load Balancer
The backbone of traffic distribution and high availability
A reverse proxy and a load balancer both receive requests in front of servers, but a proxy's strength is general-purpose relay including routing, transformation, and policy enforcement, while a load balancer specializes in distributing load evenly across replicas of the same service.
CDN
Fast content delivery to users worldwide
Proxies and CDNs can both reduce origin load through caching, but a proxy handles routing, policy, and caching near the server while a CDN distributes content across global edges to reduce geographic latency.
A reverse proxy appears almost naturally when you have two or more backend services, need to serve different services by path behind one domain, or want to manage TLS certificates in one place. In container-based deployment environments, an ingress controller is essentially a reverse proxy. Forward proxies are used in corporate networks to control external access, or in test environments to intercept and mock external API calls. However, redundancy is needed to prevent the proxy from becoming a single point of failure, and the extra layer adds a small amount of latency. Also, if you pile too many policies onto the proxy -- routing, caching, authentication, rate limiting, logging -- configuration becomes complex and narrowing down failure causes becomes harder. The key to running a proxy well is not enabling as many features as possible, but clearly defining what concerns this layer should own and delegating the rest to appropriate dedicated infrastructure.
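The external-access control a forward proxy performs can be sketched as a simple policy check applied to each outbound request before it is relayed. The domains and ports below are hypothetical placeholders for a corporate policy.

```python
# Hypothetical sketch of a forward proxy's access-control decision:
# the proxy sees the destination of each outbound request and applies
# an organization-wide policy before relaying it.
BLOCKED_DOMAINS = {"ads.example", "social.example"}  # hypothetical blocklist
ALLOWED_PORTS = {80, 443}

def may_connect(host: str, port: int) -> bool:
    """Return True if the outbound request should be relayed."""
    if port not in ALLOWED_PORTS:
        return False
    # Block each listed domain and any of its subdomains.
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(may_connect("api.partner.example", 443))  # True: not on the blocklist
print(may_connect("social.example", 443))       # False: blocked domain
```

Because every client's traffic flows through the proxy, this one function is the single place the policy lives, which is exactly the centralization-of-concerns argument made throughout this section.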