Load balancing is a technique used to scale website traffic by distributing it across multiple servers in a cluster, or across multiple geographical origins.
Load balancing for performance
Let’s use our own website as an example. We load balance our inbound traffic based on geography. Servebolt.com runs in several of our data centers, and we route (load balance) visitors to the data center closest to them. If one of these data centers goes down, requests are automatically routed to one of the other data centers. This type of load balancing is why our website has almost the same low response times (performance) from anywhere on the planet.
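To illustrate the idea, here is a minimal sketch of geography-based routing: pick the closest healthy data center to the visitor and fail over when one is removed from the healthy set. The data center names and coordinates are hypothetical examples, not our actual locations.

```python
# Minimal sketch of geo-based routing: pick the data center closest to the
# visitor. Data center names and coordinates are hypothetical examples.
from math import radians, sin, cos, asin, sqrt

DATA_CENTERS = {
    "amsterdam": (52.37, 4.90),
    "new-york": (40.71, -74.01),
    "singapore": (1.35, 103.82),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_data_center(visitor_lat, visitor_lon, healthy=DATA_CENTERS):
    """Return the healthy data center closest to the visitor.

    In a real setup, health checks would remove unhealthy data centers
    from `healthy`, which is what triggers failover to another location.
    """
    return min(
        healthy,
        key=lambda dc: haversine_km(visitor_lat, visitor_lon, *healthy[dc]),
    )

# A visitor in London would be routed to the Amsterdam data center.
print(nearest_data_center(51.51, -0.13))
```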
Load balancing for scalability
Some sites have so much traffic that a single server is not enough. When the hosting is scaled horizontally (meaning spread across several similar servers), the traffic to each node in the setup needs to be balanced. That’s when we put a load balancer in front to distribute the traffic across the nodes, keeping the load on each node at an acceptable level.
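The simplest way to picture this is a round-robin scheduler that hands each incoming request to the next node in rotation, so every node carries a similar share of the traffic. The node addresses below are hypothetical, and real load balancers typically add health checks and smarter strategies (least connections, weighted distribution) on top of this.

```python
# Minimal sketch of horizontal scaling behind a load balancer: a round-robin
# scheduler spreads incoming requests evenly across the nodes.
from itertools import cycle

NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical backend nodes

class RoundRobinBalancer:
    """Hands out backend nodes in turn so each gets a similar share of requests."""

    def __init__(self, nodes):
        self._nodes = cycle(nodes)

    def next_node(self):
        return next(self._nodes)

balancer = RoundRobinBalancer(NODES)
for request_id in range(6):
    # Each request is forwarded to the next node in the rotation.
    print(f"request {request_id} -> {balancer.next_node()}")
```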
Cloudflare Load Balancer
We use the Cloudflare load balancer ourselves for most load balancing tasks. It has several advantages over HAProxy and other classical load balancing services. First, the service runs across all nodes in Cloudflare’s network, which makes it redundant and largely failsafe.
Second, it can do geographic load balancing. Because Cloudflare’s network spans the whole globe, and traffic can be routed efficiently with Argo Smart Routing, Cloudflare can route traffic from the CDN edge back to the closest server faster than competing services.
The Cloudflare load balancer is also easy to use. Whether you’re configuring a local cluster or a worldwide array of servers, the configuration is point-and-click (or API based) and can be set up or changed in minutes or seconds. It’s super easy.
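As a rough sketch of what the API-based route can look like, the request below creates a load balancer with geographic steering through Cloudflare’s v4 API. The zone ID, API token, and pool IDs are placeholders, and the exact request fields should be checked against Cloudflare’s current API documentation before use.

```python
# Rough sketch: create a Cloudflare load balancer with geo steering via the
# v4 API. Zone ID, token, and pool IDs are placeholders; verify field names
# against the current Cloudflare API docs.
import requests

ZONE_ID = "your-zone-id"
API_TOKEN = "your-api-token"

payload = {
    "name": "www.example.com",                      # hostname the load balancer answers for
    "default_pools": ["pool-id-eu", "pool-id-us"],  # origin pools, in priority order
    "fallback_pool": "pool-id-eu",                  # used if all other pools are unhealthy
    "steering_policy": "geo",                       # steer visitors by geography
    "proxied": True,                                # serve through Cloudflare's edge
}

response = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/load_balancers",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=10,
)
response.raise_for_status()
print(response.json()["result"]["id"])
```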