
Introduction to Load Balancer – System Design

Load Balancers are a critical component in designing any distributed system. A load balancer works by dividing incoming traffic among a group of servers, which improves the response time and availability of a website or application.

Traditional Client-Server Architecture

In a traditional client-server architecture, multiple clients connect to a single server. The problem with this architecture is that as traffic volume or the number of requests grows, the load on the server increases, degrading its response time. Moreover, a single server is a single point of failure: if the server goes down, the entire application or website becomes unavailable.

Traditional Client-Server Architecture

Load Balancer

To overcome the problems of the traditional client-server architecture, load balancers are used. As discussed above, a load balancer distributes the high volume of incoming traffic among the group of available servers, making the distributed system fault tolerant (i.e., it removes the single point of failure).

Load Balancers. Source: educative.io

Load balancers reside between the client and the server. When a client makes a request, the load balancer receives it first and routes it to a backend server. The server then responds directly to the client.

Client =====> Load Balancer =====> Server
Client  <====================== Server
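The flow above can be sketched as a minimal routing function. This is an illustrative sketch, not a real proxy: the backend addresses are placeholders, and the "forwarding" is simulated by returning a string rather than sending the request over the network.

```python
import itertools

# Hypothetical backend pool; these addresses are placeholders, not real servers.
SERVERS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
_pool = itertools.cycle(SERVERS)

def handle_request(client_request: str) -> str:
    """The load balancer receives the client's request first,
    then routes it to one of the backend servers."""
    backend = next(_pool)  # pick the next server (round robin here)
    # A real load balancer would proxy the request over the network;
    # here we only record which backend would serve it.
    return f"{client_request} -> routed to {backend}"

print(handle_request("GET /index.html"))  # routed to 10.0.0.1:8080
```

Each call picks the next server in the pool, so consecutive requests are spread evenly across the backends.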

Load Balancing Algorithms

  1. Least Connection Method – The load balancer chooses the server handling the fewest active connections. This algorithm is particularly useful when the load is unevenly distributed.
  2. Least Response Time Method – The server with the lowest average response time is chosen. Some load balancers combine this with the connection count, choosing the server with both the lowest average response time and the fewest active connections.
  3. Round Robin Method – The first request is sent to the first server, the next to the second, and so on until the last server. The cycle then restarts, assigning the next request to the first server again.
  4. Weighted Round Robin Method – Every server is assigned an integer value, known as its weight, based on its processing capacity. When serving a new request, servers are chosen in proportion to their weights (a server with a higher weight is preferred over one with a lower weight).
  5. Least Bandwidth Method – The algorithm chooses the server currently serving the least amount of traffic, measured in megabits per second (Mbps).
  6. Hash – A hash of the client's IP address determines which server receives the request, so a given client is consistently routed to the same server.
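To make a few of these algorithms concrete, here is a minimal sketch in Python. The server names, weights, and connection counts are illustrative assumptions, not part of any real load balancer's API.

```python
import hashlib
import itertools

# Illustrative backend state: server name -> active connection count (assumed values).
servers = {"s1": 5, "s2": 2, "s3": 8}

def least_connections(pool: dict) -> str:
    """Least Connection Method: pick the server with the fewest active connections."""
    return min(pool, key=pool.get)

# Round Robin Method: cycle through the servers in a fixed order.
_rr = itertools.cycle(servers)

def round_robin() -> str:
    return next(_rr)

def weighted_round_robin(weights: dict) -> list:
    """Weighted Round Robin: each server appears in the rotation
    in proportion to its assigned integer weight."""
    schedule = []
    for name, weight in weights.items():
        schedule.extend([name] * weight)
    return schedule

def ip_hash(client_ip: str, pool: list) -> str:
    """Hash method: the same client IP always maps to the same server."""
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]

print(least_connections(servers))               # "s2" has the fewest connections
print(weighted_round_robin({"s1": 3, "s2": 1})) # ['s1', 's1', 's1', 's2']
```

The hash method is what makes "sticky" routing possible without storing any per-client state: because the mapping is a pure function of the client IP, every load balancer instance computes the same answer.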

Advantages of Load Balancers

  1. Faster response times and increased throughput.
  2. A fault-tolerant system as it removes the problem of a single point of failure.
  3. The system becomes highly scalable.
  4. Higher availability and negligible downtime.

Further Reading

  1. Application of Load Balancers
  2. Load Balancer uses Horizontal Scaling

