Ali Naqvi for Flomesh

Using Pipy as software load balancer

Load balancing is a commonly used technique that refers to the process of distributing network traffic across multiple application instances and/or servers in order to optimize resource utilization, maximize throughput, reduce latency, ensure fault-tolerant configurations, ensure availability, and improve application responsiveness.

The following are a few of the advantages of using a load balancer:

  • Efficient use of resources.
  • Increased application performance because of faster responses.
  • If a server crashes, the application stays up, served by the other servers in the cluster.
  • With an appropriate load balancing algorithm, resources are used optimally and efficiently, since the scenario where some servers' resources are used much more heavily than others is eliminated.
  • Scalability: the number of servers can be increased or decreased on the fly without bringing down the application.
  • Load balancing increases the reliability of your enterprise application.
  • Increased security, as the physical servers and IPs are abstracted away in certain cases.

Load balancers are an integral part of an organization's digital strategy. Traditionally, load balancers have been hardware appliances, yet they are increasingly becoming software-defined. Both types of load balancers implement different scheduling algorithms and routing mechanisms. In this article we will focus on software load balancers and on how Pipy, an open-source programmable network proxy for cloud, edge, and IoT, can be used to build a very efficient HTTP load balancer that distributes traffic to several application servers and improves the performance, scalability, and reliability of web applications and/or services.

For detailed step-by-step instructions and complete working code, please refer to the tutorials Part 7: Load Balancing and Part 8: Load Balancing Improved, available on the Pipy website.
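
To give a feel for how a load balancer plugs into a Pipy pipeline before looking at the individual algorithms, here is a condensed sketch in the spirit of the tutorial code. It is written under a few assumptions: the listening port and the three upstream addresses are placeholders, and select() is assumed to return the chosen target's address (the select()/deselect() API is shown in more detail below), so treat it as an illustration rather than a definitive implementation:

((
  // Hypothetical upstream servers; replace with your own addresses
  balancer = new algo.RoundRobinLoadBalancer([
    'localhost:8080',
    'localhost:8081',
    'localhost:8082',
  ]),

) => pipy({
  _target: undefined,
})

  // Accept HTTP connections on port 8000 and split them into requests
  .listen(8000)
  .demuxHTTP().to(
    $=>$
    // Pick an upstream for each request
    .handleMessageStart(
      () => _target = balancer.select()
    )
    // Forward the request to the chosen upstream and relay the response
    .muxHTTP(() => _target).to(
      $=>$.connect(() => _target)
    )
  )

)()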

Load Balancing Algorithms

Pipy ships with built-in load-balancing algorithms that can easily be plugged in to design a load-balancing proxy service.

Supported load balancers

The following load balancing mechanisms (or methods) are supported in Pipy:

Round Robin

This is a simple algorithm in which requests are distributed evenly across servers. Requests are served by the servers sequentially, one after another; after sending a request to the last server, it starts from the first server again. This algorithm is used when the servers are of equal specification and there are not many persistent connections.

The simplest configuration for using this algorithm in Pipy may look like the following:

new algo.RoundRobinLoadBalancer([
  'localhost:8080',
  'localhost:8081',
  'localhost:8082',
])
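
As a quick illustration of the rotation (assuming, as in the sketch above, that select() returns the target address and no weights are involved), repeated calls are expected to walk through the list in order and wrap around:

((
  balancer = new algo.RoundRobinLoadBalancer([
    'localhost:8080',
    'localhost:8081',
    'localhost:8082',
  ]),

) => (
  console.log(balancer.select()),  // expected: localhost:8080
  console.log(balancer.select()),  // expected: localhost:8081
  console.log(balancer.select()),  // expected: localhost:8082
  console.log(balancer.select())   // expected: localhost:8080 again
))()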

Weighted round robin

This is the same as the round robin algorithm, but when weights are assigned to the endpoints, a weighted round robin schedule is used, in which higher-weighted endpoints appear more often in the rotation to achieve the effective weighting. This algorithm is used when there is a considerable difference between the capabilities and specifications of the servers in the farm or cluster.
It is efficient at managing load without swamping the lower-capability servers, and it makes efficient use of the available server resources at any instant in time.

The simplest configuration for using this algorithm in Pipy may look like the following:

new algo.RoundRobinLoadBalancer({
  'localhost:8080': 50,
  'localhost:8081': 25,
  'localhost:8082': 25,
})

Least workload

Selects the destination with the fewest assigned requests: requests are served first by the server that is currently handling the least number of persistent connections, which requires examining all destinations. This algorithm is used when traffic contains a large number of persistent connections distributed unevenly between the servers. It is often coupled with sticky sessions or session-aware load balancing, in which all requests related to a session are sent to the same server to maintain session state and synchronization. This algorithm is well suited when session-aware write operations must stay in sync between client and server, so that inconsistency is avoided.

The simplest configuration for using this algorithm in Pipy may look like the following:

new algo.LeastWorkLoadBalancer([
  'localhost:8080',
  'localhost:8081',
  'localhost:8082',
])
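
Because this algorithm tracks how many requests are currently assigned to each target, a proxy built around it would be expected to bracket every request with a select()/deselect() pair, roughly as in the following sketch (the actual forwarding step is elided):

((
  balancer = new algo.LeastWorkLoadBalancer([
    'localhost:8080',
    'localhost:8081',
    'localhost:8082',
  ]),

  // Pick the target that currently has the fewest requests assigned
  target = balancer.select(),

) => (
  // ... forward the request to `target` here ...

  // Tell the balancer the request has finished so its workload count drops
  balancer.deselect(target)
))()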

Weighted least workload

This is the same as the least workload algorithm, but when weights are assigned to the endpoints, a weighted schedule is used, in which higher-weighted endpoints appear more often in the rotation to achieve the effective weighting.

The simplest configuration for using this algorithm in Pipy may look like the following:

new algo.LeastWorkLoadBalancer({
  'localhost:8080': 50,
  'localhost:8081': 25,
  'localhost:8082': 25,
})

Generic hash

The server to which a request is sent is determined from a hash of a user-defined key, which may be text, a variable, or a combination of both. For example, the key may be a source IP and port, or a URI.

The simplest configuration for using this algorithm in Pipy may look like the following:

new algo.HashingLoadBalancer([
  'localhost:8080',
  'localhost:8081',
  'localhost:8082',
])
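
A sketch of how the key comes into play: passing the same key to select() should always hash to the same target. Here a hard-coded string stands in for the client address; inside a real pipeline the key would typically be derived from something like the client's source address or the request URI:

((
  balancer = new algo.HashingLoadBalancer([
    'localhost:8080',
    'localhost:8081',
    'localhost:8082',
  ]),

  // Stand-in for a per-client hash key (e.g. the source IP)
  clientKey = '192.168.1.100',

) => (
  // The same key hashes to the same target on every call
  console.log(balancer.select(clientKey)),
  console.log(balancer.select(clientKey))
))()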

Session Affinity

Session affinity is a mechanism to bind (affinitize) a causally related request sequence to the destination that handled the first request when the load is balanced among several destinations. It is useful in scenarios where most requests in a sequence work with the same data and the cost of data access differs between the nodes (destinations) handling the requests. The most common example is transient caching (e.g. in-memory), where the first request fetches data from slower persistent storage into a fast local cache and the following requests work only with the cached data, thus increasing throughput.

With round-robin or least-workload load balancing, each subsequent request from a client can potentially be distributed to a different server; there is no guarantee that the same client will always be directed to the same server. If there is a need to tie a client to a particular application server, in other words to make the client’s session “sticky” or “persistent” by always trying to select the same server, the Generic hash load balancing mechanism can be used.

To implement session affinity with algorithms like round-robin or least workload, Pipy provides the algo.Cache class for caching functionality.

algo.Cache

Pipy provides a caching mechanism via its algo.Cache class, which accepts two callbacks as its constructor arguments:

  1. Callback when a missing entry is to be filled.
  2. Callback when an entry is to be erased.

For example, we can call our RoundRobinLoadBalancer in those callbacks:

new algo.Cache(
  // k is a balancer, v is a target
  (k  ) => k.select(),
  (k,v) => k.deselect(v),
)
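
In the tutorial's improved load balancer, one such cache is created per downstream connection (for example from a handleStreamStart callback), so every request on that connection is routed to the target chosen for the first one, and the target is released via deselect() when the cache entry is erased. The following is a simplified sketch of that idea; the per-connection wiring and the actual forwarding are omitted:

((
  balancer = new algo.RoundRobinLoadBalancer([
    'localhost:8080',
    'localhost:8081',
    'localhost:8082',
  ]),

  // One cache per client connection in the tutorial; a single one here for brevity
  targetCache = new algo.Cache(
    (k  ) => k.select(),     // fill: pick a target the first time a balancer is looked up
    (k,v) => k.deselect(v),  // erase: release the target when the entry goes away
  ),

) => (
  console.log(targetCache.get(balancer)),  // first lookup selects a target
  console.log(targetCache.get(balancer))   // subsequent lookups return the same cached target
))()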

Conclusion

Pipy is an open-source, extremely fast, and lightweight network traffic processor that can be used in a variety of use cases ranging from edge routers, load balancing & proxying (forward/reverse), API gateways, and static HTTP servers to service mesh sidecars and many other applications. Pipy is in active development and is maintained by full-time committers and contributors. Though still at an early version, it has been battle-tested and is in production use by several commercial clients.

Step-by-step tutorials and documentation can be found on the Pipy website or accessed via the Pipy admin console web UI. The community is welcome to contribute to Pipy's development, give it a try for their particular use cases, and provide feedback and insights.
