02-05-2017, 12:39 PM
In computing, load balancing improves the distribution of workloads across multiple computing resources, such as computers, a cluster of computers, network links, central processing units, or disk drives. Load balancing is intended to optimize resource usage, maximize performance, minimize response time, and avoid overloading any resource. The use of several components with load balancing instead of a single component can increase reliability and availability through redundancy. Load balancing often involves dedicated software or hardware, such as a multi-layer switch or a server process of the domain name system.
Load balancing differs from channel bonding in that load balancing divides traffic between network interfaces on a network-socket basis (OSI layer 4), whereas channel bonding divides traffic between physical interfaces at a lower level, either per packet (OSI layer 3) or on a data-link basis (OSI layer 2) with a protocol such as shortest path bridging.
Load balancing refers to the efficient distribution of incoming network traffic across a group of backend servers, also known as a server farm or server pool.
Modern high-traffic websites must serve hundreds of thousands, if not millions, of concurrent requests from users or clients and return the correct text, images, video, or application data, all in a fast and reliable manner. To scale cost-effectively to meet these high volumes, modern computing best practice generally requires adding more servers.
A load balancer acts as a "traffic cop" sitting in front of your servers, routing client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization and ensures that no one server is overloaded, which could degrade performance. If a single server goes down, the load balancer redirects traffic to the remaining online servers. When a new server is added to the server pool, the load balancer automatically starts sending requests to it.
In this way, a load balancer performs the following functions:
• Distributes client requests or network load efficiently across multiple servers
• Ensures high availability and reliability by sending requests only to servers that are online
• Provides flexibility to add or subtract servers as demand requires
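The three functions above can be sketched in a few lines of Python. This is a minimal illustration, not a production balancer: the server names are made up, and a real balancer would learn which servers are online from active health checks rather than from explicit mark_down() calls.

```python
import itertools

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)        # the backend pool
        self.online = set(self.servers)     # servers believed to be healthy
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        """A health check failed: stop routing requests to this server."""
        self.online.discard(server)

    def add_server(self, server):
        """A new server joins the pool and starts receiving requests."""
        self.servers.append(server)
        self.online.add(server)
        self._cycle = itertools.cycle(self.servers)

    def route(self):
        """Return the next online server, skipping any that are down."""
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.online:
                return server
        raise RuntimeError("no servers online")
```

For example, with a pool of three servers, taking one down simply causes route() to skip it on subsequent requests, while the remaining servers keep receiving traffic.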
Load Balancing Algorithms
Different load balancing algorithms provide different benefits; the choice of load balancing method depends on your needs:
• Round Robin: Requests are distributed sequentially across the group of servers.
• Least Connections: A new request is sent to the server with the fewest current connections to clients. The relative computing capacity of each server is factored into determining which one has the fewest connections.
• IP Hash: The client IP address is used to determine which server receives the request.
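The three methods above can each be reduced to a selection function over the pool. The sketch below is illustrative only: the server names and the active-connection counts are invented, and a real balancer would track connection counts itself as requests open and close.

```python
import hashlib
import itertools

servers = ["app1", "app2", "app3"]

# Round Robin: walk the pool in order, wrapping back to the start.
_rr = itertools.cycle(servers)
def round_robin():
    return next(_rr)

# Least Connections: pick the server with the fewest active connections.
# (Invented counts for illustration; the balancer maintains these live.)
active_connections = {"app1": 12, "app2": 5, "app3": 9}
def least_connections():
    return min(servers, key=lambda s: active_connections[s])

# IP Hash: hash the client's address so the same client is always
# sent to the same server, which keeps sessions "sticky".
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Note the trade-off each choice encodes: Round Robin assumes roughly equal servers and requests, Least Connections adapts to uneven request durations, and IP Hash sacrifices even spread for client-to-server affinity.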