Load balancing (computing)

Load balancing is a computer networking method for distributing workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, in order to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may also increase reliability through redundancy. The load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server.

Internet-based services

One of the most common applications of load balancing is to provide a single Internet service from multiple servers, sometimes known as a server farm. Commonly load-balanced systems include popular web sites, large Internet Relay Chat networks, high-bandwidth File Transfer Protocol sites, Network News Transfer Protocol (NNTP) servers and Domain Name System (DNS) servers. More recently, some load balancers have evolved to support databases; these are called database load balancers.

For Internet services, the load balancer is usually a software program listening on the port where external clients connect to access services. The load balancer forwards requests to one of the "backend" servers, which usually replies to the load balancer. This allows the load balancer to reply to the client without the client ever knowing about the internal separation of functions. It also prevents clients from contacting backend servers directly, which may have security benefits by hiding the structure of the internal network and preventing attacks on the kernel's network stack or on unrelated services running on other ports.
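
As a minimal sketch of this forwarding pattern, the following Python program accepts client connections on the service port and relays bytes to one of two backends chosen round robin. The addresses, port numbers, and pool are illustrative assumptions, not a definitive implementation:

# Relay each accepted client connection to one backend server.
import itertools
import socket
import threading

BACKENDS = [("10.0.0.1", 8080), ("10.0.0.2", 8080)]  # hypothetical pool
rotation = itertools.cycle(BACKENDS)                 # simple round robin

def pipe(src, dst):
    # Copy bytes from one socket to the other until EOF or error.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    backend = socket.create_connection(next(rotation))
    # Relay each direction in its own thread; the client never
    # learns which backend actually served its request.
    threading.Thread(target=pipe, args=(client, backend)).start()
    threading.Thread(target=pipe, args=(backend, client)).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("", 8000))  # the port where external clients connect
listener.listen()
while True:
    conn, _addr = listener.accept()
    handle(conn)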

Some load balancers provide a mechanism for doing something special in the event that all backend servers are unavailable. This might include forwarding to a backup load balancer, or displaying a message regarding the outage.

An alternative method of load balancing, which does not necessarily require a dedicated software or hardware node, is called round robin DNS. In this technique, multiple IP addresses are associated with a single domain name; the DNS server returns the full address list, typically rotating its order, and each client connects to one of the addresses (usually the first offered). Unlike the use of a dedicated load balancer, this technique exposes the existence of multiple backend servers to clients. The technique has other advantages and disadvantages, depending on the degree of control over the DNS server and the granularity of load balancing desired.
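
In zone file terms, this simply means publishing several A records for the same name; for example, using the documentation addresses that appear below:

www.example.org A 192.0.2.1
www.example.org A 203.0.113.2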

Another, more effective, technique for load balancing using DNS is to delegate www.example.org as a sub-domain whose zone is served by each of the same servers that are serving the web site. This technique works particularly well where individual servers are spread geographically on the Internet. For example,

one.example.org A 192.0.2.1
two.example.org A 203.0.113.2
www.example.org NS one.example.org
www.example.org NS two.example.org 

However, the zone file for www.example.org on each server is different, such that each server resolves its own IP address as the A record.[1] On server one the zone file for www.example.org reports:

@ IN A 192.0.2.1

On server two the same zone file contains:

@ IN A 203.0.113.2

This way, when a server is down, its DNS server does not respond and the web service receives no traffic. If the line to one server is congested, slow or dropped DNS responses mean that less HTTP traffic reaches that server. Furthermore, the quickest DNS response to the resolver is nearly always the one from the network's closest server, ensuring geo-sensitive load balancing. A short TTL on the A record helps to ensure traffic is quickly diverted when a server goes down. Consideration must be given to the possibility that this technique may cause individual clients to switch between individual servers in mid-session.

A variety of scheduling algorithms are used by load balancers to determine which backend server to send a request to. Simple algorithms include random choice or round robin. More sophisticated load balancers may take into account additional factors, such as a server's reported load, recent response times, up/down status (determined by a monitoring poll of some kind), number of active connections, geographic location, capabilities, or how much traffic it has recently been assigned. High-performance systems may use multiple layers of load balancing.
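
As a sketch of the simpler policies named above, in Python, with a hypothetical server pool and with connection counts assumed to be maintained elsewhere by the balancer:

# Three common backend-selection policies over one server pool.
import itertools
import random

SERVERS = ["backend-a", "backend-b", "backend-c"]  # hypothetical pool
_rr = itertools.cycle(SERVERS)
active = {s: 0 for s in SERVERS}  # live connection counts, fed by the proxy

def pick_random():
    return random.choice(SERVERS)

def pick_round_robin():
    return next(_rr)

def pick_least_connections():
    # Prefer the server currently handling the fewest requests.
    return min(SERVERS, key=lambda s: active[s])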

In addition to dedicated hardware load balancers, software-only solutions are available, including open source options. Examples of the latter include the Apache web server's mod_proxy_balancer extension, Varnish, and the Pound reverse proxy and load balancer. Gearman can be used to distribute tasks to multiple computers, so large tasks can be completed more quickly.

In a multitier architecture, terminology for designs behind a load balancer or network dispatcher may include "bowties" and "stovepipes". In a stovepipe design, a transaction dispatched at the top tier follows a static path through the stack of devices and software behind the load balancer to its final destination. With bowties, by contrast, a transaction can take one of many paths at each tier after being serviced by the applications at that tier. Network diagrams of the transaction flows resemble stovepipes or bowties, or hybrids of the two, depending on the need at each tier.

Persistence

An important issue when operating a load-balanced service is how to handle information that must be kept across the multiple requests in a user's session. If this information is stored locally on one backend server, then subsequent requests going to different backend servers would not be able to find it. This might be cached information that can be recomputed, in which case load-balancing a request to a different backend server just introduces a performance issue.

One solution to the session data issue is to send all requests in a user session consistently to the same backend server. This is known as persistence or stickiness. A significant downside to this technique is its lack of automatic failover: if a backend server goes down, its per-session information becomes inaccessible, and any sessions depending on it are lost. The same problem is usually relevant to central database servers; even if web servers are "stateless" and not "sticky", the central database is (see below).

Assignment to a particular server might be based on a username, the client's IP address, or random assignment. Because the client's perceived address can change as a result of DHCP, network address translation, and web proxies, IP-based assignment may be unreliable. Random assignments must be remembered by the load balancer, which creates a storage burden. If the load balancer is replaced or fails, this information may be lost, and assignments may need to be deleted after a timeout period or during periods of high load to avoid exceeding the space available for the assignment table. The random assignment method also requires that clients maintain some state, which can be a problem, for example when a web browser has disabled the storage of cookies. Sophisticated load balancers use multiple persistence techniques to avoid some of the shortcomings of any one method.
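
One common storage-free way to implement IP-based persistence is to hash the client's address onto the server list, so the same address always maps to the same backend with no assignment table at all. A sketch with a hypothetical pool (note that it inherits the address-change problem described above, and most clients are remapped whenever the pool changes size):

# Stateless stickiness: hash the client IP onto the server pool.
import hashlib

SERVERS = ["backend-a", "backend-b", "backend-c"]  # hypothetical pool

def sticky_pick(client_ip):
    digest = hashlib.sha256(client_ip.encode()).digest()
    # The same client address always lands on the same server.
    return SERVERS[int.from_bytes(digest[:8], "big") % len(SERVERS)]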

Another solution is to keep the per-session data in a database. Generally this is bad for performance, since it increases the load on the database: the database is best used to store information less transient than per-session data. To prevent the database from becoming a single point of failure, and to improve scalability, it is often replicated across multiple machines, and load balancing is used to spread the query load across those replicas. Microsoft's ASP.NET State Server technology is an example of a session database: all servers in a web farm store their session data on the State Server, and any server in the farm can retrieve the data.
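
The pattern is simply that every backend reads and writes session state through one shared store instead of its own memory. A minimal sketch, with a hypothetical in-process class standing in for the replicated database or State Server:

# Externalized sessions: any server in the farm can handle any request,
# because state lives in a shared store, not on an individual backend.
class SessionStore:
    def __init__(self):
        self._data = {}  # in production: a replicated database

    def get(self, session_id):
        return self._data.get(session_id, {})

    def put(self, session_id, state):
        self._data[session_id] = state

store = SessionStore()

def handle_request(session_id):
    state = store.get(session_id)             # fetch from the shared store
    state["hits"] = state.get("hits", 0) + 1  # mutate per-session data
    store.put(session_id, state)              # write back for the next server
    return state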

Fortunately, there are more efficient approaches. In the very common case where the client is a web browser, per-session data can be stored in the browser itself. One technique is to use a browser cookie, suitably time-stamped and encrypted. Another is URL rewriting. Storing session data on the client is generally the preferred solution: the load balancer is then free to pick any backend server to handle a request. However, this method of state-data handling is poorly suited to some complex business-logic scenarios, where the session-state payload is large or recomputing it on the server with every request is not feasible. URL rewriting has major security issues, because the end user can easily alter the submitted URL and thus change session streams. Encrypted client-side cookies are arguably just as insecure: unless all transmission is over HTTPS, they are very easy to copy or to decrypt for man-in-the-middle attacks.
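
As a sketch of the client-stored cookie idea, the example below time-stamps the session data and makes it tamper-evident with an HMAC signature (signing rather than the encryption mentioned above, to keep the sketch short). The key and lifetime are illustrative assumptions, and a real deployment would also encrypt the value and send it only over HTTPS:

# A time-stamped, tamper-evident session cookie.
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical key shared by all backends
MAX_AGE = 3600                  # seconds before the cookie goes stale

def make_cookie(session_data):
    payload = f"{int(time.time())}|{session_data}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()  # 32 bytes
    return base64.urlsafe_b64encode(payload + sig).decode()

def read_cookie(cookie):
    raw = base64.urlsafe_b64decode(cookie)
    payload, sig = raw[:-32], raw[-32:]  # SHA-256 digest is 32 bytes
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered with by the client
    timestamp, _, data = payload.decode().partition("|")
    if time.time() - int(timestamp) > MAX_AGE:
        return None  # stale cookie
    return data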

Load balancer features

Hardware and software load balancers may have a variety of special features.

  • Asymmetric load: A ratio can be manually assigned to cause some backend servers to get a greater share of the workload than others. This is sometimes used as a crude way to account for some servers having more capacity than others, and may not always work as desired (a weighted-selection sketch appears after this list).
  • Priority activation: When the number of available servers drops below a certain number, or the load gets too high, standby servers can be brought online.
  • SSL offload and acceleration: Depending on the workload, processing the encryption and authentication requirements of an SSL request can become a major part of the demand on the web server's CPU; as the demand increases, users will see slower response times. To remove this demand from the web server, a load balancer may terminate SSL connections itself, and some load balancer appliances include specialized hardware to process SSL. When a load balancer terminates SSL connections, requests are converted from HTTPS to HTTP in the load balancer before being passed to the web server. As long as the load balancer itself is not overloaded, this feature does not noticeably degrade the performance perceived by end users. The downside of this approach is that all of the SSL processing is concentrated on a single device (the load balancer), which can become a new bottleneck; when the feature is not used, the SSL overhead is distributed among the web servers. For these reasons it is important to compare the total cost of a load balancer appliance, which is often quite high, with that of the web servers, which often run on inexpensive commodity hardware, before deciding to use this feature: adding a few web servers may be significantly cheaper than upgrading a load balancer. Also, some server vendors such as Oracle/Sun incorporate cryptographic acceleration hardware into models such as the T2000, which reduces the CPU burden and response time of SSL requests. One clear benefit of SSL offloading in the load balancer is that it enables the load balancer to do load balancing or content switching based on data in the HTTPS request.
  • Distributed Denial of Service (DDoS) attack protection: load balancers can provide features such as SYN cookies and delayed-binding (the back-end servers don't see the client until it finishes its TCP handshake) to mitigate SYN flood attacks and generally offload work from the servers to a more efficient platform.
  • HTTP compression: reduces the amount of data to be transferred for HTTP objects by using the gzip compression available in all modern web browsers. The larger the response and the farther away the client, the more this feature improves response times. The tradeoff is that it puts additional CPU demand on the load balancer, and it could be done by the web servers instead.
  • TCP offload: different vendors use different terms for this, but the idea is that normally each HTTP request from each client is a different TCP connection. This feature utilizes HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the back-end servers.
  • TCP buffering: the load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the server to move on to other tasks.
  • Direct Server Return: an option for asymmetrical load distribution, where request and reply have different network paths.
  • Health checking: the balancer will poll servers for application layer health and remove failed servers from the pool.
  • HTTP caching: the load balancer can store static content so that some requests can be handled without contacting the web servers.
  • Content filtering: some load balancers can arbitrarily modify traffic on the way through.
  • HTTP security: some load balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so end users can't manipulate them.
  • Priority queuing: also known as rate shaping, the ability to give different priority to different traffic.
  • Content-aware switching: most load balancers can send requests to different servers based on the URL being requested.
  • Client authentication: authenticate users against a variety of authentication sources before allowing them access to a website.
  • Programmatic traffic manipulation: at least one load balancer allows the use of a scripting language to allow custom load balancing methods, arbitrary traffic manipulations, and more.
  • Firewall: direct connections to backend servers are prevented, for network security reasons.
  • Intrusion prevention system: offers application-layer security in addition to the network/transport-layer security offered by a firewall.
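
As referenced in the asymmetric-load item above, a manually assigned ratio reduces to a simple weighted selection rule; a sketch with hypothetical server names and weights:

# Asymmetric load: servers with a larger assigned ratio receive a
# proportionally larger share of requests.
import random

WEIGHTS = {"big-server": 3, "small-server-1": 1, "small-server-2": 1}

def pick_weighted():
    servers = list(WEIGHTS)
    # random.choices draws in proportion to the weights, so
    # "big-server" gets roughly 3 of every 5 requests.
    return random.choices(servers, weights=[WEIGHTS[s] for s in servers])[0]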

Use in telecommunications

Load balancing can be useful in applications with redundant communications links. For example, a company may have multiple Internet connections ensuring network access if one of the connections fails.

A failover arrangement would mean that one link is designated for normal use, while the second link is used only if the primary link fails.

Using load balancing, both links can be in use all the time. A device or program monitors the availability of all links and selects the path for sending packets. Use of multiple links simultaneously increases the available bandwidth.

Many telecommunications companies have multiple routes through their networks or to external networks. They use sophisticated load balancing to shift traffic from one path to another to avoid network congestion on any particular link, and sometimes to minimize the cost of transit across external networks or improve network reliability.

Another use of load balancing is in network monitoring. Load balancers can split huge data flows into several subflows, feeding several network analyzers that each read a part of the original data. This is very useful for monitoring fast networks like 10 Gigabit Ethernet or STM-64, where complex processing of the data may not be possible at wire speed.

Relationship to failover

Load balancing is often used to implement failover, the continuation of a service after the failure of one or more of its components. The components are monitored continually (e.g., web servers may be monitored by fetching known pages), and when one becomes non-responsive the load balancer is informed and no longer sends traffic to it. When a component comes back online, the load balancer begins to route traffic to it again. For this to work, there must be at least one component in excess of the service's capacity. This is much less expensive and more flexible than failover approaches where a single live component is paired with a single backup component that takes over in the event of a failure. Some RAID systems can also use a hot spare for a similar effect.
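
A minimal version of that monitoring loop, in Python, fetches a known page from each backend and adjusts the live pool accordingly; the URLs, poll interval, and timeout are illustrative assumptions:

# Failover monitor: drop servers that stop answering, restore them
# when they answer again.
import time
import urllib.request

BACKENDS = ["http://10.0.0.1/health", "http://10.0.0.2/health"]
healthy = set(BACKENDS)  # the pool the balancer actually routes to

def check(url):
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    for url in BACKENDS:
        if check(url):
            healthy.add(url)      # back online: route traffic again
        else:
            healthy.discard(url)  # non-responsive: stop sending traffic
    time.sleep(5)                 # poll interval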

References

  1. IPv4 Address Record (A)
