- Congestive collapse
Congestive collapse (or congestion collapse) is a condition that a packet-switched computer network can reach, when little or no useful communication is happening due to congestion.
When a network is in such a condition, it has settled (under overload) into a stable state where traffic demand is high but little useful throughput is available, and there are high levels of packet delay and loss (caused by routers discarding packets because their output queues are too full).
Congestion collapse was identified as a possible problem as far back as 1984 (RFC 896, dated 6 January). It was first observed on the early Internet in October 1986, when the NSFnet phase-I backbone dropped three orders of magnitude from its capacity of 32 kbit/s to 40 bit/s, and continued to occur until end nodes started implementing Van Jacobson's congestion control between 1987 and 1988.
When more packets were sent than could be handled by intermediate routers, the intermediate routers discarded many packets, expecting the end points of the network to retransmit the information. However, early TCP implementations had poor retransmission behavior. When this packet loss occurred, the end points sent "extra" packets that repeated the information lost, doubling the data rate sent: exactly the opposite of what should be done during congestion. This pushed the entire network into a 'congestion collapse' in which most packets were lost and the resultant throughput was negligible.
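The feedback loop above can be made concrete with a toy simulation. Everything in this sketch is invented for illustration (it is not real TCP code): each round, senders offer fresh data plus retransmissions of whatever was lost in the previous round, and "goodput" counts only the fresh data that gets through.

```python
# Toy model of a bottleneck link (illustrative only, not real TCP).
# Each round the link delivers at most `capacity`; everything lost is
# retransmitted next round. Goodput counts only fresh data delivered.

def simulate(capacity, fresh_rate, rounds, backoff):
    """Return a list of per-round goodput values."""
    rate = fresh_rate          # fresh data offered per round
    backlog = 0.0              # lost data awaiting retransmission
    goodput = []
    for _ in range(rounds):
        offered = rate + backlog
        delivered_frac = min(1.0, capacity / offered)
        goodput.append(rate * delivered_frac)       # useful throughput
        backlog = offered * (1.0 - delivered_frac)  # everything lost is resent
        if backoff:
            if backlog > 0:
                rate = max(rate / 2, 1.0)           # slow down on loss
            else:
                rate = min(rate + 1, fresh_rate)    # probe upward again
    return goodput

naive = simulate(capacity=10, fresh_rate=20, rounds=30, backoff=False)
polite = simulate(capacity=10, fresh_rate=20, rounds=30, backoff=True)
```

Without backoff, the retransmission backlog grows without bound and goodput collapses toward zero; with backoff, the senders settle near the link capacity.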
Congestion collapse generally occurs at choke points in the network, where the total incoming bandwidth to a node exceeds the outgoing bandwidth. Connection points between a
local area network and a wide area network are the most likely choke points. A DSL modem is the most common small-network example, with between 10 and 1000 Mbit/s of incoming bandwidth and at most 8 Mbit/s of outgoing bandwidth.
The prevention of congestion collapse requires two major components:
# A mechanism in routers to reorder or drop packets under overload,
# End-to-end flow control mechanisms designed into the end points which respond to congestion and behave appropriately.
The correct end point behaviour is usually still to repeat dropped information, but to progressively slow the rate at which it is repeated. Provided all end points do this, the congestion lifts, the network returns to good use, and the end points all get a fair share of the available bandwidth. Other strategies, such as 'slow start', ensure that new connections don't overwhelm the router before congestion detection can kick in.
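The end-point behaviour described above (back off multiplicatively on loss, probe gently otherwise, and start new connections slowly) can be sketched roughly as follows. This is a simplified, Tahoe-style caricature; the function name, units, and thresholds are illustrative rather than taken from any real implementation.

```python
# Illustrative sketch of slow start plus congestion avoidance
# (simplified Tahoe-style behaviour, not a real TCP implementation).

def next_cwnd(cwnd, ssthresh, loss):
    """One round-trip update of (cwnd, ssthresh), both in segments."""
    if loss:
        ssthresh = max(cwnd // 2, 2)   # multiplicative decrease on loss
        cwnd = 1                       # restart from slow start
    elif cwnd < ssthresh:
        cwnd *= 2                      # slow start: exponential growth
    else:
        cwnd += 1                      # congestion avoidance: linear growth
    return cwnd, ssthresh
```

Iterating this update, a new connection ramps up exponentially from one segment, switches to gentle linear probing near the previous loss point, and halves its target whenever loss signals congestion.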
The most common router mechanisms used to prevent congestive collapse are fair queueing in its various forms, and random early detection (RED), where packets are randomly dropped before congestion collapse actually occurs, triggering the end points to slow transmission more gradually. Fair queueing is most useful in routers at choke points with a small number of connections passing through them. Larger routers must rely on RED.
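The RED idea can be sketched as follows. This is a minimal caricature: real RED maintains an exponentially weighted moving average of the queue length, and the thresholds and probability here are invented for illustration.

```python
import random

# Illustrative sketch of Random Early Detection (RED): the drop
# probability rises linearly with the average queue length between two
# thresholds. Parameter names and values are invented for this example.

def red_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Return True if the arriving packet should be dropped."""
    if avg_queue < min_th:
        return False                    # queue short: never drop
    if avg_queue >= max_th:
        return True                     # queue long: always drop
    # In between, drop with probability rising linearly up to max_p.
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p
```

Because drops begin early and probabilistically, different connections see loss at different times and slow down gradually, rather than all backing off at once when the queue overflows.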
Some end-to-end protocols are better behaved under congested conditions than others. TCP is perhaps the best behaved. The first TCP implementations to handle congestion well were developed in 1984, but it was not until Van Jacobson's inclusion of an open source solution in Berkeley UNIX (BSD) in 1988 that good TCP implementations became widespread.
UDP does not, in itself, have any congestion control mechanism at all. Protocols built atop UDP must handle congestion in their own way. Protocols atop UDP which transmit at a fixed rate, independent of congestion, can be troublesome. Real-time streaming protocols, including many
Voice over IP protocols, have this property. Thus, special measures, such as quality-of-service routing, must be taken to keep packets from being dropped from streams.
In general, congestion in pure datagram networks must be kept at the periphery of the network, where the mechanisms described above can handle it. Congestion in the Internet backbone is very difficult to deal with. Fortunately, cheap fiber-optic lines have reduced costs in the Internet backbone, which can thus be provisioned with enough bandwidth to (usually) keep congestion at the periphery.
Side effects of congestive collapse avoidance
The protocols that avoid congestive collapse are based on the idea that essentially all data loss on the internet is caused by congestion. This is true in nearly all cases; errors during transmission are rare on today's fiber-based internet. However, this assumption causes WiFi networks to have poor throughput in some cases, since wireless networks are susceptible to data loss for other reasons. TCP connections running over WiFi see the data loss, tend to believe that congestion is occurring when it isn't, and erroneously reduce the data rate sent.
The slow start protocol performs badly for short-lived connections. Older
web browsers would create many consecutive short-lived connections to the web server, and would open and close the connection for each file requested. This kept most connections in the slow start mode, which resulted in poor response time.
To avoid this problem, modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular web server.
* [http://tools.ietf.org/html/rfc2914 RFC 2914] - "Congestion Control Principles", Sally Floyd, September 2000
* [http://tools.ietf.org/html/rfc896 RFC 896] - "Congestion Control in IP/TCP", John Nagle, 6 January 1984
* "[http://ee.lbl.gov/papers/congavoid.pdf Congestion Avoidance and Control]", Van Jacobson and Michael J. Karels, November 1988
* Network congestion avoidance
* Sorcerer's Apprentice Syndrome
* Cascade failure (Internet)