TCP Offload Engine

TCP Offload Engine (TOE) is a technology used in network interface cards (NICs) to offload processing of the entire TCP/IP stack to the network controller. It is primarily used with high-speed network interfaces, such as Gigabit Ethernet and 10 Gigabit Ethernet, where the processing overhead of the network stack becomes significant.

The term TOE is often used to refer to the NIC itself, although strictly it refers only to the integrated circuit on the card that processes the TCP headers. TOEs are often suggested as a way to reduce the overhead associated with newer protocols such as iSCSI.

Purpose

TCP was originally designed for unreliable, low-speed networks (such as early dial-up modems), but with the growth of the Internet in terms of backbone transmission speeds (Optical Carrier, Gigabit Ethernet and 10 Gigabit Ethernet links) and faster, more reliable access mechanisms (such as Digital Subscriber Line and cable modems), it is now often used in data centers and desktop PC environments at speeds over 1 gigabit per second. TCP software implementations on host systems require extensive computing power: full-duplex gigabit TCP communication using software processing alone can consume more than 80% of a 2.4 GHz Pentium 4 processor (see Freed Up Processor Cycles below), leaving little or no processing capacity for the applications running on the system.

Because TCP is a connection-oriented protocol, it adds complexity and processing overhead. These include:
* Connection establishment using the three-way handshake, which involves a number of messages passing between the connection initiator and the connection responder before any data can flow between the two endpoints.
* Acknowledgment of packets as they are received by the far end, adding to the message flow between the endpoints and thus to the protocol load.
* Checksum and sequence number calculations, a further burden on a general-purpose CPU (see the checksum sketch after this list).
* Sliding-window calculations for packet acknowledgment and congestion control.
* Connection termination.
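
To make the per-packet burden concrete, the following is a minimal sketch of the standard Internet checksum algorithm (RFC 1071) that a host stack must run over every segment when no offload hardware is present. The function name and sample payload are illustrative, not taken from any particular stack:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Minimal sketch of the RFC 1071 Internet checksum: sum the data
     * as 16-bit words, fold the carries back in, and return the one's
     * complement. A host CPU runs this (or an equivalent) over every
     * TCP segment unless the NIC offloads the work. */
    static uint16_t internet_checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;

        while (len > 1) {                     /* 16-bit words */
            sum += (uint32_t)data[0] << 8 | data[1];
            data += 2;
            len  -= 2;
        }
        if (len == 1)                         /* odd trailing byte */
            sum += (uint32_t)data[0] << 8;

        while (sum >> 16)                     /* end-around carry */
            sum = (sum & 0xFFFF) + (sum >> 16);

        return (uint16_t)~sum;
    }

    int main(void)
    {
        const uint8_t payload[] = "example TCP segment payload";
        printf("checksum = 0x%04x\n",
               internet_checksum(payload, sizeof payload - 1));
        return 0;
    }

A TOE moves exactly this kind of repetitive per-byte work off the host CPU and into dedicated silicon.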

Moving some or all of these aspects to dedicated hardware, a TCP Offload Engine, frees the system's main CPU for other tasks. As of 2008, very few consumer network interface cards support TOE; however, the number of servers with either a TOE-enabled network interface card or a TOE-enabled motherboard chip is increasing.

Freed Up Processor Cycles

A generally accepted rule of thumb is that 1 hertz of CPU processing is required to send or receive 1 bit/s of TCP/IP data. For example, 5 Gbit/s (625 MB/s) of network traffic requires 5 GHz of CPU processing, meaning that two entire cores of a 2.5 GHz multi-core processor are needed just to handle the TCP/IP processing associated with that traffic. Because Ethernet (10GbE in this example) is bidirectional, it is possible to send and receive 10 Gbit/s simultaneously (for an aggregate throughput of 20 Gbit/s); using the 1 Hz/bit rule, this equates to eight 2.5 GHz cores. (Few if any current servers need to move 10 Gbit/s in both directions, but not long ago 1 Gbit/s full duplex was thought to be more than enough bandwidth.)
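
As a quick check of this rule of thumb, the short sketch below reproduces the arithmetic for the figures above; the 1 Hz/bit constant is the rule of thumb itself, not a measured value:

    #include <stdio.h>

    /* Rule-of-thumb sizing: ~1 Hz of CPU per bit/s of TCP/IP traffic. */
    int main(void)
    {
        double core_ghz    = 2.5;               /* per-core clock, GHz  */
        double link_gbps   = 10.0;              /* 10GbE, one direction */
        double duplex_gbps = 2.0 * link_gbps;   /* send + receive       */
        double cores       = duplex_gbps / core_ghz;

        printf("%.0f Gbit/s aggregate -> %.0f cores at %.1f GHz\n",
               duplex_gbps, cores, core_ghz);   /* 20 Gbit/s -> 8 cores */
        return 0;
    }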

Many of the CPU cycles used for TCP/IP processing are freed up by TCP/IP offload and may be used by the CPU (usually a server CPU) to perform other tasks, such as file system processing (in a file server) or indexing (in a backup media server). In other words, a server with TCP/IP offload can do more server work than a server with ordinary NICs.

Reduction of PCI traffic

In addition to the protocol overhead that TOE can address, it can also address some architectural issues that affect a large percentage of host-based (server and PC) endpoints. Most endpoint hosts are PCI-bus based; PCI provides a standard interface for adding peripherals such as network interfaces to servers and PCs. PCI is inefficient for transferring small bursts of data from host memory across the bus to the network interface ICs, but its efficiency improves as the data burst size increases. The TCP protocol generates a large number of small packets (e.g., acknowledgments), and because these are typically created on the host CPU and transmitted across the PCI bus and out the network physical interface, they reduce the host computer's I/O throughput.

A TOE solution, located on the network interface on the other side of the PCI bus from the host CPU, can address this I/O efficiency issue: the data to be sent across the TCP connection can be handed to the TOE in large bursts, with none of the smaller TCP packets having to traverse the PCI bus.
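
The sketch below illustrates the idea in simplified form: instead of crossing the bus once per small packet, the host accumulates outgoing data in a staging buffer and hands the TOE one large burst. The buffer size and the dma_burst_to_toe() function are hypothetical placeholders for whatever DMA mechanism a real driver would use:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define BURST_SIZE 65536        /* hypothetical large DMA burst */

    static uint8_t staging[BURST_SIZE];
    static size_t  staged;

    /* Hypothetical stand-in for one large DMA transfer to the offload
     * engine; a real driver would program the device here. */
    static void dma_burst_to_toe(const uint8_t *buf, size_t len)
    {
        (void)buf;
        (void)len;
    }

    /* Queue application data and cross the PCI bus only when a full
     * burst has accumulated, not once per small TCP segment. */
    void toe_send(const uint8_t *data, size_t len)
    {
        while (len > 0) {
            size_t room = BURST_SIZE - staged;
            size_t n = len < room ? len : room;
            memcpy(staging + staged, data, n);
            staged += n;
            data   += n;
            len    -= n;
            if (staged == BURST_SIZE) {
                dma_burst_to_toe(staging, staged); /* one bus crossing */
                staged = 0;
            }
        }
    }

A real implementation would also flush a partially filled buffer on a timer or at connection close; that is omitted here for brevity.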

History

One of the first recorded patents for the concept of network stack offload (USPTO #5,355,453, [http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=5355453.PN.&OS=PN/5355453&RS=PN/5355453 United States Patent: 5355453 "Parallel I/O network file server architecture"]) was issued to Auspex Systems in the early 1990s. The approach became known as Functional Multiprocessing (FMP). Under FMP, network processing, file processing and storage processing are each executed on a separate functional processing card, as opposed to symmetric multiprocessing, which executes all three functions, together with user applications and all other processor tasks, on increasing numbers of general-purpose processors. The Auspex network processor performed full UDP offload; UDP is much simpler than TCP, but the basic concept of network stack offload still applied. The founder of Auspex Systems, Larry Boucher, and a number of Auspex engineers founded Alacritech in 1997 with the idea of extending the concept of network stack offload to TCP and implementing it in custom silicon.

Alacritech introduced the first parallel-stack full offload network card in early 1999; the company's SLIC (Session Layer Interface Card) was the predecessor to its current TOE offerings. Alacritech holds 26 patents in the area of TCP/IP offload. Patent #6,247,060, "Passing a communication block from host to a local device such that a message is processed on the device" ([http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=6247060.PN.&OS=PN/6247060&RS=PN/6247060 United States Patent: 6247060]), was issued on June 12, 2001. In 2005 Microsoft licensed Alacritech's patent base and, together with Alacritech, created the partial TCP offload architecture that has become known as TCP chimney offload (see Types of TCP/IP Offload below). TCP chimney offload centers on the Alacritech communication block passing patent. At the same time, Broadcom also obtained a license to build TCP chimney offload chips.

An original TOE implementation was developed, and a patent application filed (USPTO application #20040042487, [http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=1&f=G&l=50&d=PG01&p=1&S1=20040042487.PGNR.&OS=DN/20040042487&RS=DN/20040042487 United States Patent Application: 20040042487 "Network traffic accelerator system and method"]), by Valentin Ossman, who later founded Tehuti Networks Ltd. based on his patented technology. A patent titled "TCP/IP offload device with reduced sequential processing" ([http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=6,996,070.PN.&OS=PN/6,996,070&RS=PN/6,996,070 United States Patent: 6996070], retrieved 2008-02-20) was granted on February 7, 2006, and another titled "System and method for TCP/IP offload independent of bandwidth delay product" ([http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=7,313,623.PN.&OS=PN/7,313,623&RS=PN/7,313,623 United States Patent: 7313623], retrieved 2008-02-20) on December 25, 2007. Valentin Ossman is also credited with introducing the acronym TOE.

Types of TCP/IP Offload

Parallel Stack Full Offload

Parallel stack full offload gets its name from the concept of two parallel TCP/IP stacks. The first is the main host stack, included with the host operating system. The second, "parallel" stack is connected between the application layer (using the Internet protocol suite naming conventions) and the transport layer (TCP) by a "vampire tap". The vampire tap intercepts TCP connection requests made by applications and takes over TCP connection management as well as TCP data transfer. Many of the criticisms in the section below relate to this type of TCP offload.
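
One user-space way to illustrate the interception idea on Linux is an LD_PRELOAD shim that hooks connect(); this is only a sketch of the vampire-tap concept (a real TOE tap lives in the driver or kernel, and the logging here is purely illustrative):

    /* Build: gcc -shared -fPIC -o libtap.so tap.c -ldl
     * Use:   LD_PRELOAD=./libtap.so some_program */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <sys/socket.h>

    typedef int (*connect_fn)(int, const struct sockaddr *, socklen_t);

    /* Intercept connect() so the offload path can decide whether to
     * own this connection; here we only log and fall through to the
     * real host stack. A vampire tap would instead redirect matching
     * connections to the offload device. */
    int connect(int fd, const struct sockaddr *addr, socklen_t len)
    {
        static connect_fn real_connect;
        if (!real_connect)
            real_connect = (connect_fn)dlsym(RTLD_NEXT, "connect");

        fprintf(stderr, "tap: intercepted connect() on fd %d\n", fd);
        return real_connect(fd, addr, len);
    }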

HBA Full Offload

HBA full offload is found in iSCSI host bus adapters, which present themselves to the host system as disk controllers while connecting (via TCP/IP) to an iSCSI storage device. This type of TCP offload not only offloads TCP/IP processing but also offloads the iSCSI initiator function. Because the HBA appears to the host as a disk controller, it can only be used with iSCSI devices and is not appropriate for general TCP/IP offload.

TCP Chimney Partial Offload

TCP chimney offload addresses the major security criticism of parallel stack full offload. In partial offload, the main system stack controls all connections to the host. After a connection has been established between the local host (usually a server) and a foreign host (usually a client), the connection and its state are passed to the TCP offload engine. The heavy lifting of data transmission and reception is handled by the offload device; almost all TCP offload engines use some type of TCP/IP hardware implementation to perform the data transfer without host CPU intervention. When the connection is closed, the connection state is returned from the offload engine to the main system stack. Because the main system stack retains control of TCP connections, it can implement and control connection security.
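
The shape of that handoff is sketched below. The structure fields and function names are hypothetical; the real chimney interface is defined by the operating system and the NIC vendor:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical snapshot of an established connection's state. */
    struct tcp_conn_state {
        uint32_t snd_nxt;   /* next sequence number to send     */
        uint32_t rcv_nxt;   /* next sequence number expected    */
        uint16_t snd_wnd;   /* peer's advertised receive window */
        uint16_t mss;       /* negotiated maximum segment size  */
    };

    /* Stubs standing in for real driver calls (hypothetical names). */
    static bool toe_upload_connection(struct tcp_conn_state *st)
    {
        printf("host -> TOE: offloading at snd_nxt=%u\n",
               (unsigned)st->snd_nxt);
        return true;            /* the engine accepted the connection */
    }

    static void toe_download_connection(struct tcp_conn_state *st)
    {
        st->snd_nxt += 1000;    /* pretend the TOE moved some data */
        printf("TOE -> host: state returned, snd_nxt=%u\n",
               (unsigned)st->snd_nxt);
    }

    int main(void)
    {
        /* 1. The host stack completes the three-way handshake (not
         *    shown), so it owns connection setup and its security. */
        struct tcp_conn_state st = { 1000, 5000, 65535, 1460 };

        /* 2. The established connection is handed to the TOE, which
         *    then moves data without host CPU intervention.        */
        if (toe_upload_connection(&st))
            /* 3. At close, state returns to the host stack, which
             *    performs the TCP teardown itself.                 */
            toe_download_connection(&st);
        return 0;
    }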

Criticism

TOE has many vocal opponents, particularly in the F/LOSS community. Some of the criticisms ([http://www.linux-foundation.org/en/Net:TOE Net:TOE], an explanation of why Linux does not support TOE) include:
* "Security" - because TOE is implemented in hardware, patches must be applied to the TOE firmware, instead of just software, to address any security vulnerabilities found in a particular TOE implementation. This is further compounded by the newness and vendor-specificity of this hardware, as compared to a well tested TCP/IP stack as is found in an operating system that does not use TOE.
* "Limitations" of hardware - because connections are buffered and processed on the TOE chip, resource starvation can more easily occur as compared to the generous cpu and memory available to the operating system.
* "Complexity" - TOE breaks the assumption that kernels make about having access to all resources at all times - details such as memory used by open connections are not available with TOE. TOE also requires very large changes to a networking stack in order to be supported properly, and even when that is done, features like QoS and packet filtering typically do not work.
* "Proprietary" - TOE is implemented differently by each hardware vendor. This means more code must be rewritten to deal with the various TOE implementations, at a cost of the aforementioned complexity and, possibly, security. Furthermore, TOE firmware cannot be easily modified since it is closed-source.

Suppliers

Much of the current work on TOE technology is by manufacturers of 10 Gigabit Ethernet interface cards, such as Alacritech, Broadcom Corporation, Chelsio Communications, LeWiz Communications, Neterion Technologies, NetXen Inc. and Tehuti Networks Ltd.

See also

* large segment offload (LSO)
* large receive offload (LRO)
* Scalable Networking Pack

External links

* Article: [http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=154 TCP Offload to the Rescue] by Andy Currid at [http://www.acmqueue.com/ ACM Queue]
* [http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220040042487%22.PGNR.&OS=DN/20040042487&RS=DN/20040042487 Patent Application 20040042487] (retrieved 23 July 2006)
* [http://www.10gea.org/tcp-ip-offload-engine-toe.htm Introduction to the TCP/IP Offload Engine]

