Remote Direct Memory Access

Remote Direct Memory Access (RDMA) allows data to move directly from the memory of one computer into that of another without involving either one's operating system. This permits high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters. RDMA builds on direct memory access (DMA), extending it across the network.

RDMA supports zero-copy networking by enabling the network adapter to transfer data directly to or from application memory, eliminating the need to copy data between application memory and the data buffers in the operating system. Such transfers require no work from the CPUs, no cache pollution, and no context switches, and they continue in parallel with other system operations. When an application performs an RDMA Read or Write request, the application data is delivered directly to the network, reducing latency and enabling fast message transfer.
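The difference between copying data into an intermediate buffer and exposing application memory directly can be illustrated in plain Python. This is a hypothetical sketch of the zero-copy idea, not a real RDMA API: `bytes()` stands in for a kernel-side staging copy, while a `memoryview` hands out the same application memory without any data movement.

```python
import time

payload = bytearray(64 * 1024 * 1024)  # 64 MiB application buffer

# Copy-based path: data is duplicated into a staging buffer first,
# as it would be when passing through operating-system socket buffers.
t0 = time.perf_counter()
staging = bytes(payload)               # full extra copy of the payload
copy_time = time.perf_counter() - t0

# Zero-copy path: a memoryview aliases the same memory; no bytes move.
t0 = time.perf_counter()
view = memoryview(payload)             # no data movement at all
zerocopy_time = time.perf_counter() - t0

assert view.obj is payload             # the view aliases the original buffer
assert zerocopy_time < copy_time       # skipping the copy is far cheaper
```

The analogy is loose (a real RDMA adapter reads application memory over the bus, it does not take a Python view), but it shows why eliminating the intermediate copy matters as buffers grow.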

This strategy presents several problems related to the fact that the target node is not notified of the completion of the request (one-sided communication). The common way to notify the target is to change a memory byte when the data has been delivered, but this requires the target to poll on that byte. Not only does the polling consume CPU cycles, but the memory footprint and the latency increase linearly with the number of possible peer nodes, which limits the use of RDMA in High-Performance Computing (HPC) in favor of MPI.
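The completion-byte scheme can be sketched with two threads sharing a buffer. This is a hypothetical simulation (threads and a `bytearray` stand in for two nodes and a registered memory region): the initiator writes the payload and then flips a "done" byte, and the target busy-polls that byte, burning CPU cycles until it changes.

```python
import threading
import time

region = bytearray(4096)   # stands in for a registered memory region
DONE = len(region) - 1     # last byte used as the completion flag

def initiator():
    """Simulated remote side: write payload, then the completion byte."""
    time.sleep(0.01)                  # simulate network latency
    region[:5] = b"hello"             # the one-sided "RDMA Write"
    region[DONE] = 1                  # completion byte written last

t = threading.Thread(target=initiator)
t.start()

# Target side: no interrupt or callback arrives, so it must poll.
while region[DONE] == 0:
    pass                              # CPU cycles consumed while waiting
t.join()

assert bytes(region[:5]) == b"hello"
```

With many peers, the target needs one such flag (and, in the worst case, one polling loop) per possible sender, which is the linear memory and latency growth described above.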

The send/receive model used by other zero-copy HPC interconnects, such as Myrinet or Quadrics, does not have any of these problems and offers comparable performance, since their native programming interfaces are very similar to MPI.

RDMA reduces protocol overhead, which would otherwise consume network capacity, reduce performance, limit how fast an application can get the data it needs, and restrict the size and scalability of a cluster.

However, some overhead may remain because of the need for memory registration. Zero-copy protocols usually require that the memory areas involved in a communication stay resident in main memory, at least for the duration of the transfer; in particular, that memory must not be swapped out, or the DMA engine might read stale data and risk memory corruption. The usual solution is to pin the memory so that it is kept in main memory, but this introduces an often unexpected cost: registration is expensive, and the added latency grows linearly with the size of the data. Several approaches have been adopted to address this issue:
* deferring memory registration out of the critical path, thereby hiding part of the latency increase;
* caching registrations so that memory stays pinned as long as possible, reducing the overhead for applications that communicate repeatedly from the same memory area;
* pipelining memory registration and data transfer, as done on InfiniBand or Myrinet, for instance;
* avoiding the need for registration altogether, as Quadrics high-speed networks do.
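The second technique above, a registration cache, can be sketched as follows. This is a hypothetical simulation in which pinning is represented as an expensive memoized step; the function names and the cost model are illustrative, not any real RDMA library's API.

```python
registration_cache = {}
registrations_performed = 0

def register(buffer_id, size):
    """Simulate expensive memory registration (pinning) of a region."""
    global registrations_performed
    registrations_performed += 1      # each call models a costly pin
    return {"id": buffer_id, "size": size, "pinned": True}

def send(buffer_id, size):
    """Transfer from a buffer, registering it only on first use."""
    key = (buffer_id, size)
    if key not in registration_cache:  # cache miss: pay the pinning cost
        registration_cache[key] = register(buffer_id, size)
    return registration_cache[key]     # cache hit: reuse the registration

# Three transfers from the same region trigger only one registration.
for _ in range(3):
    send("app_buffer", 4096)

assert registrations_performed == 1
```

Real registration caches must also invalidate entries when the application frees or remaps the memory, which is the main source of complexity this sketch omits.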

RDMA's adoption has also been limited by the need to install a different networking infrastructure. Newer standards enable RDMA over Ethernet at the physical layer with TCP/IP as the transport, combining the performance and latency advantages of RDMA with a low-cost, standards-based solution. The RDMA Consortium and the DAT Collaborative have played key roles in the development of RDMA protocols and APIs for consideration by standards groups such as the Internet Engineering Task Force and the Interconnect Software Consortium. Software vendors such as Oracle Corporation support these APIs in their products, and network adapters that implement RDMA over Ethernet are being developed.

Common RDMA implementations include the Virtual Interface Architecture, InfiniBand, and iWARP.


External links

* RDMA Consortium
* A Critique of RDMA for High-Performance Computing
