Stream processing

Stream processing is a computer programming paradigm, related to SIMD, that allows some applications to more easily exploit a limited form of parallel processing. Such applications can use multiple computational units, such as the floating point units on a GPU, without explicitly managing allocation, synchronization, or communication among those units.

The stream processing paradigm simplifies parallel software and hardware by restricting the parallel computation that can be performed. Given a set of data (a "stream"), a series of operations ("kernel functions") are applied to each element in the stream. "Uniform streaming", where one kernel function is applied to all elements in the stream, is typical. Kernel functions are usually pipelined, and local on-chip memory is reused to minimize external memory bandwidth. Since the kernel and stream abstractions expose data dependencies, compiler tools can fully automate and optimize on-chip management tasks. Stream processing hardware can use scoreboarding, for example, to launch DMAs at runtime, when dependencies become known. The elimination of manual DMA management reduces software complexity, and the elimination of hardware caches reduces the amount of die area not dedicated to computational units such as ALUs.
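The "uniform streaming" model described above can be sketched in a few lines. This is an illustrative sketch, not a real stream-processing API: the names `run_kernel` and `scale_and_offset` are invented for the example, and an ordinary list stands in for a hardware stream.

```python
# A minimal sketch of "uniform streaming": the same kernel function is
# applied to every element of an input stream, producing an output stream.
# The names `run_kernel` and `scale_and_offset` are illustrative only.

def scale_and_offset(x):
    """Kernel: a pure per-element function with no cross-element state."""
    return 2.0 * x + 1.0

def run_kernel(kernel, stream):
    # Because the kernel carries no dependencies between elements, a
    # compiler or runtime is free to process elements in parallel
    # (e.g. across the floating point units of a GPU).
    return [kernel(x) for x in stream]

input_stream = [0.0, 1.0, 2.0, 3.0]
output_stream = run_kernel(scale_and_offset, input_stream)
# output_stream == [1.0, 3.0, 5.0, 7.0]
```

The point of the restriction is visible here: since `run_kernel` sees the whole data-parallel computation at once, allocation and scheduling decisions can be made by tools rather than by the programmer.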


Stream processing is essentially a compromise, driven by a data-centric model that works very well for traditional DSP or GPU-type applications (such as image, video and digital signal processing) but less so for general purpose processing with more randomized data access (such as databases). Sacrificing some flexibility in the model allows easier, faster and more efficient execution. Depending on the context, a processor design may be tuned for maximum efficiency or trade efficiency for flexibility.

Stream processing is especially suitable for applications that exhibit three characteristics:

  • Compute Intensity, the number of arithmetic operations per I/O or global memory reference. In many signal processing applications today it is well over 50:1 and increasing with algorithmic complexity.
  • Data Parallelism exists in a kernel if the same function is applied to all records of an input stream and a number of records can be processed simultaneously without waiting for results from previous records.
  • Data Locality is a specific type of temporal locality common in signal and media processing applications where data is produced once, read once or twice later in the application, and never read again. Intermediate streams passed between kernels as well as intermediate data within kernel functions can capture this locality directly using the stream processing programming model.
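The data-parallelism and producer-consumer locality characteristics above can be sketched with two pipelined kernels. This is an illustrative sketch, assuming generator-based kernels named `kernel_square` and `kernel_accumulate` (invented for the example) to model on-chip streaming between pipeline stages:

```python
# Sketch of producer-consumer locality between pipelined kernels: each
# intermediate value is produced once, consumed exactly once by the next
# kernel, and never revisited, so it need never spill to external memory.
# Python generators stand in for on-chip buffers between stages.

def kernel_square(stream):
    """First pipeline stage: per-element, data-parallel kernel."""
    for x in stream:
        yield x * x          # intermediate value produced once

def kernel_accumulate(stream):
    """Second pipeline stage: consumes each intermediate exactly once."""
    total = 0.0
    for x in stream:
        total += x
    return total

result = kernel_accumulate(kernel_square(iter([1.0, 2.0, 3.0])))
# result == 14.0
```

Because the intermediate stream flows directly from one kernel to the next, the full set of squared values never exists at once, which is exactly the locality pattern the stream model is designed to capture.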
