Data parallelism

Data parallelism (also known as loop-level parallelism) is a form of parallelization of computing across multiple processors in parallel computing environments. Data parallelism focuses on distributing the data across different parallel computing nodes. It contrasts with task parallelism, another form of parallelism.

Description

In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.

For instance, consider a 2-processor system (CPUs A and B) in a parallel environment, and suppose we wish to perform a task on some data d. We can tell CPU A to perform the task on one part of d and CPU B on another part simultaneously, thereby reducing the duration of the execution. The data can be assigned using conditional statements, as described below. As a specific example, consider adding two matrices. In a data-parallel implementation, CPU A could add all elements from the top half of the matrices, while CPU B could add all elements from the bottom half. Since the two processors work in parallel, the matrix addition would take roughly half the time of performing the same operation in serial on one CPU alone.
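A minimal sketch of this idea in Java follows; the thread setup, the 4×4 matrix size, and the addRows helper are illustrative assumptions, not part of any standard API:

   // Two threads add two matrices: one takes the top half of the rows,
   // the other the bottom half.
   public class MatrixAdd {
       // Each thread runs the same code on a different slice of the rows.
       static void addRows(double[][] a, double[][] b, double[][] c, int lo, int hi) {
           for (int i = lo; i < hi; i++)
               for (int j = 0; j < a[i].length; j++)
                   c[i][j] = a[i][j] + b[i][j];
       }

       public static void main(String[] args) throws InterruptedException {
           int n = 4;
           double[][] a = new double[n][n], b = new double[n][n], c = new double[n][n];
           Thread cpuA = new Thread(() -> addRows(a, b, c, 0, n / 2));   // top half
           Thread cpuB = new Thread(() -> addRows(a, b, c, n / 2, n));   // bottom half
           cpuA.start(); cpuB.start();
           cpuA.join(); cpuB.join();
       }
   }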

Data parallelism emphasizes the distributed (parallelized) nature of the data, as opposed to the processing (task parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.

Example

The program below, expressed in pseudocode—which applies some arbitrary operation, foo, to every element in the array d—illustrates data parallelism:[nb 1]

if CPU = "a"
   lower_limit := 1
   upper_limit := round(d.length / 2)
else if CPU = "b"
   lower_limit := round(d.length / 2) + 1
   upper_limit := d.length

for i from lower_limit to upper_limit by 1
   foo(d[i])

If the above example program is executed on a 2-processor system, the runtime environment may execute it as follows:

  • In an SPMD system, both CPUs will execute the code.
  • In a parallel environment, both will have access to d.
  • A mechanism is presumed to be in place whereby each CPU will create its own copy of lower_limit and upper_limit that is independent of the other.
  • The if clause differentiates between the CPUs: CPU "a" takes the if branch and CPU "b" the else if branch, so each ends up with its own values of lower_limit and upper_limit.
  • Both CPUs then execute foo(d[i]), but because each has different limit values, they operate on different parts of d simultaneously, thereby distributing the task between them. This is faster than doing the whole job on a single CPU; a runnable Java sketch of this scheme follows the list.
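A minimal runnable translation of the pseudocode into Java might look as follows; the thread names "a" and "b" stand in for the CPU variable, and the increment in foo is an arbitrary stand-in operation (both are assumptions for illustration):

   // SPMD sketch: both threads execute the same work() method; the
   // thread's id selects which half of the shared array d it processes.
   public class Spmd {
       static double[] d = new double[8];      // shared data (size is illustrative)

       static void foo(int i) { d[i] += 1; }   // arbitrary stand-in operation

       static void work(String cpu) {
           int lowerLimit, upperLimit;          // private to each thread
           if (cpu.equals("a")) {
               lowerLimit = 0;                  // 0-based, unlike the pseudocode
               upperLimit = d.length / 2;       // exclusive bound
           } else {
               lowerLimit = d.length / 2;
               upperLimit = d.length;
           }
           for (int i = lowerLimit; i < upperLimit; i++)
               foo(i);
       }

       public static void main(String[] args) throws InterruptedException {
           Thread a = new Thread(() -> work("a"));
           Thread b = new Thread(() -> work("b"));
           a.start(); b.start();
           a.join(); b.join();
       }
   }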

This concept can be generalized to any number of processors. However, when the number of processors increases, it may be helpful to restructure the program in a similar way (where cpuid is an integer between 1 and the number of CPUs, and acts as a unique identifier for every CPU):

for i from cpuid to d.length by number_of_cpus
   foo(d[i])

For example, on a 2-processor system CPU A (cpuid 1) will operate on odd entries and CPU B (cpuid 2) will operate on even entries.
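This cyclic (round-robin) distribution can also be sketched in Java; the thread creation and the choice of increment as a stand-in for foo are illustrative assumptions:

   // Cyclic distribution sketch: each of numberOfCpus threads starts at its
   // own 1-based cpuid and strides through d in steps of numberOfCpus.
   public class Cyclic {
       public static void main(String[] args) throws InterruptedException {
           double[] d = new double[10];
           int numberOfCpus = 2;
           Thread[] threads = new Thread[numberOfCpus];
           for (int cpu = 1; cpu <= numberOfCpus; cpu++) {
               final int cpuid = cpu;
               threads[cpu - 1] = new Thread(() -> {
                   // 1-based cpuid as in the pseudocode; d[i - 1] maps it
                   // onto Java's 0-based array indexing
                   for (int i = cpuid; i <= d.length; i += numberOfCpus)
                       d[i - 1] += 1;           // stand-in for foo(d[i])
               });
               threads[cpu - 1].start();
           }
           for (Thread t : threads) t.join();
       }
   }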

JVM Example

As in the previous example, data parallelism is also possible on the Java Virtual Machine (JVM), for example using Ateji PX, an extension of Java.

The code below illustrates data parallelism on the JVM: branches in a parallel composition, introduced by the || operator,[1] can be quantified. This is used to apply an operation to all elements of an array or a collection:

[
   // increment all array elements in parallel
   || (int i : N) array[i]++;
]

The equivalent sequential code would be:

[
   // increment all array elements one after the other
   for(int i : N) array[i]++;
]

Quantification can introduce an arbitrary number of generators (iterators) and filters. For example, here is how we would update the upper-left triangle of a matrix:

[
   || (int i : N, int j : N, if i + j < N) matrix[i][j]++;
]
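For comparison, a rough equivalent of both quantified examples can be sketched in standard Java using parallel streams (this assumes the per-element updates are independent; it is not Ateji PX syntax):

   import java.util.stream.IntStream;

   public class StreamSketch {
       public static void main(String[] args) {
           int N = 4;                        // illustrative size
           int[] array = new int[N];
           int[][] matrix = new int[N][N];

           // increment all array elements in parallel
           IntStream.range(0, N).parallel().forEach(i -> array[i]++);

           // update the upper-left triangle (i + j < N), parallel over rows
           IntStream.range(0, N).parallel().forEach(i -> {
               for (int j = 0; i + j < N; j++)
                   matrix[i][j]++;
           });
       }
   }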

Notes

  1. ^ Some input data (e.g. when d.length evaluates to 1 and round rounds towards zero; this is just an example, as there are no requirements on the type of rounding used) will lead to lower_limit being greater than upper_limit. It is assumed that the loop exits immediately (i.e. zero iterations occur) when this happens.

References

  1. ^ "Data Parallelism using Ateji PX, an extension of Java". http://www.ateji.com/px/patterns.html#data

