Explicit parallelism

From Wikipedia, the free encyclopedia

In computer programming, explicit parallelism is the representation of concurrent computations by means of primitives in the form of special-purpose directives or function calls. Most parallel primitives are related to process synchronization, communication, or task partitioning. Because they seldom contribute to actually carrying out the intended computation of the program, their computational cost is often counted as parallelization overhead.
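As a minimal sketch of such primitives, the following C program uses the POSIX threads (pthreads) API to sum an array with two threads. The calls to pthread_create and pthread_join are the explicit parallel primitives: they partition the task and synchronize the workers, but perform none of the summation itself, so their cost is pure parallelization overhead. (The names here, such as sum_part and NTHREADS, are illustrative choices, not part of the original text.)

#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 2

static double data[N];
static double partial[NTHREADS];

/* Each thread sums its own slice of the array (task partitioning). */
static void *sum_part(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS);
    long hi = (id + 1) * (N / NTHREADS);
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    /* Explicit primitive: thread creation. */
    for (long id = 0; id < NTHREADS; id++)
        pthread_create(&threads[id], NULL, sum_part, (void *)id);

    /* Explicit primitive: synchronization. Neither loop computes
       part of the sum itself; both are parallelization overhead. */
    double total = 0.0;
    for (long id = 0; id < NTHREADS; id++) {
        pthread_join(threads[id], NULL);
        total += partial[id];
    }

    printf("sum = %f\n", total);   /* expected: 1000000.0 */
    return 0;
}

Compiled with, for example, gcc sum.c -lpthread, the program divides the work and joins the results exactly as the programmer specified.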

The advantage of explicit parallel programming is that it gives the programmer absolute control over the parallel execution. A skilled parallel programmer can exploit explicit parallelism to produce very efficient code. However, programming with explicit parallelism is often difficult, especially for non-computing professionals, because of the extra work involved in planning the task division and synchronization of concurrent processes.

In some instances, explicit parallelism may be avoided with the use of an optimizing compiler that automatically extracts the parallelism inherent in the computations (see implicit parallelism).
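For contrast, a hedged sketch of the implicit approach: the same summation written as an ordinary sequential loop contains no parallel primitives at all, and an auto-parallelizing compiler (for example, GCC with its -ftree-parallelize-loops option) may distribute it across threads on its own. Whether a given compiler actually parallelizes the loop depends on its analysis and flags.

#include <stdio.h>

#define N 1000000

static double data[N];

int main(void) {
    double total = 0.0;

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    /* No explicit primitives appear here: a parallelizing compiler
       may split this reduction across threads by itself, e.g.
       gcc -O2 -ftree-parallelize-loops=2 sum.c */
    for (long i = 0; i < N; i++)
        total += data[i];

    printf("sum = %f\n", total);
    return 0;
}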

Programming with explicit parallelism