Explicit parallelism

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Vanished user 09a18a8c3ed303b15ad9aa4fe245c66c (talk | contribs) at 05:46, 4 February 2024 (Programming languages that support explicit parallelism: ce). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In computer programming, explicit parallelism is the representation of concurrent computations by means of primitives in the form of operators, special-purpose directives, or function calls. Most parallel primitives are related to process synchronization, communication, or task partitioning. As they seldom contribute to actually carrying out the intended computation of the program but rather structure it, their computational cost is often counted as parallelization overhead.
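The primitives described above can be illustrated with a minimal sketch in Python, assuming a simple sum reduction as the workload; the function names (`partial_sum`, `parallel_sum`) and the chunking scheme are illustrative choices, not part of any standard. The partitioning, worker creation, and synchronization are all written out explicitly by the programmer:

```python
# Explicit parallelism: the programmer partitions the work, launches the
# workers, and synchronizes them through explicit function-call primitives.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # The actual intended computation: sum one chunk of the data.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Task partitioning (explicit): split the data into contiguous chunks.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Worker creation and synchronization (explicit): the executor's map
    # distributes the chunks and waits for every partial result.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(partial_sum, chunks))

print(parallel_sum(list(range(1000))))  # 499500
```

Note that only `partial_sum` performs the intended computation; the chunking and the executor calls merely structure it, which is the overhead the paragraph above refers to.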

The advantage of explicit parallel programming is the programmer's control over parallel execution. A skilled parallel programmer can exploit explicit parallelism to produce efficient code for a given target computing environment. However, programming with explicit parallelism is often difficult, especially for non-specialists, because of the extra work involved in planning the task division and the synchronization of concurrent processes.

In some instances, explicit parallelism may be avoided by using an optimizing compiler that automatically deduces the parallelism inherent in a computation, an approach known as implicit parallelism.
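For contrast, a sketch of the same reduction written with no parallel primitives at all: here any parallelism would have to be deduced by an auto-parallelizing compiler or runtime from the loop's structure alone. This is an illustrative example, not a claim about what any particular compiler does:

```python
# Implicit parallelism (candidate): a plain sequential reduction with no
# programmer-visible primitives. An auto-parallelizing system could
# recognize this loop as a reduction and distribute it itself.
def sequential_sum(data):
    total = 0
    for x in data:
        total += x
    return total

print(sequential_sum(list(range(1000))))  # 499500
```

The computation is identical to the explicit version; only the responsibility for discovering and structuring the parallelism has moved from the programmer to the tool.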

Programming languages that support explicit parallelism

Some of the programming languages that support explicit parallelism are: