Scalable parallelism


Scalable parallelism refers to software for which increasing numbers of processors can be employed to solve correspondingly larger problems, i.e., software for which Gustafson's law holds. Consider a program whose execution time is dominated by a loop that updates each element of an array. If all iterations can be executed concurrently (as a parallel loop), it is often possible to make effective use of twice as many processors for a problem of array size 2N as for one of array size N. As this example suggests, scalable parallelism is typically a form of data parallelism.
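A minimal sketch of such a parallel loop, written here in C with OpenMP (the per-element computation, problem sizes, and timing harness are illustrative choices, not part of the original article):

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    /* Update every element of the array. The iterations carry no
       dependences on one another, so they may all run concurrently. */
    void update(double *a, long n) {
        #pragma omp parallel for
        for (long i = 0; i < n; i++) {
            a[i] = a[i] * 2.0 + 1.0;   /* any independent per-element work */
        }
    }

    int main(int argc, char **argv) {
        long n = (argc > 1) ? atol(argv[1]) : 1000000L;  /* problem size N */
        double *a = malloc(n * sizeof *a);
        for (long i = 0; i < n; i++) a[i] = (double)i;

        double t0 = omp_get_wtime();
        update(a, n);
        double t1 = omp_get_wtime();

        printf("n = %ld, threads = %d, time = %f s\n",
               n, omp_get_max_threads(), t1 - t0);
        free(a);
        return 0;
    }

Compiled with OpenMP support (e.g., gcc -fopenmp), doubling both the array size (N to 2N) and the number of threads should leave the wall-clock time roughly unchanged; this scaled-speedup behavior is what Gustafson's law describes.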

See also