Scalable parallelism


Scalable parallelism refers to software for which increasing numbers of processors can be employed to solve proportionally larger problems, i.e. software for which Gustafson's law holds. Consider a program whose execution time is dominated by a loop that updates each element of an array. If all iterations can be executed concurrently (as a parallel loop), it is often possible to make effective use of twice as many processors for a problem of array size 2N as for a problem of array size N. As in this example, scalable parallelism is typically a form of data parallelism. This form of parallelism is often the target of automatic parallelization of loops.
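
For illustration, the loop described above might be written as follows in C with OpenMP. This is only a sketch: the array name, the function name update_all, and the particular per-element update are assumptions chosen for the example, not taken from any specific program.

    #include <stddef.h>

    /* Sketch of a parallel loop in which every iteration updates only its
       own array element, so all n iterations may run concurrently.
       Doubling both n and the number of processors keeps the work per
       processor roughly constant, in the spirit of Gustafson's law. */
    void update_all(double *a, size_t n)
    {
        #pragma omp parallel for
        for (long i = 0; i < (long) n; i++) {
            a[i] = 2.0 * a[i] + 1.0;   /* placeholder per-element update */
        }
    }

Compiled with OpenMP enabled (e.g. gcc -fopenmp), the iterations are distributed across the available processors; an automatic parallelizer could derive the same schedule, because no iteration reads or writes another iteration's element.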

Distributed computing systems and non-uniform memory access architectures are typically the most easily scaled to large numbers of processors, and thus would seem a natural target for software that exhibits scalable parallelism. However, applications with scalable parallelism may not have parallelism of sufficiently coarse grain to run effectively on such systems (unless the software is embarrassingly parallel). In the example above, if each array element update requires only the prior value of that element, then N elements can be updated on N processors without any communication (the loop is embarrassingly parallel). If, however, each element's value depends on neighboring values, N processors can be used effectively for N values only if the communication costs do not overwhelm the saved computation time.
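
The neighbor-dependent case can be sketched the same way, again in C with OpenMP; the function name smooth, the array names, and the three-point averaging formula are illustrative assumptions. Each new value is computed from the previous values of an element and its two neighbors, so on a distributed-memory machine a processor that owns a block of the array must also obtain the boundary values held by adjacent processors.

    #include <stddef.h>

    /* Sketch of a neighbor-dependent update (a 1-D three-point stencil).
       Iterations remain independent because they read only the old array,
       but a processor that owns a block of the array must still obtain the
       boundary elements owned by its neighbors, so scaling to N processors
       pays off only while that communication stays cheaper than the
       computation it saves. */
    void smooth(const double *in, double *out, size_t n)
    {
        #pragma omp parallel for
        for (long i = 1; i < (long) n - 1; i++) {
            out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0;
        }
    }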

See also