Task parallelism

Task parallelism is a form of parallelization of computer code that distributes computation across multiple processors in parallel computing environments. It contrasts with data parallelism, another form of parallelism.

Description

In a multiprocessor system where every processor executes the same program (SPMD, "single program, multiple data"), task parallelism is achieved when each processor performs a different task on the same or different data. For instance, if we are running code on a 2-processor system (CPUs "a" and "b") in a parallel environment and we wish to perform tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the run time of the execution. The tasks can be assigned using conditional statements, as described below.

Example

The pseudocode below illustrates task parallelism:

program:
...
if CPU="a" then
   do task "A"
else if CPU="b" then
   do task "B"
end if
...
end program

The goal of the program is to perform some overall task ("A+B"). If we write the code as above and launch it on a 2-processor system, the runtime environment will execute it as follows.

  • In an SPMD system, both CPUs will execute the same code.
  • In a parallel environment, both will have access to the same data.
  • The "if" clause differentiates between the CPUs: CPU "a" will evaluate the "if" condition as true, and CPU "b" will evaluate the "else if" condition as true, so each CPU has its own task.
  • Both CPUs then execute their separate code blocks simultaneously, performing different tasks (a concrete sketch of this pattern in C follows).
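
As a concrete illustration, the pattern above might be written in C with MPI, the message-passing library covered in the reference below. This is a minimal sketch rather than a canonical implementation: the MPI process rank stands in for the CPU identifier, and task_A and task_B are hypothetical placeholder functions for tasks "A" and "B".

#include <stdio.h>
#include <mpi.h>

/* Hypothetical placeholders for tasks "A" and "B". */
static void task_A(void) { printf("doing task A\n"); }
static void task_B(void) { printf("doing task B\n"); }

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every process runs this same program; the branch on the
       rank gives each process its own task, as in the pseudocode. */
    if (rank == 0)
        task_A();      /* plays the role of CPU "a" */
    else if (rank == 1)
        task_B();      /* plays the role of CPU "b" */

    MPI_Finalize();
    return 0;
}

Launched with two processes (for example, mpirun -np 2 ./program), rank 0 performs task "A" while rank 1 simultaneously performs task "B", matching the walkthrough above.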


Code executed by CPU "a":

program:
...
do task "A"
...
end program


Code executed by CPU "b":

program:
...
do task "B"
...
end program

This concept can now be generalized to any number of processors.
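
For instance, the two-way branch can be replaced by a lookup keyed on the processor identifier. The sketch below, again in C with MPI and using hypothetical placeholder tasks, assigns one distinct task per rank from a table of function pointers; ranks beyond the table simply do nothing.

#include <stdio.h>
#include <mpi.h>

/* Hypothetical placeholder tasks. */
static void task_A(void) { printf("doing task A\n"); }
static void task_B(void) { printf("doing task B\n"); }
static void task_C(void) { printf("doing task C\n"); }

int main(int argc, char *argv[])
{
    /* Table of tasks; each process selects the entry matching its rank. */
    void (*tasks[])(void) = { task_A, task_B, task_C };
    int ntasks = (int)(sizeof tasks / sizeof tasks[0]);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank < ntasks)
        tasks[rank]();   /* one distinct task per processor */

    MPI_Finalize();
    return 0;
}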

References

  • Quinn, Michael J. (2004). Parallel Programming in C with MPI and OpenMP. McGraw-Hill. ISBN 0-07-058201-7.