Task parallelism

Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing execution processes (threads) across different parallel computing nodes. It contrasts with data parallelism, another form of parallelism.

Description

In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work. Communication usually takes place to pass data from one thread to the next as part of a workflow.
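
As a minimal sketch in Java (all class and variable names here are hypothetical), two threads can execute different code in parallel and pass data from one to the other through a shared queue:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelineSketch {
    public static void main(String[] args) throws InterruptedException {
        // A two-stage workflow: the producer and consumer threads run
        // different code concurrently and communicate through a queue.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    queue.put(i * i); // task "A": compute a value and hand it off
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    System.out.println("got " + queue.take()); // task "B": consume it
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}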

As a simple example, if we are running code on a 2-processor system (CPUs "a" and "b") in a parallel environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the runtime of the execution. The tasks can be assigned using conditional statements, as described below.

Task parallelism emphasizes the distributed (parallelized) nature of the processing (i.e. threads), as opposed to the data (data parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.

Example

The pseudocode below illustrates task parallelism:

program:
...
if CPU="a" then
   do task "A"
else if CPU="b" then
   do task "B"
end if
...
end program

The goal of the program is to perform some overall task ("A+B"). If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it as follows.

  • In an SPMD system, both CPUs will execute the code.
  • In a parallel environment, both will have access to the same data.
  • The "if" clause differentiates between the CPU's. CPU "a" will read true on the "if" and CPU "b" will read true on the "else if", thus having their own task.
  • Now, both CPU's execute separate code blocks simultaneously, performing different tasks simultaneously.

Code executed by CPU "a":

program:
...
do task "A"
...
end program

Code executed by CPU "b":

program:
...
do task "B"
...
end program

This concept can now be generalized to any number of processors.
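
In plain Java, for instance, the same pattern can be sketched with a thread pool sized to the number of available processors (class and task names here are hypothetical), so that each distinct task may run on a different CPU:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TaskParallelSketch {
    public static void main(String[] args) throws InterruptedException {
        // Distinct tasks, analogous to "A", "B", ... in the pseudocode.
        List<Runnable> tasks = List.of(
            () -> System.out.println("task A"),
            () -> System.out.println("task B"),
            () -> System.out.println("task C")
        );

        // One worker thread per available processor, mirroring the
        // one-task-per-CPU assignment described above.
        ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
        tasks.forEach(pool::submit);

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}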

JVM Example

As in the previous example, task parallelism is also possible on the Java Virtual Machine (JVM).

The code below illustrates task parallelism on the JVM using the Ateji PX extension of Java: statements or blocks of statements can be composed in parallel using the || operator[1] inside a parallel block, introduced with square brackets:

[
   || a++;
   || b++;
]

or in short form:

[ a++; || b++; ]

Each parallel statement within the composition is called a branch. We deliberately avoid using the terms task or process, which mean very different things in different contexts.
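
For comparison, here is a rough plain-Java sketch of what a two-branch parallel composition amounts to (names are hypothetical; this illustrates the semantics, not the actual code Ateji PX generates): each branch runs on its own thread, and the block as a whole completes only when every branch has finished.

public class ParallelBranches {
    static int a = 0;
    static int b = 0;

    public static void main(String[] args) throws InterruptedException {
        // Each branch of the parallel composition becomes its own thread.
        Thread branch1 = new Thread(() -> a++);
        Thread branch2 = new Thread(() -> b++);

        branch1.start();
        branch2.start();

        // The block completes only when every branch has finished.
        branch1.join();
        branch2.join();

        System.out.println("a=" + a + ", b=" + b);
    }
}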

Languages

Examples of (fine-grained) task-parallel languages can be found in the realm of hardware description languages such as Verilog and VHDL. These can also be considered as representing a "code static" software paradigm, where the program has a static structure and the data change, as opposed to a "data static" model, where the data change slowly (or not at all) and the processing applied to them changes (e.g. a database search).

References

  1. ^ "Task Parallelism in Java using Ateji PX". http://www.ateji.com/px/patterns.html#task
  • Quinn, Michael J. Parallel Programming in C with MPI and OpenMP. McGraw-Hill, 2004. ISBN 0-07-058201-7.
  • The terms "data static" and "code static" were coined by D. Kevin Cameron.

