
Granularity (parallel computing)

From Wikipedia, the free encyclopedia

In parallel computing, granularity (or grain size) of a task is a measure of the amount of work (or computation) which is performed by that task.[1]

Another definition of granularity takes into account the communication overhead between multiple processing elements. It defines granularity as the ratio of computation time to communication time.[2]

If Tcomp is the computation time and Tcomm denotes the communication time, then the granularity G of a task can be calculated as:

G = Tcomp / Tcomm
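For example (an illustrative calculation), a task that spends 100 ms on computation and 10 ms on communication has granularity G = 100/10 = 10; the larger the value of G, the more computation the task performs per unit of communication, i.e., the more coarse-grained it is.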

Granularity is usually measured in terms of the number of instructions executed in a particular task.[1] Alternatively, granularity can be specified in terms of the execution time of a program, combining computation time and communication time.[1]

Types of parallelism

Depending on the amount of work which is performed by a parallel task, parallelism can be classified into three categories: fine-grained, medium-grained and coarse-grained parallelism.

Fine-grained parallelism

In fine-grained parallelism, a program is broken down into a large number of small tasks. These tasks are assigned individually to many processing elements. The amount of work associated with each parallel task is low, and the work is evenly distributed among the processing elements. Hence, fine-grained parallelism facilitates load balancing.[3]

Since each task processes less data, the number of processing elements required to perform the complete processing is high. This, in turn, increases the communication and synchronization overhead.

Fine-grained parallelism is best exploited in architectures which support fast communication. Shared memory architecture, which has a low communication overhead, is most suitable for fine-grained parallelism.
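As a minimal sketch (not taken from the cited sources), fine-grained parallelism on a shared-memory machine might look like the following OpenMP fragment, where the image size N and the function process_pixel are hypothetical stand-ins:

#include <omp.h>

#define N 100                    /* total number of pixels (hypothetical) */

/* hypothetical per-pixel computation: each task does very little work */
static void process_pixel(int i, int *image) { image[i] *= 2; }

int main(void)
{
  int image[N];
  for (int i = 0; i < N; i++) image[i] = i;

  /* chunk size 1: every iteration is its own tiny task (fine-grained) */
  #pragma omp parallel for schedule(static, 1)
  for (int i = 0; i < N; i++)
    process_pixel(i, image);

  return 0;
}

On shared memory, the per-task "communication" is just access to the shared array, which is why such small tasks remain feasible there.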

It is difficult for programmers to detect parallelism in a program; therefore, it is usually the compiler's responsibility to detect fine-grained parallelism.[1]

An example of a fine-grained system (from outside the parallel computing domain) is the system of neurons in our brain.[4]

Examples of fine-grained parallel computers: the Connection Machine (CM-2) and the J-Machine, which have grain sizes in the range of 4–5 μs.[1]

Coarse-grained parallelism

In coarse-grained parallelism, a program is split into large tasks, so a large amount of computation takes place in each processing element. This might result in load imbalance, wherein certain tasks process the bulk of the data while others are idle. Further, coarse-grained parallelism fails to exploit the parallelism in the program, as most of the computation is performed sequentially on a processing element. The advantage of this type of parallelism is low communication and synchronization overhead.

A message-passing architecture takes a long time to communicate data among processes, which makes it suitable for coarse-grained parallelism.[1]
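As a hedged sketch of the same idea in a message-passing setting (the problem size and the work loop are assumptions, not from the cited sources), each MPI process below computes over one large contiguous chunk and communicates only once at the end:

#include <mpi.h>

#define N 100000                 /* total problem size (hypothetical) */

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  /* each process owns one large chunk: much computation per task
     (assumes size divides N evenly, for simplicity) */
  int chunk = N / size;
  long local = 0;
  for (int i = rank * chunk; i < (rank + 1) * chunk; i++)
    local += (long)i * i;        /* stand-in for a large computation */

  /* a single collective at the end keeps communication overhead low */
  long total = 0;
  MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

  MPI_Finalize();
  return 0;
}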

Example of a coarse-grained parallel computer: the Cray Y-MP, which has a grain size of about 20 s.[1]

Medium-grained parallelism

Medium-grained parallelism is defined relative to fine-grained and coarse-grained parallelism. When tasks are split in such a manner that some of the parallelism is sacrificed in order to reduce the communication overhead, the resulting tasks are medium-grained.

Example of a medium-grained parallel computer: the Intel iPSC, which has a grain size of about 10 ms.[1]

Example

Consider a 10×10 image that needs to be processed, and assume that 100 processors are available. Ignoring communication overhead, the 100 processors can process the image in 1 clock cycle: each processor works on 1 pixel and then communicates its output to the other processors. This is an example of fine-grained parallelism. If instead 25 processors process the image, each handling 4 pixels, the processing takes 4 clock cycles; this is an example of medium-grained parallelism. If the number of processors is further reduced to 2, the processing takes 50 clock cycles. Each processor needs to process 50 elements, which increases the computation time, but the communication overhead decreases as fewer processors share data. This case illustrates coarse-grained parallelism.

Pseudocode for 100 processors (computation time: 1 clock cycle):

void main()
{
  switch (Processor_ID)
  {
    case 1: Compute element 1; break;
    case 2: Compute element 2; break;
    case 3: Compute element 3; break;
    ...
    case 100: Compute element 100; break;
  }
}

Pseudocode for 25 processors (computation time: 4 clock cycles):

void main()
{
  switch (Processor_ID)
  {
    case 1: Compute elements 1-4; break;
    case 2: Compute elements 5-8; break;
    case 3: Compute elements 9-12; break;
    ...
    case 25: Compute elements 97-100; break;
  }
}

Pseudocode for 2 processors (computation time: 50 clock cycles):

void main()
{
  switch (Processor_ID)
  {
    case 1: Compute elements 1-50; break;
    case 2: Compute elements 51-100; break;
  }
}
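A rough, runnable analogue of the pseudocode above (an illustrative sketch; compute_element and the use of OpenMP are assumptions, not part of the original example) uses the OpenMP chunk size to control how many elements each task computes:

#include <omp.h>

#define N 100                    /* 10×10 image flattened to 100 elements */

/* hypothetical per-element computation */
static void compute_element(int i, int *image) { image[i] += 1; }

int main(void)
{
  int image[N] = {0};

  /* chunk = 1 mimics the 100-processor (fine-grained) case,
     chunk = 4 the 25-processor (medium-grained) case,
     chunk = 50 the 2-processor (coarse-grained) case */
  int chunk = 4;

  #pragma omp parallel for schedule(static, chunk)
  for (int i = 0; i < N; i++)
    compute_element(i, image);

  return 0;
}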

Levels of parallelism

Granularity is closely tied to the level of processing. A program can be broken down into four levels of parallelism:

  1. Instruction level
  2. Loop level
  3. Sub-routine level
  4. Program level

The highest degree of parallelism is achieved at the instruction level, followed by loop-level parallelism. At the instruction and loop levels, fine-grained parallelism is achieved. The typical grain size at the instruction level is 20 instructions, while at the loop level it is 500 instructions.[1]

At the sub-routine (or procedure) level, the grain size is typically a few thousand instructions. Medium-grained parallelism is achieved at the sub-routine level.

At the program level, separate programs execute in parallel, and the grain size can be in the range of tens of thousands of instructions.[1] Coarse-grained parallelism is used at this level.
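To make the levels concrete, here is a minimal sketch (an assumed illustration, with hypothetical procedures task_a and task_b) contrasting loop-level parallelism, which is fine-grained, with sub-routine-level parallelism, which is medium-grained, using OpenMP:

#include <omp.h>

static void task_a(void) { /* a procedure of a few thousand instructions */ }
static void task_b(void) { /* another independent procedure */ }

int main(void)
{
  double x[500];

  /* loop level: each iteration is a small, fine-grained task */
  #pragma omp parallel for
  for (int i = 0; i < 500; i++)
    x[i] = i * 0.5;

  /* sub-routine level: whole procedures run in parallel (medium-grained) */
  #pragma omp parallel sections
  {
    #pragma omp section
    task_a();

    #pragma omp section
    task_b();
  }

  return 0;
}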

The table below shows the relationship between the levels of parallelism, grain size and degree of parallelism.

Level               Grain size   Degree of parallelism
Instruction level   Fine         Highest
Loop level          Fine         Moderate
Sub-routine level   Medium       Moderate
Program level       Coarse       Least

Impact of granularity on performance

Granularity affects the performance of parallel computers. Using fine grains or small tasks results in more parallelism and hence increases the speedup. However, synchronization overhead and the scheduling strategy can negatively impact the performance of fine-grained tasks, so increasing parallelism alone does not guarantee the best performance.[5]

In order to reduce the communication overhead, granularity can be increased. Coarse-grained tasks have less communication overhead, but they often cause load imbalance. Hence, optimal performance is achieved between the two extremes of fine-grained and coarse-grained parallelism.[6]
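As an illustrative model (assumed for exposition, not taken from the cited sources), suppose a job consists of W = 100 ms of computation that divides evenly among p tasks, and that the total synchronization overhead grows with the number of tasks as c·p, with c = 1 ms. The elapsed time is then roughly

T(p) = W/p + c·p

which gives T(100) = 1 + 100 = 101 ms at the fine-grained extreme, T(2) = 50 + 2 = 52 ms at the coarse-grained extreme, and a minimum of T(10) = 10 + 10 = 20 ms at a medium grain size, illustrating why the best performance lies between the two extremes.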


Citations

  1. Hwang, Kai. Advanced Computer Architecture: Parallelism, Scalability, Programmability (1st ed.). McGraw-Hill Higher Education. ISBN 0070316228.
  2. Kwiatkowski, Jan (9 September 2001). "Evaluation of Parallel Programs by Measurement of Its Granularity". Parallel Processing and Applied Mathematics. Springer Berlin Heidelberg: 145–153. doi:10.1007/3-540-48086-2_16.
  3. Barney, Blaise. Introduction to Parallel Computing.
  4. Miller, Russ; Stout, Quentin F. (1996). Parallel Algorithms for Regular Architectures: Meshes and Pyramids. Cambridge, Mass.: MIT Press. pp. 5–6. ISBN 9780262132336.
  5. Chen, Ding-Kai; Su, Hong-Men; Yew, Pen-Chung (1 January 1990). "The Impact of Synchronization and Granularity on Parallel Systems". Proceedings of the 17th Annual International Symposium on Computer Architecture. ACM: 239–248. doi:10.1145/325164.325150.
  6. Yeung, Donald; Dally, William J.; Agarwal, Anant. "How to Choose the Grain Size of a Parallel Computer".