
Parallel external memory

From Wikipedia, the free encyclopedia


In computer science, a parallel external memory (PEM) model is a cache-aware, external-memory abstract machine.[1] It is the parallel-computing analogy to the single-processor external memory (EM) model. In a similar way, it is the cache-aware analogy to the parallel random-access machine (PRAM). The PEM model consists of a number of processors, together with their respective private caches and a shared main memory.

Model

Definition

The parallel external memory (PEM) model[1] is a combination of the external memory (EM) model and the parallel random-access machine (PRAM) model. It is a computation model consisting of P processors and a two-level memory hierarchy: a large external memory (the main memory) of size N and P small internal memories (caches). The main memory is shared by all processors. Each cache is exclusive to a single processor, and a processor cannot access another processor's cache. Each cache has size M and is partitioned into blocks of size B. The processors can only perform operations on data that are in their cache, and data are transferred between the main memory and the caches in blocks of size B.

I/O complexity

The complexity measure of the PEM model is the I/O complexity[1], which counts the number of parallel block transfers between the main memory and the caches. During a parallel block transfer each processor can transfer one block. So if P processors load a data block of size B from the main memory into their caches in parallel, this is counted as an I/O complexity of $O(1)$, not $O(P)$. A program in the PEM model should minimize the data transfer between main memory and caches and operate as much as possible on the data in the caches.
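
As an illustration of this counting rule, the following Python sketch (written for this article, not taken from the source; the function and parameter names are invented) counts the parallel I/O rounds needed to load an array when every processor can fetch one block per round.

import math

def parallel_io_rounds(n_items, num_procs, block_size):
    """Number of parallel block transfers needed to load n_items from
    main memory when each of num_procs processors can fetch one block
    of block_size items per round (the PEM counting rule)."""
    total_blocks = math.ceil(n_items / block_size)
    # Up to num_procs blocks move in one parallel I/O, so the number of
    # rounds is the number of blocks divided by P, rounded up.
    return math.ceil(total_blocks / num_procs)

# Loading N items with P processors costs ceil(N / (P*B)) parallel I/Os,
# not ceil(N / B) as it would with a single processor.
print(parallel_io_rounds(n_items=10_000, num_procs=8, block_size=64))  # 20
print(parallel_io_rounds(n_items=10_000, num_procs=1, block_size=64))  # 157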

Read / Write conflicts

In the PEM model, there is no direct communication network between the P processors; they have to communicate indirectly over the main memory. If multiple processors try to access the same block in main memory concurrently, read/write conflicts[1] occur. As in the PRAM model, three different variations of this problem are considered:

  • Concurrent Read Concurrent Write (CRCW): The same block in main memory can be read and written by multiple processors concurrently.
  • Concurrent Read Exclusive Write (CREW): The same block in main memory can be read by multiple processors concurrently. Only one processor can write to a block at a time.
  • Exclusive Read Exclusive Write (EREW): The same block in main memory cannot be read or written by multiple processors concurrently. Only one processor can access a block at a time.

The following two algorithms[1] solve the CREW and EREW problem if P ≤ B processors write to the same block simultaneously. A first approach is to serialize the writes: one processor after the other writes to the block, which results in a total of P parallel block transfers. A second approach needs only $O(\log P)$ parallel block transfers and an additional block for each processor. The main idea is to schedule the writes in a binary-tree fashion and gradually combine the data into a single block: in the first round P processors combine their blocks into P/2 blocks, then P/2 processors combine their blocks into P/4, and this procedure is continued until all the data is combined in one block.
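
The tree-based combining can be sketched as follows (illustrative Python written for this article; the block representation and the combine operation are assumptions, not part of the model). The per-processor blocks are merged pairwise, so only about log2(P) rounds of parallel block transfers are needed instead of P sequential ones.

import math

def merge_blocks(a, b):
    # Keep a's entries where present, otherwise take b's (each processor
    # owns disjoint slots of the block in this example).
    return [x if x is not None else y for x, y in zip(a, b)]

def tree_combine(blocks):
    """Combine P per-processor blocks into one block in ceil(log2 P)
    rounds, mimicking the binary-tree write scheduling described above."""
    blocks = list(blocks)
    rounds = 0
    while len(blocks) > 1:
        rounds += 1
        nxt = []
        # All pairwise merges of a round happen in parallel; each merge
        # is a single block transfer for the writing processor.
        for left in range(0, len(blocks) - 1, 2):
            nxt.append(merge_blocks(blocks[left], blocks[left + 1]))
        if len(blocks) % 2 == 1:
            nxt.append(blocks[-1])        # odd block carried to the next round
        blocks = nxt
    return blocks[0], rounds

# Eight processors each wrote one slot of an eight-slot block (None = untouched).
per_proc = [[p if slot == p else None for slot in range(8)] for p in range(8)]
combined, rounds = tree_combine(per_proc)
print(combined)                                      # [0, 1, 2, 3, 4, 5, 6, 7]
print(rounds, math.ceil(math.log2(len(per_proc))))   # 3 3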

Examples

Prefix sum

Let A be an ordered set of N elements. The prefix sum of A is an ordered set B of N elements, with $B_i = \sum_{j \le i} A_j$ for $1 \le i \le N$. If the input set A is located in contiguous main memory, the prefix sum of A can be calculated in the PEM model with the optimal $O\left(\frac{N}{PB} + \log P\right)$ I/O complexity.[1] This optimal I/O complexity can be accomplished by simulating an optimal PRAM prefix sum algorithm in the PEM model.[1]
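
One way to organize such a computation is sketched below in Python (an illustration written for this article, not the PRAM simulation from the paper; block transfers are not modelled). Each simulated processor scans its own segment of roughly N/P elements, an exclusive prefix sum over the P segment totals provides the offsets, and a second local pass adds them in. The two local passes correspond to the $O\left(\frac{N}{PB}\right)$ term, while the combine over one total per processor is what a PEM algorithm performs in $O(\log P)$ I/Os.

from itertools import accumulate

def pem_style_prefix_sum(a, p):
    """Prefix sums of `a`, organized the way a PEM algorithm would be:
    local work on P contiguous segments plus a small shared combine step
    over one partial sum per processor.  Sequential simulation."""
    n = len(a)
    seg = -(-n // p)                                  # ceil(n / p)
    segments = [a[i:i + seg] for i in range(0, n, seg)]

    # Phase 1: every processor scans its own segment.
    local = [list(accumulate(s)) for s in segments]

    # Phase 2: exclusive prefix sum over the segment totals (one value
    # per processor; a PEM algorithm combines these in O(log P) I/Os).
    totals = [s[-1] for s in local]
    offsets = [0] + list(accumulate(totals))[:-1]

    # Phase 3: every processor adds its offset to its segment.
    return [x + off for off, s in zip(offsets, local) for x in s]

data = list(range(1, 11))
print(pem_style_prefix_sum(data, p=3))   # [1, 3, 6, 10, 15, 21, 28, 36, 45, 55]
print(list(accumulate(data)))            # identical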

Multiway partitioning

Let $M = \{m_1, \dots, m_{d-1}\}$ be a vector of d-1 pivots sorted in increasing order. Let A be an unordered set of N elements. A d-way partition[1] of A is a set $\Pi = \{A_1, \dots, A_d\}$, where $\bigcup_{i=1}^{d} A_i = A$ and $A_i \cap A_j = \emptyset$ for $1 \le i < j \le d$. $A_i$ is called the i-th bucket. Every element in $A_i$ is greater than $m_{i-1}$ and smaller than $m_i$. In the following algorithm[1] the input is partitioned into contiguous segments $S_1, \dots, S_P$ of size N/P in main memory. Processor i primarily works on the segment $S_i$.

for each processor i in parallel do
 Read the vector of pivots M into the cache.
 Partition S_i into d buckets and let the vector M_i = {j_1^i, ..., j_d^i} be the number of items in each bucket.
end for
Run PEM prefix sum on the set of vectors {M_1, ..., M_P} simultaneously.
for each processor i in parallel do
 Write elements S_i into memory locations offset appropriately by M_{i-1} and M_i.
end for
Using the prefix sums stored in M_P, the last processor P calculates the vector B of bucket sizes and returns it.

If the vector of pivots M and the input set A are located in contiguous memory, then the d-way partitioning problem can be solved in the PEM model with $O\left(\frac{N}{PB} + \left\lceil \frac{d}{B} \right\rceil \log P + d \log B\right)$ I/O complexity. The contents of the final buckets are then located in contiguous memory.
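
The same scheme can be illustrated with a sequential Python sketch (written for this article; names like pem_multipartition are invented and block transfers are not modelled). Every simulated processor splits its segment into d local buckets, the per-processor bucket counts are prefix-summed to obtain write offsets, and the elements are then placed at their final contiguous positions.

from bisect import bisect_right

def pem_multipartition(a, pivots, p):
    """d-way partition of `a` around the sorted `pivots` (d-1 of them),
    mirroring the PEM scheme: per-segment bucket counts, prefix sums over
    the counts, then offset writes.  Sequential illustration."""
    n, d = len(a), len(pivots) + 1
    seg = -(-n // p)                                  # ceil(n / p)
    segments = [a[i:i + seg] for i in range(0, n, seg)]

    # Each "processor" splits its segment into d local buckets.
    local = []
    for s in segments:
        buckets = [[] for _ in range(d)]
        for x in s:
            buckets[bisect_right(pivots, x)].append(x)
        local.append(buckets)

    # Prefix sums over the bucket counts give every (bucket, processor)
    # pair the offset at which its elements are written.
    bucket_sizes = [sum(len(loc[j]) for loc in local) for j in range(d)]
    bucket_starts = [0] * d
    for j in range(1, d):
        bucket_starts[j] = bucket_starts[j - 1] + bucket_sizes[j - 1]

    out = [None] * n
    for j in range(d):
        pos = bucket_starts[j]
        for loc in local:                 # the processors write in turn
            out[pos:pos + len(loc[j])] = loc[j]
            pos += len(loc[j])
    return out, bucket_sizes

data = [7, 2, 9, 4, 1, 8, 3, 6, 5, 0]
partitioned, sizes = pem_multipartition(data, pivots=[3, 7], p=2)
print(partitioned)   # items < 3, then items in [3, 7), then items >= 7
print(sizes)         # [3, 4, 3]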

Selection

The selection problem is about finding the k-th smallest item in an unordered list A of size N. The following code[1] makes use of PRAMSORT, which is a PRAM-optimal sorting algorithm running in $O(\log N)$ time, and SELECT, which is a cache-optimal single-processor selection algorithm.

if N ≤ P then
  PRAMSORT(A, P)
  return A[k]
end if

for each processor i in parallel do
  // Find median of each segment S_i
  m_i = SELECT(S_i, N / (2P))
end for

// Sort medians
PRAMSORT({m_1, ..., m_P}, P)

// Partition around median of medians
t = PEMPARTITION(A, m_{P/2}, P)   // t = number of elements smaller than the pivot

if k ≤ t then
  return PEMSELECT(A[1 : t], P, k)
else
  return PEMSELECT(A[t + 1 : N], P, k - t)
end if

Under the assumption that the input is stored in contiguous memory, PEMSELECT has an I/O complexity of:

$O\left(\frac{N}{PB} + \log(PB) \cdot \log\frac{N}{P}\right)$
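
The recursive structure can be illustrated with the following sequential Python sketch (written for this article, not the implementation from the paper; `sorted` stands in for PRAMSORT and SELECT, the partition is done three-way so that duplicate keys cannot stall the recursion, and no I/O behaviour is modelled).

def pem_select(a, p, k):
    """Return the k-th smallest element of `a` (1-indexed), following the
    recursive structure of PEMSELECT: segment medians, median of medians
    as pivot, partition, recurse on the part that contains rank k."""
    n = len(a)
    if n <= p:                       # base case: sort the short input directly
        return sorted(a)[k - 1]

    seg = -(-n // p)                 # ceil(n / p)
    segments = [a[i:i + seg] for i in range(0, n, seg)]

    # Every processor determines the median of its segment ...
    medians = [sorted(s)[len(s) // 2] for s in segments]
    # ... and the median of these medians becomes the pivot.
    pivot = sorted(medians)[len(medians) // 2]

    # Partition around the pivot (a 2-way PEM partition in the model;
    # done three-way here to handle duplicates).
    lower = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    upper = [x for x in a if x > pivot]

    if k <= len(lower):
        return pem_select(lower, p, k)
    if k <= len(lower) + len(equal):
        return pivot
    return pem_select(upper, p, k - len(lower) - len(equal))

data = [9, 1, 8, 2, 7, 3, 6, 4, 5, 0]
print(pem_select(data, p=4, k=3))    # 2, the third smallest element
print(sorted(data)[2])               # 2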


Distribution sort

Distribution sort partitions an input list A of size N into disjoint buckets of similar size. Every bucket is then sorted recursively, and the results are combined into a fully sorted list.

If P = 1, the task is delegated to a cache-optimal single-processor sorting algorithm.

Otherwise the following algorithm[1] is used:

// Sample 4N/√d elements from A
for each processor i in parallel do
  if M < |S_i| then
    d = M/B
    Load S_i in M-sized pages and sort the pages individually
  else
    d = |S_i|
    Load and sort S_i as a single page
  end if
  Pick every (√d/4)'th element from each sorted memory page into the contiguous vector R^i of samples
end for

in parallel do
  Combine the vectors R^1, ..., R^P into a single contiguous vector R
  Make √d copies of R: R_1, ..., R_√d
end do

// Find √d pivots M[j]
for j = 1 to √d in parallel do
  M[j] = PEMSELECT(R_j, P/√d, j · 4N/d)
end for

Pack the pivots in a contiguous array M

// Partition A around the pivots M into buckets B_1, ..., B_{√d+1}
B_1, ..., B_{√d+1} = PEMMULTIPARTITION(A, M, √d, P)

// Recursively sort the buckets
for j = 1 to √d + 1 in parallel do
  recursively call PEMDISTSORT on bucket B_j of size N_j
  using O(⌈N_j / (N/P)⌉) processors responsible for the elements in bucket B_j
end for

The I/O complexity of PEMDISTSORT is:

$O\left(\left\lceil \frac{N}{PB} \right\rceil \left( \log_d P + \log_{M/B} \frac{N}{B} \right) + f(N,P,d) \cdot \log_d P\right)$

where f(N, P, d) is the I/O complexity of the pivot-finding phase on one level of the recursion.
If the number of processors is chosen such that $P \le \frac{N}{B^2}$ and $M = B^{O(1)}$, the I/O complexity is then:

$O\left(\frac{N}{PB} \log_{M/B} \frac{N}{B}\right)$
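
The overall recursion can be sketched in Python as follows (an illustration written for this article: the sample size, the number of pivots and the processor allocation are simplified relative to the pseudocode above, and caches and blocks are not modelled).

import math
import random

def pem_dist_sort(a, p):
    """Skeleton of a distribution sort in the spirit of PEMDISTSORT:
    sample, choose pivots, multipartition, recurse on the buckets.
    Sequential illustration only."""
    n = len(a)
    if p <= 1 or n <= 16:
        return sorted(a)                    # single-processor base case

    d = max(2, math.isqrt(n // p))          # number of buckets (illustrative choice)

    # Sample the input and pick d-1 roughly evenly spaced pivots.
    sample = sorted(random.sample(a, min(n, 4 * d)))
    step = len(sample) / d
    pivots = [sample[int(step * j)] for j in range(1, d)]

    # Multipartition around the pivots.
    buckets = [[] for _ in range(d)]
    for x in a:
        buckets[sum(1 for m in pivots if m <= x)].append(x)

    if max(len(b) for b in buckets) == n:   # degenerate split, fall back
        return sorted(a)

    # Recurse, giving each bucket a share of processors proportional to its size.
    out = []
    for b in buckets:
        out.extend(pem_dist_sort(b, max(1, round(p * len(b) / n))))
    return out

data = [random.randrange(1000) for _ in range(200)]
print(pem_dist_sort(data, p=8) == sorted(data))   # True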

Other PEM algorithms

Several further problems have PEM algorithms with the following I/O complexities:

  • Mergesort[1]: $O\left(\frac{N}{PB} \log_{M/B} \frac{N}{B}\right) = O(\mathrm{sort}_P(N))$, for $P \le \frac{N}{B^2}$
  • List ranking[2]: $O(\mathrm{sort}_P(N))$
  • Euler tour[2]: $O(\mathrm{sort}_P(N))$
  • Expression tree evaluation[2]: $O(\mathrm{sort}_P(N))$
  • Finding a minimum spanning tree (MST)[2]: $O\left(\mathrm{sort}_P(|V|) + \mathrm{sort}_P(|E|) \log \frac{|V|}{PB}\right)$

where $\mathrm{sort}_P(N)$ is the time it takes to sort N items with P processors in the PEM model.

See also

  • ...

References

  1. ^ a b c d e f g h i j k l Arge, Lars; Goodrich, Michael T.; Nelson, Michael; Sitchinava, Nodari (2008). "Fundamental parallel algorithms for private-cache chip multiprocessors". Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures - SPAA '08. New York, New York, USA: ACM Press. doi:10.1145/1378533.1378573. ISBN 9781595939739.
  2. ^ a b c d Arge, Lars; Goodrich, Michael T.; Sitchinava, Nodari (2010). "Parallel external memory graph algorithms". 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS). IEEE. doi:10.1109/ipdps.2010.5470440. ISBN 9781424464425.