Message Passing Interface

The Message Passing Interface (MPI) is a computer communications protocol. It is a de facto standard for communication among the nodes running a parallel program on a distributed-memory system. MPI implementations consist of a library of routines that can be called from Fortran, C, C++ and Ada programs. The advantage of MPI over older message-passing libraries is that it is both portable (because MPI has been implemented for almost every distributed-memory architecture) and fast (because each implementation is optimized for the hardware on which it runs). It is often compared with PVM, with which it was at one stage merged to form PVMMPI.
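To make the shape of that library interface concrete, here is a minimal sketch of the skeleton shared by essentially every C MPI program: initialize the library, ask for the number of processes and this process's rank, do some work, and shut the library down. (A fuller example appears in the Example program section below.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int numprocs, myid;

  MPI_Init(&argc, &argv);                   /* start the MPI runtime */
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs); /* total number of processes */
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);     /* rank of this process */

  printf("Process %d of %d\n", myid, numprocs);

  MPI_Finalize();                           /* release MPI resources */
  return 0;
}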

Implementations

There are at least three known attempts to implement MPI for Python: PyPar, PyMPI, and the MPI submodule of ScientificPython. PyPar (and possibly ScientificPython's module as well) is designed to work like a typical module, used with nothing but an import statement, and covers a subset of the specification, while PyMPI is a variant Python interpreter that implements more of the specification and works automatically with compiled code that needs to make MPI calls.

The OCamlMPI module implements a large subset of MPI functions and is in active use in scientific computing. As an indication of its maturity, it was reported on caml-list that an eleven-thousand-line OCaml program was "MPI-ified" using the module, requiring an additional 500 lines of code and slight restructuring, and that it has run with excellent results on up to 170 nodes of a supercomputer.

Example program

Here is "Hello World" in MPI. The master process sends a "hello" message to each processor, each processor manipulates the message trivially and returns it to the master, and the master prints the results.

/*
  "Hello World" in MPI: the master process (rank 0) sends a greeting
  to every other process, which appends its own identification and
  sends the result back to be printed.
*/
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
  char idstr[32];
  char buff[128];
  int numprocs;
  int myid;
  int i;
  MPI_Status stat;

  MPI_Init(&argc, &argv);                   /* initialize the MPI library */
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs); /* how many processes are running? */
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);     /* which one is this? */

  if(myid == 0)
  {
    /* Rank 0 is the master: greet each worker, then collect the replies. */
    printf("We have %d processors\n", numprocs);
    for(i = 1; i < numprocs; i++)
    {
      sprintf(buff, "Hello %d! ", i);
      MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD);
    }
    for(i = 1; i < numprocs; i++)
    {
      MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);
      printf("%s\n", buff);
    }
  }
  else
  {
    /* Workers receive the greeting, append their rank, and reply. */
    MPI_Recv(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
    sprintf(idstr, "Processor %d ", myid);
    strcat(buff, idstr);
    strcat(buff, "reporting for duty\n");
    MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
  }

  MPI_Finalize();                           /* shut down the MPI library */
  return 0;
}
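The example is typically built and launched with the wrapper scripts supplied by MPI implementations such as MPICH and LAM/MPI; the exact commands vary by implementation, but a representative session looks like this:

mpicc hello.c -o hello
mpirun -np 4 ./hello

Because the master receives replies from ranks 1, 2, ... in turn, the greetings are printed in rank order.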
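The example above uses only the point-to-point calls MPI_Send and MPI_Recv. MPI also defines collective operations in which every process of a communicator participates at once. As a minimal sketch (the choice of summing the ranks is ours, purely for illustration), here is a parallel sum computed with MPI_Reduce:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int numprocs, myid;
  int local, sum;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  /* Every process contributes its own rank; MPI_Reduce combines the
     contributions with MPI_SUM and leaves the result on rank 0. */
  local = myid;
  MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

  if(myid == 0)
    printf("Sum of the ranks 0..%d is %d\n", numprocs - 1, sum);

  MPI_Finalize();
  return 0;
}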

References

This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.