Message Passing Interface
The Message Passing Interface (MPI) is a computer communications protocol. It is a de facto standard for communication among the nodes running a parallel program on a distributed memory system. MPI implementations consist of a library of routines that can be called from Fortran, C, C++ and Ada programs. The advantage of MPI over older message passing libraries is that it is both portable (because MPI has been implemented for almost every distributed memory architecture) and fast (because each implementation is optimized for the hardware on which it runs). MPI is often compared with PVM, with which it was at one stage merged to form PVMMPI.
Implementations
There are at least three known attempts to implement MPI for Python: PyPar, PyMPI, and the MPI submodule of ScientificPython. PyPar (and possibly ScientificPython's submodule as well) is designed to be used like an ordinary module, requiring nothing more than an import statement, and covers a subset of the specification. PyMPI, by contrast, is a variant Python interpreter that implements more of the specification and works automatically with compiled code that needs to make MPI calls.
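As a rough sketch of the PyPar style of use (the function names and argument conventions below are assumptions based on typical PyPar examples and may differ between versions; PyPar dates from the Python 2 era), a master process might greet each worker like this:

import pypar                           # wraps MPI; only an import is needed

numprocs = pypar.size()                # number of processes in the job
myid = pypar.rank()                    # rank of this process

if myid == 0:
    # Assumed PyPar-style call: send an arbitrary Python object to each rank.
    for dest in range(1, numprocs):
        pypar.send("Hello, rank %d" % dest, dest)
else:
    msg = pypar.receive(0)             # blocking receive from rank 0
    print("Rank %d got: %s" % (myid, msg))

pypar.finalize()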
The OCamlMPI module implements a large subset of MPI functions and is in active use in scientific computing. To give a sense of its maturity: it was reported on caml-list that an eleven-thousand-line OCaml program was "MPI-ified" using the module, requiring an additional 500 lines of code and slight restructuring, and ran with excellent results on up to 170 nodes of a supercomputer.
Example program
Here is "Hello World" in MPI. Actually we send a "hello" message to each processor, manipulate it trivially, send the results back to the main processor, and print the messages out.
/* Test of MPI: process 0 greets every other process, which appends
   an identification string and sends the message back. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
  char idstr[32];
  char buff[128];
  int numprocs;
  int myid;
  int i;
  MPI_Status stat;

  MPI_Init(&argc, &argv);                    /* start MPI */
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);  /* total number of processes */
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);      /* rank of this process */

  if (myid == 0) {
    /* Rank 0: send a greeting to every other rank, then collect
       and print the replies. */
    printf("We have %d processors\n", numprocs);
    for (i = 1; i < numprocs; i++) {
      sprintf(buff, "Hello %d! ", i);
      MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD);
    }
    for (i = 1; i < numprocs; i++) {
      MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);
      printf("%s\n", buff);
    }
  } else {
    /* Other ranks: receive the greeting, append an identification
       string and return it to rank 0. */
    MPI_Recv(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
    sprintf(idstr, "Processor %d ", myid);
    strcat(buff, idstr);
    strcat(buff, "reporting for duty\n");
    MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
  }

  MPI_Finalize();                            /* shut down MPI */
  return 0;
}
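With most MPI implementations this program can be compiled with the mpicc compiler wrapper and started with the implementation's launcher, typically mpirun or mpiexec; for example, mpicc hello.c -o hello followed by mpirun -np 4 ./hello runs it on four processes (the exact launcher name and options vary between implementations).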
See also
- Open MPI
- LAM/MPI
- OpenMP
- MPICH
- Unified Parallel C
- Occam programming language
- Linda (coordination language)
- Parallel Virtual Machine
- Calculus of Communicating Systems
- Calculus of Broadcasting Systems
- Actor model
External links
- MPI specification
- MPI DMOZ category
- Open MPI web site
- LAM/MPI web site
- MPICH
- SCore MPI
- Scali MPI
- HP-MPI
- MVAPICH: MPI over InfiniBand
- Parawiki page for MPI
- Global Arrays
- PVM/MPI Users' Group Meeting (2006 edition)
- MPI Samples
- MPICH over Myrinet (GM, classic driver)
- MPICH over Myrinet (MX, next-gen driver)
- Parallel Programming with MatlabMPI
- MPI Tutorial
- Parallel Programming with MPI
- MacMPI
- MPI over SCTP
- IPython for parallel programming: slides from SciPy'05 - discusses MPI weaknesses and describes an alternative approach in Python (IPython)
References
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.