MPI: Message passing for distributed memory computing
Introduction
MPI-1[ref], the first incarnation of the standard, arrived in 1994 in response to the need for a portable means to program the growing number of distributed memory computers appearing in the marketplace. MPI stands for Message Passing Interface, and as its name suggests, it is an API rather than a new programming language. At the time of writing, MPI can be used in C, C++, Fortran77 and Fortran90/95 programs. We will see that MPI-1 contained little on the topic of I/O. This was rectified in 1997 with the arrival of MPI-2[ref], which contained the MPI-IO standard (supporting parallel I/O) along with additional functionality to support the dynamic creation of processes and one-sided communication models.
We can extend Flynn's original taxonomy[ref] with the acronym SPMD (Single Program Multiple Data). This emphasises the fact that, using e.g. MPI, we can write a single program that will execute on computers comprised of multiple compute elements, each with its own (not shared) memory space.
Hello World
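A minimal SPMD hello-world in C is sketched below (assuming a typical MPI installation, it can be compiled with mpicc and launched with, e.g., mpirun -np 4). Every process runs the same program; the rank is what distinguishes them.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello world from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime */
    return 0;
}

Run on four processes, this prints four lines, though not necessarily in rank order.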
Programs such as this assume that all processes can write to the screen. This is not a safe assumption: the MPI standard leaves the handling of standard output to the implementation.
Send and Receive
The data in an MPI message is specified by the triple (address, count, type): the address of the buffer, the number of elements it contains, and the MPI datatype of those elements (e.g. MPI_INT or MPI_DOUBLE). The argument lists of MPI_Send() and MPI_Recv() pair this triple with the rank of the partner process, a message tag and a communicator.
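As a sketch (assuming at least two processes), rank 0 might send an array of ten doubles to rank 1 as follows:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    double data[10];
    int rank;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (int i = 0; i < 10; i++)
            data[i] = i;                   /* fill the send buffer */
        /* (address, count, type), then destination rank, tag, communicator */
        MPI_Send(data, 10, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(data, 10, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process 1 received data[9] = %f\n", data[9]);
    }

    MPI_Finalize();
    return 0;
}

A receive matches a send only when the source, tag and communicator agree; here both calls use tag 0 in MPI_COMM_WORLD.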
Synchronisation, Blocking and the role of Buffers
Because the compute elements are independent, synchronised communication requires that both sender and receiver are ready at the same time. Through the introduction of a buffer, a sender can instead deposit a message before the receiver is ready and carry on with its own work. MPI_Recv(), however, only returns when the message has been received; hence the term blocking.
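MPI also offers send modes that make these semantics explicit. The following sketch (assuming ranks 0 and 1 and a ten-element array, as above) contrasts MPI_Ssend(), which waits for the matching receive, with MPI_Bsend(), which copies the message into a user-attached buffer and returns at once:

#include <stdlib.h>
#include <mpi.h>

/* Sketch: rank 0 sends ten doubles to rank 1 in two different modes. */
void send_modes(int rank, double *data)
{
    if (rank == 0) {
        /* Synchronous send: returns only once rank 1 has begun the
           matching receive, so the two processes are synchronised. */
        MPI_Ssend(data, 10, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);

        /* Buffered send: attach buffer space, then MPI_Bsend() copies
           the message out and returns before rank 1 is ready. */
        int size = 10 * sizeof(double) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(size);
        MPI_Buffer_attach(buf, size);
        MPI_Bsend(data, 10, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD);
        MPI_Buffer_detach(&buf, &size);
        free(buf);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Recv(data, 10, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Recv(data, 10, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, &status);
    }
}

Whether a plain MPI_Send() buffers or synchronises is left to the implementation, which is why portable programs should never rely on it returning before the matching receive is posted.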
A Common Bug
If all processes are waiting to receive prior to sending, then we will have deadlock: every process blocks in MPI_Recv() waiting for a message that will never be sent.
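As a sketch, assuming exactly two processes, the exchange below would deadlock if both ranks called MPI_Recv() first (shown in the comment); MPI_Sendrecv() is one portable fix, since it pairs the send and the receive in a single call:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, in, out, partner;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    partner = 1 - rank;   /* assumes exactly two processes */
    out = rank;

    /* Deadlock: if both ranks did the following, each would block in
     * MPI_Recv() waiting for a message the other never sends:
     *
     *   MPI_Recv(&in,  1, MPI_INT, partner, 0, MPI_COMM_WORLD, &status);
     *   MPI_Send(&out, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
     */

    /* One fix: MPI_Sendrecv() lets the library progress the send and
     * the receive together. */
    MPI_Sendrecv(&out, 1, MPI_INT, partner, 0,
                 &in,  1, MPI_INT, partner, 0,
                 MPI_COMM_WORLD, &status);

    printf("Process %d received %d\n", rank, in);
    MPI_Finalize();
    return 0;
}

Another fix is to break the symmetry: let even ranks send first and odd ranks receive first.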