High Performance Linux Clusters with OSCAR, Rocks, OpenMosix, and MPI
Joseph D. Sloan


14.1 More on Point-to-Point Communication


In Chapter 13, you were introduced to
point-to-point communication, the communication between a pair of
cooperating processes. The two most basic commands used for
point-to-point communication are MPI_Send and
MPI_Recv. Several variations on these commands
that can be helpful in some contexts are described in this section.


14.1.1 Non-Blocking Communication


One major difference among
point-to-point commands is how they handle buffering and the
potential for blocking. MPI_Send is said to be a
blocking command since it will wait to return until the send buffer
can be reclaimed. At a minimum, the message has to be copied into a
system buffer before MPI_Send will return.
Similarly, MPI_Recv blocks until the receive
buffer actually contains the contents of the message.
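
The practical consequence is that the order of blocking calls matters.
Here is a minimal sketch (not from the book; processId,
out, and in are illustrative names) of the classic pitfall: if two
processes each call MPI_Send before MPI_Recv, each
send is permitted to block until the other side posts a matching
receive, and for messages too large to buffer internally the program
can deadlock.

int out = 1, in;
MPI_Status status;
if (processId == 0)
{
   /* May block waiting for process 1 to post a receive ...  */
   MPI_Send(&out, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
   MPI_Recv(&in, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &status);
}
else if (processId == 1)
{
   /* ... while process 1 may block here for the same reason. */
   MPI_Send(&out, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
   MPI_Recv(&in, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
}

The non-blocking commands described next (and MPI_Sendrecv, described
later in this section) are the usual ways around this problem.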


14.1.1.1 MPI_Isend and MPI_Irecv

Although more complicated to use,
non-blocking versions of MPI_Send and
MPI_Recv are included in MPI. These are
MPI_Isend and MPI_Irecv. (The
"I" denotes an immediate return.)
With the non-blocking versions, the communication operation is begun
or, in the parlance, a message is posted. At some later point, the
program must explicitly complete the operation. Several functions are
provided to complete the operation, the simplest being
MPI_Wait and MPI_Test.

MPI_Isend takes the same arguments as
MPI_Send with one exception:
MPI_Isend has one additional parameter at the
end of its parameter list. This is a request handle, an opaque object
used in future references to this message exchange. That is,
the handle identifies the pending operation. (Handles are of type
MPI_Request.) In MPI_Irecv, the
status parameter of MPI_Recv has been replaced by a
request handle; the status information is returned later, by
MPI_Wait. Otherwise, the parameters to
MPI_Irecv are the same as those of
MPI_Recv.
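
For reference, the C prototypes of the four calls, as defined by the
MPI standard's C bindings, make the extra request parameter easy to
see:

/* Blocking versions */
int MPI_Send (void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm);
int MPI_Recv (void *buf, int count, MPI_Datatype datatype,
              int source, int tag, MPI_Comm comm, MPI_Status *status);

/* Non-blocking versions: note the trailing request handle */
int MPI_Isend(void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm, MPI_Request *request);
int MPI_Irecv(void *buf, int count, MPI_Datatype datatype,
              int source, int tag, MPI_Comm comm, MPI_Request *request);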


14.1.1.2 MPI_Wait

MPI_Wait takes two arguments. The first is the
request handle just described; the second is a status variable, which
contains the same information and is used in exactly the same way as
in MPI_Recv. MPI_Wait blocks
until the operation identified by the request handle completes. When
it returns, the request handle is set to a special constant,
MPI_REQUEST_NULL, indicating that there is no
longer a pending operation associated with the request handle.

Code for MPI_Irecv and MPI_Wait
might look something like this fragment:

...
int datum1, datum2;
MPI_Status status;
MPI_Request handle;
if (processId == 0)
{
   /* Blocking send: returns once datum1 can safely be reused. */
   MPI_Send(&datum1, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
   ...
}
else
{
   /* Post the receive and return immediately. */
   MPI_Irecv(&datum2, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &handle);
   ...
   /* Block here until the posted receive has completed. */
   MPI_Wait(&handle, &status);
}
...

In this example, the contents of datum1 are
received in datum2. As the example shows, it is
fine to mix blocking and non-blocking commands: a message sent with
MPI_Send can be received with
MPI_Irecv.
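
The reason for tolerating the extra bookkeeping is that a process can
do useful work between posting an operation and completing it. A
minimal sketch of the sending side (doOtherWork() is a hypothetical
stand-in for the overlapped computation):

int datum = 42;
MPI_Request handle;
MPI_Status status;
/* Post the send and return immediately. */
MPI_Isend(&datum, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &handle);
doOtherWork();               /* computation overlapped with the send */
MPI_Wait(&handle, &status);  /* only now is it safe to modify datum  */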


14.1.1.3 MPI_Test

MPI_Test is a non-blocking alternative to
MPI_Wait. It takes three arguments: the request
handle, a flag, and a status variable. If the exchange is complete,
the value returned in the flag variable is true,
the request handle is set to MPI_REQUEST_NULL, and
the status variable will contain information about the exchange. If
the flag is false, the exchange
hasn't completed, the request handle is unchanged,
and the status variable is undefined.
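
In practice, MPI_Test is usually called inside a loop so the process
can keep working until the exchange finishes. A minimal sketch along
those lines (doOtherWork() is again a hypothetical stand-in):

int datum2, flag = 0;
MPI_Request handle;
MPI_Status status;
MPI_Irecv(&datum2, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &handle);
while (!flag)
{
   doOtherWork();                      /* keep busy while the message is in flight   */
   MPI_Test(&handle, &flag, &status);  /* flag becomes true once the receive is done */
}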


14.1.1.4 MPI_Iprobe

If you want to check up on messages
without actually receiving them, use MPI_Iprobe.
(There is also a blocking variant called
MPI_Probe.) MPI_Iprobe can be
called repeatedly without ever receiving the message. Once
you know a matching message has arrived, you can call
MPI_Recv to actually receive it.
MPI_Iprobe takes
five arguments: the rank of the source, the message tag, the
communicator, a flag, and a status object. If the flag is
true, a matching message is waiting to be received and the status
object can be examined. If false, the status is
undefined.
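
A common use of MPI_Iprobe is to size a receive buffer before posting
the receive. Here is a sketch, assuming we probe for a message from
process 0 with tag 1 (MPI_Get_count extracts the number of elements
from the status object; stdlib.h is assumed for malloc):

int flag = 0, count;
int *buffer;
MPI_Status status;
MPI_Iprobe(0, 1, MPI_COMM_WORLD, &flag, &status);
if (flag)
{
   MPI_Get_count(&status, MPI_INT, &count);   /* how many ints are waiting? */
   buffer = malloc(count * sizeof(int));
   MPI_Recv(buffer, count, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
   ...
}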


14.1.1.5 MPI_Cancel

If you have a pending, non-blocking
communication operation, it can be aborted with the
MPI_Cancel command. MPI_Cancel
takes the operation's request handle as its only argument. You might use
MPI_Cancel in conjunction with
MPI_Iprobe: if the status information returned by
MPI_Iprobe tells you that a posted operation is no
longer needed, you can use MPI_Cancel to abort it.
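
A sketch of the usual pattern follows: the cancelled request must still
be completed (with MPI_Wait or MPI_Test), and MPI_Test_cancelled
reports whether the cancellation actually took effect.

int datum2, flag;
MPI_Request handle;
MPI_Status status;
MPI_Irecv(&datum2, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &handle);
...
MPI_Cancel(&handle);                  /* ask MPI to abort the pending receive            */
MPI_Wait(&handle, &status);           /* completes whether or not the cancel succeeded   */
MPI_Test_cancelled(&status, &flag);   /* flag is true if the receive was really cancelled */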


14.1.1.6 MPI_Sendrecv and MPI_Sendrecv_replace

If you need to
exchange information between a pair of processes, you can use
MPI_Sendrecv or
MPI_Sendrecv_replace. With the former, both the
send and receive buffers must be distinct. With the latter, the
received message overwrites the sent message. These are both blocking
commands.
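
A minimal sketch of a two-process exchange with MPI_Sendrecv (assuming
exactly two processes, so the partner rank is simply the other
process); this also avoids the send-send deadlock sketched earlier:

int out = processId, in;
int partner = (processId == 0) ? 1 : 0;      /* assumes exactly two processes      */
MPI_Status status;
MPI_Sendrecv(&out, 1, MPI_INT, partner, 1,   /* what we send, to whom, send tag    */
             &in,  1, MPI_INT, partner, 1,   /* where we receive, from whom, tag   */
             MPI_COMM_WORLD, &status);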

While these examples should give you an idea of some of the functions
available, there are other point-to-point functions not described
here. For example, there is a set of commands to create and
manipulate persistent connections similar to communication ports
(MPI_Send_init, MPI_Start,
etc.). You can specify dummy sources and destinations for messages
(MPI_PROC_NULL). There are variants on
MPI_Wait and MPI_Test for
processing lists of pending communication operations
(MPI_Testany, MPI_Testall,
MPI_Testsome, MPI_Waitany,
etc.). Additional communication modes are also supported:
synchronous-mode communication and ready-mode
communication.
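
To give a flavor of the persistent-connection calls just mentioned,
here is a minimal sketch of a send that is set up once and reused in a
loop (the matching receives on process 1 are omitted, and the loop
bound is a placeholder):

int i, datum;
MPI_Request handle;
MPI_Status status;
MPI_Send_init(&datum, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &handle);
for (i = 0; i < 10; i++)
{
   datum = i;
   MPI_Start(&handle);           /* begin one send using the stored parameters */
   MPI_Wait(&handle, &status);   /* complete it; the handle remains reusable   */
}
MPI_Request_free(&handle);       /* release the persistent request             */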

