MPI Programming and Subroutine Reference


Descriptions of Subroutines

This chapter describes the subroutines available for parallel programming, listed in alphabetical order. For each subroutine, a purpose, C synopsis, Fortran synopsis, parameters, description, notes, error conditions, and related information are provided. Review the following sample subroutine before proceeding, to better understand how the subroutine descriptions are structured.

A_SAMPLE, A_Sample

Purpose

Shows how the subroutines described in this book are structured.

C Synopsis

Header file mpi.h supplies ANSI-C prototypes for every function described in the message passing subroutine section of this manual.

#include <mpi.h>
int A_Sample (one or more parameters);

In the C prototype, a declaration of void * indicates that a pointer to any datatype is allowable.

Fortran Synopsis

include 'mpif.h'
A_SAMPLE (ONE OR MORE PARAMETERS)

In the Fortran routines, formal parameters are described using a subroutine prototype format, even though Fortran does not support prototyping. The term CHOICE indicates that any Fortran datatype is valid.

Parameters

Argument or parameter definitions appear below:

parameter1

parameter description (type)

...

parameter4

parameter description (type)

Parameter types:

IN - call uses but does not update an argument
OUT - call returns information via an argument but does not use its input value
INOUT - call uses and updates an argument

Description

This section contains a more detailed description of the subroutine or function.

Notes

If applicable, this section contains notes about the IBM MPI implementation and its relationship to the requirements of the MPI Standard. The IBM implementation intends to comply fully with the requirements of the MPI Standard. There are, however, issues that the Standard leaves open to the implementation's choice.

Errors

For non-file-handle errors, a single list appears here.

For errors on a file handle, up to three lists appear.

In almost every routine, the C version is invoked as a function returning integer. In the Fortran version, the routine is called as a subroutine; that is, it has no return value. The Fortran version includes a return code parameter IERROR as the last parameter.

Related Information

This section contains a list of related functions or routines in this book.

For both C and Fortran, the Message-Passing Interface (MPI) uses the same spelling for function names; the only distinction is capitalization. For clarity, this book uses all uppercase letters when referring to a function without specifying the Fortran or C version.

Fortran refers to the Fortran 77 (F77) bindings, which are the officially supported bindings for MPI. The F77 bindings can, however, also be used from Fortran 90. Fortran 90 and High Performance Fortran (HPF) offer array sections and assumed-shape arrays as parameters on calls; these are not safe to use with MPI.

MPE_IALLGATHER, MPE_Iallgather

Purpose

Performs a nonblocking allgather operation.

C Synopsis

#include <mpi.h>
int MPE_Iallgather(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int recvcount,MPI_Datatype recvtype,MPI_Comm comm,
    MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IALLGATHER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER COMM,
    INTEGER REQUEST,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcount

is the number of elements in the send buffer (integer) (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcount

is the number of elements received from any task (integer) (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_ALLGATHER. It performs the same function as MPI_ALLGATHER except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
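
The following sketch (not part of this manual) shows the typical start-and-complete pattern; it assumes MPI_INIT has already been called and that each task contributes one integer:

#include <mpi.h>
#include <stdlib.h>

void iallgather_example(void)
{
   int rank, ntasks, myval;
   int *allvals;
   MPI_Request request;
   MPI_Status status;

   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
   myval = rank;                                  /* this task's contribution */
   allvals = (int *) malloc(ntasks * sizeof(int));

   /* start the allgather; the call returns immediately */
   MPE_Iallgather(&myval, 1, MPI_INT, allvals, 1, MPI_INT,
       MPI_COMM_WORLD, &request);

   /* ... computation that does not touch allvals ... */

   /* complete the request; allvals is then valid on every task */
   MPI_Wait(&request, &status);
   free(allvals);
}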

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of your applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective communication routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Unequal message length

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent message length

Related Information

MPI_ALLGATHER

MPE_IALLGATHERV, MPE_Iallgatherv

Purpose

Performs a nonblocking allgatherv operation.

C Synopsis

#include <mpi.h>
int MPE_Iallgatherv(void* sendbuf,int sendcount,
    MPI_Datatype sendtype,void* recvbuf,int *recvcounts,
    int *displs,MPI_Datatype recvtype,
    MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IALLGATHERV(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
     CHOICE RECVBUF,INTEGER RECVCOUNTS(*),INTEGER DISPLS(*),
     INTEGER RECVTYPE,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcount

is the number of elements in the send buffer (integer) (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcounts

integer array (of length group size) that contains the number of elements received from each task (IN)

displs

integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from task i (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_ALLGATHERV. It performs the same function as MPI_ALLGATHERV except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Unequal message length

MPI not initialized

MPI already finalized

Develop mode error if:

None

Related Information

MPI_ALLGATHERV

MPE_IALLREDUCE, MPE_Iallreduce

Purpose

Performs a nonblocking allreduce operation.

C Synopsis

#include <mpi.h>
int MPE_Iallreduce(void* sendbuf,void* recvbuf,int count,
       MPI_Datatype datatype,MPI_Op op,MPI_Comm comm,
       MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IALLREDUCE(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT,
    INTEGER DATATYPE,INTEGER OP,INTEGER COMM,INTEGER REQUEST,
    INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

recvbuf

is the starting address of the receive buffer (choice) (OUT)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of elements in the send buffer (handle) (IN)

op

is the reduction operation (handle) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_ALLREDUCE. It performs the same function as MPI_ALLREDUCE except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid op

Invalid communicator

Invalid communicator type
must be intracommunicator

Unequal message length

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent op

Inconsistent datatype

Inconsistent message length

Related Information

MPI_ALLREDUCE

MPE_IALLTOALL, MPE_Ialltoall

Purpose

Performs a nonblocking alltoall operation.

C Synopsis

#include <mpi.h>
int MPE_Ialltoall(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int recvcount,MPI_Datatype recvtype,MPI_Comm comm,
    MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IALLTOALL(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER COMM,
    INTEGER REQUEST,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcount

is the number of elements sent to each task (integer) (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcount

is the number of elements received from any task (integer) (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_ALLTOALL. It performs the same function as MPI_ALLTOALL except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid communicator

Invalid communicator type
must be intracommunicator

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent message lengths

Related Information

MPI_ALLTOALL

MPE_IALLTOALLV, MPE_Ialltoallv

Purpose

Performs a nonblocking alltoallv operation.

C Synopsis

#include <mpi.h>
int MPE_Ialltoallv(void* sendbuf,int *sendcounts,int *sdispls,
    MPI_Datatype sendtype,void* recvbuf,int *recvcounts,int *rdispls,
    MPI_Datatype recvtype,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IALLTOALLV(CHOICE SENDBUF,INTEGER SENDCOUNTS(*),
    INTEGER SDISPLS(*),INTEGER SENDTYPE,CHOICE RECVBUF,
    INTEGER RECVCOUNTS(*),INTEGER RDISPLS(*),INTEGER RECVTYPE,
    INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcounts

integer array (of length group size) specifying the number of elements to send to each task (IN)

sdispls

integer array (of length group size). Entry j specifies the displacement relative to sendbuf from which to take the outgoing data destined for task j. (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcounts

integer array (of length group size) specifying the number of elements that can be received from each task (IN)

rdispls

integer array (of length group size). Entry i specifies the displacement relative to recvbuf at which to place the incoming data from task i. (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_ALLTOALLV. It performs the same function as MPI_ALLTOALLV except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid communicator

Invalid communicator type
must be intracommunicator

A send and receive have unequal message lengths

MPI not initialized

MPI already finalized

Related Information

MPI_ALLTOALLV

MPE_IBARRIER, MPE_Ibarrier

Purpose

Performs a nonblocking barrier operation.

C Synopsis

#include <mpi.h>
int MPE_Ibarrier(MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IBARRIER(INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

comm

is a communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_BARRIER. It returns immediately, without blocking, but will not complete (via MPI_WAIT or MPI_TEST) until all group members have called it.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

A typical use of MPE_IBARRIER is to call it and then periodically test for completion with MPI_TEST, as in the sketch below. Completion indicates that all tasks in comm have arrived at the barrier. Until then, computation can continue.
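
For illustration, a sketch of this pattern (do_local_work is a hypothetical placeholder for application computation):

#include <mpi.h>

extern void do_local_work(void);   /* hypothetical application routine */

void ibarrier_example(void)
{
   MPI_Request request;
   MPI_Status status;
   int arrived = 0;

   MPE_Ibarrier(MPI_COMM_WORLD, &request);
   while (!arrived) {
       do_local_work();                        /* keep computing */
       MPI_Test(&request, &arrived, &status);  /* poll the barrier */
   }
   /* all tasks in MPI_COMM_WORLD have reached the barrier */
}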

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

MPI not initialized

MPI already finalized

Related Information

MPI_BARRIER

MPE_IBCAST, MPE_Ibcast

Purpose

Performs a nonblocking broadcast operation.

C Synopsis

#include <mpi.h>
int MPE_Ibcast(void* buffer,int count,MPI_Datatype datatype,
    int root,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IBCAST(CHOICE BUFFER,INTEGER COUNT,INTEGER DATATYPE,INTEGER ROOT,
    INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

buffer

is the starting address of the buffer (choice) (INOUT)

count

is the number of elements in the buffer (integer) (IN)

datatype

is the datatype of the buffer elements (handle) (IN)

root

is the rank of the root task (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_BCAST. It performs the same function as MPI_BCAST except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid root
root < 0 or root >= groupsize

Unequal message length

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Inconsistent message length

Related Information

MPI_BCAST

MPE_IGATHER, MPE_Igather

Purpose

Performs a nonblocking gather operation.

C Synopsis

#include <mpi.h>
int MPE_Igather(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int recvcount,MPI_Datatype recvtype,int root,
    MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IGATHER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER ROOT,
    INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcount

is the number of elements in the send buffer (integer) (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice, significant only at root) (OUT)

recvcount

is the number of elements for any single receive (integer, significant only at root) (IN)

recvtype

is the datatype of the receive buffer elements (handle, significant only at root) (IN)

root

is the rank of the receiving task (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_GATHER. It performs the same function as MPI_GATHER except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid root
root < 0 or root >= groupsize

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Inconsistent message lengths

Related Information

MPI_GATHER

MPE_IGATHERV, MPE_Igatherv

Purpose

Performs a nonblocking gatherv operation.

C Synopsis

#include <mpi.h>
int MPE_Igatherv(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int *recvcounts,int *displs,MPI_Datatype recvtype,
    int root,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IGATHERV(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNTS(*),INTEGER DISPLS(*),
    INTEGER RECVTYPE,INTEGER ROOT,INTEGER COMM,INTEGER REQUEST,
    INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcount

is the number of elements to be sent (integer) (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice, significant only at root) (OUT)

recvcounts

integer array (of length group size) that contains the number of elements received from each task (significant only at root) (IN)

displs

integer array (of length group size). Entry i specifies the displacement relative to recvbuf at which to place the incoming data from task i (significant only at root) (IN)

recvtype

is the datatype of the receive buffer elements (handle, significant only at root) (IN)

root

is the rank of the receiving task (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_GATHERV. It performs the same function as MPI_GATHERV except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)

Invalid datatype(s)

Type not committed

Invalid root
root < 0 or root >= groupsize

A send and receive have unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Related Information

MPI_GATHERV

MPE_IREDUCE, MPE_Ireduce

Purpose

Performs a nonblocking reduce operation.

C Synopsis

#include <mpi.h>
int MPE_Ireduce(void* sendbuf,void* recvbuf,int count,
    MPI_Datatype datatype,MPI_Op op,int root,MPI_Comm comm,
    MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IREDUCE(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT,
    INTEGER DATATYPE,INTEGER OP,INTEGER ROOT,INTEGER COMM,
    INTEGER REQUEST,INTEGER IERROR)

Parameters

sendbuf

is the address of the send buffer (choice) (IN)

recvbuf

is the address of the receive buffer (choice, significant only at root) (OUT)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of elements of the send buffer (handle) (IN)

op

is the reduction operation (handle) (IN)

root

is the rank of the root task (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_REDUCE. It performs the same function as MPI_REDUCE except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid op

Invalid root
root < 0 or root >= groupsize

Invalid communicator

Invalid communicator type
must be intracommunicator

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent op

Inconsistent datatype

Inconsistent root

Inconsistent message length

Related Information

MPI_REDUCE

MPE_IREDUCE_SCATTER, MPE_Ireduce_scatter

Purpose

Performs a nonblocking reduce_scatter operation.

C Synopsis

#include <mpi.h>
int MPE_Ireduce_scatter(void* sendbuf,void* recvbuf,int *recvcounts,
    MPI_Datatype datatype,MPI_Op op,MPI_Comm comm,
    MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_IREDUCE_SCATTER(CHOICE SENDBUF,CHOICE RECVBUF,
     INTEGER RECVCOUNTS(*),INTEGER DATATYPE,INTEGER OP,
     INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

recvbuf

is the starting address of the receive buffer (choice) (OUT)

recvcounts

integer array specifying the number of elements of the result distributed to each task. It must be identical on all calling tasks. (IN)

datatype

is the datatype of elements in the input buffer (handle) (IN)

op

is the reduction operation (handle) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_REDUCE_SCATTER. It performs the same function as MPI_REDUCE_SCATTER except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid recvcount(s)
recvcounts(i) < 0

Invalid datatype

Type not committed

Invalid op

Invalid communicator

Invalid communicator type
must be intracommunicator

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent op

Inconsistent datatype

Related Information

MPI_REDUCE_SCATTER

MPE_ISCAN, MPE_Iscan

Purpose

Performs a nonblocking scan operation.

C Synopsis

#include <mpi.h>
int MPE_Iscan(void* sendbuf,void* recvbuf,int count,
    MPI_Datatype datatype,MPI_Op op,MPI_Comm comm,
    MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_ISCAN(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT,
    INTEGER DATATYPE,INTEGER OP,INTEGER COMM,INTEGER REQUEST,
    INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

recvbuf

is the starting address of the receive buffer (choice) (OUT)

count

is the number of elements in sendbuf (integer) (IN)

datatype

is the datatype of elements in sendbuf (handle) (IN)

op

is the reduction operation (handle) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_SCAN. It performs the same function as MPI_SCAN except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid op

Invalid communicator

Invalid communicator type
must be intracommunicator

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent op

Inconsistent datatype

Inconsistent message length

Related Information

MPI_SCAN

MPE_ISCATTER, MPE_Iscatter

Purpose

Performs a nonblocking scatter operation.

C Synopsis

#include <mpi.h>
int MPE_Iscatter(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int recvcount,MPI_Datatype recvtype,int root,
    MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_ISCATTER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER ROOT,
    INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

sendbuf

is the address of the send buffer (choice, significant only at root) (IN)

sendcount

is the number of elements to be sent to each task (integer, significant only at root) (IN)

sendtype

is the datatype of the send buffer elements (handle, significant only at root) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcount

is the number of elements in the receive buffer (integer) (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

root

is the rank of the sending task (integer) (IN)

comm

is the communicator (handle) (IN)

request

communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_SCATTER. It performs the same function as MPI_SCATTER except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid root
root < 0 or root >= groupsize

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Inconsistent message length

Related Information

MPI_SCATTER

MPE_ISCATTERV, MPE_Iscatterv

Purpose

Performs a nonblocking scatterv operation.

C Synopsis

#include <mpi.h>
int MPE_Iscatterv(void* sendbuf,int *sendcounts,int *displs,
    MPI_Datatype sendtype,void* recvbuf,int recvcount,
    MPI_Datatype recvtype,int root,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPE_ISCATTERV(CHOICE SENDBUF,INTEGER SENDCOUNTS(*),INTEGER DISPLS(*),
    INTEGER SENDTYPE,CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,
    INTEGER ROOT,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

sendbuf

is the address of the send buffer (choice, significant only at root) (IN)

sendcounts

integer array (of length group size) that contains the number of elements to send to each task (significant only at root) (IN)

displs

integer array (of length group size). Entry i specifies the displacement relative to sendbuf from which to take the outgoing data to task i (significant only at root) (IN)

sendtype

is the datatype of the send buffer elements (handle, significant only at root) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcount

is the number of elements in the receive buffer (integer) (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

root

is the rank of the sending task (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a nonblocking version of MPI_SCATTERV. It performs the same function as MPI_SCATTERV except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.

Notes

The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.

Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.

When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.

The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.

Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.

Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid root
root < 0 or root >= groupsize

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Related Information

MPI_SCATTERV

MPI_ABORT, MPI_Abort

Purpose

Forces all tasks of an MPI job to terminate.

C Synopsis

#include <mpi.h>
int MPI_Abort(MPI_Comm comm,int errorcode);

Fortran Synopsis

include 'mpif.h'
MPI_ABORT(INTEGER COMM,INTEGER ERRORCODE,INTEGER IERROR)

Parameters

comm

is the communicator of the tasks to abort. (IN)

errorcode

is the error code returned to the invoking environment. (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine forces all tasks in an MPI job to terminate. The comm argument is currently not used; all tasks in the job are aborted. The low order 8 bits of errorcode are returned as an AIX return code.
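
For example, a sketch in which a task ends the whole job on a fatal condition (fatal_error is a hypothetical flag):

#include <mpi.h>

void abort_example(int fatal_error)
{
   if (fatal_error)
       MPI_Abort(MPI_COMM_WORLD, 1);  /* all tasks exit; AIX return code 1 */
}

Because only the low order 8 bits are kept, an errorcode of 256 would produce an AIX return code of 0.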

Notes

MPI_ABORT causes all tasks to exit immediately.

Errors

MPI already finalized

MPI not initialized

MPI_ADDRESS, MPI_Address

Purpose

Returns the address of a variable in memory.

C Synopsis

#include <mpi.h>
int MPI_Address(void* location,MPI_Aint *address);

Fortran Synopsis

include 'mpif.h'
MPI_ADDRESS(CHOICE LOCATION,INTEGER ADDRESS,INTEGER IERROR)

Parameters

location

is the location in caller memory (choice) (IN)

address

is the address of location (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the byte address of location.

Notes

On the IBM RS/6000 SP, this is equivalent to address = (MPI_Aint) location in C, but the MPI_ADDRESS routine is portable to machines with less straightforward addressing.
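
A common use (a sketch, not from this manual) is computing the displacement of a structure member, for example when building a datatype with MPI_TYPE_STRUCT:

#include <mpi.h>

struct particle { int kind; double d[6]; };

void address_example(void)
{
   struct particle p;
   MPI_Aint base, addr_d, disp;

   MPI_Address(&p, &base);
   MPI_Address(p.d, &addr_d);
   disp = addr_d - base;   /* byte displacement of d within the structure */
}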

Errors

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_INDEXED
MPI_TYPE_HINDEXED
MPI_TYPE_STRUCT

MPI_ALLGATHER, MPI_Allgather

Purpose

Gathers individual messages from each task in comm and distributes the resulting message to each task.

C Synopsis

#include <mpi.h>
int MPI_Allgather(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int recvcount,MPI_Datatype recvtype,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_ALLGATHER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER COMM,
    INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcount

is the number of elements in the send buffer (integer) (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcount

is the number of elements received from any task (integer) (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_ALLGATHER is similar to MPI_GATHER except that all tasks receive the result instead of just the root.

The block of data sent from task j is received by every task and placed in the jth block of the buffer recvbuf.

The type signature associated with sendcount, sendtype at a task must be equal to the type signature associated with recvcount, recvtype at any other task.
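
For example, a sketch in which each task contributes a single integer (MAX_TASKS is a hypothetical bound on the group size):

#include <mpi.h>

#define MAX_TASKS 512   /* hypothetical bound on group size */

void allgather_example(void)
{
   int rank, myval;
   int vals[MAX_TASKS];

   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   myval = rank + 1;

   /* afterward, vals[j] holds the value sent by task j, on every task */
   MPI_Allgather(&myval, 1, MPI_INT, vals, 1, MPI_INT, MPI_COMM_WORLD);
}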

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Unequal message length

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent message length

Related Information

MPE_IALLGATHER
MPI_ALLGATHERV
MPI_GATHER

MPI_ALLGATHERV, MPI_Allgatherv

Purpose

Collects individual messages from each task in comm and distributes the resulting message to all tasks. Messages can have different sizes and displacements.

C Synopsis

#include <mpi.h>
int MPI_Allgatherv(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int *recvcounts,int *displs,MPI_Datatype recvtype,
    MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_ALLGATHERV(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNTS(*),INTEGER DISPLS(*),
    INTEGER RECVTYPE,INTEGER COMM,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcount

is the number of elements in the send buffer (integer) (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcounts

integer array (of length group size) that contains the number of elements received from each task (IN)

displs

integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from task i (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine collects individual messages from each task in comm and distributes the resulting message to all tasks. Messages can have different sizes and displacements.

The block of data sent from task j is recvcounts[j] elements long, and is received by every task and placed in recvbuf at offset displs[j].

The type signature associated with sendcount, sendtype at task j must be equal to the type signature of recvcounts[j], recvtype at any other task.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
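
As a sketch (error checking omitted; assumes MPI has been initialized and <stdlib.h> is included), task i contributes i + 1 integers and the displacements pack the blocks contiguously:

/* recvcounts[i] = i+1; displs[i] packs the blocks back to back. */
int ntasks, rank, i, total;
int *sendbuf, *recvcounts, *displs, *recvbuf;

MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
sendbuf = (int *)malloc((rank + 1) * sizeof(int));
for (i = 0; i <= rank; i++) sendbuf[i] = rank;
recvcounts = (int *)malloc(ntasks * sizeof(int));
displs = (int *)malloc(ntasks * sizeof(int));
for (total = 0, i = 0; i < ntasks; i++) {
    recvcounts[i] = i + 1;
    displs[i] = total;
    total += recvcounts[i];
}
recvbuf = (int *)malloc(total * sizeof(int));
MPI_Allgatherv(sendbuf, rank + 1, MPI_INT,
    recvbuf, recvcounts, displs, MPI_INT, MPI_COMM_WORLD);
/* every task's recvbuf now holds i+1 copies of i at offset displs[i] */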

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

None

Related Information

MPE_IALLGATHERV
MPI_ALLGATHER

MPI_ALLREDUCE, MPI_Allreduce

Purpose

Applies a reduction operation to the vector sendbuf over the set of tasks specified by comm and places the result in recvbuf on all of the tasks in comm.

C Synopsis

#include <mpi.h>
int MPI_Allreduce(void* sendbuf,void* recvbuf,int count,
    MPI_Datatype datatype,MPI_Op op,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_ALLREDUCE(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT,
    INTEGER DATATYPE,INTEGER OP,INTEGER COMM,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

recvbuf

is the starting address of the receive buffer (choice) (OUT)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of elements in the send buffer (handle) (IN)

op

is the reduction operation (handle) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine applies a reduction operation to the vector sendbuf over the set of tasks specified by comm and places the result in recvbuf on all of the tasks.

This routine is similar to MPI_REDUCE except the result is returned to the receive buffer of all the group members.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
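
For example, a global sum in which every task receives the result (a minimal fragment, error checking omitted):

/* Sum one double per task; the result lands on every task. */
int rank;
double local, global;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
local = (double)rank;
MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
/* global == 0 + 1 + ... + (ntasks - 1) everywhere */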

Notes

See Appendix D. "Reduction Operations".

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid op

Invalid communicator

Invalid communicator type
must be intracommunicator

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent op

Inconsistent datatype

Inconsistent message length

Related Information

MPE_IALLREDUCE
MPI_REDUCE
MPI_REDUCE_SCATTER
MPI_OP_CREATE

MPI_ALLTOALL, MPI_Alltoall

Purpose

Sends a distinct message from each task to every task.

C Synopsis

#include <mpi.h>
int MPI_Alltoall(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int recvcount,MPI_Datatype recvtype,
    MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_ALLTOALL(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER COMM,
    INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcount

is the number of elements sent to each task (integer) (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcount

is the number of elements received from any task (integer) (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_ALLTOALL sends a distinct message from each task to every task.

The jth block of data sent from task i is received by task j and placed in the ith block of the buffer recvbuf.

The type signature associated with sendcount, sendtype, at a task must be equal to the type signature associated with recvcount, recvtype at any other task. This means the amount of data sent must be equal to the amount of data received, pairwise between every pair of tasks. The type maps can be different.

All arguments on all tasks are significant.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
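
A minimal fragment (error checking omitted; assumes <stdlib.h>) in which each task sends one distinct integer to every task:

/* sendbuf block j is destined for task j. */
int ntasks, rank, j;
int *sendbuf, *recvbuf;

MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
sendbuf = (int *)malloc(ntasks * sizeof(int));
recvbuf = (int *)malloc(ntasks * sizeof(int));
for (j = 0; j < ntasks; j++) sendbuf[j] = rank * 100 + j;
MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);
/* recvbuf[i] == i * 100 + rank: block i arrived from task i */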

Errors

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid communicator

Invalid communicator type
must be intracommunicator

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent message lengths

Related Information

MPE_IALLTOALL
MPI_ALLTOALLV

MPI_ALLTOALLV, MPI_Alltoallv

Purpose

Sends a distinct message from each task to every task. Messages can have different sizes and displacements.

C Synopsis

#include <mpi.h>
int MPI_Alltoallv(void* sendbuf,int *sendcounts,int *sdispls,
    MPI_Datatype sendtype,void* recvbuf,int *recvcounts,int *rdispls,
    MPI_Datatype recvtype,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_ALLTOALLV(CHOICE SENDBUF,INTEGER SENDCOUNTS(*),
    INTEGER SDISPLS(*),INTEGER SENDTYPE,CHOICE RECVBUF,
    INTEGER RECVCOUNTS(*),INTEGER RDISPLS(*),INTEGER RECVTYPE,
    INTEGER COMM,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcounts

integer array (of length group size) specifying the number of elements to send to each task (IN)

sdispls

integer array (of length group size). Entry j specifies the displacement relative to sendbuf from which to take the outgoing data destined for task j. (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcounts

integer array (of length group size) specifying the number of elements to be received from each task (IN)

rdispls

integer array (of length group size). Entry i specifies the displacement relative to recvbuf at which to place the incoming data from task i. (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_ALLTOALLV sends a distinct message from each task to every task. Messages can have different sizes and displacements.

This routine is similar to MPI_ALLTOALL with the following difference: MPI_ALLTOALLV lets you specify, with sdispls, where each outgoing block is taken from in sendbuf and, with rdispls, where each incoming block is placed in recvbuf.

The block of data sent from task i to task j is sendcounts[j] elements long, and is received by task j and placed in its recvbuf at offset rdispls[i]. These blocks do not all have to be the same size.

The type signature associated with sendcounts[j], sendtype at task i must be equal to the type signature associated with recvcounts[i], recvtype at task j. This means the amount of data sent must be equal to the amount of data received, pairwise between every pair of tasks. Distinct type maps between sender and receiver are allowed.

All arguments on all tasks are significant.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
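
As a sketch (error checking omitted; assumes <stdlib.h>), each task sends rank + 1 elements to every task, so task j receives i + 1 elements from each task i:

/* sendcounts at task i are all i+1, so recvcounts[i] must be i+1. */
int ntasks, rank, i, stotal, rtotal;
int *sendcounts, *sdispls, *recvcounts, *rdispls, *sendbuf, *recvbuf;

MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
sendcounts = (int *)malloc(ntasks * sizeof(int));
sdispls = (int *)malloc(ntasks * sizeof(int));
recvcounts = (int *)malloc(ntasks * sizeof(int));
rdispls = (int *)malloc(ntasks * sizeof(int));
for (stotal = rtotal = i = 0; i < ntasks; i++) {
    sendcounts[i] = rank + 1;
    recvcounts[i] = i + 1;
    sdispls[i] = stotal;  stotal += sendcounts[i];
    rdispls[i] = rtotal;  rtotal += recvcounts[i];
}
sendbuf = (int *)malloc(stotal * sizeof(int));
recvbuf = (int *)malloc(rtotal * sizeof(int));
for (i = 0; i < stotal; i++) sendbuf[i] = rank;
MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
    recvbuf, recvcounts, rdispls, MPI_INT, MPI_COMM_WORLD);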

Errors

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid communicator

Invalid communicator type
must be intracommunicator

A send and a receive have unequal message lengths

MPI not initialized

MPI already finalized

Related Information

MPE_IALLTOALLV
MPI_ALLTOALL

MPI_ATTR_DELETE, MPI_Attr_delete

Purpose

Removes an attribute value from a communicator.

C Synopsis

#include <mpi.h>
int MPI_Attr_delete(MPI_Comm comm,int keyval);

Fortran Synopsis

include 'mpif.h'
MPI_ATTR_DELETE(INTEGER COMM,INTEGER KEYVAL,INTEGER IERROR)

Parameters

comm

is the communicator to which the attribute is attached (handle) (IN)

keyval

is the key value of the deleted attribute (integer) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine deletes an attribute from cache by key. MPI_ATTR_DELETE also invokes the attribute delete function delete_fn specified when the keyval is created.

Errors

A delete_fn did not return MPI_SUCCESS

Invalid communicator

Invalid keyval
keyval is undefined

Invalid keyval
keyval is predefined

MPI not initialized

MPI already finalized

Related Information

MPI_KEYVAL_CREATE

MPI_ATTR_GET, MPI_Attr_get

Purpose

Retrieves an attribute value from a communicator.

C Synopsis

#include <mpi.h>
int MPI_Attr_get(MPI_Comm comm,int keyval,void *attribute_val,
    int *flag);

Fortran Synopsis

include 'mpif.h'
MPI_ATTR_GET(INTEGER COMM,INTEGER KEYVAL,INTEGER ATTRIBUTE_VAL,
    LOGICAL FLAG,INTEGER IERROR)

Parameters

comm

is the communicator to which the attribute is attached (handle) (IN)

keyval

is the key value (integer) (IN)

attribute_val

is the attribute value unless flag = false (OUT)

flag

is true if an attribute value was extracted and false if no attribute is associated with the key. (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This function retrieves an attribute value by key. If there is no key with value keyval, the call is erroneous. However, the call is valid if there is a key value keyval, but no attribute is attached on comm for that key. In this case, the call returns flag = false.

Notes

The implementation of the MPI_ATTR_PUT and MPI_ATTR_GET involves saving a single word of information in the communicator. The languages C and Fortran have different approaches to using this capability:

In C: As the programmer, you normally define a struct that holds arbitrary "attribute" information. Before calling MPI_ATTR_PUT, you allocate some storage for the attribute structure and then call MPI_ATTR_PUT to record the address of this structure. You must ensure that the structure remains intact as long as it may be useful. You also declare a variable of type "pointer to attribute structure" and pass the address of this variable when calling MPI_ATTR_GET. Both MPI_ATTR_PUT and MPI_ATTR_GET take a void* parameter, but this does not imply that the same parameter is passed to either one. (A sketch of this pattern follows these notes.)

In Fortran: MPI_ATTR_PUT records an INTEGER*4 and MPI_ATTR_GET returns the INTEGER*4. As the programmer, you may choose to encode all attribute information in this integer or maintain some kind of database that the integer can index. Either of these approaches will port to other MPI implementations.

XL Fortran has an additional feature that allows some of the same function a C programmer would use: the POINTER type, which is described in the IBM XL Fortran Compiler V3.2 for AIX Language Reference. Using it will affect the program's portability.
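
The following C fragment sketches the pattern described above (error checking omitted; assumes <stdlib.h>; the structure and names are hypothetical):

/* Hypothetical attribute structure cached on MPI_COMM_WORLD. */
struct my_attr { int calls; double total; };
struct my_attr *put_ptr, *get_ptr;
int keyval, flag;

MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN, &keyval, NULL);
put_ptr = (struct my_attr *)malloc(sizeof(struct my_attr));
put_ptr->calls = 0;
put_ptr->total = 0.0;
MPI_Attr_put(MPI_COMM_WORLD, keyval, put_ptr);    /* records the address */

MPI_Attr_get(MPI_COMM_WORLD, keyval, &get_ptr, &flag);
if (flag) get_ptr->calls++;    /* get_ptr now equals put_ptr */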

Errors

Invalid communicator

Invalid keyval
keyval is undefined

MPI not initialized

MPI already finalized

Related Information

MPI_ATTR_PUT

MPI_ATTR_PUT, MPI_Attr_put

Purpose

Stores an attribute value in a communicator.

C Synopsis

#include <mpi.h>
int MPI_Attr_put(MPI_Comm comm,int keyval,void* attribute_val);

Fortran Synopsis

include 'mpif.h'
MPI_ATTR_PUT(INTEGER COMM,INTEGER KEYVAL,INTEGER ATTRIBUTE_VAL,
INTEGER IERROR)

Parameters

comm

is the communicator to which attribute will be attached (handle) (IN)

keyval

is the key value as returned by MPI_KEYVAL_CREATE (integer) (IN)

attribute_val

is the attribute value (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine stores the attribute value for retrieval by MPI_ATTR_GET. Any previous value is deleted with the attribute delete_fn being called and the new value is stored. If there is no key with value keyval, the call is erroneous.

Notes

The implementation of the MPI_ATTR_PUT and MPI_ATTR_GET involves saving a single word of information in the communicator. The languages C and Fortran have different approaches to using this capability:

In C: As the programmer, you normally define a struct that holds arbitrary "attribute" information. Before calling MPI_ATTR_PUT, you allocate some storage for the attribute structure and then call MPI_ATTR_PUT to record the address of this structure. You must ensure that the structure remains intact as long as it may be useful. You also declare a variable of type "pointer to attribute structure" and pass the address of this variable when calling MPI_ATTR_GET. Both MPI_ATTR_PUT and MPI_ATTR_GET take a void* parameter, but this does not imply that the same parameter is passed to either one. (See the sketch under MPI_ATTR_GET.)

In Fortran: MPI_ATTR_PUT records an INTEGER*4 and MPI_ATTR_GET returns the INTEGER*4. As the programmer, you may choose to encode all attribute information in this integer or maintain some kind of database that the integer can index. Either of these approaches will port to other MPI implementations.

XL Fortran has an additional feature that allows some of the same function a C programmer would use: the POINTER type, which is described in the IBM XL Fortran Compiler V3.2 for AIX Language Reference. Using it will affect the program's portability.

Errors

A delete_fn did not return MPI_SUCCESS

Invalid communicator

Invalid keyval
keyval is undefined

Predefined keyval
cannot modify predefined attributes

MPI not initialized

MPI already finalized

Related Information

MPI_ATTR_GET
MPI_KEYVAL_CREATE

MPI_BARRIER, MPI_Barrier

Purpose

Blocks each task in comm until all tasks have called it.

C Synopsis

#include <mpi.h>
int MPI_Barrier(MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_BARRIER(INTEGER COMM,INTEGER IERROR)

Parameters

comm

is a communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine blocks until all tasks have called it. Tasks cannot exit the operation until all group members have entered.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

MPI not initialized

MPI already finalized

Related Information

MPE_IBARRIER

MPI_BCAST, MPI_Bcast

Purpose

Broadcasts a message from root to all tasks in comm.

C Synopsis

#include <mpi.h>
int MPI_Bcast(void* buffer,int count,MPI_Datatype datatype,
    int root,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_BCAST(CHOICE BUFFER,INTEGER COUNT,INTEGER DATATYPE,INTEGER ROOT,
      INTEGER COMM,INTEGER IERROR)

Parameters

buffer

is the starting address of the buffer (choice) (INOUT)

count

is the number of elements in the buffer (integer) (IN)

datatype

is the datatype of the buffer elements (handle) (IN)

root

is the rank of the root task (integer) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine broadcasts a message from root to all tasks in comm. The contents of root's communication buffer are copied to all tasks on return.

The type signature of count, datatype on any task must be equal to the type signature of count, datatype at the root. This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
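
For example, a minimal fragment (error checking omitted) in which task 0 obtains a parameter and broadcasts it:

/* Only the root's value matters on entry; all tasks match on exit. */
int rank, nsteps;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0) nsteps = 100;    /* for example, read from an input file */
MPI_Bcast(&nsteps, 1, MPI_INT, 0, MPI_COMM_WORLD);
/* every task now has nsteps == 100 */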

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid root
root < 0 or root >= groupsize

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Inconsistent message length

Related Information

MPE_IBCAST

MPI_BSEND, MPI_Bsend

Purpose

Performs a blocking buffered mode send operation.

C Synopsis

#include <mpi.h>
int MPI_Bsend(void* buf,int count,MPI_Datatype datatype,
      int dest,int tag,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_BSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST,
      INTEGER TAG,INTEGER COMM,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of each send buffer element (handle) (IN)

dest

is the rank of destination (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a blocking buffered mode send. This is a local operation. It does not depend on the occurrence of a matching receive in order to complete. If a send operation is started and no matching receive is posted, the outgoing message is buffered to allow the send call to complete.

Make sure you have enough buffer space available. An error occurs if the message must be buffered and there is insufficient buffer space.

Return from an MPI_BSEND does not guarantee the message was sent. It may remain in the buffer until a matching receive is posted. MPI_BUFFER_DETACH will block until all buffered messages have been transmitted.
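
A sketch of the attach/send/detach cycle between two tasks (error checking omitted; assumes <stdlib.h> and at least two tasks):

/* Size the buffer for one int plus the per-message overhead. */
int rank, bufsize, msg = 42, val;
void *buf, *oldbuf;
MPI_Status status;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
bufsize = (int)sizeof(int) + MPI_BSEND_OVERHEAD;
buf = malloc(bufsize);
MPI_Buffer_attach(buf, bufsize);
if (rank == 0)
    MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
if (rank == 1)
    MPI_Recv(&val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
MPI_Buffer_detach(&oldbuf, &bufsize);   /* blocks until the message is gone */
free(oldbuf);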

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

Insufficient buffer space

MPI not initialized

MPI already finalized

Related Information

MPI_IBSEND
MPI_SEND
MPI_BUFFER_ATTACH
MPI_BUFFER_DETACH

MPI_BSEND_INIT, MPI_Bsend_init

Purpose

Creates a persistent buffered mode send request.

C Synopsis

#include <mpi.h>
int MPI_Bsend_init(void* buf,int count,MPI_Datatype datatype,
    int dest,int tag,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_BSEND_INIT(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,
      INTEGER DEST,INTEGER TAG,INTEGER COMM,INTEGER REQUEST,
      INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements to be sent (integer) (IN)

datatype

is the type of each element (handle) (IN)

dest

is the rank of the destination task (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a persistent communication request for a buffered mode send operation. MPI_START or MPI_STARTALL must be called to activate the send.

Notes

See MPI_BSEND for additional information.

Because it is the MPI_START which initiates communication, any error related to insufficient buffer space occurs at the MPI_START.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Related Information

MPI_START
MPI_IBSEND

MPI_BUFFER_ATTACH, MPI_Buffer_attach

Purpose

Provides MPI with a buffer to use for buffering messages sent with MPI_BSEND and MPI_IBSEND.

C Synopsis

#include <mpi.h>
int MPI_Buffer_attach(void* buffer,int size);

Fortran Synopsis

include 'mpif.h'
MPI_BUFFER_ATTACH(CHOICE BUFFER,INTEGER SIZE,INTEGER IERROR)

Parameters

buffer

is the initial buffer address (choice) (IN)

size

is the buffer size in bytes (integer) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine provides MPI with a buffer in the user's memory that is used for buffering outgoing messages. This buffer is used only by messages sent in buffered mode, and only one buffer is attached to a task at any time.

Notes

MPI uses part of the buffer space to store information about the buffered messages. The number of bytes required by MPI for each buffered message is given by MPI_BSEND_OVERHEAD.

If a buffer is already attached, it must be detached by MPI_BUFFER_DETACH before a new buffer can be attached.
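
A sketch of sizing a buffer for up to n pending messages of at most len bytes each (n and len are hypothetical sizing choices; assumes <stdlib.h>):

/* Allow MPI_BSEND_OVERHEAD bytes of bookkeeping per buffered message. */
int n = 10, len = 1024, bufsize;
void *buf;

bufsize = n * (len + MPI_BSEND_OVERHEAD);
buf = malloc(bufsize);
MPI_Buffer_attach(buf, bufsize);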

Errors

Invalid size
size < 0

Buffer is already attached

MPI not initialized

MPI already finalized

Related Information

MPI_BUFFER_DETACH
MPI_BSEND
MPI_IBSEND

MPI_BUFFER_DETACH, MPI_Buffer_detach

Purpose

Detaches the current buffer.

C Synopsis

#include <mpi.h>
int MPI_Buffer_detach(void* buffer,int *size);

Fortran Synopsis

include 'mpif.h'
MPI_BUFFER_DETACH(CHOICE BUFFER,INTEGER SIZE,INTEGER IERROR)

Parameters

buffer

is the initial buffer address (choice) (OUT)

size

is the buffer size in bytes (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine detaches the current buffer. Blocking occurs until all messages in the active buffer are transmitted. Once this function returns, you can reuse or deallocate the space taken by the buffer. There is an implicit MPI_BUFFER_DETACH inside MPI_FINALIZE. Because a buffer detach can block, the implicit detach creates some risk that an incorrect program will hang in MPI_FINALIZE.

If there is no active buffer, MPI acts as if a buffer of size 0 is associated with the task.

Notes

It is important to detach an attached buffer before it is deallocated. If this is not done, any buffered message may be lost.

In Fortran 77, the buffer argument for MPI_BUFFER_DETACH cannot return a useful value because Fortran 77 does not support pointers. If a fully portable MPI program written in Fortran calls MPI_BUFFER_DETACH, it either passes the name of the original buffer or a throwaway temp as the buffer argument.

If a buffer was attached, this implementation of MPI returns the address of the freed buffer in the first word of the buffer argument. If the size being returned is zero to four bytes, MPI_BUFFER_DETACH will not modify the buffer argument. This implementation is harmless for a program that uses either the original buffer or a throwaway temp of at least word size as buffer. It also allows the programmer who wants to use an XL Fortran POINTER as the buffer argument to do so. Using the POINTER type will affect portability.

Errors

MPI not initialized

MPI already finalized

Related Information

MPI_BUFFER_ATTACH
MPI_BSEND
MPI_IBSEND

MPI_CANCEL, MPI_Cancel

Purpose

Marks a nonblocking request for cancellation.

C Synopsis

#include <mpi.h>
int MPI_Cancel(MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_CANCEL(INTEGER REQUEST,INTEGER IERROR)

Parameters

request

is a communication request (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine marks a nonblocking request for cancellation. The cancel call is local. It returns immediately; it can return even before the communication is actually cancelled. It is necessary to complete an operation marked for cancellation by using a call to MPI_WAIT or MPI_TEST (or any other wait or test call).

You can use MPI_CANCEL to cancel a persistent request in the same way it is used for nonpersistent requests. A successful cancellation cancels the active communication, but not the request itself. After the call to MPI_CANCEL and the subsequent call to MPI_WAIT or MPI_TEST, the request becomes inactive and can be activated for a new communication. It is erroneous to cancel an inactive persistent request.

The successful cancellation of a buffered send frees the buffer space occupied by the pending message.

Either the cancellation succeeds or the operation succeeds, but not both. If a send is marked for cancellation, then either the send completes normally, in which case the message sent was received at the destination task, or the send is successfully cancelled, in which case no part of the message was received at the destination. Then, any matching receive has to be satisfied by another send. If a receive is marked for cancellation, then the receive completes normally or the receive is successfully cancelled, in which case no part of the receive buffer is altered. Then, any matching send has to be satisfied by another receive.

If the operation has been cancelled successfully, information to that effect is returned in the status argument of the operation that completes the communication, and may be retrieved by a call to MPI_TEST_CANCELLED.
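
A sketch of cancelling a receive that will never be matched (error checking omitted):

/* Post a receive, withdraw it, then complete the request. */
MPI_Request req;
MPI_Status status;
int flag, data;

MPI_Irecv(&data, 1, MPI_INT, MPI_ANY_SOURCE, 99, MPI_COMM_WORLD, &req);
MPI_Cancel(&req);
MPI_Wait(&req, &status);               /* completes the cancelled request */
MPI_Test_cancelled(&status, &flag);
if (flag) {
    /* the receive was cancelled; data was not altered */
}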

Notes

Nonblocking collective communication requests cannot be cancelled. MPI_CANCEL may be called on nonblocking file operation requests. The eventual call to MPI_TEST_CANCELLED will show that the cancellation did not succeed.

Errors

Invalid request

CCL request

Cancel inactive persistent request

MPI not initialized

MPI already finalized

Related Information

MPI_TEST_CANCELLED
MPI_WAIT

MPI_CART_COORDS, MPI_Cart_coords

Purpose

Translates task rank in a communicator into cartesian task coordinates.

C Synopsis

#include <mpi.h>
int MPI_Cart_coords(MPI_Comm comm,int rank,int maxdims,int *coords);

Fortran Synopsis

include 'mpif.h'
MPI_CART_COORDS(INTEGER COMM,INTEGER RANK,INTEGER MAXDIMS,
      INTEGER COORDS(*),INTEGER IERROR)

Parameters

comm

is a communicator with cartesian topology (handle) (IN)

rank

is the rank of a task within group comm (integer) (IN)

maxdims

is the length of array coords in the calling program (integer) (IN)

coords

is an integer array specifying the cartesian coordinates of a task. (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine translates task rank in a communicator into task coordinates.

Notes

Task coordinates in a cartesian structure begin their numbering at 0. Row-major numbering is always used for the tasks in a cartesian structure.

Errors

MPI not initialized

MPI already finalized

Invalid communicator

No topology

Invalid topology
type must be cartesian

Invalid rank
rank < 0 or rank >= groupsize

Invalid array size
maxdims < 0

Related Information

MPI_CART_RANK
MPI_CART_CREATE

MPI_CART_CREATE, MPI_Cart_create

Purpose

Creates a communicator containing topology information.

C Synopsis

#include <mpi.h>
int MPI_Cart_create(MPI_Comm comm_old,int ndims,int *dims,
    int *periods,int reorder,MPI_Comm *comm_cart);

Fortran Synopsis

include 'mpif.h'
MPI_CART_CREATE(INTEGER COMM_OLD,INTEGER NDIMS,INTEGER DIMS(*),
    INTEGER PERIODS(*),INTEGER REORDER,INTEGER COMM_CART,INTEGER IERROR)

Parameters

comm_old

is the input communicator (handle) (IN)

ndims

is the number of cartesian dimensions in grid (integer) (IN)

dims

is an integer array of size ndims specifying the number of tasks in each dimension (IN)

periods

is a logical array of size ndims specifying if the grid is periodic or not in each dimension (IN)

reorder

if true, ranking may be reordered. If false, then rank in comm_cart must be the same as in comm_old. (logical) (IN)

comm_cart

is a communicator with new cartesian topology (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a new communicator containing cartesian topology information defined by ndims, dims, periods and reorder. MPI_CART_CREATE returns a handle for this new communicator in comm_cart. If there are more tasks in comm_old than the grid requires, comm_cart = MPI_COMM_NULL is returned to the surplus tasks. comm_old must be an intracommunicator.

Notes

The reorder argument is ignored.
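
For example, a 4 x 3 grid that is periodic in its first dimension only (a minimal fragment; requires at least 12 tasks):

int dims[2], periods[2];
MPI_Comm comm_cart;

dims[0] = 4;  dims[1] = 3;
periods[0] = 1;    /* wrap around in dimension 0 */
periods[1] = 0;
MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &comm_cart);
if (comm_cart == MPI_COMM_NULL) {
    /* this task is not part of the 4 x 3 grid */
}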

Errors

MPI not initialized

Conflicting collective operations on communicator

MPI already finalized

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid ndims
ndims < 0 or ndims > groupsize

Invalid dimension

Related Information

MPI_CART_SUB
MPI_GRAPH_CREATE

MPI_CART_GET, MPI_Cart_get

Purpose

Retrieves cartesian topology information from a communicator.

C Synopsis

#include <mpi.h>
int MPI_Cart_get(MPI_Comm comm,int maxdims,int *dims,int *periods,int *coords);

Fortran Synopsis

include 'mpif.h'
MPI_CART_GET(INTEGER COMM,INTEGER MAXDIMS,INTEGER DIMS(*),
      INTEGER PERIODS(*),INTEGER COORDS(*),INTEGER IERROR)

Parameters

comm

is a communicator with cartesian topology (handle) (IN)

maxdims

is the length of dims, periods, and coords in the calling program (integer) (IN)

dims

is the number of tasks for each cartesian dimension (array of integer) (OUT)

periods

is a logical array specifying if each cartesian dimension is periodic or not. (OUT)

coords

is the coordinates of the calling task in the cartesian structure (array of integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine retrieves the cartesian topology information associated with a communicator in dims, periods and coords.

Errors

MPI not initialized

MPI already finalized

Invalid communicator

No topology

Invalid topology type
must be cartesian

Invalid array size
maxdims < 0

Related Information

MPI_CARTDIM_GET
MPI_CART_CREATE

MPI_CART_MAP, MPI_Cart_map

Purpose

Computes placement of tasks on the physical machine.

C Synopsis

#include <mpi.h>
int MPI_Cart_map(MPI_Comm comm,int ndims,int *dims,int *periods,
      int *newrank);

Fortran Synopsis

include 'mpif.h'
MPI_CART_MAP(INTEGER COMM,INTEGER NDIMS,INTEGER DIMS(*),
      INTEGER PERIODS(*),INTEGER NEWRANK,INTEGER IERROR)

Parameters

comm

is the input communicator (handle) (IN)

ndims

is the number of dimensions of the cartesian structure (integer) (IN)

dims

is an integer array of size ndims specifying the number of tasks in each coordinate direction (IN)

periods

is a logical array of size ndims specifying the periodicity in each coordinate direction (IN)

newrank

is the reordered rank or MPI_UNDEFINED if the calling task does not belong to the grid (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_CART_MAP allows MPI to compute an optimal placement for the calling task on the physical machine by reordering the tasks in comm.

Notes

No reordering is done by this function; it would serve no purpose on an SP. MPI_CART_MAP returns newrank as the original rank of the calling task if it belongs to the grid, or MPI_UNDEFINED if it does not.

Errors

MPI not initialized

MPI already finalized

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid ndims
ndims < 1 or ndims > groupsize

Invalid dimension
dims[i] <= 0

Invalid grid size
n < 0 or n > groupsize, where n is the product of dims[i]

MPI_CART_RANK, MPI_Cart_rank

Purpose

Translates task coordinates into a task rank.

C Synopsis

#include <mpi.h>
int MPI_Cart_rank(MPI_Comm comm,int *coords,int *rank);

Fortran Synopsis

include 'mpif.h'
MPI_CART_RANK(INTEGER COMM,INTEGER COORDS(*),INTEGER RANK,
      INTEGER IERROR)

Parameters

comm

is a communicator with cartesian topology (handle) (IN)

coords

is an integer array of size ndims specifying the cartesian coordinates of a task (IN)

rank

is an integer specifying the rank of specified task (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine translates cartesian task coordinates into a task rank.

For dimension i with periods(i) = true, if the coordinate coords(i) is out of range, that is, coords(i) < 0 or coords(i) >= dims(i), it is shifted back into the interval 0 <= coords(i) < dims(i) automatically. Out-of-range coordinates are erroneous for non-periodic dimensions.

Notes

Task coordinates in a cartesian structure begin their numbering at 0. Row-major numbering is always used for the tasks in a cartesian structure.

Errors

MPI not initialized

MPI already finalized

Invalid communicator

No topology

Invalid topology type
must be cartesian

Invalid coordinates
refer to Description above

Related Information

MPI_CART_CREATE
MPI_CART_COORDS

MPI_CART_SHIFT, MPI_Cart_shift

Purpose

Returns shifted source and destination ranks for a task.

C Synopsis

#include <mpi.h>
int MPI_Cart_shift(MPI_Comm comm,int direction,int disp,
      int *rank_source,int *rank_dest);

Fortran Synopsis

include 'mpif.h'
MPI_CART_SHIFT(INTEGER COMM,INTEGER DIRECTION,INTEGER DISP,
      INTEGER RANK_SOURCE,INTEGER RANK_DEST,INTEGER IERROR)

Parameters

comm

is a communicator with cartesian topology (handle) (IN)

direction

is the coordinate dimension of shift (integer) (IN)

disp

is the displacement (>0 = upward shift, <0 = downward shift) (integer) (IN)

rank_source

is the rank of the source task (integer) (OUT)

rank_dest

is the rank of the destination task (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine shifts the local rank along a specified coordinate dimension to generate source and destination ranks.

rank_source is obtained by subtracting disp from the nth coordinate of the local task, where n is equal to direction. Similarly, rank_dest is obtained by adding disp to the nth coordinate. Coordinate dimensions (direction) are numbered starting with 0.

If the dimension specified by direction is non-periodic, off-end shifts result in the value MPI_PROC_NULL being returned for rank_source and/or rank_dest.

Notes

In C and Fortran, the coordinate is identified by counting from 0. For example, Fortran A(X,Y) and C A[x][y] both have x as direction 0.
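
A sketch of a neighbor exchange along dimension 0 (assumes comm_cart is a cartesian communicator, for example from MPI_CART_CREATE; error checking omitted):

int rank_source, rank_dest, myrank;
double out, in;
MPI_Status status;

MPI_Comm_rank(comm_cart, &myrank);
out = (double)myrank;
MPI_Cart_shift(comm_cart, 0, 1, &rank_source, &rank_dest);
MPI_Sendrecv(&out, 1, MPI_DOUBLE, rank_dest, 0,
    &in, 1, MPI_DOUBLE, rank_source, 0, comm_cart, &status);
/* on a non-periodic edge, MPI_PROC_NULL makes the transfer a no-op */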

Errors

MPI not initialized

MPI already finalized

Invalid communicator

Invalid topology type
must be cartesian

No topology

Related Information

MPI_CART_RANK
MPI_CART_COORDS
MPI_CART_CREATE

MPI_CART_SUB, MPI_Cart_sub

Purpose

Partitions a cartesian communicator into lower-dimensional subgroups.

C Synopsis

#include <mpi.h>
int MPI_Cart_sub(MPI_Comm comm,int *remain_dims,MPI_Comm *newcomm);

Fortran Synopsis

include 'mpif.h'
MPI_CART_SUB(INTEGER COMM,LOGICAL REMAIN_DIMS(*),INTEGER NEWCOMM,
      INTEGER IERROR)

Parameters

comm

is a communicator with cartesian topology (handle) (IN)

remain_dims

the ith entry of remain_dims specifies whether the ith dimension is kept in the subgrid or is dropped. (logical vector) (IN)

newcomm

is the communicator containing the subgrid that includes the calling task (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

If a cartesian topology was created with MPI_CART_CREATE, you can use the function MPI_CART_SUB to partition the group into subgroups that form lower-dimensional cartesian subgrids, and to build for each subgroup a communicator with the associated subgrid topology.

(This function is closely related to MPI_COMM_SPLIT.)

For example, MPI_CART_CREATE (..., comm) defined a 2 × 3 × 4 grid. Let remain_dims = (true, false, true). Then a call to:

    MPI_CART_SUB(comm,remain_dims,comm_new),

creates three communicators. Each has eight tasks in a 2 × 4 cartesian topology. If remain_dims = (false, false, true), then the call to:

    MPI_CART_SUB(comm,remain_dims,comm_new),

creates six non-overlapping communicators, each with four tasks in a one-dimensional cartesian topology.
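
In code, the first case above looks like this (assumes comm carries the 2 × 3 × 4 topology):

MPI_Comm comm_new;
int remain_dims[3];

remain_dims[0] = 1;    /* keep dimension 0 */
remain_dims[1] = 0;    /* drop dimension 1: one subgrid per coordinate */
remain_dims[2] = 1;    /* keep dimension 2 */
MPI_Cart_sub(comm, remain_dims, &comm_new);
/* comm_new is this task's 2 x 4 subgrid communicator (8 tasks) */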

Errors

MPI not initialized

MPI already finalized

Invalid communicator

Invalid topology
must be cartesian

No topology

Related Information

MPI_CART_CREATE
MPI_COMM_SPLIT

MPI_CARTDIM_GET, MPI_Cartdim_get

Purpose

Retrieves the number of cartesian dimensions from a communicator.

C Synopsis

#include <mpi.h>
int MPI_Cartdim_get(MPI_Comm comm,int *ndims);

Fortran Synopsis

include 'mpif.h'
MPI_CARTDIM_GET(INTEGER COMM,INTEGER NDIMS,INTEGER IERROR)

Parameters

comm

is a communicator with cartesian topology (handle) (IN)

ndims

is an integer specifying the number of dimensions of the cartesian topology (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine retrieves the number of dimensions in a cartesian topology.

Errors

Invalid communicator

No topology

Invalid topology type
must be cartesian

MPI not initialized

MPI already finalized

Related Information

MPI_CART_GET
MPI_CART_CREATE

MPI_COMM_COMPARE, MPI_Comm_compare

Purpose

Compares the groups and context of two communicators.

C Synopsis

#include <mpi.h>
int MPI_Comm_compare(MPI_Comm comm1,MPI_Comm comm2,int *result);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_COMPARE(INTEGER COMM1,INTEGER COMM2,INTEGER RESULT,INTEGER IERROR)

Parameters

comm1

is the first communicator (handle) (IN)

comm2

is the second communicator (handle) (IN)

result

is an integer specifying the result. The defined values are: MPI_IDENT, MPI_CONGRUENT, MPI_SIMILAR, and MPI_UNEQUAL. (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine compares the groups and contexts of two communicators. The following is an explanation of each MPI_COMM_COMPARE defined value:

MPI_IDENT
comm1 and comm2 are handles for the identical object

MPI_CONGRUENT
the underlying groups are identical in constituents and rank order (both local and remote groups for intercommunicators), but are different in context

MPI_SIMILAR
the group members of both communicators are the same but differ in rank order (both local and remote groups for intercommunicators)

MPI_UNEQUAL
none of the above applies.

Errors

Invalid communicator(s)

MPI not initialized

MPI already finalized

Related Information

MPI_GROUP_COMPARE

MPI_COMM_CREATE, MPI_Comm_create

Purpose

Creates a new intracommunicator with a given group.

C Synopsis

#include <mpi.h>
int MPI_Comm_create(MPI_Comm comm,MPI_Group group,MPI_Comm *newcomm);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_CREATE(INTEGER COMM,INTEGER GROUP,INTEGER NEWCOMM,
      INTEGER IERROR)

Parameters

comm

is the communicator (handle) (IN)

group

is a group that is a subset of the group of comm (handle) (IN)

newcomm

is the new communicator (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_COMM_CREATE is a collective function that is invoked by all tasks in the group associated with comm. This routine creates a new intracommunicator newcomm with communication group defined by group and a new context. Cached information is not propagated from comm to newcomm.

For tasks that are not in group, MPI_COMM_NULL is returned. The call is erroneous if group is not a subset of the group associated with comm. The call is executed by all tasks in comm even if they do not belong to the new group.

This call applies only to intracommunicators.

Notes

MPI_COMM_CREATE provides a way to subset a group of tasks for the purpose of separate MIMD computation with separate communication space. You can use newcomm in subsequent calls to MPI_COMM_CREATE or other communicator constructors to further subdivide a computation into parallel sub-computations.
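
A sketch that builds a communicator containing only the even-ranked tasks (error checking omitted; assumes <stdlib.h>; note that every task in MPI_COMM_WORLD must make the MPI_COMM_CREATE call):

MPI_Group world_group, even_group;
MPI_Comm even_comm;
int ntasks, i, n, *ranks;

MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
MPI_Comm_group(MPI_COMM_WORLD, &world_group);
ranks = (int *)malloc(ntasks * sizeof(int));
for (n = 0, i = 0; i < ntasks; i += 2) ranks[n++] = i;
MPI_Group_incl(world_group, n, ranks, &even_group);
MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);
/* odd-ranked tasks receive even_comm == MPI_COMM_NULL */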

Errors

Conflicting collective operations on communicator

Invalid communicator

Invalid group
group is not a subset of the group associated with comm

MPI not initialized

MPI already finalized

Related Information

MPI_COMM_DUP
MPI_COMM_SPLIT

MPI_COMM_DUP, MPI_Comm_dup

Purpose

Creates a new communicator that is a duplicate of an existing communicator.

C Synopsis

#include <mpi.h>
int MPI_Comm_dup(MPI_Comm comm,MPI_Comm *newcomm);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_DUP(INTEGER COMM,INTEGER NEWCOMM,INTEGER IERROR)

Parameters

comm

is the communicator (handle) (IN)

newcomm

is the copy of comm (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_COMM_DUP is a collective function that is invoked by the group associated with comm. This routine duplicates the existing communicator comm with its associated key values.

For each key value, the respective copy callback function determines the attribute value associated with this key in the new communicator. One action that a copy callback may take is to delete the attribute from the new communicator. MPI_COMM_DUP returns in newcomm a new communicator with the same group, any copied cached information, and a new context.

This call applies to both intracommunicators and intercommunicators.

Notes

Use this operation to produce a duplicate communication space that has the same properties as the original communicator. This includes attributes and topologies.

This call is valid even if there are pending point to point communications involving the communicator comm.

Remember that MPI_COMM_DUP is collective on the input communicator, so it is erroneous for a thread to attempt to duplicate a communicator that is simultaneously involved in an MPI_COMM_DUP or any collective on some other thread.

Errors

Conflicting collective operations on communicator

A copy_fn did not return MPI_SUCCESS

A delete_fn did not return MPI_SUCCESS

Invalid communicator

MPI not initialized

MPI already finalized

Related Information

MPI_KEYVAL_CREATE

MPI_COMM_FREE, MPI_Comm_free

Purpose

Marks a communicator for deallocation.

C Synopsis

#include <mpi.h>
int MPI_Comm_free(MPI_Comm *comm);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_FREE(INTEGER COMM,INTEGER IERROR)

Parameters

comm

is the communicator to be freed (handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This collective function marks an intracommunicator or intercommunicator object for deallocation. MPI_COMM_FREE sets the handle to MPI_COMM_NULL. Actual deallocation of the communicator object occurs when all active references to it have completed. The delete callback functions for all cached attributes are called in arbitrary order. The delete functions are called immediately, not deferred until deallocation.

Errors

A delete_fn did not return MPI_SUCCESS

Invalid communicator

MPI not initialized

MPI already finalized

Related Information

MPI_KEYVAL_CREATE

MPI_COMM_GROUP, MPI_Comm_group

Purpose

Returns the group handle associated with a communicator.

C Synopsis

#include <mpi.h>
int MPI_Comm_group(MPI_Comm comm,MPI_Group *group);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_GROUP(INTEGER COMM,INTEGER GROUP,INTEGER IERROR)

Parameters

comm

is the communicator (handle) (IN)

group

is the group corresponding to comm (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the group handle associated with a communicator.

Notes

If comm is an intercommunicator, then group is set to the local group. To determine the remote group of an intercommunicator, use MPI_COMM_REMOTE_GROUP.

Errors

Invalid communicator

MPI not initialized

MPI already finalized

Related Information

MPI_COMM_REMOTE_GROUP

MPI_COMM_RANK, MPI_Comm_rank

Purpose

Returns the rank of the local task in the group associated with a communicator.

C Synopsis

#include <mpi.h>
int MPI_Comm_rank(MPI_Comm comm,int *rank);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_RANK(INTEGER COMM,INTEGER RANK,INTEGER IERROR)

Parameters

comm

is the communicator (handle) (IN)

rank

is an integer specifying the rank of the calling task in group of comm (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the rank of the local task in the group associated with a communicator.

You can use this routine with MPI_COMM_SIZE to determine the amount of concurrency available for a specific job. MPI_COMM_RANK indicates the rank of the task that calls it in the range from 0...size - 1, where size is the return value of MPI_COMM_SIZE.

This routine is a shortcut to accessing the communicator's group with MPI_COMM_GROUP, computing the rank using MPI_GROUP_RANK and freeing the temporary group by using MPI_GROUP_FREE.

If comm is an intercommunicator, rank is the rank of the local task in the local group.

Errors

Invalid communicator

MPI not initialized

MPI already finalized

Related Information

MPI_GROUP_RANK

MPI_COMM_REMOTE_GROUP, MPI_Comm_remote_group

Purpose

Returns the handle of the remote group of an intercommunicator.

C Synopsis

#include <mpi.h>
int MPI_Comm_remote_group(MPI_Comm comm,MPI_Group *group);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_REMOTE_GROUP(INTEGER COMM,INTEGER GROUP,INTEGER IERROR)

Parameters

comm

is the intercommunicator (handle) (IN)

group

is the remote group corresponding to comm. (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a local operation that returns the handle of the remote group of an intercommunicator.

Notes

To determine the local group of an intercommunicator, use MPI_COMM_GROUP.

Errors

Invalid communicator

Invalid communicator type
it must be intercommunicator

MPI not initialized

MPI already finalized

Related Information

MPI_COMM_GROUP

MPI_COMM_REMOTE_SIZE, MPI_Comm_remote_size

Purpose

Returns the size of the remote group of an intercommunicator.

C Synopsis

#include <mpi.h>
int MPI_Comm_remote_size(MPI_Comm comm,int *size);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_REMOTE_SIZE(INTEGER COMM,INTEGER SIZE,INTEGER IERROR)

Parameters

comm

is the intercommunicator (handle) (IN)

size

is an integer specifying the number of tasks in the remote group of comm. (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a local operation that returns the size of the remote group of an intercommunicator.

Notes

To determine the size of the local group of an intercommunicator, use MPI_COMM_SIZE.

Errors

Invalid communicator

Invalid communicator type
it must be intercommunicator

MPI not initialized

MPI already finalized

Related Information

MPI_COMM_SIZE

MPI_COMM_SIZE, MPI_Comm_size

Purpose

Returns the size of the group associated with a communicator.

C Synopsis

#include <mpi.h>
int MPI_Comm_size(MPI_Comm comm,int *size);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_SIZE(INTEGER COMM,INTEGER SIZE,INTEGER IERROR)

Parameters

comm

is the communicator (handle) (IN)

size

is an integer specifying the number of tasks in the group of comm (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the size of the group associated with a communicator. It is a shortcut to accessing the communicator's group with MPI_COMM_GROUP, computing the size with MPI_GROUP_SIZE, and freeing the temporary group with MPI_GROUP_FREE.

If comm is an intercommunicator, size will be the size of the local group. To determine the size of the remote group of an intercommunicator, use MPI_COMM_REMOTE_SIZE.

You can use this routine with MPI_COMM_RANK to determine the amount of concurrency available for a specific library or program. MPI_COMM_RANK indicates the rank of the task that calls it in the range from 0...size - 1, where size is the return value of MPI_COMM_SIZE. The rank and size information can then be used to partition work across the available tasks.

Notes

This function indicates the number of tasks in a communicator. For MPI_COMM_WORLD, it indicates the total number of tasks available.
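
For example, a common cyclic partition of work across the available tasks (a minimal fragment):

/* Task 'rank' handles iterations rank, rank+size, rank+2*size, ... */
int rank, size, i, niters = 1000;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
for (i = rank; i < niters; i += size) {
    /* work on iteration i */
}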

Errors

Invalid communicator

MPI not initialized

MPI already finalized

Related Information

MPI_GROUP_SIZE
MPI_COMM_GROUP
MPI_COMM_RANK
MPI_COMM_REMOTE_SIZE
MPI_GROUP_FREE

MPI_COMM_SPLIT, MPI_Comm_split

Purpose

Splits a communicator into multiple communicators based on color and key.

C Synopsis

#include <mpi.h>
int MPI_Comm_split(MPI_Comm comm,int color,int key,MPI_Comm *newcomm);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_SPLIT(INTEGER COMM,INTEGER COLOR,INTEGER KEY,
    INTEGER NEWCOMM,INTEGER IERROR)

Parameters

comm

is the communicator (handle) (IN)

color

is an integer specifying control of subset assignment (IN)

key

is an integer specifying control of rank assignment (IN)

newcomm

is the new communicator (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_COMM_SPLIT is a collective function that partitions the group associated with comm into disjoint subgroups, one for each value of color. Each subgroup contains all tasks of the same color. Within each subgroup, the tasks are ranked in the order defined by the value of the argument key. Ties are broken according to their rank in the old group. A new communicator is created for each subgroup and returned in newcomm. If a task supplies the color value MPI_UNDEFINED, newcomm returns MPI_COMM_NULL. Even though this is a collective call, each task is allowed to provide different values for color and key.

This call applies only to intracommunicators.

The value of color must be greater than or equal to zero.
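
A sketch splitting MPI_COMM_WORLD into even-rank and odd-rank communicators while preserving the original rank order:

MPI_Comm halfcomm;
int rank;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &halfcomm);
/* color rank % 2 selects the subgroup; key rank keeps the order */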

Errors

Conflicting collective operations on communicator

Invalid color
color < 0

Invalid communicator

Invalid communicator type
it must be intracommunicator

MPI not initialized

MPI already finalized

Related Information

MPI_CART_SUB

MPI_COMM_TEST_INTER, MPI_Comm_test_inter

Purpose

Returns the type of a communicator (intra or inter).

C Synopsis

#include <mpi.h>
int MPI_Comm_test_inter(MPI_Comm comm,int *flag);

Fortran Synopsis

include 'mpif.h'
MPI_COMM_TEST_INTER(INTEGER COMM,LOGICAL FLAG,INTEGER IERROR)

Parameters

comm

is the communicator (handle) (IN)

flag

is the communicator type: true if comm is an intercommunicator, false otherwise (logical) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is used to determine if a communicator is an inter or an intracommunicator.

If comm is an intercommunicator, the call returns true. If comm is an intracommunicator, the call returns false.

Notes

An intercommunicator can be used as an argument to some of the communicator access routines. However, intercommunicators cannot be used as input to some of the constructor routines for intracommunicators, such as MPI_COMM_CREATE.

Errors

Invalid communicator

MPI not initialized

MPI already finalized

MPI_DIMS_CREATE, MPI_Dims_create

Purpose

Defines a cartesian grid to balance tasks.

C Synopsis

#include <mpi.h>
int MPI_Dims_create(int nnodes,int ndims,int *dims);

Fortran Synopsis

include 'mpif.h'
MPI_DIMS_CREATE(INTEGER NNODES,INTEGER NDIMS,INTEGER DIMS(*),
      INTEGER IERROR)

Parameters

nnodes

is an integer specifying the number of nodes in a grid (IN)

ndims

is an integer specifying the number of cartesian dimensions (IN)

dims

is an integer array of size ndims that specifies the number of nodes in each dimension. (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a cartesian grid with a given number of dimensions and a given number of nodes. The dimensions are constrained to be as close to each other as possible.

If dims[i] is a positive number when MPI_DIMS_CREATE is called, the routine will not modify the number of nodes in dimension i. Only those entries where dims[i]=0 are modified by the call.

Notes

MPI_DIMS_CREATE chooses dimensions so that the resulting grid is as close as possible to being an ndims-dimensional cube.
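
A sketch that lets MPI choose a balanced two-dimensional factorization of the task count and builds a grid from it:

int ntasks, dims[2], periods[2];
MPI_Comm comm_cart;

MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
dims[0] = dims[1] = 0;    /* 0 means "let MPI_DIMS_CREATE choose" */
MPI_Dims_create(ntasks, 2, dims);
periods[0] = periods[1] = 0;
MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &comm_cart);
/* for example, 12 tasks yield dims[0] = 4, dims[1] = 3 */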

Errors

MPI not initialized

MPI already finalized

Invalid ndims
ndims < 0

Invalid nnodes
nnodes < 0

Invalid dimension
dims[i] < 0, or nnodes is not a multiple of the product of the non-zero entries of dims

Related Information

MPI_CART_CREATE

MPI_ERRHANDLER_CREATE, MPI_Errhandler_create

Purpose

Registers a user-defined error handler.

C Synopsis

#include <mpi.h>
int MPI_Errhandler_create(MPI_Handler_function *function,
    MPI_Errhandler *errhandler);

Fortran Synopsis

include 'mpif.h'
MPI_ERRHANDLER_CREATE(EXTERNAL FUNCTION,INTEGER ERRHANDLER,
      INTEGER IERROR)

Parameters

function

is a user defined error handling procedure (IN)

errhandler

is an MPI error handler (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_ERRHANDLER_CREATE registers the user routine function for use as an MPI error handler.

You can associate an error handler with a communicator. MPI will use the specified error handling routine for any exception that takes place during a call on this communicator. Different tasks can attach different error handlers to the same communicator. MPI calls not related to a specific communicator are considered as attached to the communicator MPI_COMM_WORLD.

Notes

The MPI standard specifies the following error handler prototype. A correct user error handler would be coded as:

void my_handler(MPI_Comm *comm, int *errcode, ...){}

The Parallel Environment for AIX implementation of MPI passes additional arguments to an error handler. The MPI standard allows this and urges an MPI implementation that does so to document the additional arguments. These additional arguments will be ignored by fully portable user error handlers. Anyone who wants to use the extra errhandler arguments can do so by using the C varargs (or stdargs) facility, but will be writing code that does not port cleanly to other MPI implementations, which may have different additional arguments.

The effective prototype for an error handler in IBM's implementation is:

typedef void (MPI_Handler_function)
  (MPI_Comm *comm, int *code, char *routine_name, int *flag, int *badval)

The additional arguments are:

routine_name
the name of the MPI routine in which the error occurred

flag
TRUE if badval is meaningful, FALSE if not

badval
the non-valid integer value that triggered the error

The interpretation of badval is context-dependent, so badval is not likely to be useful to a user error handler function that cannot identify this context. The routine_name string is more likely to be useful.
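
A minimal sketch of a portable handler that uses only the two standard arguments and aborts after reporting (the names my_fatal_handler and my_handler are hypothetical; assumes <stdio.h>):

void my_fatal_handler(MPI_Comm *comm, int *errcode, ...)
{
    int errclass;

    MPI_Error_class(*errcode, &errclass);
    fprintf(stderr, "MPI error %d (class %d)\n", *errcode, errclass);
    MPI_Abort(*comm, *errcode);
}

/* elsewhere, after MPI_Init: */
MPI_Errhandler my_handler;
MPI_Errhandler_create(my_fatal_handler, &my_handler);
MPI_Errhandler_set(MPI_COMM_WORLD, my_handler);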

Errors

NULL function

MPI not initialized

MPI already finalized

Related Information

MPI_ERRHANDLER_SET
MPI_ERRHANDLER_GET
MPI_ERRHANDLER_FREE

MPI_ERRHANDLER_FREE, MPI_Errhandler_free

Purpose

Marks an error handler for deallocation.

C Synopsis

#include <mpi.h>
int MPI_Errhandler_free(MPI_Errhandler *errhandler);

Fortran Synopsis

include 'mpif.h'
MPI_ERRHANDLER_FREE(INTEGER ERRHANDLER,INTEGER IERROR)

Parameters

errhandler

is an MPI error handler (handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine marks error handler errhandler for deallocation and sets errhandler to MPI_ERRHANDLER_NULL. Actual deallocation occurs when all communicators associated with the error handler have been deallocated.

Errors

Invalid error handler

MPI not initialized

MPI already finalized

Related Information

MPI_ERRHANDLER_CREATE

MPI_ERRHANDLER_GET, MPI_Errhandler_get

Purpose

Gets an error handler associated with a communicator.

C Synopsis

#include <mpi.h>
int MPI_Errhandler_get(MPI_Comm comm,MPI_Errhandler *errhandler);

Fortran Synopsis

include 'mpif.h'
MPI_ERRHANDLER_GET(INTEGER COMM,INTEGER ERRHANDLER,INTEGER IERROR)

Parameters

comm

is a communicator (handle) (IN)

errhandler

is the MPI error handler currently associated with comm (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the error handler errhandler currently associated with communicator comm.

Errors

Invalid communicator

MPI not initialized

MPI already finalized

Related Information

MPI_ERRHANDLER_SET
MPI_ERRHANDLER_CREATE

MPI_ERRHANDLER_SET, MPI_Errhandler_set

Purpose

Associates a new error handler with a communicator.

C Synopsis

#include <mpi.h>
int MPI_Errhandler_set(MPI_Comm comm,MPI_Errhandler errhandler);

Fortran Synopsis

include 'mpif.h'
MPI_ERRHANDLER_SET(INTEGER COMM, INTEGER ERRHANDLER, INTEGER IERROR)

Parameters

comm

is a communicator (handle) (IN)

errhandler

is a new MPI error handler for comm (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine associates error handler errhandler with communicator comm. The association is local.

MPI will use the specified error handling routine for any exception that takes place during a call on this communicator. Different tasks can attach different error handlers to the same communicator. MPI calls not related to a specific communicator are considered as attached to the communicator MPI_COMM_WORLD.

Notes

An error handler that does not end in the MPI job being terminated creates undefined risks. Some errors are harmless, while others are catastrophic. For example, an error detected by one member of a collective operation can result in other members waiting indefinitely for an operation that will never occur.

It is also important to note that the MPI standard does not specify the state the MPI library should be in after an error occurs. MPI does not provide a way for users to determine how much, if any, damage has been done to the MPI state by a particular error.

The default error handler is MPI_ERRORS_ARE_FATAL, which behaves as if it contains a call to MPI_ABORT. MPI_ERRHANDLER_SET allows users to replace MPI_ERRORS_ARE_FATAL with an alternate error handler. The MPI standard provides MPI_ERRORS_RETURN, and IBM adds the non-standard MPE_ERRORS_WARN. These are pre-defined handlers that cause the error code to be returned and MPI to continue to run. Error handlers that are written by MPI users may call MPI_ABORT. If they do not abort, they too will cause MPI to deliver an error return code to the caller and continue to run.

Error handlers that let MPI return should only be used if every MPI call checks its return code. Continuing to use MPI after an error involves undefined risks. You may do cleanup after an MPI error is detected, as long as the cleanup does not use MPI calls. This should normally be followed by a call to MPI_ABORT.

The error Invalid error handler will be raised if errhandler is a file error handler (created with the routine MPI_FILE_CREATE_ERRHANDLER). Predefined error handlers, MPI_ERRORS_ARE_FATAL and MPI_ERRORS_RETURN, can be associated with both communicators and file handles.
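
As a minimal sketch of the return-code style (the send arguments are arbitrary), a task might replace the default handler and then check every call:

#include <mpi.h>

void use_return_codes(void)
{
  int buf = 0, rc;

  /* Replace MPI_ERRORS_ARE_FATAL so errors come back as return codes. */
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

  rc = MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
  if (rc != MPI_SUCCESS) {
    /* non-MPI cleanup may go here; then terminate the job */
    MPI_Abort(MPI_COMM_WORLD, rc);
  }
}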

Errors

Invalid Communicator

Invalid error handler

MPI not initialized

MPI already finalized

Related Information

MPI_ERRHANDLER_GET
MPI_ERRHANDLER_CREATE

MPI_ERROR_CLASS, MPI_Error_class

Purpose

Returns the error class for the corresponding error code.

C Synopsis

#include <mpi.h>
int MPI_Error_class(int errorcode,int *errorclass);

Fortran Synopsis

include 'mpif.h'
MPI_ERROR_CLASS(INTEGER ERRORCODE,INTEGER ERRORCLASS,INTEGER IERROR)

Parameters

errorcode

is the error code returned by an MPI routine (IN)

errorclass

is the error class for the errorcode (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the error class corresponding to an error code.

Table 2 lists the valid error classes for threaded and non-threaded libraries.

Table 2. MPI Error Classes: Threaded and Non-Threaded Libraries
Error Classes Description
MPI_SUCCESS No error
MPI_ERR_BUFFER Non-valid buffer pointer
MPI_ERR_COUNT Non-valid count argument
MPI_ERR_TYPE Non-valid datatype argument
MPI_ERR_TAG Non-valid tag argument
MPI_ERR_COMM Non-valid communicator
MPI_ERR_RANK Non-valid rank
MPI_ERR_REQUEST Non-valid request (handle)
MPI_ERR_ROOT Non-valid root
MPI_ERR_GROUP Non-valid group
MPI_ERR_OP Non-valid operation
MPI_ERR_TOPOLOGY Non-valid topology
MPI_ERR_DIMS Non-valid dimension argument
MPI_ERR_ARG Non-valid argument
MPI_ERR_IN_STATUS Error code is in status
MPI_ERR_PENDING Pending request
MPI_ERR_TRUNCATE Message truncated on receive
MPI_ERR_INTERN Internal MPI error
MPI_ERR_OTHER Known error not in this list
MPI_ERR_UNKNOWN Unknown error
MPI_ERR_LASTCODE Last standard error code

Table 3 lists the valid error classes for threaded libraries only.

Table 3. MPI Error Classes: Threaded Libraries Only
Error Classes Description
MPI_ERR_FILE Non-valid file handle
MPI_ERR_NOT_SAME Collective argument is not identical on all tasks
MPI_ERR_AMODE Error related to the amode passed to MPI_FILE_OPEN
MPI_ERR_UNSUPPORTED_DATAREP Unsupported datarep passed to MPI_FILE_SET_VIEW
MPI_ERR_UNSUPPORTED_OPERATION Unsupported operation, such as seeking on a file that supports sequential access only
MPI_ERR_NO_SUCH_FILE File does not exist
MPI_ERR_FILE_EXISTS File exists
MPI_ERR_BAD_FILE Non-valid file name (the path name is too long, for example)
MPI_ERR_ACCESS Permission denied
MPI_ERR_NO_SPACE Not enough space
MPI_ERR_QUOTA Quota exceeded
MPI_ERR_READ_ONLY Read-only file or file system
MPI_ERR_FILE_IN_USE File operation could not be completed because the file is currently opened by some task
MPI_ERR_DUP_DATAREP Conversion functions could not be registered because a previously-defined data representation was passed to MPI_REGISTER_DATAREP
MPI_ERR_CONVERSION An error occurred in a user-supplied data conversion function
MPI_ERR_IO Other I/O error

Notes

For this implementation of MPI, refer to the IBM Parallel Environment for AIX: Messages, which provides a listing of all the error messages issued as well as the error class to which the message belongs. Be aware that the MPI standard is not explicit enough about error classes to guarantee that every implementation of MPI will use the same error class for every detectable user error.
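
For example, a sketch of a portable check that tests the error class rather than an implementation-specific error code (it assumes an error handler that returns codes is in effect):

#include <mpi.h>

int is_datatype_error(int rc)
{
  int errclass;

  MPI_Error_class(rc, &errclass);
  return (errclass == MPI_ERR_TYPE);
}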

Errors

MPI not initialized

MPI already finalized

Related Information

MPI_ERROR_STRING

MPI_ERROR_STRING, MPI_Error_string

Purpose

Returns the error string for a given error code.

C Synopsis

#include <mpi.h>
int MPI_Error_string(int errorcode,char *string,
      int *resultlen);

Fortran Synopsis

include 'mpif.h'
MPI_ERROR_STRING(INTEGER ERRORCODE,CHARACTER STRING(*),
     INTEGER RESULTLEN,INTEGER IERROR)

Parameters

errorcode

is the error code returned by an MPI routine (IN)

string

is the error message for the errorcode (OUT)

resultlen

is the character length of string (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the error string for a given error code. The returned string is null terminated with the terminating byte not counted in resultlen.

Storage for string must be at least MPI_MAX_ERROR_STRING characters long. The number of characters actually written is returned in resultlen.
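
A minimal sketch of retrieving and printing the message text for a return code:

#include <mpi.h>
#include <stdio.h>

void print_mpi_error(int rc)
{
  char msg[MPI_MAX_ERROR_STRING];
  int len;

  MPI_Error_string(rc, msg, &len);
  fprintf(stderr, "MPI error: %s (%d characters)\n", msg, len);
}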

Errors

Invalid error code
errorcode is not defined

MPI not initialized

MPI already finalized

Related Information

MPI_ERROR_CLASS

MPI_FILE_CLOSE, MPI_File_close

Purpose

Closes the file referred to by its file handle fh. It may also delete the file if the appropriate mode was set when the file was opened.

C Synopsis

#include <mpi.h>
int MPI_File_close (MPI_File *fh);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_CLOSE(INTEGER FH,INTEGER IERROR)

Parameters

fh

is the file handle of the file to be closed (handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_CLOSE closes the file referred to by fh and deallocates associated internal data structures. This is a collective operation. The file is also deleted if MPI_MODE_DELETE_ON_CLOSE was set when the file was opened. In this situation, if other tasks have already opened the file and are still accessing it concurrently, these accesses will proceed normally, as if the file had not been deleted, until the tasks close the file. However, new open operations on the file will fail. If I/O operations are pending on fh, an error is returned to all the participating tasks, the file is neither closed nor deleted, and fh remains a valid file handle.

Notes

You are responsible for making sure all outstanding nonblocking requests and split collective operations associated with fh made by a task have completed before that task calls MPI_FILE_CLOSE.

If you call MPI_FINALIZE before all files are closed, an error will be raised on MPI_COMM_WORLD.

MPI_FILE_CLOSE deallocates the file handle object and sets fh to MPI_FILE_NULL.
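
For example, a sketch that completes an outstanding nonblocking request before closing (fh and request are assumed to come from earlier MPI_FILE_OPEN and MPI_FILE_IREAD_AT calls; a real status object is used because MPI_STATUS_IGNORE is not supported in this release):

#include <mpi.h>

void finish_and_close(MPI_File *fh, MPI_Request *request)
{
  MPI_Status status;

  MPI_Wait(request, &status);  /* the request must complete first */
  MPI_File_close(fh);          /* *fh is set to MPI_FILE_NULL     */
}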

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle

Pending I/O operations (MPI_ERR_OTHER)
There are pending I/O operations

Internal close failed (MPI_ERR_IO)
An internal close operation on the file failed

Returning Errors When a File Is To Be Deleted (MPI Error Class):

Permission denied (MPI_ERR_ACCESS)
Write access to the directory containing the file is denied

File does not exist (MPI_ERR_NO_SUCH_FILE)
The file that is to be deleted does not exist

Read-only file system (MPI_ERR_READ_ONLY)
The directory containing the file resides on a read-only file system

Internal unlink failed (MPI_ERR_IO)
An internal unlink operation on the file failed

Related Information

MPI_FILE_OPEN
MPI_FILE_DELETE
MPI_FINALIZE

MPI_FILE_CREATE_ERRHANDLER, MPI_File_create_errhandler

Purpose

Registers a user-defined error handler that you can associate with an open file.

C Synopsis

#include <mpi.h>
int MPI_File_create_errhandler (MPI_File_errhandler_fn *function,
    MPI_Errhandler *errhandler);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_CREATE_ERRHANDLER(EXTERNAL FUNCTION,INTEGER ERRHANDLER,
    INTEGER IERROR)

Parameters

function

is a user defined file error handling procedure (IN)

errhandler

is an MPI error handler (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_CREATE_ERRHANDLER registers the user routine function for use as an MPI error handler that can be associated with a file handle. Once associated with a file handle, MPI uses the specified error handling routine for any exception that takes place during a call on this file handle.

Notes

Different tasks can associate different error handlers with the same file. MPI_ERRHANDLER_FREE is used to free any error handler.

The MPI standard specifies the following error handler prototype:

typedef void (MPI_File_errhandler_fn) (MPI_File *, int *, ...);

A correct user error handler would be coded as:

void my_handler(MPI_File *fh, int *errcode,...){}

The Parallel Environment for AIX implementation of MPI passes additional arguments to an error handler. The MPI standard allows this and urges an MPI implementation that does so to document the additional arguments. These additional arguments are ignored by fully portable user error handlers. Anyone who wants to use the extra errhandler arguments can do so by using the C varargs (or stdargs) facility, but will be writing code that does not port cleanly to other MPI implementations, which may have different additional arguments.

The effective prototype for an error handler in IBM's implementation is:

 typedef void (MPI_File_errhandler_fn)
   (MPI_File *fh, int *code, char *routine_name, int *flag, int *badval)

The additional arguments are:

routine_name
the name of the MPI routine in which the error occurred

flag
TRUE if badval is meaningful, FALSE if not

badval
the non-valid integer value that triggered the error

The interpretation of badval is context-dependent, so badval is not likely to be useful to a user error handler function that cannot identify this context. The routine_name string is more likely to be useful.
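
Building on the effective prototype above, a sketch of a file error handler (names arbitrary) registered and attached to an open file:

#include <mpi.h>
#include <stdio.h>

/* Coded against the IBM effective prototype; fully portable code
   would declare only (MPI_File *, int *, ...). */
void my_file_handler(MPI_File *fh, int *code, char *routine_name,
                     int *flag, int *badval)
{
  fprintf(stderr, "MPI-IO error %d in %s\n", *code, routine_name);
  if (*flag)
    fprintf(stderr, "  offending value: %d\n", *badval);
}

void install_file_handler(MPI_File fh)
{
  MPI_Errhandler handler;

  MPI_File_create_errhandler(
      (MPI_File_errhandler_fn *)my_file_handler, &handler);
  MPI_File_set_errhandler(fh, handler);
  MPI_Errhandler_free(&handler);  /* fh keeps its reference */
}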

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Null function not allowed
function cannot be NULL.

Related Information

MPI_FILE_SET_ERRHANDLER
MPI_FILE_GET_ERRHANDLER
MPI_ERRHANDLER_FREE

MPI_FILE_DELETE, MPI_File_delete

Purpose

Deletes the file referred to by filename after pending operations on the file complete. New operations cannot be initiated on the file.

C Synopsis

#include <mpi.h>
int MPI_File_delete (char *filename,MPI_Info info);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_DELETE(CHARACTER*(*) FILENAME,INTEGER INFO,
    INTEGER IERROR)

Parameters

filename

is the name of the file to be deleted (string) (IN)

info

is an info object specifying file hints (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine deletes the file referred to by filename. If other tasks have already opened the file and are still accessing it concurrently, these accesses will proceed normally, as if the file had not been deleted, until the tasks close the file. However, new open operations on the file will fail. There are no hints defined for MPI_FILE_DELETE.
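
A minimal sketch (the file name is arbitrary; because there are no hints, MPI_INFO_NULL is passed):

#include <mpi.h>

void remove_old_file(void)
{
  MPI_File_delete("/gpfs/app/old.dat", MPI_INFO_NULL);
}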

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Pathname too long (MPI_ERR_BAD_FILE)
A filename must contain less than 1024 characters.

Invalid file system type (MPI_ERR_OTHER)
filename refers to a file belonging to a file system of an unsupported type.

Invalid info (MPI_ERR_INFO)
info is not a valid info object.

Permission denied (MPI_ERR_ACCESS)
Write access to the directory containing the file is denied.

File or directory does not exist (MPI_ERR_NO_SUCH_FILE)
The file that is to be deleted does not exist, or a directory in the path does not exist.

Read-only file system (MPI_ERR_READ_ONLY)
The directory containing the file resides on a read-only file system.

Internal unlink failed (MPI_ERR_IO)
An internal unlink operation on the file failed.

Related Information

MPI_FILE_CLOSE

MPI_FILE_GET_AMODE, MPI_File_get_amode

Purpose

Retrieves the access mode specified when the file was opened.

C Synopsis

#include <mpi.h>
int MPI_File_get_amode (MPI_File fh,int *amode);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_GET_AMODE(INTEGER FH,INTEGER AMODE,INTEGER IERROR)

Parameters

fh

is the file handle (handle) (IN)

amode

is the file access mode used to open the file (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_GET_AMODE allows you to retrieve the access mode specified when the file referred to by fh was opened.
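
For example, a sketch that tests whether the file was opened with write access:

#include <mpi.h>

int is_writable(MPI_File fh)
{
  int amode;

  MPI_File_get_amode(fh, &amode);
  return (amode & (MPI_MODE_WRONLY | MPI_MODE_RDWR)) != 0;
}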

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Related Information

MPI_FILE_OPEN

MPI_FILE_GET_ATOMICITY, MPI_File_get_atomicity

Purpose

Retrieves the current atomicity mode in which the file is accessed.

C Synopsis

#include <mpi.h>
int MPI_File_get_atomicity (MPI_File fh,int *flag);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_GET_ATOMICITY (INTEGER FH,LOGICAL FLAG,INTEGER IERROR)

Parameters

fh

is the file handle (handle) (IN)

flag

TRUE if atomic mode, FALSE if non-atomic mode (boolean) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_GET_ATOMICITY returns 1 in flag if atomic mode is enabled for the file referred to by fh; otherwise, it returns 0 in flag.

Notes

The atomic mode is set to FALSE by default when the file is first opened. In MPI-2, MPI_FILE_SET_ATOMICITY is defined as the way to set atomicity. However, it is not provided in this release.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Related Information

MPI_FILE_OPEN

MPI_FILE_GET_ERRHANDLER, MPI_File_get_errhandler

Purpose

Retrieves the error handler currently associated with a file handle.

C Synopsis

#include <mpi.h>
int MPI_File_get_errhandler (MPI_File file,MPI_Errhandler *errhandler);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_GET_ERRHANDLER (INTEGER FILE,INTEGER ERRHANDLER,
    INTEGER IERROR)

Parameters

fh

is a file handle or MPI_FILE_NULL (handle)(IN)

errhandler

is the error handler currently associated with fh or the current default file error handler (handle)(OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

If fh is MPI_FILE_NULL, then MPI_FILE_GET_ERRHANDLER returns in errhandler the default file error handler currently assigned to the calling task. If fh is a valid file handle, then MPI_FILE_GET_ERRHANDLER returns in errhandler the error handler currently associated with the file handle fh. Error handlers may be different at each task.

Notes

At MPI_INIT time, the default file error handler is MPI_ERRORS_RETURN. You can alter the default by calling the routine MPI_FILE_SET_ERRHANDLER and passing MPI_FILE_NULL as the file handle parameter. Any program that uses MPI_ERRORS_RETURN should check function return codes.
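
For example, a sketch that makes file errors fatal by default on the calling task, then reads the setting back:

#include <mpi.h>

void make_file_errors_fatal(void)
{
  MPI_Errhandler current;

  MPI_File_set_errhandler(MPI_FILE_NULL, MPI_ERRORS_ARE_FATAL);
  MPI_File_get_errhandler(MPI_FILE_NULL, &current);
  /* current now refers to MPI_ERRORS_ARE_FATAL */
}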

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid file handle
fh must be a valid file handle or MPI_FILE_NULL.

Related Information

MPI_FILE_CREATE_ERRHANDLER
MPI_FILE_SET_ERRHANDLER
MPI_ERRHANDLER_FREE

MPI_FILE_GET_GROUP, MPI_File_get_group

Purpose

Retrieves the group of tasks that opened the file.

C Synopsis

#include <mpi.h>
int MPI_File_get_group (MPI_File fh,MPI_Group *group);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_GET_GROUP (INTEGER FH,INTEGER GROUP,INTEGER IERROR)

Parameters

fh

is the file handle (handle) (IN)

group

is the group which opened the file handle (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_GET_GROUP lets you retrieve in group the group of tasks that opened the file referred to by fh. You are responsible for freeing group via MPI_GROUP_FREE.
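
A minimal sketch that finds the calling task's rank within the group that opened the file:

#include <mpi.h>

int rank_in_file_group(MPI_File fh)
{
  MPI_Group group;
  int rank;

  MPI_File_get_group(fh, &group);
  MPI_Group_rank(group, &rank);
  MPI_Group_free(&group);  /* the caller must free the group */
  return rank;
}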

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Related Information

MPI_FILE_OPEN

MPI_FILE_GET_INFO, MPI_File_get_info

Purpose

Returns a new info object identifying the hints associated with fh.

C Synopsis

#include <mpi.h>
int MPI_File_get_info (MPI_File fh,MPI_Info *info_used);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_GET_INFO (INTEGER FH,INTEGER INFO_USED,
    INTEGER IERROR)

Parameters

fh

is the file handle (handle) (IN)

info_used

is the new info object (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

Because no file hints are defined in this release, MPI_FILE_GET_INFO simply creates a new empty info object and returns its handle in info_used after checking for the validity of the file handle fh. You are responsible for freeing info_used via MPI_INFO_FREE.

Notes

File hints can be specified by the user through the info parameter of routines: MPI_FILE_SET_INFO, MPI_FILE_OPEN, MPI_FILE_SET_VIEW. MPI can also assign default values to file hints it supports when these hints are not specified by the user.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Related Information

MPI_FILE_SET_INFO
MPI_FILE_OPEN
MPI_FILE_SET_VIEW
MPI_INFO_FREE

MPI_FILE_GET_SIZE, MPI_File_get_size

Purpose

Retrieves the current file size.

C Synopsis

#include <mpi.h>
int MPI_File_get_size (MPI_File fh,MPI_Offset *size);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_GET_SIZE (INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) SIZE,
    INTEGER IERROR)

Parameters

fh

is the file handle (handle) (IN)

size

is the size of the file in bytes (long long) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_GET_SIZE returns in size the current length in bytes of the open file referred to by fh.

Notes

You can alter the size of the file by calling the routine MPI_FILE_SET_SIZE. The size of the file will also be altered when a write operation to the file results in adding data beyond the current end of the file.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Internal fstat failed (MPI_ERR_IO)
An internal fstat operation on the file failed.

Related Information

MPI_FILE_SET_SIZE
MPI_FILE_WRITE_AT
MPI_FILE_WRITE_AT_ALL
MPI_FILE_IWRITE_AT

MPI_FILE_GET_VIEW, MPI_File_get_view

Purpose

Retrieves the current file view.

C Synopsis

#include <mpi.h>
int MPI_File_get_view (MPI_File fh,MPI_Offset *disp,
    MPI_Datatype *etype,MPI_Datatype *filetype,char *datarep);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_GET_VIEW (INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) DISP,
    INTEGER ETYPE,INTEGER FILETYPE,CHARACTER DATAREP(*),INTEGER IERROR)

Parameters

fh

is the file handle (handle) (IN)

disp

is the displacement (long long) (OUT)

etype

is the elementary datatype (handle) (OUT).

filetype

is the file type (handle) (OUT).

datarep

is the data representation (string) (OUT).

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_GET_VIEW retrieves the current view associated with the open file referred to by fh. The current view displacement is returned in disp. A reference to the current elementary datatype is returned in etype and a reference to the current file type is returned in filetype. The current data representation is returned in datarep. If etype and filetype are named types, they cannot be freed. If either one is a user-defined type, it should be freed. Use MPI_TYPE_GET_ENVELOPE to identify which types should be freed via MPI_TYPE_FREE. Freeing the MPI_Datatype reference returned by MPI_FILE_GET_VIEW invalidates only this reference.
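
As a sketch, the following retrieves the view and frees only the user-defined datatypes (MPI_MAX_DATAREP_STRING sizes the datarep buffer):

#include <mpi.h>

void inspect_view(MPI_File fh)
{
  MPI_Offset disp;
  MPI_Datatype etype, filetype;
  char datarep[MPI_MAX_DATAREP_STRING];
  int ni, na, nd, combiner;

  MPI_File_get_view(fh, &disp, &etype, &filetype, datarep);

  MPI_Type_get_envelope(filetype, &ni, &na, &nd, &combiner);
  if (combiner != MPI_COMBINER_NAMED)
    MPI_Type_free(&filetype);  /* named types must not be freed */

  MPI_Type_get_envelope(etype, &ni, &na, &nd, &combiner);
  if (combiner != MPI_COMBINER_NAMED)
    MPI_Type_free(&etype);
}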

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Related Information

MPI_FILE_OPEN
MPI_FILE_SET_VIEW
MPI_TYPE_FREE

MPI_FILE_IREAD_AT, MPI_File_iread_at

Purpose

A nonblocking version of MPI_FILE_READ_AT. The call returns immediately with a request handle that you can use to check for the completion of the read operation.

C Synopsis

#include <mpi.h>
int MPI_File_iread_at (MPI_File fh,MPI_Offset offset,void *buf,
    int count,MPI_Datatype datatype,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_IREAD_AT (INTEGER FH,INTEGER (KIND=MPI_OFFSET_KIND) OFFSET,
    CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER REQUEST,
    INTEGER IERROR)

Parameters

fh

is the file handle (handle) (IN).

offset

is the file offset (long long) (IN).

buf

is the initial address of buffer (choice) (OUT).

count

is the number of elements in the buffer (integer) (IN).

datatype

is the datatype of each buffer element (handle) (IN).

request

is the request object (handle) (OUT).

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine, MPI_FILE_IREAD_AT, is the nonblocking version of MPI_FILE_READ_AT. It performs the same function as MPI_FILE_READ_AT except that it returns immediately and stores a handle in request. This request handle can be used to test or wait for the completion of the read operation, or to cancel it. The memory buffer buf cannot be accessed until the request has completed via a completion routine call. Completion of the request guarantees that the read operation is complete.

When MPI_FILE_IREAD_AT completes, the actual number of bytes read is stored in the completion routine's status argument. If an error occurs during the read operation, the error is returned by the completion routine through its return value or in the appropriate index of the array_of_statuses argument.

If the completion routine is associated with multiple requests, it returns when all requests complete successfully. If, instead, one of the requests fails, the error handler associated with that request is triggered. If that is an "error return" error handler, each element of the array_of_statuses argument is updated to contain MPI_ERR_PENDING for each request that did not yet complete. The first error dictates the outcome of the entire completion routine, whether the error is on a file request or a communication request. The order in which requests are processed is not defined.

Notes

A valid call to MPI_CANCEL on the request will return MPI_SUCCESS. The eventual call to MPI_TEST_CANCELLED on the status will show that the cancel was unsuccessful.

Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.

Passing MPI_STATUS_IGNORE for the status argument or MPI_STATUSES_IGNORE for the array_of_statuses argument in the completion routine call is not supported in this release.

If an error occurs during the read operation, the number of bytes contained in the status argument of the completion routine is meaningless.

For additional information, see MPI_FILE_READ_AT.
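
A minimal sketch of the usual pattern (the offset of 0 is arbitrary; a real status object is used because MPI_STATUS_IGNORE is not supported in this release):

#include <mpi.h>

void read_header(MPI_File fh, int *buf, int count)
{
  MPI_Request request;
  MPI_Status status;
  int nread;

  MPI_File_iread_at(fh, (MPI_Offset)0, buf, count, MPI_INT, &request);

  /* ... computation that does not touch buf may proceed here ... */

  MPI_Wait(&request, &status);          /* buf is valid only after this */
  MPI_Get_count(&status, MPI_INT, &nread);
}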

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Permission denied (MPI_ERR_ACCESS)
The file was opened in write-only mode.

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Invalid count (MPI_ERR_COUNT)
count is an invalid count.

MPI_DATATYPE_NULL not valid (MPI_ERR_TYPE)
datatype has already been freed.

Undefined datatype (MPI_ERR_TYPE)
datatype is not a defined datatype.

Invalid datatype (MPI_ERR_TYPE)
datatype can be neither MPI_LB nor MPI_UB.

Uncommitted datatype (MPI_ERR_TYPE)
datatype must be committed.

Unsupported operation on sequential access file (MPI_ERR_UNSUPPORTED_OPERATION)
MPI_MODE_SEQUENTIAL was set when the file was opened.

Invalid offset (MPI_ERR_ARG)
offset is an invalid offset.

Error Returned By Completion Routine (MPI Error Class):

Internal read failed (MPI_ERR_IO)
An internal read operation failed.

Internal lseek failed (MPI_ERR_IO)
An internal lseek operation failed.

Related Information

MPI_FILE_READ_AT
MPI_WAIT
MPI_TEST
MPI_CANCEL

MPI_FILE_IWRITE_AT, MPI_File_iwrite_at

Purpose

A nonblocking version of MPI_FILE_WRITE_AT. The call returns immediately with a request handle that you can use to check for the completion of the write operation.

C Synopsis

#include <mpi.h>
int MPI_File_iwrite_at (MPI_File fh,MPI_Offset offset,void *buf,
    int count,MPI_Datatype datatype,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_IWRITE_AT(INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) OFFSET,
   CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER REQUEST,
   INTEGER IERROR)

Parameters

fh

is the file handle (handle) (INOUT).

offset

is the file offset (long long) (IN).

buf

is the initial address of buffer (choice) (IN).

count

is the number of elements in buffer (integer) (IN).

datatype

is the datatype of elements in count (handle) (IN).

request

is the request object (handle) (OUT).

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine, MPI_FILE_IWRITE_AT, is the nonblocking version of MPI_FILE_WRITE_AT. It performs the same function as MPI_FILE_WRITE_AT except that it returns immediately and stores a handle in request. This request handle can be used to test or wait for the completion of the write operation, or to cancel it. The memory buffer buf cannot be modified until the request has completed via a completion routine call, for example, MPI_WAIT, MPI_TEST, or one of the other MPI wait or test functions. Completion of the request does not guarantee that the data has been written to the storage device(s). In particular, written data may still be present in system buffers. However, it does guarantee that the memory buffer can be safely reused.

When MPI_FILE_IWRITE_AT completes, the actual number of bytes written is stored in the completion routine's status argument. If an error occurs during the write operation, then the error is returned by the completion routine through its return code or in the appropriate index of the array_of_statuses argument.

If the completion routine is associated with multiple requests, it returns when all requests complete successfully. If, instead, one of the requests fails, the error handler associated with that request is triggered. If that is an "error return" error handler, each element of the array_of_statuses argument is updated to contain MPI_ERR_PENDING for each request that did not yet complete. The first error dictates the outcome of the entire completion routine, whether the error is on a file request or a communication request. The order in which requests are processed is not defined.

Notes

A valid call to MPI_CANCEL on the request will return MPI_SUCCESS. The eventual call to MPI_TEST_CANCELLED on the status will show that the cancel was unsuccessful.

Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.

Passing MPI_STATUS_IGNORE for the status argument or MPI_STATUSES_IGNORE for the array_of_statuses argument in the completion routine call is not supported in this release.

If an error occurs during the write operation, the number of bytes contained in the status argument of the completion routine is meaningless.

For more information, see MPI_FILE_WRITE_AT.
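
A minimal sketch (arguments are arbitrary); buf must not be modified between the call and its completion:

#include <mpi.h>

void write_async(MPI_File fh, MPI_Offset offset, int *buf, int count)
{
  MPI_Request request;
  MPI_Status status;

  MPI_File_iwrite_at(fh, offset, buf, count, MPI_INT, &request);

  /* ... work that does not modify buf ... */

  MPI_Wait(&request, &status);  /* buf may be reused after this */
}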

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Permission denied (MPI_ERR_ACCESS)
The file was opened in read-only mode.

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Invalid count (MPI_ERR_COUNT)
count is an invalid count.

MPI_DATATYPE_NULL not valid (MPI_ERR_TYPE)
datatype has already been freed.

Undefined datatype (MPI_ERR_TYPE)
datatype is not a defined datatype.

Invalid datatype (MPI_ERR_TYPE)
datatype can be neither MPI_LB nor MPI_UB.

Uncommitted datatype (MPI_ERR_TYPE)
datatype must be committed.

Unsupported operation on sequential access file (MPI_ERR_UNSUPPORTED_OPERATION)
MPI_MODE_SEQUENTIAL was set when the file was opened.

Invalid offset (MPI_ERR_ARG)
offset is an invalid offset.

Errors Returned By Completion Routine (MPI Error Class):

Not enough space in file system (MPI_ERR_NO_SPACE)
The file system on which the file resides is full.

File too big (MPI_ERR_OTHER)
The file has reached the maximum size allowed.

Internal write failed (MPI_ERR_IO)
An internal write operation failed.

Internal lseek failed (MPI_ERR_IO)
An internal lseek operation failed.

Related Information

MPI_FILE_WRITE_AT
MPI_WAIT
MPI_TEST
MPI_CANCEL

MPI_FILE_OPEN, MPI_File_open

Purpose

Opens the file called filename.

C Synopsis

#include <mpi.h>
int MPI_File_open (MPI_Comm comm,char *filename,int amode,MPI_Info info,
    MPI_File *fh);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_OPEN(INTEGER COMM,CHARACTER*(*) FILENAME,INTEGER AMODE,
    INTEGER INFO,INTEGER FH,INTEGER IERROR)

Parameters

comm

is the communicator (handle) (IN)

filename

is the name of the file to open (string) (IN)

amode

is the file access mode (integer) (IN)

info

is the info object (handle) (IN)

fh

is the new file handle (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_OPEN opens the file referred to by filename, sets the default view on the file, and sets the access mode amode. MPI_FILE_OPEN returns a file handle fh used for all subsequent operations on the file. The file handle fh remains valid until the file is closed (MPI_FILE_CLOSE). The default view is similar to a linear byte stream in the native representation starting at file offset 0. You can call MPI_FILE_SET_VIEW to set a different view of the file.

MPI_FILE_OPEN is a collective operation. comm must be a valid intracommunicator. Values specified for amode by all participating tasks must be identical. The program is erroneous when participating tasks do not refer to the same file through their own instances of filename.

No hints are defined in this release; therefore, info is presumed to be empty.

Notes

This implementation is targeted to the IBM Generalized Parallel File System (GPFS) for production use. It requires that a single GPFS file system be available across all tasks of the MPI job. It can also be used for development purposes on any other file system that supports the POSIX interface (AFS, DFS, JFS, or NFS), as long as the application runs on only one node or workstation.

For AFS, DFS, and NFS, MPI-IO uses file locking for all accesses by default. If other tasks on the same node share the file and also use file locking, file consistency is preserved. If the MPI_FILE_OPEN is done with mode MPI_MODE_UNIQUE_OPEN, file locking is not done.

If you call MPI_FINALIZE before all files are closed, an error will be raised on MPI_COMM_WORLD.

The following access modes (specified in amode), are supported:

MPI_MODE_RDONLY - read only
MPI_MODE_RDWR - reading and writing
MPI_MODE_WRONLY - write only
MPI_MODE_CREATE - create the file if it does not exist
MPI_MODE_EXCL - raise an error if the file already exists and MPI_MODE_CREATE is specified
MPI_MODE_DELETE_ON_CLOSE - delete file on close
MPI_MODE_UNIQUE_OPEN - file will not be concurrently opened elsewhere
MPI_MODE_SEQUENTIAL - file will only be accessed sequentially
MPI_MODE_APPEND - set initial position of all file pointers to end of file

In C and C++: You can use bit vector OR to combine these integer constants.

In Fortran: You can use the bit vector IOR intrinsic to combine these integers. If addition is used, each constant should only appear once.
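
For example, a sketch in C that collectively creates (if necessary) and opens a file for reading and writing (the file name is arbitrary; because no hints are defined in this release, MPI_INFO_NULL is passed):

#include <mpi.h>

void open_data_file(MPI_File *fh)
{
  int amode = MPI_MODE_RDWR | MPI_MODE_CREATE;

  MPI_File_open(MPI_COMM_WORLD, "/gpfs/app/data.dat", amode,
                MPI_INFO_NULL, fh);
}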

MPI-IO depends on hidden threads that use MPI message passing. MPI-IO cannot be used with MP_SINGLE_THREAD set to yes.

The default for MP_CSS_INTERRUPT is no. If you do not override the default, MPI-IO enables interrupts while files are open. If you have forced interrupts to yes or no, MPI-IO does not alter your selection.

Parameter consistency checking is only performed if the environment variable MP_EUIDEVELOP is set to yes. If this variable is set and the amodes specified are not identical, the error Inconsistent amodes will be raised on some tasks. Similarly, if this variable is set and the file inodes associated with the file names are not identical, the error Inconsistent file inodes will be raised on some tasks. In either case, the error Consistency error occurred on another task will be raised on the other tasks.

When MPI-IO is used correctly, a file name will be represented at every task by the same file system. In one detectable error situation, a file will appear to be on different file system types. For example, a particular file could be visible to some tasks as a GPFS file and to others as NFS-mounted.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid communicator
comm is not a valid communicator.

Can't use an intercommunicator
comm is an intercommunicator.

Conflicting collective operations on communicator

Returning Errors (MPI Error Class):

Pathname too long (MPI_ERR_BAD_FILE)
File name must contain less than 1024 characters.

Invalid access mode (MPI_ERR_AMODE)
amode is not a valid access mode.

Invalid file system type (MPI_ERR_OTHER)
filename refers to a file belonging to a file system of an unsupported type.

Invalid info (MPI_ERR_INFO)
info is not a valid info object.

Locally detected error occurred on another task (MPI_ERR_ARG)
Local parameter check failed on other task(s).

Inconsistent file inodes (MPI_ERR_NOT_SAME)
Local filename corresponds to a file inode that is not consistent with that associated with the filename of other task(s).

Inconsistent file system types (MPI_ERR_NOT_SAME)
Local file system type associated with filename is not identical to that of other task(s).

Inconsistent amodes (MPI_ERR_NOT_SAME)
Local amode is not consistent with the amode of other task(s).

Consistency error occurred on another task (MPI_ERR_ARG)
Consistency check failed on other task(s).

Permission denied (MPI_ERR_ACCESS)
Access to the file was denied.

File already exists (MPI_ERR_FILE_EXISTS)
MPI_MODE_CREATE and MPI_MODE_EXCL are set and the file exists.

File or directory does not exist (MPI_ERR_NO_SUCH_FILE)
The file does not exist and MPI_MODE_CREATE is not set, or a directory in the path does not exist.

Not enough space in file system (MPI_ERR_NO_SPACE)
The directory or the file system is full.

File is a directory (MPI_ERR_BAD_FILE)
The file is a directory.

Read-only file system (MPI_ERR_READ_ONLY)
The file resides in a read-only file system and write access is required.

Internal open failed (MPI_ERR_IO)
An internal open operation on the file failed.

Internal stat failed (MPI_ERR_IO)
An internal stat operation on the file failed.

Internal fstat failed (MPI_ERR_IO)
An internal fstat operation on the file failed.

Internal fstatvfs failed (MPI_ERR_IO)
An internal fstatvfs operation on the file failed.

Related Information

MPI_FILE_CLOSE
MPI_FILE_SET_VIEW
MPI_FINALIZE

MPI_FILE_READ_AT, MPI_File_read_at

Purpose

Reads a file starting at the position specified by offset.

C Synopsis

#include <mpi.h>
int MPI_File_read_at (MPI_File fh,MPI_Offset offset,void *buf,
    int count,MPI_Datatype datatype,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_READ_AT(INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) OFFSET,
    CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,
    INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

fh

is the file handle (handle) (IN).

offset

is the file offset (long long) (IN).

buf

is the initial address of buffer (choice) (OUT).

count

is the number of items in buffer (integer) (IN).

datatype

is the datatype of each buffer element (handle) (IN).

status

is the status object (status) (OUT).

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_READ_AT attempts to read from the file referred to by fh count items of type datatype into the buffer buf, starting at the offset offset, relative to the current view. The call returns only when data is available in buf. status contains the number of bytes successfully read and accessor functions MPI_GET_COUNT and MPI_GET_ELEMENTS allow you to extract from status the number of items and the number of intrinsic MPI elements successfully read, respectively. You can check for a read beyond the end of file condition by comparing the number of items requested with the number of items actually read.
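
A minimal sketch that detects a short read at end of file (the count of 100 is arbitrary; a real status object is required because MPI_STATUS_IGNORE is not supported in this release):

#include <mpi.h>

void read_block(MPI_File fh, MPI_Offset offset, int *buf)
{
  MPI_Status status;
  int nread;

  MPI_File_read_at(fh, offset, buf, 100, MPI_INT, &status);
  MPI_Get_count(&status, MPI_INT, &nread);
  if (nread < 100) {
    /* fewer items than requested: end of file was reached */
  }
}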

Notes

Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.

Passing MPI_STATUS_IGNORE for the status argument is not supported in this release.

If an error is raised, the number of bytes contained in the status argument is meaningless.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Permission denied (MPI_ERR_ACCESS)
The file was opened in write-only mode.

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Invalid count (MPI_ERR_COUNT)
count is an invalid count.

MPI_DATATYPE_NULL not valid (MPI_ERR_TYPE)
datatype has already been freed.

Undefined datatype (MPI_ERR_TYPE)
datatype is not a defined datatype.

Invalid datatype (MPI_ERR_TYPE)
datatype can be neither MPI_LB nor MPI_UB.

Uncommitted datatype (MPI_ERR_TYPE)
datatype must be committed.

Unsupported operation on sequential access file (MPI_ERR_UNSUPPORTED_OPERATION)
MPI_MODE_SEQUENTIAL was set when the file was opened.

Invalid offset (MPI_ERR_ARG)
offset is an invalid offset.

Internal read failed (MPI_ERR_IO)
An internal read operation failed.

Internal lseek failed (MPI_ERR_IO)
An internal lseek operation failed.

Related Information

MPI_FILE_READ_AT_ALL
MPI_FILE_IREAD_AT

MPI_FILE_READ_AT_ALL, MPI_File_read_at_all

Purpose

A collective version of MPI_FILE_READ_AT.

C Synopsis

#include <mpi.h>
int MPI_File_read_at_all (MPI_File fh,MPI_Offset offset,void *buf,
    int count,MPI_Datatype datatype,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_READ_AT_ALL(INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) OFFSET,
    CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,
    INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

fh

is the file handle (handle)(IN).

offset

is the file offset (long long) (IN).

buf

is the initial address of the buffer (choice) (OUT).

count

is the number of elements in buffer (integer) (IN).

datatype

is the datatype of each buffer element (handle) (IN).

status

is the status object (Status) (OUT).

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_READ_AT_ALL is the collective version of the routine MPI_FILE_READ_AT. It has the same semantics as its counterpart. The number of bytes actually read by the calling task is returned in status. The call returns when the data requested by the calling task is available in buf. The call does not wait for accesses from other tasks associated with the file handle fh to have data available in their buffers.

Notes

Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.

Passing MPI_STATUS_IGNORE for the status argument is not supported in this release.

If an error is raised, the number of bytes contained in status is meaningless.

For additional information, see MPI_FILE_READ_AT.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Permission denied (MPI_ERR_ACCESS)
The file was opened in write-only mode.

Invalid count (MPI_ERR_COUNT)
count is an invalid count.

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

MPI_DATATYPE_NULL not valid (MPI_ERR_TYPE)
datatype has already been freed.

Undefined datatype (MPI_ERR_TYPE)
datatype is not a defined datatype.

Invalid datatype (MPI_ERR_TYPE)
datatype can be neither MPI_LB nor MPI_UB.

Uncommitted datatype (MPI_ERR_TYPE)
datatype must be committed.

Unsupported operation on sequential access file (MPI_ERR_UNSUPPORTED_OPERATION)
MPI_MODE_SEQUENTIAL was set when the file was opened.

Invalid offset (MPI_ERR_ARG)
offset is an invalid offset.

Internal read failed (MPI_ERR_IO)
An internal read operation failed.

Internal lseek failed (MPI_ERR_IO)
An internal lseek operation failed.

Related Information

MPI_FILE_READ_AT
MPI_FILE_IREAD_AT

MPI_FILE_SET_ERRHANDLER, MPI_File_set_errhandler

Purpose

Associates a new error handler with a file.

C Synopsis

#include <mpi.h>
int MPI_File_set_errhandler (MPI_File fh,
    MPI_Errhandler errhandler);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_SET_ERRHANDLER(INTEGER FH,INTEGER ERRHANDLER,
    INTEGER IERROR)

Parameters

fh

is the valid file handle (handle) (IN)

errhandler

is the new error handler for the opened file (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_SET_ERRHANDLER associates a new error handler with a file. If fh is equal to MPI_FILE_NULL, then MPI_FILE_SET_ERRHANDLER defines the new default file error handler on the calling task to be error handler errhandler. If fh is a valid file handle, then this routine associates the error handler errhandler with the file referred to by fh.

Notes

The error Invalid error handler is raised if errhandler was created with any error handler create routine other than MPI_FILE_CREATE_ERRHANDLER. You can associate the predefined error handlers, MPI_ERRORS_ARE_FATAL and MPI_ERRORS_RETURN, as well as the implementation-specific MPE_ERRORS_WARN, with file handles.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid file handle
fh must be a valid file handle or MPI_FILE_NULL.

Invalid error handler
errhandler must be a valid error handler.

Related Information

MPI_FILE_CREATE_ERRHANDLER
MPI_FILE_GET_ERRHANDLER
MPI_ERRHANDLER_FREE

MPI_FILE_SET_INFO, MPI_File_set_info

Purpose

Specifies new hints for an open file.

C Synopsis

#include <mpi.h>
int MPI_File_set_info (MPI_File fh,MPI_Info info);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_SET_INFO(INTEGER FH,INTEGER INFO,INTEGER IERROR)

Parameters

fh

is the file handle (handle) (INOUT)

info

is the info object (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_SET_INFO sets any hints that the info object contains for fh. In this release, file hints are not supported, so all info objects will be empty. However, you are free to associate new hints with an open file. They will just be ignored by MPI.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Invalid info (MPI_ERR_INFO)
info is not a valid info object.

Related Information

MPI_FILE_GET_INFO
MPI_FILE_OPEN
MPI_FILE_SET_VIEW

MPI_FILE_SET_SIZE, MPI_File_set_size

Purpose

Expands or truncates an open file.

C Synopsis

#include <mpi.h>
int MPI_File_set_size (MPI_File fh,MPI_Offset size);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_SET_SIZE (INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) SIZE,
    INTEGER IERROR)

Parameters

fh

is the file handle (handle) (INOUT)

size

is the requested size of the file after truncation or expansion (long long) (IN).

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_SET_SIZE is a collective operation that allows you to expand or truncate the open file referred to by fh. All participating tasks must specify the same value for size. If I/O operations are pending on fh, then an error is returned to the participating tasks and the file is not resized.

If size is larger than the current file size, the file length is increased to size and a read of unwritten data in the extended area returns zeros. However, file blocks are not allocated in the extended area. If size is smaller than the current file size, the file is truncated at the position defined by size. File blocks located beyond this point are de-allocated.

Notes

Note that when you specify a value for the size argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.

Parameter consistency checking is only performed if the environment variable MP_EUIDEVELOP is set to yes. If this variable is set and the sizes specified are not identical, the error Inconsistent file sizes will be raised on some tasks, and the error Consistency error occurred on another task will be raised on the other tasks.
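
A minimal sketch that collectively extends a file to 1 MB (the size is arbitrary; every task must pass the same value):

#include <mpi.h>

void extend_file(MPI_File fh)
{
  MPI_File_set_size(fh, (MPI_Offset)1048576);
}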

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Permission denied (MPI_ERR_ACCESS)
The file was opened in read-only mode.

Unsupported operation on sequential access file (MPI_ERR_UNSUPPORTED_OPERATION)
MPI_MODE_SEQUENTIAL was set when the file was opened.

Pending I/O operations (MPI_ERR_OTHER)
There are pending I/O operations.

Locally detected error occurred on another task (MPI_ERR_ARG)
Local parameter check failed on other task(s).

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Invalid file size (MPI_ERR_ARG)
Local size is negative

Inconsistent file sizes (MPI_ERR_NOT_SAME)
Local size is not consistent with the file size of other task(s)

Consistency error occurred on another task (MPI_ERR_ARG)
Consistency check failed on other task(s).

Internal ftruncate failed (MPI_ERR_IO)
An internal ftruncate operation on the file failed.

Related Information

MPI_FILE_GET_SIZE

MPI_FILE_SET_VIEW, MPI_File_set_view

Purpose

Associates a new view with the open file.

C Synopsis

#include <mpi.h>
int MPI_File_set_view (MPI_File fh,MPI_Offset disp,
    MPI_Datatype etype,MPI_Datatype filetype,
    char *datarep,MPI_Info info);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_SET_VIEW (INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) DISP,
    INTEGER ETYPE,INTEGER FILETYPE,CHARACTER DATAREP(*),INTEGER INFO,
    INTEGER IERROR)

Parameters

fh

is the file handle (handle) (IN).

disp

is the displacement (long long) (IN).

etype

is the elementary datatype (handle) (IN).

filetype

is the filetype (handle) (IN).

datarep

is the data representation (string) (IN).

info

is the info object (handle) (IN).

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_SET_VIEW is a collective operation and associates a new view defined by disp, etype, filetype, and datarep with the open file referred to by fh. All participating tasks must specify the same values for datarep and the same extents for etype.

There are no further restrictions on etype and filetype except those referred to in the MPI-2 standard. No checking is performed on the validity of these datatypes. If I/O operations are pending on fh, an error is returned to the participating tasks and the new view is not associated with the file. The only data representation currently supported is native. Because file hints are not supported in this release, the info argument is ignored after its validity is checked.

Notes

Note that when you specify a value for the disp argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.

It is expected that a call to MPI_FILE_SET_VIEW will immediately follow MPI_FILE_OPEN in many instances.

Parameter consistency checking is only performed if the environment variable MP_EUIDEVELOP is set to yes. If this variable is set and the extents of the elementary datatypes specified are not identical, the error Inconsistent elementary datatypes will be raised on some tasks and the error Consistency error occurred on another task will be raised on the other tasks.
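
For example, a sketch that, right after MPI_FILE_OPEN, installs a view that skips a 1024-byte header (the displacement is arbitrary) and treats the rest of the file as a stream of MPI_INT elements; native is the only supported data representation:

#include <mpi.h>

void set_int_view(MPI_File fh)
{
  MPI_File_set_view(fh, (MPI_Offset)1024, MPI_INT, MPI_INT,
                    "native", MPI_INFO_NULL);
}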

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Invalid displacement (MPI_ERR_ARG)
Invalid displacement.

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

MPI_DATATYPE_NULL not valid (MPI_ERR_TYPE)
Either etype or filetype has already been freed.

Undefined datatype (MPI_ERR_TYPE)
etype or filetype is not a defined datatype.

Invalid datatype (MPI_ERR_TYPE)
etype or filetype can be neither MPI_LB nor MPI_UB.

Uncommitted datatype (MPI_ERR_TYPE)
Both etype and filetype must be committed.

Invalid data representation (MPI_ERR_UNSUPPORTED_DATAREP)
datarep is an invalid data representation.

Invalid info (MPI_ERR_INFO)
info is not a valid info object.

Pending I/O operations (MPI_ERR_OTHER)
There are pending I/O operations.

Locally detected error occurred on another task (MPI_ERR_ARG)
Local parameter check failed on other task(s).

Inconsistent elementary datatypes (MPI_ERR_NOT_SAME)
Local etype extent is not consistent with the elementary datatype extent of other task(s).

Consistency error occurred on another task (MPI_ERR_ARG)
Consistency check failed on other task(s).

Related Information

MPI_FILE_GET_VIEW

MPI_FILE_SYNC, MPI_File_sync

Purpose

Commits file updates of an open file to one or more storage devices.

C Synopsis

#include <mpi.h>
int MPI_File_sync (MPI_File fh);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_SYNC (INTEGER FH,INTEGER IERROR)

Parameters

fh

is the file handle (handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_SYNC is a collective operation. It forces the updates to the file referred to by fh to be propagated to the storage device(s) before it returns. If I/O operations are pending on fh, an error is returned to the participating tasks and no sync operation is performed on the file.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Permission denied (MPI_ERR_ACCESS)
The file was opened in read-only mode.

Pending I/O operations (MPI_ERR_OTHER)
There are pending I/O operations.

Locally detected error occurred on another task (MPI_ERR_ARG)
Local parameter check failed on other task(s).

Internal fsync failed (MPI_ERR_IO)
An internal fsync operation failed.

Related Information

MPI_FILE_WRITE_AT
MPI_FILE_WRITE_AT_ALL
MPI_FILE_IWRITE_AT

MPI_FILE_WRITE_AT, MPI_File_write_at

Purpose

Writes to a file starting at the position specified by offset.

C Synopsis

#include <mpi.h>
int MPI_File_write_at (MPI_File fh,MPI_Offset offset,void *buf,
    int count,MPI_Datatype datatype,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_WRITE_AT(INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) OFFSET,
    CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,
    INTEGER STATUS(MPI_STATUS_SIZE),
    INTEGER IERROR)

Parameters

fh

is the file handle (handle) (INOUT).

offset

is the file offset (long long) (IN).

buf

is the initial address of buffer (choice) (IN).

count

is the number of elements in buffer (integer) (IN).

datatype

is the datatype of each buffer element (handle) (IN).

status

is the status object (Status) (OUT).

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_WRITE_AT attempts to write into the file referred to by fh count items of type datatype out of the buffer buf, starting at the offset offset and relative to the current view. MPI_FILE_WRITE_AT returns when it is safe to reuse buf. status contains the number of bytes successfully written, and accessor functions MPI_GET_COUNT and MPI_GET_ELEMENTS allow you to extract from status the number of items and the number of intrinsic MPI elements successfully written, respectively.

Notes

Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.

Passing MPI_STATUS_IGNORE for the status argument is not supported in this release.

If an error is raised, the number of bytes contained in status is meaningless.

When the call returns, it does not necessarily mean that the write operation has completed. In particular, written data may still be in system buffers and may not have been written to storage device(s) yet. To ensure that written data is committed to the storage device(s), you must use MPI_FILE_SYNC.
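
A minimal sketch that writes a block and then forces it to storage (the count of 100 is arbitrary; MPI_FILE_SYNC is collective, so every task that opened the file must call it):

#include <mpi.h>

void write_block(MPI_File fh, MPI_Offset offset, int *buf)
{
  MPI_Status status;

  MPI_File_write_at(fh, offset, buf, 100, MPI_INT, &status);
  MPI_File_sync(fh);  /* data reaches the storage device(s) */
}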

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Permission denied (MPI_ERR_ACCESS)
The file was opened in read-only mode.

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

Invalid count (MPI_ERR_COUNT)
count is not a valid count.

MPI_DATATYPE_NULL not valid (MPI_ERR_TYPE)
datatype has already been freed.

Undefined datatype (MPI_ERR_TYPE)
datatype is not a defined datatype.

Invalid datatype (MPI_ERR_TYPE)
datatype can be neither MPI_LB nor MPI_UB.

Uncommitted datatype (MPI_ERR_TYPE)
datatype must be committed.

Unsupported operation on sequential access file (MPI_ERR_UNSUPPORTED_OPERATION)
MPI_MODE_SEQUENTIAL was set when the file was opened.

Invalid offset (MPI_ERR_ARG)
offset is an invalid offset.

Not enough space in file system (MPI_ERR_NO_SPACE)
The file system on which the file resides is full.

File too big (MPI_ERR_IO)
The file has reached the maximum size allowed.

Internal write failed (MPI_ERR_IO)
An internal write operation failed.

Internal lseek failed (MPI_ERR_IO)
An internal lseek operation failed.

Related Information

MPI_FILE_WRITE_AT_ALL
MPI_FILE_IWRITE_AT
MPI_FILE_SYNC

MPI_FILE_WRITE_AT_ALL, MPI_File_write_at_all

Purpose

A collective version of MPI_FILE_WRITE_AT.

C Synopsis

#include <mpi.h>
int MPI_File_write_at_all (MPI_File fh,MPI_Offset offset,void *buf,
    int count,MPI_Datatype datatype,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_FILE_WRITE_AT_ALL (INTEGER FH,
    INTEGER (KIND=MPI_OFFSET_KIND) OFFSET,
    CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,
    INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

fh

is the file handle (handle)(INOUT).

offset

is the file offset (long long) (IN).

buf

is the initial address of buffer (choice) (IN).

count

is the number of elements in buffer (integer) (IN).

datatype

is the datatype of each buffer element (handle) (IN).

status

is the status object (Status) (OUT).

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_FILE_WRITE_AT_ALL is the collective version of MPI_FILE_WRITE_AT. The number of bytes actually written by the calling task is stored in status. The call returns when the calling task can safely reuse buf. It does not wait until the buffers in other participating tasks can safely be reused.

Notes

Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.

Passing MPI_STATUS_IGNORE for the status argument is not supported in this release.

If an error is raised, the number of bytes contained in status is meaningless.

When the call returns, it does not necessarily mean that the write operation has completed. In particular, written data may still be in system buffers and may not have been written to storage device(s) yet. To ensure that written data is committed to the storage device(s), you must use MPI_FILE_SYNC.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Returning Errors (MPI Error Class):

Permission denied (MPI_ERR_ACCESS)
The file was opened in read-only mode.

Invalid count (MPI_ERR_COUNT)
count is not a valid count.

Invalid file handle (MPI_ERR_FILE)
fh is not a valid file handle.

MPI_DATATYPE_NULL not valid (MPI_ERR_TYPE)
datatype has already been freed.

Undefined datatype (MPI_ERR_TYPE)
datatype is not a defined datatype.

Invalid datatype (MPI_ERR_TYPE)
datatype can be neither MPI_LB nor MPI_UB.

Uncommitted datatype (MPI_ERR_TYPE)
datatype must be committed.

Unsupported operation on sequential access file (MPI_ERR_UNSUPPORTED_OPERATION)
MPI_MODE_SEQUENTIAL was set when the file was opened.

Invalid offset (MPI_ERR_ARG)
offset is an invalid offset.

Not enough space in file system (MPI_ERR_NO_SPACE)
The file system on which the file resides is full.

File too big (MPI_ERR_IO)
The file has reached the maximum size allowed.

Internal write failed (MPI_ERR_IO)
An internal write operation failed.

Internal lseek failed (MPI_ERR_IO)
An internal lseek operation failed.

Related Information

MPI_FILE_WRITE_AT
MPI_FILE_IWRITE_AT
MPI_FILE_SYNC

MPI_FINALIZE, MPI_Finalize

Purpose

Terminates all MPI processing.

C Synopsis

#include <mpi.h>
int MPI_Finalize(void);

Fortran Synopsis

include 'mpif.h'
MPI_FINALIZE(INTEGER IERROR)

Parameters

IERROR

is the Fortran return code. It is always the last argument.

Description

Make sure this routine is the last MPI call. Any MPI calls made after MPI_FINALIZE raise an error. You must be sure that all pending communications involving a task have completed before the task calls MPI_FINALIZE. You must also be sure that all files opened by the task have been closed before the task calls MPI_FINALIZE.

Although MPI_FINALIZE terminates MPI processing, it does not terminate the task. It is possible to continue with non-MPI processing after calling MPI_FINALIZE, but no other MPI calls (including MPI_INIT) can be made.

In a threaded environment both MPI_INIT and MPI_FINALIZE must be called on the same thread. MPI_FINALIZE closes the communication library and terminates the service threads. It does not affect any threads you created, other than returning an error if one subsequently makes an MPI call. If you had registered a SIGIO handler, it is restored as a signal handler; however, the SIGIO signal is blocked when MPI_FINALIZE returns. If you want to catch SIGIO after MPI_FINALIZE has been called, you should unblock it.
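
A minimal sketch of the required call ordering follows; it assumes that no communication is pending and no files remain open when the final call is made.

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    /* ... all MPI communication and file I/O happens here;
       every pending request is completed and every file
       opened by this task is closed ... */
    MPI_Finalize();   /* must be the last MPI call */
    return 0;         /* non-MPI processing may continue */
}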

Notes

The MPI standard does not specify the state of MPI tasks after MPI_FINALIZE; therefore, an assumption that all tasks continue may not be portable. If MPI_BUFFER_ATTACH has been used and MPI_BUFFER_DETACH has not been called, MPI_FINALIZE performs an implicit MPI_BUFFER_DETACH. See MPI_BUFFER_DETACH.

Errors

MPI already finalized

MPI not initialized

Related Information

MPI_ABORT
MPI_BUFFER_DETACH
MPI_INIT

MPI_GATHER, MPI_Gather

Purpose

Collects individual messages from each task in comm at the root task.

C Synopsis

#include <mpi.h>
int MPI_Gather(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int recvcount,MPI_Datatype recvtype,int root,
    MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_GATHER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER ROOT,
    INTEGER COMM,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcount

is the number of elements in the send buffer (integer) (IN)

sendtype

is the datatype of the send buffer elements (integer) (IN)

recvbuf

is the address of the receive buffer (choice, significant only at root) (OUT)

recvcount

is the number of elements for any single receive (integer, significant only at root) (IN)

recvtype

is the datatype of the receive buffer elements (handle, significant only at root) (IN)

root

is the rank of the receiving task (integer) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine collects individual messages from each task in comm at the root task and stores them in rank order.

The type signature of sendcount, sendtype on task i must be equal to the type signature of recvcount, recvtype at the root. This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.

The following is information regarding MPI_GATHER arguments and tasks:

Note that the argument recvcount at the root indicates the number of items it receives from each task. It is not the total number of items received.

A call where the specification of counts and types causes any location on the root to be written more than once is erroneous.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
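
As a sketch of the common case, each task contributes one integer and the root receives one integer per task; the value computed in myval is illustrative only.

#include <mpi.h>
#include <stdlib.h>
int rank, size, myval;
int *recvbuf = NULL;
const int root = 0;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
myval = rank * rank;
if (rank == root)                      /* recvbuf matters only at root */
    recvbuf = (int *)malloc(size * sizeof(int));
/* recvcount is 1: the count received from EACH task, not the total */
MPI_Gather(&myval, 1, MPI_INT, recvbuf, 1, MPI_INT, root, MPI_COMM_WORLD);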

Errors

Invalid communicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid root
root < 0 or root >= groupsize

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Inconsistent message lengths

Related Information

MPE_IGATHER
MPI_SCATTER
MPI_GATHERV
MPI_ALLGATHER

MPI_GATHERV, MPI_Gatherv

Purpose

Collects individual messages from each task in comm at the root task. Messages can have different sizes and displacements.

C Synopsis

#include <mpi.h>
int MPI_Gatherv(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int *recvcounts,int *displs,MPI_Datatype recvtype,
    int root,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_GATHERV(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNTS(*),INTEGER DISPLS(*),
    INTEGER RECVTYPE,INTEGER ROOT,INTEGER COMM,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

sendcount

is the number of elements in the send buffer (integer) (IN)

sendtype

is the datatype of the send buffer elements (handle) (IN)

recvbuf

is the address of the receive buffer (choice, significant only at root) (OUT)

recvcounts

integer array (of length group size) that contains the number of elements received from each task (significant only at root) (IN)

displs

integer array (of length group size). Entry i specifies the displacement relative to recvbuf at which to place the incoming data from task i (significant only at root) (IN)

recvtype

is the datatype of the receive buffer elements (handle, significant only at root) (IN)

root

is the rank of the receiving task (integer) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine collects individual messages from each task in comm at the root task and stores them in rank order. Because recvcounts is an array, messages can have varying sizes, and displs lets you control where the data from each task is placed on the root.

The type signature of sendcount, sendtype on task i must be equal to the type signature of recvcounts[i], recvtype at the root. This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.

The following is information regarding MPI_GATHERV arguments and tasks:

A call where the specification of sizes, types and displacements causes any location on the root to be written more than once is erroneous.

Notes

Displacements are expressed as elements of type recvtype, not as bytes.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
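
The sketch below, in which task i contributes i+1 integers that the root packs contiguously, shows one way to set up recvcounts and displs; it assumes no more than 64 tasks so that sendbuf is large enough.

#include <mpi.h>
#include <stdlib.h>
int rank, size, i, sendcount;
int sendbuf[64];
int *recvbuf = NULL, *recvcounts = NULL, *displs = NULL;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
sendcount = rank + 1;                        /* task i sends i+1 elements */
for (i = 0; i < sendcount; i++) sendbuf[i] = rank;
if (rank == 0) {
    recvcounts = (int *)malloc(size * sizeof(int));
    displs     = (int *)malloc(size * sizeof(int));
    for (i = 0; i < size; i++) {
        recvcounts[i] = i + 1;               /* count from task i         */
        displs[i] = (i * (i + 1)) / 2;       /* packed, in recvtype units */
    }
    recvbuf = (int *)malloc(((size * (size + 1)) / 2) * sizeof(int));
}
MPI_Gatherv(sendbuf, sendcount, MPI_INT, recvbuf, recvcounts,
            displs, MPI_INT, 0, MPI_COMM_WORLD);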

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid root
root < 0 or root >= groupsize

A send and receive have unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Related Information

MPE_IGATHER
MPI_GATHER

MPI_GET_COUNT, MPI_Get_count

Purpose

Returns the number of elements in a message.

C Synopsis

#include <mpi.h>
int MPI_Get_count(MPI_Status *status,MPI_Datatype datatype,
    int *count);

Fortran Synopsis

include 'mpif.h'
MPI_GET_COUNT(INTEGER STATUS(MPI_STATUS_SIZE),INTEGER DATATYPE,
    INTEGER COUNT,INTEGER IERROR)

Parameters

status

is a status object (status) (IN). Note that in Fortran a single status object is an array of integers.

datatype

is the datatype of each message element (handle) (IN)

count

is the number of elements (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This subroutine returns the number of elements in a message. The datatype argument and the argument provided by the call that set the status variable should match.

When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait or test does not contain meaningful source, tag, or message size information.
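
A short sketch of the usual pattern follows; the buffer size of 50 is illustrative.

#include <mpi.h>
double buf[50];
MPI_Status status;
int count;

MPI_Recv(buf, 50, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
/* use the same datatype as the call that set the status variable */
MPI_Get_count(&status, MPI_DOUBLE, &count);
/* count now holds the number of MPI_DOUBLE elements received */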

Errors

Invalid datatype

Type not committed

MPI not initialized

MPI already finalized

Related Information

MPI_IRECV
MPI_WAIT
MPI_RECV
MPI_PROBE

MPI_GET_ELEMENTS, MPI_Get_elements

Purpose

Returns the number of basic elements in a message.

C Synopsis

#include <mpi.h>
int MPI_Get_elements(MPI_Status *status,MPI_Datatype datatype,
    int *count);

Fortran Synopsis

include 'mpif.h'
MPI_GET_ELEMENTS(INTEGER STATUS(MPI_STATUS_SIZE),INTEGER DATATYPE,
    INTEGER COUNT,INTEGER IERROR)

Parameters

status

is a status object (status) (IN). Note that in Fortran a single status object is an array of integers.

datatype

is the datatype used by the operation (handle) (IN)

count

is an integer specifying the number of basic elements (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the number of type map elements in a message. When the number of bytes does not align with the type signature, MPI_GET_ELEMENTS returns MPI_UNDEFINED. For example, given type signature (int, short, int, short) a 10 byte message would return 3 while an 8 byte message would return MPI_UNDEFINED.

When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.

Errors

Invalid datatype

Type is not committed

MPI not initialized

MPI already finalized

Related Information

MPI_GET_COUNT

MPI_GET_PROCESSOR_NAME, MPI_Get_processor_name

Purpose

Returns the name of the local processor.

C Synopsis

#include <mpi.h>
int MPI_Get_processor_name(char *name,int *resultlen);

Fortran Synopsis

include 'mpif.h'
MPI_GET_PROCESSOR_NAME(CHARACTER NAME(*),INTEGER RESULTLEN,
      INTEGER IERROR)

Parameters

name

is a unique specifier for the actual node (OUT)

resultlen

specifies the printable character length of the result returned in name (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the name of the local processor at the time of the call. The name is a character string from which it is possible to identify a specific piece of hardware. name represents storage that is at least MPI_MAX_PROCESSOR_NAME characters long and MPI_GET_PROCESSOR_NAME can write up to this many characters in name.

The actual number of characters written is returned in resultlen. The returned name is a null terminated C string with the terminating byte not counted in resultlen.
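
A sketch of typical usage:

#include <mpi.h>
#include <stdio.h>
char name[MPI_MAX_PROCESSOR_NAME];   /* guaranteed large enough */
int resultlen;

MPI_Get_processor_name(name, &resultlen);
printf("running on %s (%d characters)\n", name, resultlen);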

Errors

MPI not initialized

MPI already finalized

MPI_GET_VERSION, MPI_Get_version

Purpose

Returns the version of the MPI standard supported in this release.

C Synopsis

#include <mpi.h>
int MPI_Get_version(int *version,int *subversion);

Fortran Synopsis

include 'mpif.h'
MPI_GET_VERSION(INTEGER VERSION, INTEGER SUBVERSION, INTEGER IERROR)

Parameters

version

MPI standard version number (integer) (OUT)

subversion

MPI standard subversion number (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is used to determine the version of the MPI standard supported by the MPI implementation.

There are also new symbolic constants, MPI_VERSION and MPI_SUBVERSION, provided in mpi.h and mpif.h that provide similar compile-time information.

MPI_GET_VERSION can be called before MPI_INIT.
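
A sketch of typical usage; as noted above, the call is legal even before MPI_INIT.

#include <mpi.h>
#include <stdio.h>
int version, subversion;

MPI_Get_version(&version, &subversion);
printf("MPI standard level %d.%d\n", version, subversion);
/* MPI_VERSION and MPI_SUBVERSION give the same values at compile time */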

MPI_GRAPH_CREATE, MPI_Graph_create

Purpose

Creates a new communicator containing graph topology information.

C Synopsis

#include <mpi.h>
int MPI_Graph_create(MPI_Comm comm_old,int nnodes,int *index,
    int *edges,int reorder,MPI_Comm *comm_graph);

Fortran Synopsis

include 'mpif.h'
MPI_GRAPH_CREATE(INTEGER COMM_OLD,INTEGER NNODES,INTEGER INDEX(*),
    INTEGER EDGES(*),INTEGER REORDER,INTEGER COMM_GRAPH,
    INTEGER IERROR)

Parameters

comm_old

is the input communicator (handle) (IN)

nnodes

is an integer specifying the number of nodes in the graph (IN)

index

is an array of integers describing node degrees (IN)

edges

is an array of integers describing graph edges (IN)

reorder

if true, ranking may be reordered (logical) (IN)

comm_graph

is the communicator with the graph topology added (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a new communicator containing graph topology information provided by nnodes, index, edges, and reorder. MPI_GRAPH_CREATE returns the handle for this new communicator in comm_graph.

If there are more tasks in comm_old than nnodes, some tasks are returned MPI_COMM_NULL in comm_graph.

Notes

The reorder argument is currently ignored.

The following is an example showing how to define the arguments nnodes, index, and edges. Assume there are four tasks (0, 1, 2, 3) with the following adjacency matrix:
Task Neighbors
0 1, 3
1 0
2 3
3 0, 2

Then the input arguments are:
Argument Input
nnodes 4
index 2, 3, 4, 6
edges 1, 3, 0, 3, 0, 2

Thus, in C, index[0] is the degree of node zero, and index[i]-index[i-1] is the degree of node i, i=1, ..., nnodes-1. The list of neighbors of node zero is stored in edges[j], for 0 <= j <= index[0]-1, and the list of neighbors of node i, i > 0, is stored in edges[j], index[i-1] <= j <= index[i]-1.

In Fortran, index(1) is the degree of node zero, and index(i+1)-index(i) is the degree of node i, i=1, ..., nnodes-1. The list of neighbors of node zero is stored in edges(j), for 1 <= j <= index(1), and the list of neighbors of node i, i > 0, is stored in edges(j), index(i)+1 <= j <= index(i+1).

Observe that because node 0 indicates that node 1 is a neighbor, node 1 must also indicate that node 0 is its neighbor. For any edge from node A to node B, the edge from B to A must also be specified.
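
A sketch that builds the four-task graph from the example above; it assumes MPI_COMM_WORLD contains exactly four tasks.

#include <mpi.h>
MPI_Comm comm_graph;
int index[4] = {2, 3, 4, 6};          /* cumulative node degrees  */
int edges[6] = {1, 3, 0, 3, 0, 2};    /* neighbor lists, in order */

/* reorder is 0 (false); this implementation currently ignores it */
MPI_Graph_create(MPI_COMM_WORLD, 4, index, edges, 0, &comm_graph);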

Errors

MPI not initialized

MPI already finalized

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid nnodes
nnodes < 0 or nnodes > groupsize

Invalid node degree
(index[i]-index[i-1]) < 0

Invalid neighbor
edges[i] < 0 or edges[i]>=nnodes

Asymmetric graph

Conflicting collective operations on communicator

Related Information

MPI_CART_CREATE

MPI_GRAPH_GET, MPI_Graph_get

Purpose

Retrieves graph topology information from a communicator.

C Synopsis

#include <mpi.h>
int MPI_Graph_get(MPI_Comm comm,int maxindex,int maxedges,
      int *index,int *edges);

Fortran Synopsis

include 'mpif.h'
MPI_GRAPH_GET(INTEGER COMM,INTEGER MAXINDEX,INTEGER MAXEDGES,
      INTEGER INDEX(*),INTEGER EDGES(*),INTEGER IERROR)

Parameters

comm

is a communicator with graph topology (handle) (IN)

maxindex

is an integer specifying the length of index in the calling program (IN)

maxedges

is an integer specifying the length of edges in the calling program (IN)

index

is an array of integers containing node degrees (OUT)

edges

is an array of integers containing node neighbors (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine retrieves the index and edges graph topology information associated with a communicator.
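
A sketch showing how the arrays are usually sized with MPI_GRAPHDIMS_GET before this call; comm_graph is assumed to have been created earlier with MPI_GRAPH_CREATE.

#include <mpi.h>
#include <stdlib.h>
MPI_Comm comm_graph;   /* assumed created earlier with MPI_Graph_create */
int nnodes, nedges;
int *index, *edges;

MPI_Graphdims_get(comm_graph, &nnodes, &nedges);
index = (int *)malloc(nnodes * sizeof(int));
edges = (int *)malloc(nedges * sizeof(int));
MPI_Graph_get(comm_graph, nnodes, nedges, index, edges);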

Errors

MPI not initialized

MPI already finalized

Invalid communicator

No topology

Invalid topology type
topology type must be graph

Invalid array size
maxindex < 0 or maxedges < 0

Related Information

MPI_GRAPHDIMS_GET
MPI_GRAPH_CREATE

MPI_GRAPH_MAP, MPI_Graph_map

Purpose

Computes placement of tasks on the physical machine.

C Synopsis

#include <mpi.h>
int MPI_Graph_map(MPI_Comm comm,int nnodes,int *index,int *edges,int *newrank);

Fortran Synopsis

include 'mpif.h'
MPI_GRAPH_MAP(INTEGER COMM,INTEGER NNODES,INTEGER INDEX(*),
              INTEGER EDGES(*),INTEGER NEWRANK,INTEGER IERROR)

Parameters

comm

is the input communicator (handle) (IN)

nnodes

is the number of graph nodes (integer) (IN)

index

is an integer array specifying node degrees (IN)

edges

is an integer array specifying node adjacency (IN)

newrank

is the reordered rank, or MPI_UNDEFINED if the calling task does not belong to the graph (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_GRAPH_MAP allows MPI to compute an optimal placement for the calling task on the physical machine by reordering the tasks in comm.

Notes

MPI_GRAPH_MAP returns newrank as the original rank of the calling task if it belongs to the graph, or MPI_UNDEFINED if it does not. Currently, no reordering is done by this function.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid nnodes
nnodes < 0 or nnodes > groupsize

Invalid node degree
index[i] < 0

Invalid neighbors
edges[i] < 0 or edges[i] >= nnodes

MPI not initialized

MPI already finalized

Related Information

MPI_GRAPH_CREATE
MPI_CART_MAP

MPI_GRAPH_NEIGHBORS, MPI_Graph_neighbors

Purpose

Returns the neighbors of the given task.

C Synopsis

#include <mpi.h>
int MPI_Graph_neighbors(MPI_Comm comm,int rank,int maxneighbors,int *neighbors);

Fortran Synopsis

include 'mpif.h'
MPI_GRAPH_NEIGHBORS(INTEGER COMM,INTEGER RANK,INTEGER MAXNEIGHBORS,
      INTEGER NEIGHBORS(*),INTEGER IERROR)

Parameters

comm

is a communicator with graph topology (handle) (IN)

rank

is the rank of a task within group of comm (integer) (IN)

maxneighbors

is the size of array neighbors (integer) (IN)

neighbors

is the ranks of tasks that are neighbors of the specified task (array of integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine retrieves the adjacency information for a particular task.

Errors

Invalid array size
maxneighbors < 0

Invalid rank
rank < 0 or rank >= groupsize

MPI not initialized

MPI already finalized

Invalid communicator

No topology

Invalid topology type
no graph topology associated with communicator

Related Information

MPI_GRAPH_NEIGHBORS_COUNT
MPI_GRAPH_CREATE

MPI_GRAPH_NEIGHBORS_COUNT, MPI_Graph_neighbors_count

Purpose

Returns the number of neighbors of the given task.

C Synopsis

#include <mpi.h>
int MPI_Graph_neighbors_count(MPI_Comm comm,int rank,
      int *neighbors);

Fortran Synopsis

include 'mpif.h'
MPI_GRAPH_NEIGHBORS_COUNT(INTEGER COMM,INTEGER RANK,
      INTEGER NEIGHBORS(*),INTEGER IERROR)

Parameters

comm

is a communicator with graph topology (handle) (IN)

rank

is the rank of a task within comm (integer) (IN)

neighbors

is the number of neighbors of the specified task (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the number of neighbors of the given task.

Errors

Invalid rank
rank < 0 or rank >= groupsize

MPI not initialized

MPI already finalized

Invalid communicator

No graph topology associated with communicator

Invalid topology type

Related Information

MPI_GRAPH_NEIGHBORS
MPI_GRAPH_CREATE

MPI_GRAPHDIMS_GET, MPI_Graphdims_get

Purpose

Retrieves graph topology information from a communicator.

C Synopsis

#include <mpi.h>
MPI_Graphdims_get(MPI_Comm comm,int *nnodes,int *nedges);

Fortran Synopsis

include 'mpif.h'
MPI_GRAPHDIMS_GET(INTEGER COMM,INTEGER NNODES,INTEGER NEDGES,
      INTEGER IERROR)

Parameters

comm

is a communicator with graph topology (handle) (IN)

nnodes

is an integer specifying the number of nodes in the graph. The number of nodes and the number of tasks in the group are equal. (OUT)

nedges

is an integer specifying the number of edges in the graph. (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine retrieves the number of nodes and the number of edges in the graph topology associated with a communicator.

Errors

MPI not initialized

MPI already finalized

Invalid communicator

No topology

Invalid topology type
topology type must be graph

Related Information

MPI_GRAPH_GET
MPI_GRAPH_CREATE

MPI_GROUP_COMPARE, MPI_Group_compare

Purpose

Compares the contents of two task groups.

C Synopsis

#include <mpi.h>
int MPI_Group_compare(MPI_Group group1,MPI_Group group2,
    int *result);

Fortran Synopsis

include 'mpif.h'
MPI_GROUP_COMPARE(INTEGER GROUP1,INTEGER GROUP2,INTEGER RESULT,
    INTEGER IERROR)

Parameters

group1

is the first group (handle) (IN)

group2

is the second group (handle) (IN)

result

is the result (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine compares the contents of two task groups and returns one of the following:

MPI_IDENT
both groups have exactly the same group members and group order

MPI_SIMILAR
group members are the same but group order is different

MPI_UNEQUAL
group size and/or members are different

Errors

Invalid group(s)

MPI not initialized

MPI already finalized

Related Information

MPI_COMM_COMPARE

MPI_GROUP_DIFFERENCE, MPI_Group_difference

Purpose

Creates a new group that is the difference of two existing groups.

C Synopsis

#include <mpi.h>
int MPI_Group_difference(MPI_Group group1,MPI_Group group2,
    MPI_Group *newgroup);

Fortran Synopsis

include 'mpif.h'

MPI_GROUP_DIFFERENCE(INTEGER GROUP1,INTEGER GROUP2,
      INTEGER NEWGROUP,INTEGER IERROR)

Parameters

group1

is the first group (handle) (IN)

group2

is the second group (handle) (IN)

newgroup

is the difference group (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a new group that is the difference of two existing groups. The new group consists of all elements of the first group (group1) that are not in the second group (group2), and is ordered as in the first group.

Errors

Invalid group(s)

MPI not initialized

MPI already finalized

Related Information

MPI_GROUP_UNION
MPI_GROUP_INTERSECTION

MPI_GROUP_EXCL, MPI_Group_excl

Purpose

Creates a new group by excluding selected tasks of an existing group.

C Synopsis

#include <mpi.h>
int MPI_Group_excl(MPI_Group group,int n,int *ranks,
    MPI_Group *newgroup);

Fortran Synopsis

include 'mpif.h'
MPI_GROUP_EXCL(INTEGER GROUP,INTEGER N,INTEGER RANKS(*),
      INTEGER NEWGROUP,INTEGER IERROR)

Parameters

group

is the group (handle) (IN)

n

is the number of elements in array ranks (integer) (IN)

ranks

is the array of integer ranks in group not to appear in newgroup (IN)

newgroup

is the new group derived from above preserving the order defined by group (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine removes selected tasks from an existing group to create a new group.

MPI_GROUP_EXCL creates a group of tasks newgroup obtained by deleting from group the tasks with ranks ranks[0], ..., ranks[n-1]. The ordering of tasks in newgroup is identical to the ordering in group. Each of the n elements of ranks must be a valid rank in group and all elements must be distinct. If n = 0, then newgroup is identical to group.

Errors

Invalid group

Invalid size
n < 0 or n > groupsize

Invalid rank(s)
ranks[i] < 0 or ranks[i] >= groupsize

Duplicate rank(s)

MPI not initialized

MPI already finalized

Related Information

MPI_GROUP_INCL
MPI_GROUP_RANGE_EXCL
MPI_GROUP_RANGE_INCL

MPI_GROUP_FREE, MPI_Group_free

Purpose

Marks a group for deallocation.

C Synopsis

#include <mpi.h>
int MPI_Group_free(MPI_Group *group);

Fortran Synopsis

include 'mpif.h'
MPI_GROUP_FREE(INTEGER GROUP,INTEGER IERROR)

Parameters

group

is the group (handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_GROUP_FREE sets the handle group to MPI_GROUP_NULL and marks the group object for deallocation. Actual deallocation occurs only after all operations involving group are completed. Any active operation using group completes normally but no new calls with meaningful references to the freed group are possible.

Errors

Invalid group

MPI not initialized

MPI already finalized

MPI_GROUP_INCL, MPI_Group_incl

Purpose

Creates a new group consisting of selected tasks from an existing group.

C Synopsis

#include <mpi.h>
int MPI_Group_incl(MPI_Group group,int n,int *ranks,
    MPI_Group *newgroup);

Fortran Synopsis

include 'mpif.h'
MPI_GROUP_INCL(INTEGER GROUP,INTEGER N,INTEGER RANKS(*),
      INTEGER NEWGROUP,INTEGER IERROR)

Parameters

group

is the group (handle) (IN)

n

is the number of elements in array ranks and the size of newgroup (integer) (IN)

ranks

is the ranks of tasks in group to appear in newgroup (array of integers) (IN)

newgroup

is the new group derived from above in the order defined by ranks (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a new group consisting of selected tasks from an existing group.

MPI_GROUP_INCL creates a group newgroup consisting of the n tasks in group with ranks ranks[0], ..., ranks[n-1]. The task with rank i in newgroup is the task with rank ranks[i] in group.

Each of the n elements of ranks must be a valid rank in group and all elements must be distinct. If n = 0, then newgroup is MPI_GROUP_EMPTY. This function can be used to reorder the elements of a group.
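
A sketch that both selects and reorders tasks; it assumes MPI_COMM_WORLD has at least three tasks.

#include <mpi.h>
MPI_Group world_group, newgroup;
int ranks[3] = {2, 0, 1};   /* membership AND ordering of newgroup */

MPI_Comm_group(MPI_COMM_WORLD, &world_group);
/* the task with rank i in newgroup is the task with rank
   ranks[i] in world_group */
MPI_Group_incl(world_group, 3, ranks, &newgroup);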

Errors

Invalid group

Invalid size
n < 0 or n > groupsize

Invalid rank(s)
ranks[i] < 0 or ranks[i] >= groupsize

Duplicate rank(s)

MPI not initialized

MPI already finalized

Related Information

MPI_GROUP_EXCL
MPI_GROUP_RANGE_INCL
MPI_GROUP_RANGE_EXCL

MPI_GROUP_INTERSECTION, MPI_Group_intersection

Purpose

Creates a new group that is the intersection of two existing groups.

C Synopsis

#include <mpi.h>
int MPI_Group_intersection(MPI_Group group1,MPI_Group group2,
    MPI_Group *newgroup);

Fortran Synopsis

include 'mpif.h'

MPI_GROUP_INTERSECTION(INTEGER GROUP1,INTEGER GROUP2,
      INTEGER NEWGROUP,INTEGER IERROR)

Parameters

group1

is the first group (handle) (IN)

group2

is the second group (handle) (IN)

newgroup

is the intersection group (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a new group that is the intersection of two existing groups. The new group consists of all elements of the first group (group1) that are also part of the second group (group2), and is ordered as in the first group.

Errors

Invalid group(s)

MPI not initialized

MPI already finalized

Related Information

MPI_GROUP_UNION
MPI_GROUP_DIFFERENCE

MPI_GROUP_RANGE_EXCL, MPI_Group_range_excl

Purpose

Creates a new group by removing selected ranges of tasks from an existing group.

C Synopsis

#include <mpi.h>
int MPI_Group_range_excl(MPI_Group group,int n,
    int ranges[][3],MPI_Group *newgroup);

Fortran Synopsis

include 'mpif.h'
MPI_GROUP_RANGE_EXCL(INTEGER GROUP,INTEGER N,INTEGER RANGES(3,*),
    INTEGER NEWGROUP,INTEGER IERROR)

Parameters

group

is the group (handle) (IN)

n

is the number of triplets in array ranges (integer) (IN)

ranges

is an array of integer triplets of the form (first rank, last rank, stride) specifying the ranks in group of tasks that are to be excluded from the output group newgroup. (IN)

newgroup

is the new group derived from above that preserves the order in group (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a new group by removing selected ranges of tasks from an existing group. Each computed rank must be a valid rank in group and all computed ranks must be distinct.

The function of this routine is equivalent to expanding the array ranges to an array of the excluded ranks and passing the resulting array of ranks and other arguments to MPI_GROUP_EXCL. A call to MPI_GROUP_EXCL is equivalent to a call to MPI_GROUP_RANGE_EXCL with each rank i in ranks replaced by the triplet (i,i,1) in the argument ranges.

Errors

Invalid group

Invalid size
n < 0 or n > groupsize

Invalid rank(s)
a computed rank < 0 or >= groupsize

Duplicate rank(s)

Invalid stride(s)
stride[i] = 0

Too many ranks
Number of ranks > groupsize

MPI not initialized

MPI already finalized

Related Information

MPI_GROUP_RANGE_INCL
MPI_GROUP_EXCL
MPI_GROUP_INCL

MPI_GROUP_RANGE_INCL, MPI_Group_range_incl

Purpose

Creates a new group consisting of selected ranges of tasks from an existing group.

C Synopsis

#include <mpi.h>
int MPI_Group_range_incl(MPI_Group group,int n,
    int ranges[][3],MPI_Group *newgroup);

Fortran Synopsis

include 'mpif.h'
MPI_GROUP_RANGE_INCL(INTEGER GROUP,INTEGER N,INTEGER RANGES(3,*),
    INTEGER NEWGROUP,INTEGER IERROR)

Parameters

group

is the group (handle) (IN)

n

is the number of triplets in array ranges (integer) (IN)

ranges

is a one-dimensional array of integer triplets of the form (first rank, last rank, stride) indicating ranks in group of tasks to be included in newgroup (IN)

newgroup

is the new group derived from above in the order defined by ranges (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a new group consisting of selected ranges of tasks from an existing group. The function of this routine is equivalent to expanding the array of ranges to an array of the included ranks and passing the resulting array of ranks and other arguments to MPI_GROUP_INCL. A call to MPI_GROUP_INCL is equivalent to a call to MPI_GROUP_RANGE_INCL with each rank i in ranks replaced by the triplet (i,i,1) in the argument ranges.
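
A sketch that selects every second task with a single (first rank, last rank, stride) triplet:

#include <mpi.h>
MPI_Group world_group, even_group;
int ranges[1][3];
int size;

MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_group(MPI_COMM_WORLD, &world_group);
ranges[0][0] = 0;          /* first rank                */
ranges[0][1] = size - 1;   /* last rank                 */
ranges[0][2] = 2;          /* stride: every second task */
MPI_Group_range_incl(world_group, 1, ranges, &even_group);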

Errors

Invalid group

Invalid size
n < 0 or n > groupsize

Invalid rank(s)
a computed rank < 0 or >= groupsize

Duplicate rank(s)

Invalid stride(s)
stride[i] = 0

Too many ranks
nranks > groupsize

MPI not initialized

MPI already finalized

Related Information

MPI_GROUP_RANGE_EXCL
MPI_GROUP_INCL
MPI_GROUP_EXCL

MPI_GROUP_RANK, MPI_Group_rank

Purpose

Returns the rank of the local task with respect to group.

C Synopsis

#include <mpi.h>
int MPI_Group_rank(MPI_Group group,int *rank);

Fortran Synopsis

include 'mpif.h'
MPI_GROUP_RANK(INTEGER GROUP,INTEGER RANK,INTEGER IERROR)

Parameters

group

is the group (handle) (IN)

rank

is an integer that specifies the rank of the calling task in group or MPI_UNDEFINED if the task is not a member. (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the rank of the local task with respect to group. This local operation does not require any intertask communication.

Errors

Invalid group

MPI not initialized

MPI already finalized

Related Information

MPI_COMM_RANK

MPI_GROUP_SIZE, MPI_Group_size

Purpose

Returns the number of tasks in a group.

C Synopsis

#include <mpi.h>
int MPI_Group_size(MPI_Group group,int *size);

Fortran Synopsis

include 'mpif.h'
MPI_GROUP_SIZE(INTEGER GROUP,INTEGER SIZE,INTEGER IERROR)

Parameters

group

is the group (handle) (IN)

size

is the number of tasks in the group (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the number of tasks in a group. This is a local operation and does not require any intertask communication.

Errors

Invalid group

MPI not initialized

MPI already finalized

Related Information

MPI_COMM_SIZE

MPI_GROUP_TRANSLATE_RANKS, MPI_Group_translate_ranks

Purpose

Converts task ranks of one group into ranks of another group.

C Synopsis

#include <mpi.h>
int MPI_Group_translate_ranks(MPI_Group group1,int n,
    int *ranks1,MPI_Group group2,int *ranks2);

Fortran Synopsis

include 'mpif.h'
MPI_GROUP_TRANSLATE_RANKS(INTEGER GROUP1, INTEGER N,
    INTEGER RANKS1(*),INTEGER GROUP2,INTEGER RANKS2(*),INTEGER IERROR)

Parameters

group1

is group1 (handle) (IN)

n

is an integer that specifies the number of ranks in ranks1 and ranks2 arrays (IN)

ranks1

is an array of zero or more valid ranks in group1 (IN)

group2

is group2 (handle) (IN)

ranks2

is an array of corresponding ranks in group2. If the task with rank ranks1(i) in group1 is not a member of group2, ranks2(i) is set to MPI_UNDEFINED. (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This subroutine converts task ranks of one group into ranks of another group. For example, if you know the ranks of tasks in one group, you can use this function to find the ranks of tasks in another group.
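
A sketch of the call, assuming group1 and group2 were created earlier (for example, with MPI_COMM_GROUP and MPI_GROUP_INCL):

#include <mpi.h>
MPI_Group group1, group2;   /* assumed created earlier */
int ranks1[2] = {0, 1};
int ranks2[2];

MPI_Group_translate_ranks(group1, 2, ranks1, group2, ranks2);
/* ranks2[i] is the rank in group2 of the task with rank ranks1[i]
   in group1, or MPI_UNDEFINED if that task is not a member */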

Errors

Invalid group(s)

Invalid rank count
n < 0

Invalid rank
ranks1[i] < 0 or ranks1[i] >= size of group1

MPI not initialized

MPI already finalized

Related Information

MPI_COMM_COMPARE

MPI_GROUP_UNION, MPI_Group_union

Purpose

Creates a new group that is the union of two existing groups.

C Synopsis

#include <mpi.h>
int MPI_Group_union(MPI_Group group1,MPI_Group group2,
    MPI_Group *newgroup);

Fortran Synopsis

include 'mpif.h'

MPI_GROUP_UNION(INTEGER GROUP1,INTEGER GROUP2,INTEGER NEWGROUP,
      INTEGER IERROR)

Parameters

group1

is the first group (handle) (IN)

group2

is the second group (handle) (IN)

newgroup

is the union group (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a new group that is the union of two existing groups. The new group consists of the elements of the first group (group1) followed by all the elements of the second group (group2) not in the first group.

Errors

Invalid group(s)

MPI not initialized

MPI already finalized

Related Information

MPI_GROUP_INTERSECTION
MPI_GROUP_DIFFERENCE

MPI_IBSEND, MPI_Ibsend

Purpose

Performs a nonblocking buffered mode send operation.

C Synopsis

#include <mpi.h>
int MPI_Ibsend(void* buf,int count,MPI_Datatype datatype,
    int dest,int tag,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_IBSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST,
    INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of each send buffer element (handle) (IN)

dest

is the rank of the destination task in comm (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_IBSEND starts a buffered mode, nonblocking send. The send buffer may not be modified until the request has been completed by MPI_WAIT, MPI_TEST, or one of the other MPI wait or test functions.

Notes

See MPI_BSEND for additional information.
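
A sketch of buffered-mode usage, including the buffer management that MPI_IBSEND depends on; the destination rank 1 and tag 0 are illustrative.

#include <mpi.h>
#include <stdlib.h>
int data = 42, bufsize;
void *buffer;
MPI_Request request;
MPI_Status status;

/* buffered mode needs an attached buffer large enough for the
   message plus MPI_BSEND_OVERHEAD */
bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
buffer = malloc(bufsize);
MPI_Buffer_attach(buffer, bufsize);
MPI_Ibsend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
MPI_Wait(&request, &status);          /* complete the request         */
MPI_Buffer_detach(&buffer, &bufsize); /* returns when buffer is free  */
free(buffer);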

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Develop mode error if:

Illegal buffer update

Related Information

MPI_BSEND
MPI_BSEND_INIT
MPI_WAIT
MPI_BUFFER_ATTACH

MPI_INFO_CREATE, MPI_Info_create

Purpose

Creates a new info object.

C Synopsis

#include <mpi.h>
int MPI_Info_create (MPI_Info *info);

Fortran Synopsis

include 'mpif.h'
MPI_INFO_CREATE (INTEGER INFO,INTEGER IERROR)

Parameters

info

is the info object created (handle)(OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_INFO_CREATE creates a new info object and returns a handle to it in the info argument.

Because this release does not recognize any key, info objects are always empty.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Related Information

MPI_INFO_FREE
MPI_INFO_SET
MPI_INFO_GET
MPI_INFO_GET_NKEYS
MPI_INFO_GET_VALUELEN
MPI_INFO_GET_NTHKEY
MPI_INFO_DELETE
MPI_INFO_DUP

MPI_INFO_DELETE, MPI_Info_delete

Purpose

Deletes a (key, value) pair from an info object.

C Synopsis

#include <mpi.h>
int MPI_Info_delete (MPI_Info info,char *key);

Fortran Synopsis

include 'mpif.h'
MPI_INFO_DELETE (INTEGER INFO,CHARACTER KEY(*),
    INTEGER IERROR)

Parameters

info

is the info object (handle)(OUT)

key

is the key of the pair to be deleted (string)(IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_INFO_DELETE deletes a pair (key, value) from info. If the key is not recognized by MPI, it is ignored and the call returns MPI_SUCCESS and has no effect on info.

Because this release does not recognize any key, this call always returns MPI_SUCCESS and has no effect on info.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid info
info is not a valid info object

Invalid info key
key must contain less than 128 characters

Related Information

MPI_INFO_CREATE
MPI_INFO_SET
MPI_INFO_GET
MPI_INFO_GET_NKEYS
MPI_INFO_GET_VALUELEN
MPI_INFO_GET_NTHKEY
MPI_INFO_DUP
MPI_INFO_FREE

MPI_INFO_DUP, MPI_Info_dup

Purpose

Duplicates an info object.

C Synopsis

#include <mpi.h>
int MPI_Info_dup (MPI_Info info,MPI_Info *newinfo);

Fortran Synopsis

include 'mpif.h'
MPI_INFO_DUP (INTEGER INFO,INTEGER NEWINFO,INTEGER IERROR)

Parameters

info

is the info object to be duplicated(handle)(IN)

newinfo

is the new info object (handle)(OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_INFO_DUP duplicates the info object referred to by info and returns in newinfo a handle to this newly created info object.

Because this release does not recognize any key, the new info object is empty.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid info
info is not a valid info object

Related Information

MPI_INFO_CREATE
MPI_INFO_FREE
MPI_INFO_SET
MPI_INFO_GET
MPI_INFO_GET_NKEYS
MPI_INFO_GET_VALUELEN
MPI_INFO_GET_NTHKEY
MPI_INFO_DELETE

MPI_INFO_FREE, MPI_Info_free

Purpose

Frees the info object referred to by the info argument and sets it to MPI_INFO_NULL.

C Synopsis

#include <mpi.h>
int MPI_Info_free (MPI_Info *info);

Fortran Synopsis

include 'mpif.h'
MPI_INFO_FREE (INTEGER INFO,INTEGER IERROR)

Parameters

info

is the info object (handle)(INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_INFO_FREE frees the info object referred to by the info argument and sets info to MPI_INFO_NULL.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid info
info is not a valid info object

Related Information

MPI_INFO_CREATE
MPI_INFO_DELETE
MPI_INFO_SET
MPI_INFO_GET
MPI_INFO_GET_NKEYS
MPI_INFO_GET_VALUELEN
MPI_INFO_GET_NTHKEY
MPI_INFO_DUP

MPI_INFO_GET, MPI_Info_get

Purpose

Retrieves the value associated with key in an info object.

C Synopsis

#include <mpi.h>
int MPI_Info_get (MPI_Info info,char *key,int valuelen,
    char *value,int *flag);

Fortran Synopsis

include 'mpif.h'
MPI_INFO_GET (INTEGER INFO,CHARACTER KEY(*),INTEGER VALUELEN,
    CHARACTER VALUE(*),LOGICAL FLAG,INTEGER IERROR)

Parameters

info

is the info object (handle)(IN)

key

is the key (string)(IN)

valuelen

is the length of the value argument (integer)(IN)

value

is the value (string)(OUT)

flag

is true if key is defined and is false if not (boolean)(OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_INFO_GET retrieves the value associated with key in the info object referred to by info.

Because this release does not recognize any key, flag is set to false, value remains unchanged, and valuelen is ignored.

Notes

In order to determine how much space should be allocated for the value argument, call MPI_INFO_GET_VALUELEN first.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid info
info is not a valid info object

Invalid info key
key must contain less than 128 characters

Related Information

MPI_INFO_CREATE
MPI_INFO_FREE
MPI_INFO_SET
MPI_INFO_GET_NKEYS
MPI_INFO_GET_VALUELEN
MPI_INFO_GET_NTHKEY
MPI_INFO_DUP
MPI_INFO_DELETE

MPI_INFO_GET_NKEYS, MPI_Info_get_nkeys

Purpose

Returns the number of keys defined in an info object.

C Synopsis

#include <mpi.h>
int MPI_Info_get_nkeys (MPI_Info info,int *nkeys);

Fortran Synopsis

include 'mpif.h'
MPI_INFO_GET_NKEYS (INTEGER INFO,INTEGER NKEYS,INTEGER IERROR)

Parameters

info

is the info object (handle)(IN)

nkeys

is the number of defined keys (integer)(OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_INFO_GET_NKEYS returns in nkeys the number of keys currently defined in the info object referred to by info.

Because this release does not recognize any key, the number of keys returned is zero.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid info
info is not a valid info object

Related Information

MPI_INFO_CREATE
MPI_INFO_FREE
MPI_INFO_SET
MPI_INFO_GET
MPI_INFO_GET_VALUELEN
MPI_INFO_GET_NTHKEY
MPI_INFO_DUP
MPI_INFO_DELETE

MPI_INFO_GET_NTHKEY, MPI_Info_get_nthkey

Purpose

Retrieves the nth key defined in an info object.

C Synopsis

#include <mpi.h>
int MPI_Info_get_nthkey (MPI_Info info, int n, char *key);

Fortran Synopsis

include 'mpif.h'
MPI_INFO_GET_NTHKEY (INTEGER INFO,INTEGER N,CHARACTER KEY(*),
    INTEGER IERROR)

Parameters

info

is the info object (handle)(IN)

n

is the key number (integer)(IN)

key

is the key (string)(OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_INFO_GET_NTHKEY retrieves the nth key defined in the info object referred to by info. The first key defined has a rank of 0, so n must be greater than -1 but less than the number of keys returned by MPI_INFO_GET_NKEYS.

Because this release does not recognize any key, this function always raises an error.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid info
info is not a valid info object

Invalid info key index
n must have a value between 0 and N-1, where N is the number of keys returned by MPI_INFO_GET_NKEYS

Related Information

MPI_INFO_CREATE
MPI_INFO_FREE
MPI_INFO_SET
MPI_INFO_GET
MPI_INFO_GET_VALUELEN
MPI_INFO_GET_NKEYS
MPI_INFO_DUP
MPI_INFO_DELETE

MPI_INFO_GET_VALUELEN, MPI_Info_get_valuelen

Purpose

Retrieves the length of the value associated with a key of an info object.

C Synopsis

#include <mpi.h>
int MPI_Info_get_valuelen (MPI_Info info,char *key,int *valuelen,
    int *flag);

Fortran Synopsis

include 'mpif.h'
MPI_INFO_GET_VALUELEN (INTEGER INFO,CHARACTER KEY(*),INTEGER VALUELEN,
    LOGICAL FLAG,INTEGER IERROR)

Parameters

info

is the info object (handle)(IN)

key

is the key (string)(IN)

valuelen

is the length of the value associated with key (integer)(OUT)

flag

is true if key is defined and is false if not (boolean)(OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_INFO_GET_VALUELEN retrieves the length of the value associated with the key in the info object referred to by info.

Because this release does not recognize any key, flag is set to false and valuelen remains unchanged.

Notes

Use this routine prior to calling MPI_INFO_GET to determine how much space must be allocated for the value parameter of MPI_INFO_GET.
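
A sketch of the two-step pattern; the key name "somekey" is hypothetical, and because this release does not recognize any key, flag always comes back false here.

#include <mpi.h>
#include <stdlib.h>
MPI_Info info;
int valuelen, flag;
char *value;

MPI_Info_create(&info);
/* ...keys would be added here with MPI_Info_set... */
MPI_Info_get_valuelen(info, "somekey", &valuelen, &flag);
if (flag) {
    value = (char *)malloc(valuelen + 1);   /* room for the null byte */
    MPI_Info_get(info, "somekey", valuelen, value, &flag);
}
MPI_Info_free(&info);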

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid info
info is not a valid info object

Invalid info key
key must contain less than 128 characters

Related Information

MPI_INFO_CREATE
MPI_INFO_FREE
MPI_INFO_SET
MPI_INFO_GET
MPI_INFO_GET_NKEYS
MPI_INFO_GET_NTHKEY
MPI_INFO_DUP
MPI_INFO_DELETE

MPI_INFO_SET, MPI_Info_set

Purpose

Adds a pair (key, value) to an info object.

C Synopsis

#include <mpi.h>
int MPI_Info_set(MPI_Info info,char *key,char *value);

Fortran Synopsis

include 'mpif.h'
MPI_INFO_SET (INTEGER INFO,CHARACTER KEY(*),CHARACTER VALUE(*),
    INTEGER IERROR)

Parameters

info

is the info object (handle)(INOUT)

key

is the key (string)(IN)

value

is the value (string)(IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_INFO_SET adds a recognized (key, value) pair to the info object referred to by info. When MPI_INFO_SET is called with a key which is not recognized, it behaves as a no-op.

Because this release does not recognize any key, the info object remains unchanged.

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid info
info is not a valid info object

Invalid info key
key must contain less than 128 characters

Invalid info value
value must contain less than 1024 characters

Related Information

MPI_INFO_CREATE
MPI_INFO_FREE
MPI_INFO_GET
MPI_INFO_GET_VALUELEN
MPI_INFO_GET_NKEYS
MPI_INFO_GET_NTHKEY
MPI_INFO_DUP
MPI_INFO_DELETE

MPI_INIT, MPI_Init

Purpose

Initializes MPI.

C Synopsis

#include <mpi.h>
int MPI_Init(int *argc,char ***argv);

Fortran Synopsis

include 'mpif.h'
MPI_INIT(INTEGER IERROR)

Parameters

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine initializes MPI. All MPI programs must call this routine before any other MPI routine (with the exception of MPI_INITIALIZED). More than one call to MPI_INIT by any task is erroneous.

Notes

argc and argv are the arguments passed to main. The IBM MPI implementation of the MPI Standard does not examine or modify these arguments when passed to MPI_INIT.

In a threaded environment, MPI_INIT needs to be called once per task, not once per thread. You don't need to call it on the main thread, but both MPI_INIT and MPI_FINALIZE must be called on the same thread.

MPI_INIT opens a local socket and binds it to a port, sends that information to POE, receives a list of destination addresses and ports, opens a socket to send to each one, verifies that communication can be established, and distributes MPI internal state to each task.

In the signal-handling library, this work is done in the initialization stub added by POE, so that the library is open when your main program is called. MPI_INIT sets a flag saying that you called it.

In the threaded library, the work of MPI_INIT is done when the function is called. The local socket is not open when your main program starts. This may affect the numbering of file descriptors, the use of the environment strings, and the treatment of stdin (the MP_HOLD_STDIN variable). If an existing non-threaded program is relinked using the threaded library, the code prior to calling MPI_INIT should be examined with these thoughts in mind.

Also for the threaded library, if you had registered a function as an AIX signal handler for the SIGIO signal at the time that MPI_INIT was called, that function will be added to the interrupt service thread and be processed as a thread function rather than as a signal handler. You'll need to set the environment variable MP_CSS_INTERRUPT=YES to get arriving packets to invoke the interrupt service thread.

Errors

MPI already initialized

MPI already finalized

Related Information

MPI_INITIALIZED
MPI_FINALIZE

MPI_INITIALIZED, MPI_Initialized

Purpose

Determines whether MPI is initialized.

C Synopsis

#include <mpi.h>
int MPI_Initialized(int *flag);

Fortran Synopsis

include 'mpif.h'
MPI_INITIALIZED(INTEGER FLAG,INTEGER IERROR)

Parameters

flag

is true if MPI_INIT has been called; otherwise, it is false (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine determines if MPI is initialized. This and MPI_GET_VERSION are the only MPI calls that can be made before MPI_INIT is called.

Notes

Because it is erroneous to call MPI_INIT more than once per task, use MPI_INITIALIZED if there is doubt as to the state of MPI.
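
A sketch of the guard pattern:

#include <mpi.h>

int main(int argc, char *argv[])
{
    int flag;
    MPI_Initialized(&flag);       /* legal before MPI_INIT */
    if (!flag)
        MPI_Init(&argc, &argv);   /* safe: not yet initialized */
    /* ... */
    MPI_Finalize();
    return 0;
}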

Related Information

MPI_INIT

MPI_INTERCOMM_CREATE, MPI_Intercomm_create

Purpose

Creates an intercommunicator from two intracommunicators.

C Synopsis

#include <mpi.h>
int MPI_Intercomm_create(MPI_Comm local_comm,int local_leader,
    MPI_Comm peer_comm,int remote_leader,int tag,MPI_Comm *newintercom);

Fortran Synopsis

include 'mpif.h'
MPI_INTERCOMM_CREATE(INTEGER LOCAL_COMM,INTEGER LOCAL_LEADER,
    INTEGER PEER_COMM,INTEGER REMOTE_LEADER,INTEGER TAG,
    INTEGER NEWINTERCOM,INTEGER IERROR)

Parameters

local_comm

is the local intracommunicator (handle) (IN)

local_leader

is an integer specifying the rank of local group leader in local_comm (IN)

peer_comm

is the "peer" intracommunicator (significant only at the local_leader) (handle) (IN)

remote_leader

is the rank of remote group leader in peer_comm (significant only at the local_leader) (integer) (IN)

tag

"safe" tag (integer) (IN)

newintercom

is the new intercommunicator (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates an intercommunicator from two intracommunicators and is collective over the union of the local and the remote groups. Tasks should provide identical local_comm and local_leader arguments within each group. Wildcards are not permitted for remote_leader, local_leader, and tag.

MPI_INTERCOMM_CREATE uses point-to-point communication with communicator peer_comm and tag tag between the leaders. Make sure that there are no pending communications on peer_comm that could interfere with this communication. It is recommended that you use a dedicated peer communicator, such as a duplicate of MPI_COMM_WORLD, to avoid trouble with peer communicators.
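
A sketch that splits MPI_COMM_WORLD into two groups and joins them with an intercommunicator; the dedicated peer communicator is a duplicate of MPI_COMM_WORLD, as recommended above, and the tag 99 is arbitrary.

#include <mpi.h>
MPI_Comm peer_comm, local_comm, intercomm;
int rank, color;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_dup(MPI_COMM_WORLD, &peer_comm);   /* dedicated peer comm */
color = rank % 2;                           /* two disjoint groups */
MPI_Comm_split(MPI_COMM_WORLD, color, rank, &local_comm);
/* local leaders are world ranks 0 and 1; each group names the
   other group's leader by its rank in peer_comm */
MPI_Intercomm_create(local_comm, 0, peer_comm,
                     (color == 0) ? 1 : 0, 99, &intercomm);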

Errors

Conflicting collective operations on communicator

Invalid communicator(s)

Invalid communicator type(s)
must be intracommunicator(s)

Invalid rank(s)
rank < 0 or rank >= groupsize

Invalid tag
tag < 0

MPI not initialized

MPI already finalized

Related Information

MPI_COMM_DUP
MPI_INTERCOMM_MERGE

MPI_INTERCOMM_MERGE, MPI_Intercomm_merge

Purpose

Creates an intracommunicator by merging the local and the remote groups of an intercommunicator.

C Synopsis

#include <mpi.h>
int MPI_Intercomm_merge(MPI_Comm intercomm,int high,
    MPI_Comm *newintracomm);

Fortran Synopsis

include 'mpif.h'
MPI_INTERCOMM_MERGE(INTEGER INTERCOMM,INTEGER HIGH,
      INTEGER NEWINTRACOMM,INTEGER IERROR)

Parameters

intercomm

is the intercommunicator (handle) (IN)

high

(logical) (IN)

newintracomm

is the new intracommunicator (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates an intracommunicator from the union of the two groups associated with intercomm. Tasks should provide the same high value within each of the two groups. If tasks in one group provide the value high = false and tasks in the other group provide the value high = true, the union orders the "low" group before the "high" group. If all tasks provide the same high argument, the order of the union is arbitrary.

This call is blocking and collective within the union of the two groups.

Errors

Invalid communicator

Invalid communicator type
must be intercommunicator

Inconsistent high within group

MPI not initialized

MPI already finalized

Related Information

MPI_INTERCOMM_CREATE

MPI_IPROBE, MPI_Iprobe

Purpose

Checks to see if a message matching source, tag, and comm has arrived.

C Synopsis

#include <mpi.h>
int MPI_Iprobe(int source,int tag,MPI_Comm comm,int *flag,
      MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_IPROBE(INTEGER SOURCE,INTEGER TAG,INTEGER COMM,INTEGER FLAG,
      INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

source

is a source rank or MPI_ANY_SOURCE (integer) (IN)

tag

is a tag value or MPI_ANY_TAG (integer) (IN)

comm

is a communicator (handle) (IN)

flag

(logical) (OUT)

status

is a status object (status) (OUT). Note that in Fortran a single status object is an array of integers.

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine allows you to check for incoming messages without actually receiving them.

MPI_IPROBE(source, tag, comm, flag, status) returns flag = true when there is a message that can be received that matches the pattern specified by the arguments source, tag, and comm. The call matches the same message that would have been received by a call to MPI_RECV(..., source, tag, comm, status) executed at the same point in the program and returns in status the same values that would have been returned by MPI_RECV(). Otherwise, the call returns flag = false and leaves status undefined.

When MPI_IPROBE returns flag = true, the content of the status object can be accessed to find the source, tag and length of the probed message.

A subsequent receive executed with the same comm, and the source and tag returned in status by MPI_IPROBE receives the message that was matched by the probe, if no other intervening receive occurs after the initial probe.

source can be MPI_ANY_SOURCE and tag can be MPI_ANY_TAG. This allows you to probe messages from any source and/or with any tag, but you must provide a specific communicator with comm.

When a message is not received immediately after it is probed, the same message can be probed for several times before it is received.

Notes

In a threaded environment, MPI_PROBE or MPI_IPROBE followed by MPI_RECV, based on the information from the probe, may not be a thread-safe operation. You must ensure that no other thread received the detected message.

An MPI_IPROBE cannot prevent a message from being cancelled successfully by the sender, making it unavailable for the MPI_RECV. Structure your program so this will not occur.
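
A sketch of the probe-then-receive pattern described above, sizing the receive buffer from the probed message; as noted, in a threaded environment you must ensure no other thread receives the message between the probe and the receive.

#include <mpi.h>
#include <stdlib.h>
int flag, count;
int *buf;
MPI_Status status;

MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
if (flag) {
    MPI_Get_count(&status, MPI_INT, &count);  /* size of probed message */
    buf = (int *)malloc(count * sizeof(int));
    MPI_Recv(buf, count, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
             MPI_COMM_WORLD, &status);
}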

Errors

Invalid source
source < 0 or source >= groupsize

Invalid tag
tag < 0

Invalid communicator

MPI not initialized

MPI already finalized

Related Information

MPI_PROBE
MPI_RECV

MPI_IRECV, MPI_Irecv

Purpose

Performs a nonblocking receive operation.

C Synopsis

#include <mpi.h>
int MPI_Irecv(void* buf,int count,MPI_Datatype datatype,
    int source,int tag,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_IRECV(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER SOURCE,
      INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

buf

is the initial address of the receive buffer (choice) (OUT)

count

is the number of elements in the receive buffer (integer) (IN)

datatype

is the datatype of each receive buffer element (handle) (IN)

source

is the rank of source or MPI_ANY_SOURCE (integer) (IN)

tag

is the message tag or MPI_ANY_TAG (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine starts a nonblocking receive and returns a handle to a request object. You can later use the request to query the status of the communication or wait for it to complete.

A nonblocking receive call means the system may start writing data into the receive buffer. Once the nonblocking receive operation is called, do not access any part of the receive buffer until the receive is complete.

Notes

The message received must be less than or equal to the length of the receive buffer. If an incoming message does not fit without truncation, an overflow error occurs. If a message arrives that is shorter than the receive buffer, only those locations corresponding to the actual message are changed. An overflow is flagged at the MPI_WAIT or MPI_TEST. See MPI_RECV for additional information.
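
A sketch of a deadlock-free exchange that pairs MPI_IRECV with MPI_ISEND; it assumes exactly two tasks.

#include <mpi.h>
int inmsg, outmsg, rank;
MPI_Request requests[2];
MPI_Status statuses[2];

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
outmsg = rank;
/* start both transfers, then complete both requests */
MPI_Irecv(&inmsg, 1, MPI_INT, 1 - rank, 0, MPI_COMM_WORLD, &requests[0]);
MPI_Isend(&outmsg, 1, MPI_INT, 1 - rank, 0, MPI_COMM_WORLD, &requests[1]);
MPI_Waitall(2, requests, statuses);
/* only now is inmsg valid and outmsg safe to modify */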

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid source
source < 0 or source >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Related Information

MPI_RECV
MPI_RECV_INIT
MPI_WAIT

MPI_IRSEND, MPI_Irsend

Purpose

Performs a nonblocking ready mode send operation.

C Synopsis

#include <mpi.h>
int MPI_Irsend(void* buf,int count,MPI_Datatype datatype,
    int dest,int tag,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_IRSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST,
      INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of each send buffer element (handle) (IN)

dest

is the rank of the destination task in comm (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_IRSEND starts a ready mode, nonblocking send. The send buffer may not be modified until the request has been completed by MPI_WAIT, MPI_TEST, or one of the other MPI wait or test functions.

Notes

See MPI_RSEND for additional information.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

No receive posted
error flagged at destination

MPI not initialized

MPI already finalized

Develop mode error if:

Illegal buffer update

Related Information

MPI_RSEND
MPI_RSEND_INIT
MPI_WAIT

MPI_ISEND, MPI_Isend

Purpose

Performs a nonblocking standard mode send operation.

C Synopsis

#include <mpi.h>
int MPI_Isend(void* buf,int count,MPI_Datatype datatype,
    int dest,int tag,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_ISEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST,
    INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of each send buffer element (handle) (IN)

dest

is the rank of the destination task in comm (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine starts a nonblocking standard mode send. The send buffer may not be modified until the request has been completed by MPI_WAIT, MPI_TEST, or one of the other MPI wait or test functions.
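
A minimal sketch of this pattern, assuming at least two tasks in MPI_COMM_WORLD (the tag and data values are arbitrary); note that the send buffer is left untouched between MPI_ISEND and MPI_WAIT:

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, data = 7, sink;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
        /* 'data' must not be modified here; the send is still in flight. */
        MPI_Wait(&request, &status);
        data = 0;  /* safe to reuse the send buffer now */
    } else if (rank == 1) {
        MPI_Recv(&sink, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    }

    MPI_Finalize();
    return 0;
}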

Notes

See MPI_SEND for additional information.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Develop mode error if:

Illegal buffer update

Related Information

MPI_SEND
MPI_SEND_INIT
MPI_WAIT

MPI_ISSEND, MPI_Issend

Purpose

Performs a nonblocking synchronous mode send operation.

C Synopsis

#include <mpi.h>
int MPI_Issend(void* buf,int count,MPI_Datatype datatype,
    int dest,int tag,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_ISSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST,
    INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of each send buffer element (handle) (IN)

dest

is the rank of the destination task in comm (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_ISSEND starts a synchronous mode, nonblocking send. The send buffer may not be modified until the request has been completed by MPI_WAIT, MPI_TEST, or one of the other MPI wait or test functions.

Notes

See MPI_SSEND for additional information.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Develop mode error if:

Illegal buffer update

Related Information

MPI_SSEND
MPI_SSEND_INIT
MPI_WAIT

MPI_KEYVAL_CREATE, MPI_Keyval_create

Purpose

Generates a new attribute key.

C Synopsis

#include <mpi.h>
int MPI_Keyval_create(MPI_Copy_function *copy_fn,
    MPI_Delete_function *delete_fn,int *keyval,
    void* extra_state);

Fortran Synopsis

include 'mpif.h'
MPI_KEYVAL_CREATE(EXTERNAL COPY_FN,EXTERNAL DELETE_FN,
      INTEGER KEYVAL,INTEGER EXTRA_STATE,INTEGER IERROR)

Parameters

copy_fn

is the copy callback function for keyval (IN)

delete_fn

is the delete callback function for keyval (IN)

keyval

is an integer specifying the key value for future access (OUT)

extra_state

is the extra state for callback functions (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine generates a new attribute key. Keys are locally unique in a task, opaque to the user, and are explicitly stored in integers. Once allocated, keyval can be used to associate attributes and access them on any locally defined communicator. copy_fn is invoked when a communicator is duplicated by MPI_COMM_DUP. It should be of type MPI_COPY_FUNCTION, which is defined as follows:

In C:

typedef int MPI_Copy_function (MPI_Comm oldcomm,int keyval,
        void *extra_state,void *attribute_val_in,
        void *attribute_val_out,int *flag);

In Fortran:

SUBROUTINE COPY_FUNCTION(INTEGER OLDCOMM,INTEGER KEYVAL,
        INTEGER EXTRA_STATE,INTEGER ATTRIBUTE_VAL_IN,
        INTEGER ATTRIBUTE_VAL_OUT,LOGICAL FLAG,INTEGER IERROR)

You can use the predefined functions MPI_NULL_COPY_FN and MPI_DUP_FN to never copy or to always copy, respectively.

delete_fn is invoked when a communicator is deleted by MPI_COMM_FREE or when a call is made to MPI_ATTR_DELETE. A call to MPI_ATTR_PUT that overlays a previously put attribute also causes delete_fn to be called. It should be defined as follows:

In C:

typedef int MPI_Delete_function (MPI_Comm comm,int keyval,
        void *attribute_val, void *extra_state);

In Fortran:

SUBROUTINE DELETE_FUNCTION(INTEGER COMM,INTEGER KEYVAL,
        INTEGER ATTRIBUTE_VAL,INTEGER EXTRA_STATE,
        INTEGER IERROR)

You can use the predefined function MPI_NULL_DELETE_FN if no special handling of attribute deletions is required.

In Fortran, the value of extra_state is recorded by MPI_KEYVAL_CREATE and the callback functions should not attempt to modify this value.

The MPI standard requires that when copy_fn or delete_fn gives a return code other than MPI_SUCCESS, the MPI routine in which this occurs must fail. The standard does not suggest that the copy_fn or delete_fn return code be used as the MPI routine's return value. The standard does require that an MPI return code be in the range between MPI_SUCCESS and MPI_ERR_LASTCODE. It places no range limits on copy_fn or delete_fn return codes. For this reason, we provide a specific error code for a copy_fn failure and another for a delete_fn failure. These error codes can be found in error class MPI_ERR_OTHER. The copy_fn or the delete_fn return code is not preserved.
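
The following sketch shows one way to use the routine with the predefined callbacks; the attribute value and variable names are illustrative only. The attribute (here a static integer) must remain valid for as long as it is attached to the communicator.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int keyval, flag;
    int *valptr;
    static int payload = 123;  /* must outlive the MPI_Attr_put */

    MPI_Init(&argc, &argv);

    /* Predefined callbacks: never copy on MPI_COMM_DUP, no delete action. */
    MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN, &keyval, NULL);

    MPI_Attr_put(MPI_COMM_WORLD, keyval, &payload);
    MPI_Attr_get(MPI_COMM_WORLD, keyval, &valptr, &flag);
    if (flag)
        printf("attribute = %d\n", *valptr);

    MPI_Attr_delete(MPI_COMM_WORLD, keyval);
    MPI_Keyval_free(&keyval);

    MPI_Finalize();
    return 0;
}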

Errors

MPI not initialized

MPI already finalized

Related Information

MPI_ATTR_PUT
MPI_ATTR_DELETE
MPI_COMM_DUP
MPI_COMM_FREE

MPI_KEYVAL_FREE, MPI_Keyval_free

Purpose

Marks an attribute key for deallocation.

C Synopsis

#include <mpi.h>
int MPI_Keyval_free(int *keyval);

Fortran Synopsis

include 'mpif.h'
MPI_KEYVAL_FREE(INTEGER KEYVAL,INTEGER IERROR)

Parameters

keyval

attribute key (integer) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine sets keyval to MPI_KEYVAL_INVALID and marks the attribute key for deallocation. You can free an attribute key that is in use because the actual deallocation occurs only when all active references to it are complete. These references, however, need to be explicitly freed. Use calls to MPI_ATTR_DELETE to free one attribute instance. To free all attribute instances associated with a communicator, use MPI_COMM_FREE.

Errors

Invalid attribute key
attribute key is undefined

Predefined attribute key
attribute key is predefined

MPI not initialized

MPI already finalized

Related Information

MPI_ATTR_DELETE
MPI_COMM_FREE

MPI_OP_CREATE, MPI_Op_create

Purpose

Binds a user-defined reduction operation to an op handle.

C Synopsis

#include <mpi.h>
int MPI_Op_create(MPI_User_function *function,int commute,
      MPI_Op *op);

Fortran Synopsis

include 'mpif.h'
MPI_OP_CREATE(EXTERNAL FUNCTION,INTEGER COMMUTE,INTEGER OP,
      INTEGER IERROR)

Parameters

function

is the user-defined reduction function (function) (IN)

commute

is true if the operation is commutative; otherwise false (IN)

op

is the reduction operation (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine binds a user-defined reduction operation to an op handle which you can then use in MPI_REDUCE, MPI_ALLREDUCE, MPI_REDUCE_SCATTER and MPI_SCAN and their nonblocking equivalents.

The user-defined operation is assumed to be associative. If commute = true, then the operation must be both commutative and associative. If commute = false, then the order of the operation is fixed. The order is defined in ascending, task rank order and begins with task zero.

function is the user-defined reduction function. It must have the following four arguments: invec, inoutvec, len, and datatype.

The following is the ANSI-C prototype for the function:

typedef void MPI_User_function(void *invec, void *inoutvec,
    int *len, MPI_Datatype *datatype);

The following is the Fortran declaration for the function:

SUBROUTINE USER_FUNCTION(INVEC(*), INOUTVEC(*), LEN, TYPE)
<type> INVEC(LEN), INOUTVEC(LEN)
 INTEGER LEN, TYPE
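
For illustration, a sketch of a user-defined operation in C; it assumes the reduction is applied to MPI_INT data, and the function and variable names are hypothetical:

#include <mpi.h>

/* Element-wise sum: inoutvec[i] = invec[i] + inoutvec[i].
   Assumes the caller reduces MPI_INT data. */
void int_sum(void *invec, void *inoutvec, int *len, MPI_Datatype *datatype)
{
    int i;
    int *in = (int *)invec, *inout = (int *)inoutvec;
    for (i = 0; i < *len; i++)
        inout[i] += in[i];
}

int main(int argc, char *argv[])
{
    int rank, result;
    MPI_Op op;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Op_create(int_sum, 1 /* commutative */, &op);
    MPI_Reduce(&rank, &result, 1, MPI_INT, op, 0, MPI_COMM_WORLD);
    MPI_Op_free(&op);

    MPI_Finalize();
    return 0;
}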

Notes

See Appendix D. "Reduction Operations" for information about reduction functions.

Errors

Null function

MPI not initialized

MPI already finalized

Related Information

MPI_OP_FREE
MPI_REDUCE
MPI_ALLREDUCE
MPI_REDUCE_SCATTER
MPI_SCAN

MPI_OP_FREE, MPI_Op_free

Purpose

Marks a user-defined reduction operation for deallocation.

C Synopsis

#include <mpi.h>
int MPI_Op_free(MPI_Op *op);

Fortran Synopsis

include 'mpif.h'
MPI_OP_FREE(INTEGER OP,INTEGER IERROR)

Parameters

op

is the reduction operation (handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This function marks a reduction operation for deallocation and sets op to MPI_OP_NULL. Actual deallocation occurs when the operation's reference count is zero.

Errors

Invalid operation

Predefined operation

MPI not initialized

MPI already finalized

Related Information

MPI_OP_CREATE

MPI_PACK, MPI_Pack

Purpose

Packs the message in the specified send buffer into the specified buffer space.

C Synopsis

#include <mpi.h>
int MPI_Pack(void* inbuf,int incount,MPI_Datatype datatype,
    void *outbuf,int outsize,int *position,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_PACK(CHOICE INBUF,INTEGER INCOUNT,INTEGER DATATYPE,
    CHOICE OUTBUF,INTEGER OUTSIZE,INTEGER POSITION,INTEGER COMM,
    INTEGER IERROR)

Parameters

inbuf

is the input buffer start (choice) (IN)

incount

is an integer specifying the number of input data items (IN)

datatype

is the datatype of each input data item (handle) (IN)

outbuf

is the output buffer start (choice) (OUT)

outsize

is an integer specifying the output buffer size in bytes (IN)

position

is the current position in the output buffer counted in bytes (integer) (INOUT)

comm

is the communicator for sending the packed message (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine packs the message specified by inbuf, incount, and datatype into the buffer space specified by outbuf and outsize. The input buffer is any communication buffer allowed in MPI_SEND. The output buffer is any contiguous storage space containing outsize bytes and starting at the address outbuf.

The input value of position is the beginning offset in the output buffer that will be used for packing. The output value of position is the offset in the output buffer following the locations occupied by the packed message. comm is the communicator that will be used for sending the packed message.
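
A minimal sketch of the pack/send/unpack pattern, assuming at least two tasks; the 64-byte buffer is assumed large enough for the two packed items (MPI_PACK_SIZE can compute a safe size):

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, position;
    int n = 10;
    double x = 3.14;
    char buffer[64];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        position = 0;  /* start packing at offset 0 */
        MPI_Pack(&n, 1, MPI_INT, buffer, 64, &position, MPI_COMM_WORLD);
        MPI_Pack(&x, 1, MPI_DOUBLE, buffer, 64, &position, MPI_COMM_WORLD);
        /* 'position' is now the number of bytes actually packed. */
        MPI_Send(buffer, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buffer, 64, MPI_PACKED, 0, 0, MPI_COMM_WORLD, &status);
        position = 0;
        MPI_Unpack(buffer, 64, &position, &n, 1, MPI_INT, MPI_COMM_WORLD);
        MPI_Unpack(buffer, 64, &position, &x, 1, MPI_DOUBLE, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}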

Errors

Invalid incount
incount < 0

Invalid datatype

Type not committed

Invalid communicator

Outbuf too small

MPI not initialized

MPI already finalized

Related Information

MPI_UNPACK
MPI_PACK_SIZE

MPI_PACK_SIZE, MPI_Pack_size

Purpose

Returns the number of bytes required to hold the data.

C Synopsis

#include <mpi.h>
int MPI_Pack_size(int incount,MPI_Datatype datatype,
    MPI_Comm comm, int *size);

Fortran Synopsis

include 'mpif.h'
MPI_PACK_SIZE(INTEGER INCOUNT,INTEGER DATATYPE,INTEGER COMM,
    INTEGER SIZE,INTEGER IERROR)

Parameters

incount

is an integer specifying the count argument to a packing call (IN)

datatype

is the datatype argument to a packing call (handle) (IN)

comm

is the communicator to a packing call (handle) (IN)

size

size of packed message in bytes (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the number of bytes required to pack incount replications of the datatype. You can use MPI_PACK_SIZE to determine the size required for a packing buffer or to track space needed for buffered sends.
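
For example, the following sketch sizes an attach buffer for one buffered send of 100 doubles; MPI_BSEND_OVERHEAD must be added to the value MPI_PACK_SIZE returns:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int size, bufsize;
    void *buffer;

    MPI_Init(&argc, &argv);

    /* Space for one buffered send of 100 doubles. */
    MPI_Pack_size(100, MPI_DOUBLE, MPI_COMM_WORLD, &size);
    bufsize = size + MPI_BSEND_OVERHEAD;

    buffer = malloc(bufsize);
    MPI_Buffer_attach(buffer, bufsize);
    /* ... an MPI_Bsend of up to 100 doubles now has guaranteed space ... */
    MPI_Buffer_detach(&buffer, &bufsize);
    free(buffer);

    MPI_Finalize();
    return 0;
}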

Errors

Invalid datatype

Type is not committed

MPI not initialized

MPI already finalized

Invalid communicator

Invalid incount
incount < 0

Related Information

MPI_PACK

MPI_PCONTROL, MPI_Pcontrol

Purpose

Provides profiler control.

C Synopsis

#include <mpi.h>
int MPI_Pcontrol(const int level, ...);

Fortran Synopsis

include 'mpif.h'
MPI_PCONTROL(INTEGER LEVEL, ...)

Parameters

level

is the profiling level (IN)

The proper values for level and the meanings of those values are determined by the profiler being used.

...

0 or more parameters

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_PCONTROL is a placeholder that allows applications to run with or without an independent profiling package, without modification. MPI implementations do not use this routine and do not have any control over the implementation of the profiling code.

Calls to this routine allow a profiling package to be controlled from MPI programs. The nature of control and the arguments required are determined by the profiling package. The MPI library routine by this name returns to the caller without any action.

Notes

For each additional call level introduced by the profiling code, the global variable VT_instaddr_depth needs to be incremented so the Visualization Tool Tracing Subsystem (VT) can record where the application called the MPI message passing library routine. The VT_instaddr_depth variable is defined in /usr/lpp/ppe.vt/include/VT_mpi.h.

Errors

MPI does not report any errors for MPI_PCONTROL.

MPI_PROBE, MPI_Probe

Purpose

Waits until a message matching source, tag, and comm arrives.

C Synopsis

#include <mpi.h>
int MPI_Probe(int source,int tag,MPI_Comm comm,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_PROBE(INTEGER SOURCE,INTEGER TAG,INTEGER COMM,
    INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

source

is a source rank or MPI_ANY_SOURCE (integer) (IN)

tag

is a source tag or MPI_ANY_TAG (integer) (IN)

comm

is a communicator (handle) (IN)

status

is a status object (status) (OUT). Note that in Fortran a single status object is an array of integers.

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_PROBE behaves like MPI_IPROBE: it allows you to check for an incoming message without actually receiving it. Unlike MPI_IPROBE, however, MPI_PROBE is a blocking call that returns only after a matching message has been found.
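
A common use of MPI_PROBE is to size a receive buffer before receiving, as in the following sketch (it assumes at least two tasks; the tag is arbitrary):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, count;
    int *buf;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Block until a message with tag 0 is available, then size the
           receive buffer from the probed status. */
        MPI_Probe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);
        buf = (int *)malloc(count * sizeof(int));
        MPI_Recv(buf, count, MPI_INT, status.MPI_SOURCE, 0,
                 MPI_COMM_WORLD, &status);
        free(buf);
    } else if (rank == 1) {
        int data[3] = {1, 2, 3};
        MPI_Send(data, 3, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}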

Notes

In a threaded environment, MPI_PROBE or MPI_IPROBE followed by MPI_RECV, based on the information from the probe, may not be a thread-safe operation. You must ensure that no other thread receives the detected message.

An MPI_IPROBE cannot prevent a message from being cancelled successfully by the sender, making it unavailable for the MPI_RECV. Structure your program so this will not occur.

Errors

Invalid source
source < 0 or source >= groupsize

Invalid tag
tag < 0

Invalid communicator

MPI not initialized

MPI already finalized

Related Information

MPI_IPROBE
MPI_RECV

MPI_RECV, MPI_Recv

Purpose

Performs a blocking receive operation.

C Synopsis

#include <mpi.h>
int MPI_Recv(void* buf,int count,MPI_Datatype datatype,
    int source,int tag,MPI_Comm comm,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_RECV(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER SOURCE,
      INTEGER TAG,INTEGER COMM,INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

buf

is the initial address of the receive buffer (choice) (OUT)

count

is the number of elements to be received (integer) (IN)

datatype

is the datatype of each receive buffer element (handle) (IN)

source

is the rank of the source task in comm or MPI_ANY_SOURCE (integer) (IN)

tag

is the message tag or MPI_ANY_TAG (integer) (IN)

comm

is the communicator (handle) (IN)

status

is the status object (status) (OUT). Note that in Fortran a single status object is an array of integers.

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_RECV is a blocking receive. The receive buffer is storage containing room for count consecutive elements of the type specified by datatype, starting at address buf.

The message received must be less than or equal to the length of the receive buffer. If an incoming message does not fit in the receive buffer without truncation, an overflow error occurs. If a message arrives that is shorter than the receive buffer, only the locations corresponding to the actual message are changed.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid source
source < 0 or source >= groupsize

Invalid tag
tag < 0

Invalid comm

Truncation occurred

MPI not initialized

MPI already finalized

Related Information

MPI_IRECV
MPI_SENDRECV
MPI_SEND

MPI_RECV_INIT, MPI_Recv_init

Purpose

Creates a persistent receive request.

C Synopsis

#include <mpi.h>
int MPI_Recv_init(void* buf,int count,MPI_Datatype datatype,
    int source,int tag,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_RECV_INIT(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,
      INTEGER SOURCE,INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

buf

is the initial address of the receive buffer (choice) (OUT)

count

is the number of elements to be received (integer) (IN)

datatype

is the type of each element (handle) (IN)

source

is the rank of source or MPI_ANY_SOURCE (integer) (IN)

tag

is the tag or MPI_ANY_TAG (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a persistent communication request for a receive operation. The argument buf is marked as OUT because the user gives permission to write to the receive buffer by passing the argument to MPI_RECV_INIT.

A persistent communication request is inactive after it is created. No active communication is attached to the request.

A send or receive communication using a persistent request is initiated by the function MPI_START.

Notes

See MPI_RECV for additional information.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid source
source < 0 or source >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Related Information

MPI_START
MPI_IRECV

MPI_REDUCE, MPI_Reduce

Purpose

Applies a reduction operation to the vector sendbuf over the set of tasks specified by comm and places the result in recvbuf on root.

C Synopsis

#include <mpi.h>
int MPI_Reduce(void* sendbuf,void* recvbuf,int count,
    MPI_Datatype datatype,MPI_Op op,int root,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_REDUCE(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT,
    INTEGER DATATYPE,INTEGER OP,INTEGER ROOT,INTEGER COMM,
    INTEGER IERROR)

Parameters

sendbuf

is the address of the send buffer (choice) (IN)

recvbuf

is the address of the receive buffer (choice, significant only at root) (OUT)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of elements of the send buffer (handle) (IN)

op

is the reduction operation (handle) (IN)

root

is the rank of the root task (integer) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine applies a reduction operation to the vector sendbuf over the set of tasks specified by comm and places the result in recvbuf on root.

Both the input and output buffers have the same number of elements with the same type. The arguments sendbuf, count, and datatype define the send or input buffer; recvbuf, count, and datatype define the output buffer. MPI_REDUCE is called by all group members using the same arguments for count, datatype, op, and root. If a sequence of elements is provided to a task, the reduce operation is executed element-wise on each entry of the sequence. For example, if the operation is MPI_MAX and the send buffer contains two elements that are floating point numbers (count = 2 and datatype = MPI_FLOAT), then recvbuf(1) = global max(sendbuf(1)) and recvbuf(2) = global max(sendbuf(2)).

Users may define their own operations or use the predefined operations provided by MPI. User-defined operations can be overloaded to operate on several datatypes, either basic or derived. A list of the MPI predefined operations is in this manual. Refer to Appendix D. "Reduction Operations".

The argument datatype of MPI_REDUCE must be compatible with op. For the datatypes valid with each predefined operation, refer to Appendix I. "Predefined Datatypes".

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
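
A minimal sketch: each task contributes one integer (its rank) and task 0 receives the global sum.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every task contributes its rank; task 0 receives the global sum. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}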

Notes

See Appendix D. "Reduction Operations".

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid op

Invalid root
root < 0 or root >= groupsize

Invalid communicator

Invalid communicator type
must be intracommunicator

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent op

Inconsistent datatype

Inconsistent root

Inconsistent message length

Related Information

MPE_IREDUCE
MPI_ALLREDUCE
MPI_REDUCE_SCATTER
MPI_SCAN
MPI_OP_CREATE

MPI_REDUCE_SCATTER, MPI_Reduce_scatter

Purpose

Applies a reduction operation to the vector sendbuf over the set of tasks specified by comm and scatters the result according to the values in recvcounts.

C Synopsis

#include <mpi.h>
int MPI_Reduce_scatter(void* sendbuf,void* recvbuf,int *recvcounts,
    MPI_Datatype datatype,MPI_Op op,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_REDUCE_SCATTER(CHOICE SENDBUF,CHOICE RECVBUF,
    INTEGER RECVCOUNTS(*),INTEGER DATATYPE,INTEGER OP,
    INTEGER COMM,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

recvbuf

is the starting address of the receive buffer (choice) (OUT)

recvcounts

integer array specifying the number of elements in result distributed to each task. Must be identical on all calling tasks. (IN)

datatype

is the datatype of elements in the input buffer (handle) (IN)

op

is the reduction operation (handle) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_REDUCE_SCATTER first performs an element-wise reduction on a vector of count elements in the send buffer defined by sendbuf, count, and datatype, where count is the sum of recvcounts[i] over all i. Next, the resulting vector is split into n disjoint segments, where n is the number of members in the group. Segment i contains recvcounts[i] elements. The ith segment is sent to task i and stored in the receive buffer defined by recvbuf, recvcounts[i], and datatype.
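
The following sketch reduces a vector of one element per task and scatters one element of the result back to each task; the buffer contents are illustrative only.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int i, rank, ntasks, myresult;
    int *sendbuf, *recvcounts;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    sendbuf = (int *)malloc(ntasks * sizeof(int));
    recvcounts = (int *)malloc(ntasks * sizeof(int));
    for (i = 0; i < ntasks; i++) {
        sendbuf[i] = rank + i;
        recvcounts[i] = 1;  /* identical on every task */
    }

    MPI_Reduce_scatter(sendbuf, &myresult, recvcounts, MPI_INT,
                       MPI_SUM, MPI_COMM_WORLD);
    /* myresult now holds element 'rank' of the element-wise sum. */

    free(sendbuf);
    free(recvcounts);
    MPI_Finalize();
    return 0;
}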

Notes

MPI_REDUCE_SCATTER is functionally equivalent to MPI_REDUCE with count equal to the sum of recvcounts[i] followed by MPI_SCATTERV with sendcounts equal to recvcounts. When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid recvcounts
recvcounts[i] < 0

Invalid datatype

Type not committed

Invalid op

Invalid communicator

Invalid communicator type
must be intracommunicator

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent op

Inconsistent datatype

Related Information

MPE_IREDUCE_SCATTER
MPI_REDUCE
MPI_OP_CREATE

MPI_REQUEST_FREE, MPI_Request_free

Purpose

Marks a request for deallocation.

C Synopsis

#include <mpi.h>
int MPI_Request_free(MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_REQUEST_FREE(INTEGER REQUEST,INTEGER IERROR)

Parameters

request

is a communication request (handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine marks a request object for deallocation and sets request to MPI_REQUEST_NULL. An ongoing communication associated with the request is allowed to complete before deallocation occurs.

Notes

This function marks a communication request as free. Actual deallocation occurs when the request is complete. Active receive requests and collective communication requests cannot be freed.

Errors

Invalid request

Attempt to free receive request

Attempt to free CCL request

MPI not initialized

MPI already finalized

Related Information

MPI_WAIT

MPI_RSEND, MPI_Rsend

Purpose

Performs a blocking ready mode send operation.

C Synopsis

#include <mpi.h>
int MPI_Rsend(void* buf,int count,MPI_Datatype datatype,
    int dest,int tag,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_RSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST,
      INTEGER TAG,INTEGER COMM,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of each send buffer element (handle) (IN)

dest

is the rank of destination (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a blocking ready mode send. It can be started only when a matching receive is posted. If a matching receive is not posted, the operation is erroneous and its outcome is undefined.

The completion of MPI_RSEND indicates that the send buffer can be reused.

Notes

A ready send for which no receive exists produces an asynchronous error at the destination. The error is not detected at the MPI_RSEND, which returns MPI_SUCCESS.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

No receive posted
error flagged at destination

MPI not initialized

MPI already finalized

Related Information

MPI_IRSEND
MPI_SEND

MPI_RSEND_INIT, MPI_Rsend_init

Purpose

Creates a persistent ready mode send request.

C Synopsis

#include <mpi.h>
int MPI_Rsend_init(void* buf,int count,MPI_Datatype datatype,
      int dest,int tag,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_RSEND_INIT(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,
      INTEGER DEST,INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements to be sent (integer) (IN)

datatype

is the type of each element (handle) (IN)

dest

is the rank of the destination task (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_RSEND_INIT creates a persistent communication object for a ready mode send operation. MPI_START or MPI_STARTALL is used to activate the send.

Notes

See MPI_RSEND for additional information.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Related Information

MPI_START
MPI_IRSEND

MPI_SCAN, MPI_Scan

Purpose

Performs a parallel prefix reduction on data distributed across a group.

C Synopsis

#include <mpi.h>
int MPI_Scan(void* sendbuf,void* recvbuf,int count,
    MPI_Datatype datatype,MPI_Op op,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_SCAN(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT,
    INTEGER DATATYPE,INTEGER OP,INTEGER COMM,INTEGER IERROR)

Parameters

sendbuf

is the starting address of the send buffer (choice) (IN)

recvbuf

is the starting address of the receive buffer (choice) (OUT)

count

is the number of elements in sendbuf (integer) (IN)

datatype

is the datatype of elements in sendbuf (handle) (IN)

op

is the reduction operation (handle) (IN)

comm

is the communicator (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_SCAN is used to perform a prefix reduction on data distributed across the group. The operation returns, in the receive buffer of the task with rank i, the reduction of the values in the send buffers of tasks with ranks 0, ..., i (inclusive). The type of operations supported, their semantics, and the restrictions on send and receive buffers are the same as for MPI_REDUCE.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
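
A minimal sketch of an inclusive prefix sum over task ranks: task i receives 0 + 1 + ... + i.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, prefix;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Scan(&rank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("task %d: prefix sum = %d\n", rank, prefix);

    MPI_Finalize();
    return 0;
}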

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid op

Invalid communicator

Invalid communicator type
must be intracommunicator

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent op

Inconsistent datatype

Inconsistent message length

Related Information

MPE_ISCAN
MPI_REDUCE
MPI_OP_CREATE

MPI_SCATTER, MPI_Scatter

Purpose

Distributes individual messages from root to each task in comm.

C Synopsis

#include <mpi.h>
int MPI_Scatter(void* sendbuf,int sendcount,MPI_Datatype sendtype,
    void* recvbuf,int recvcount,MPI_Datatype recvtype,int root,
    MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_SCATTER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
    CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER ROOT,
    INTEGER COMM,INTEGER IERROR)

Parameters

sendbuf

is the address of the send buffer (choice, significant only at root) (IN)

sendcount

is the number of elements to be sent to each task (integer, significant only at root) (IN)

sendtype

is the datatype of the send buffer elements (handle, significant only at root) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcount

is the number of elements in the receive buffer (integer) (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

root

is the rank of the sending task (integer) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_SCATTER distributes individual messages from root to each task in comm. This routine is the inverse operation to MPI_GATHER.

The type signature associated with sendcount, sendtype at the root must be equal to the type signature associated with recvcount, recvtype at all tasks, although the type maps can differ. This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root.

A call in which the specification of counts and types causes any location on the root to be read more than once is erroneous.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid root
root < 0 or root >= groupsize

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Inconsistent message length

Related Information

MPE_ISCATTER
MPI_SCATTERV
MPI_GATHER

MPI_SCATTERV, MPI_Scatterv

Purpose

Distributes individual messages from root to each task in comm. Messages can have different sizes and displacements.

C Synopsis

#include <mpi.h>
int MPI_Scatterv(void* sendbuf,int *sendcounts,
    int *displs,MPI_Datatype sendtype,void* recvbuf,
    int recvcount,MPI_Datatype recvtype,int root,
    MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_SCATTERV(CHOICE SENDBUF,INTEGER SENDCOUNTS(*),INTEGER DISPLS(*),
    INTEGER SENDTYPE,CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,
    INTEGER ROOT,INTEGER COMM,INTEGER IERROR)

Parameters

sendbuf

is the address of the send buffer (choice, significant only at root) (IN)

sendcounts

integer array (of length group size) that contains the number of elements to send to each task (significant only at root) (IN)

displs

integer array (of length group size). Entry i specifies the displacement relative to sendbuf from which to send the outgoing data to task i (significant only at root) (IN)

sendtype

is the datatype of the send buffer elements (handle, significant only at root) (IN)

recvbuf

is the address of the receive buffer (choice) (OUT)

recvcount

is the number of elements in the receive buffer (integer) (IN)

recvtype

is the datatype of the receive buffer elements (handle) (IN)

root

is the rank of the sending task (integer) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine distributes individual messages from root to each task in comm. Messages can have different sizes and displacements.

Because sendcounts is an array, a different amount of data can be sent to each task. The displs array gives you the flexibility to specify where in sendbuf the data sent to each task begins.

The type signature of sendcount[i], sendtype at the root must be equal to the type signature of recvcount, recvtype at task i, although the type maps can differ. This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root.

A call in which the specification of sizes, types, and displacements causes any location on the root to be read more than once is erroneous.

When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
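
The following sketch sends i+1 elements to task i, with the segments laid out back to back in sendbuf; the counts and data values are illustrative only.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int i, rank, ntasks, recvcount;
    int *sendbuf = NULL, *sendcounts = NULL, *displs = NULL, *recvbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    recvcount = rank + 1;  /* task i receives i+1 elements */
    recvbuf = (int *)malloc(recvcount * sizeof(int));

    if (rank == 0) {
        int total = ntasks * (ntasks + 1) / 2, offset = 0;
        sendbuf = (int *)malloc(total * sizeof(int));
        sendcounts = (int *)malloc(ntasks * sizeof(int));
        displs = (int *)malloc(ntasks * sizeof(int));
        for (i = 0; i < total; i++)
            sendbuf[i] = i;
        for (i = 0; i < ntasks; i++) {
            sendcounts[i] = i + 1;
            displs[i] = offset;  /* segment i starts after segment i-1 */
            offset += sendcounts[i];
        }
    }

    MPI_Scatterv(sendbuf, sendcounts, displs, MPI_INT,
                 recvbuf, recvcount, MPI_INT, 0, MPI_COMM_WORLD);

    free(recvbuf);
    if (rank == 0) { free(sendbuf); free(sendcounts); free(displs); }
    MPI_Finalize();
    return 0;
}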

Errors

Invalid communicator

Invalid communicator type
must be intracommunicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid root
root < 0 or root >= groupsize

Unequal message lengths

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Related Information

MPI_SCATTER
MPI_GATHER

MPI_SEND, MPI_Send

Purpose

Performs a blocking standard mode send operation.

C Synopsis

#include <mpi.h>
int MPI_Send(void* buf,int count,MPI_Datatype datatype,
    int dest,int tag,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_SEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST,
      INTEGER TAG,INTEGER COMM,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements in the send buffer (non-negative integer) (IN)

datatype

is the datatype of each send buffer element (handle) (IN)

dest

is the rank of the destination task in comm (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a blocking standard mode send. MPI_SEND causes count elements of type datatype to be sent from buf to the task specified by dest. dest is a task rank which can be any value from 0 to n-1, where n is the number of tasks in comm.
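
A minimal sketch of the canonical send and receive pair, assuming at least two tasks; the tag and value are arbitrary.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 17, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 17, MPI_COMM_WORLD, &status);
        printf("received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}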

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Related Information

MPI_ISEND
MPI_BSEND
MPI_SSEND
MPI_RSEND
MPI_SENDRECV

MPI_SEND_INIT, MPI_Send_init

Purpose

Creates a persistent standard mode send request.

C Synopsis

#include <mpi.h>
int MPI_Send_init(void* buf,int count,MPI_Datatype datatype,
    int dest,int tag,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_SEND_INIT(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST,
      INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements to be sent (integer) (IN)

datatype

is the type of each element (handle) (IN)

dest

is the rank of the destination task (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a persistent communication request for a standard mode send operation, and binds to it all arguments of a send operation. MPI_START or MPI_STARTALL is used to activate the send.
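
The following sketch binds the send arguments once and reuses the request across iterations with MPI_START and MPI_WAIT; the loop count, tag, and data values are arbitrary, and at least two tasks are assumed.

#include <mpi.h>

int main(int argc, char *argv[])
{
    int i, rank, data, sink;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Bind the arguments once, then reuse the request each iteration. */
        MPI_Send_init(&data, 1, MPI_INT, 1, 5, MPI_COMM_WORLD, &request);
        for (i = 0; i < 10; i++) {
            data = i;
            MPI_Start(&request);
            MPI_Wait(&request, &status);  /* request becomes inactive again */
        }
        MPI_Request_free(&request);
    } else if (rank == 1) {
        for (i = 0; i < 10; i++)
            MPI_Recv(&sink, 1, MPI_INT, 0, 5, MPI_COMM_WORLD, &status);
    }

    MPI_Finalize();
    return 0;
}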

Notes

See MPI_SEND for additional information.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Related Information

MPI_START
MPI_ISEND

MPI_SENDRECV, MPI_Sendrecv

Purpose

Performs a blocking send and receive operation.

C Synopsis

#include <mpi.h>
int MPI_Sendrecv(void* sendbuf,int sendcount,MPI_Datatype sendtype,
     int dest,int sendtag,void *recvbuf,int recvcount,MPI_Datatype recvtype,
     int source,int recvtag,MPI_Comm comm,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_SENDRECV(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
      INTEGER DEST,INTEGER SENDTAG,CHOICE RECVBUF,INTEGER RECVCOUNT,
      INTEGER RECVTYPE,INTEGER SOURCE,INTEGER RECVTAG,INTEGER COMM,
      INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

sendbuf

is the initial address of the send buffer (choice) (IN)

sendcount

is the number of elements to be sent (integer) (IN)

sendtype

is the type of elements in the send buffer (handle) (IN)

dest

is the rank of the destination task (integer) (IN)

sendtag

is the send tag (integer) (IN)

recvbuf

is the initial address of the receive buffer (choice) (OUT)

recvcount

is the number of elements to be received (integer) (IN)

recvtype

is the type of elements in the receive buffer (handle) (IN)

source

is the rank of the source task or MPI_ANY_SOURCE (integer) (IN)

recvtag

is the receive tag or MPI_ANY_TAG (integer) (IN)

comm

is the communicator (handle) (IN)

status

is the status object (status) (OUT). Note that in Fortran a single status object is an array of integers.

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a blocking send and receive operation. Send and receive use the same communicator but can use different tags. The send and the receive buffers must be disjoint and can have different lengths and datatypes.
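
A typical use is a deadlock-free shift around a ring of tasks, as in this sketch (the tag and payload are arbitrary):

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, ntasks, right, left, sendval, recvval;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* Send to the right neighbor while receiving from the left one. */
    right = (rank + 1) % ntasks;
    left = (rank + ntasks - 1) % ntasks;
    sendval = rank;
    MPI_Sendrecv(&sendval, 1, MPI_INT, right, 0,
                 &recvval, 1, MPI_INT, left, 0,
                 MPI_COMM_WORLD, &status);
    /* recvval now holds the rank of the left neighbor. */

    MPI_Finalize();
    return 0;
}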

Errors

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid source
source < 0 or source >= groupsize

Invalid communicator

Invalid tag(s)
tag < 0

MPI not initialized

MPI already finalized

Related Information

MPI_SENDRECV_REPLACE
MPI_SEND
MPI_RECV

MPI_SENDRECV_REPLACE, MPI_Sendrecv_replace

Purpose

Performs a blocking send and receive operation using a common buffer.

C Synopsis

#include <mpi.h>
int MPI_Sendrecv_replace(void* buf,int count,MPI_Datatype datatype,
     int dest,int sendtag,int source,int recvtag,MPI_Comm comm,
     MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_SENDRECV_REPLACE(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,
      INTEGER DEST,INTEGER SENDTAG,INTEGER SOURCE,INTEGER RECVTAG,
      INTEGER COMM,INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

buf

is the initial address of the send and receive buffer (choice) (INOUT)

count

is the number of elements to be sent and received (integer) (IN)

datatype

is the type of elements in the send and receive buffer (handle) (IN)

dest

is the rank of the destination task (integer) (IN)

sendtag

is the send message tag (integer) (IN)

source

is the rank of the source task or MPI_ANY_SOURCE (integer) (IN)

recvtag

is the receive message tag or MPI_ANY_TAG (integer) (IN)

comm

is the communicator (handle) (IN)

status

is the status object (status) (OUT). Note that in Fortran a single status object is an array of integers.

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a blocking send and receive operation using a common buffer. Send and receive use the same buffer so the message sent is replaced with the message received.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid source
source < 0 or source >= groupsize

Invalid communicator

Invalid tag(s)
tag < 0

Out of memory

MPI not initialized

MPI already finalized

Related Information

MPI_SENDRECV

MPI_SSEND, MPI_Ssend

Purpose

Performs a blocking synchronous mode send operation.

C Synopsis

#include <mpi.h>
int MPI_Ssend(void* buf,int count,MPI_Datatype datatype,
    int dest,int tag,MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_SSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST,
    INTEGER TAG,INTEGER COMM,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements in the send buffer (integer) (IN)

datatype

is the datatype of each send buffer element (handle) (IN)

dest

is the rank of the destination task (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine is a blocking synchronous mode send. It is a non-local operation: it can be started whether or not a matching receive was posted, but the send completes only when a matching receive is posted and the receive operation has started to receive the message sent by MPI_SSEND.

The completion of MPI_SSEND indicates that the send buffer is freed and also that the receiver has started executing the matching receive. If both sends and receives are blocking operations, the synchronous mode provides synchronous communication.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Related Information

MPI_ISSEND
MPI_SEND

MPI_SSEND_INIT, MPI_Ssend_init

Purpose

Creates a persistent synchronous mode send request.

C Synopsis

#include <mpi.h>
int MPI_Ssend_init(void* buf,int count,MPI_Datatype datatype,
    int dest,int tag,MPI_Comm comm,MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_SSEND_INIT(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST,
      INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)

Parameters

buf

is the initial address of the send buffer (choice) (IN)

count

is the number of elements to be sent (integer) (IN)

datatype

is the type of each element (handle) (IN)

dest

is the rank of the destination task (integer) (IN)

tag

is the message tag (integer) (IN)

comm

is the communicator (handle) (IN)

request

is the communication request (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine creates a persistent communication object for a synchronous mode send operation. MPI_START or MPI_STARTALL can be used to activate the send.

Notes

See MPI_SSEND for additional information.

Errors

Invalid count
count < 0

Invalid datatype

Type not committed

Invalid destination
dest < 0 or dest >= groupsize

Invalid tag
tag < 0

Invalid comm

MPI not initialized

MPI already finalized

Related Information

MPI_START
MPI_ISSEND

MPI_START, MPI_Start

Purpose

Activates a persistent request operation.

C Synopsis

#include <mpi.h>
int MPI_Start(MPI_Request *request);

Fortran Synopsis

include 'mpif.h'
MPI_START(INTEGER REQUEST,INTEGER IERROR)

Parameters

request

is a communication request (handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_START activates a persistent request operation. request is a handle returned by MPI_RECV_INIT, MPI_RSEND_INIT, MPI_SSEND_INIT, MPI_BSEND_INIT or MPI_SEND_INIT. Once the call is made, do not access the communication buffer until the operation completes.

If the request is for a send with ready mode, then a matching receive must be posted before the call is made. If the request is for a buffered send, adequate buffer space must be available.

Errors

Invalid request

Request not persistent

Request already active

Insufficient buffer space
only if buffered send

MPI not initialized

MPI already finalized

Related Information

MPI_STARTALL
MPI_SEND_INIT
MPI_BSEND_INIT
MPI_RSEND_INIT
MPI_SSEND_INIT
MPI_RECV_INIT

MPI_STARTALL, MPI_Startall

Purpose

Activates a collection of persistent request operations.

C Synopsis

#include <mpi.h>
int MPI_Startall(int count,MPI_Request *array_of_requests);

Fortran Synopsis

include 'mpif.h'
MPI_STARTALL(INTEGER COUNT,INTEGER ARRAY_OF_REQUESTS(*),INTEGER IERROR)

Parameters

count

is the list length (integer) (IN)

array_of_requests

is the array of requests (array of handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_STARTALL starts all communications associated with request operations in array_of_requests.

A communication started with MPI_STARTALL is completed by a call to one of the MPI wait or test operations. The request becomes inactive after successful completion but is not deallocated and can be reactivated by an MPI_STARTALL. If a request is for a send with ready mode, then a matching receive must be posted before the call. If a request is for a buffered send, adequate buffer space must be available.

Errors

Invalid count

Invalid request array

Request(s) invalid

Request(s) not persistent

Request(s) active

Insufficient buffer space
only if a buffered send

MPI not initialized

MPI already finalized

Related Information

MPI_START

MPI_TEST, MPI_Test

Purpose

Checks to see if a nonblocking request has completed.

C Synopsis

#include <mpi.h>
int MPI_Test(MPI_Request *request,int *flag,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_TEST(INTEGER REQUEST,INTEGER FLAG,INTEGER STATUS(MPI_STATUS_SIZE),
    INTEGER IERROR)

Parameters

request

is the operation request (handle) (INOUT)

flag

true if operation completed (logical) (OUT)

status

status object (status) (OUT). Note that in Fortran a single status object is an array of integers.

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_TEST returns flag = true if the operation identified by request is complete. The status object is set to contain information on the completed operation. The request object is deallocated and the request handle is set to MPI_REQUEST_NULL. Otherwise, flag = false and the status object is undefined. MPI_TEST is a local operation. The status object can be queried for information about the operation. (See MPI_WAIT.)

You can call MPI_TEST with a null or inactive request argument. The operation returns flag = true and empty status.

The error field of MPI_Status is never modified. The success or failure is indicated by the return code only.

When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.

When you use this routine in a threaded application, make sure the request is tested on only one thread. The request does not have to be tested on the thread that created the request. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
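
A minimal sketch of the polling pattern: task 0 posts a nonblocking receive and calls MPI_TEST between units of other work until flag comes back true (assumes at least two tasks; the tag is arbitrary).

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value = 0, flag = 0;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Irecv(&value, 1, MPI_INT, 1, 3, MPI_COMM_WORLD, &request);
        while (!flag) {
            MPI_Test(&request, &flag, &status);
            /* ... do useful work between polls ... */
        }
        /* request is now MPI_REQUEST_NULL and 'value' is valid. */
    } else if (rank == 1) {
        value = 9;
        MPI_Send(&value, 1, MPI_INT, 0, 3, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}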

Errors

Invalid request handle

Truncation occurred

MPI not initialized

MPI already finalized

Develop mode error if:

Illegal buffer update

Related Information

MPI_TESTALL
MPI_TESTSOME
MPI_TESTANY
MPI_WAIT

MPI_TEST_CANCELLED, MPI_Test_cancelled

Purpose

Tests whether a nonblocking operation was cancelled.

C Synopsis

#include <mpi.h>
int MPI_Test_cancelled(MPI_Status *status,int *flag);

Fortran Synopsis

include 'mpif.h'
MPI_TEST_CANCELLED(INTEGER STATUS(MPI_STATUS_SIZE),INTEGER FLAG,
    INTEGER IERROR)

Parameters

status

is a status object (status) (IN). Note that in Fortran a single status object is an array of integers.

flag

true if the operation was cancelled (logical) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_TEST_CANCELLED returns flag = true if the communication associated with the status object was cancelled successfully. In this case, all other fields of status (such as count or tag) are undefined. Otherwise, flag = false is returned. If a receive operation might be cancelled, you should call MPI_TEST_CANCELLED first to check if the operation was cancelled, before checking on the other fields of the return status.

Notes

In this release, nonblocking I/O operations are never cancelled successfully.

Errors

MPI not initialized

MPI already finalized

Related Information

MPI_CANCEL

MPI_TESTALL, MPI_Testall

Purpose

Tests a collection of nonblocking operations for completion.

C Synopsis

#include <mpi.h>
int MPI_Testall(int count,MPI_Request *array_of_requests,
    int *flag,MPI_Status *array_of_statuses);

Fortran Synopsis

include 'mpif.h'
MPI_TESTALL(INTEGER COUNT,INTEGER ARRAY_OF_REQUESTS(*),INTEGER FLAG,
      INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*),INTEGER IERROR)

Parameters

count

is the number of requests to test (integer) (IN)

array_of_requests

is an array of requests of length count (array of handles) (INOUT)

flag

true if all operations completed (logical) (OUT)

array_of_statuses

is an array of status objects of length count (array of status) (OUT). Note that in Fortran a status object is itself an array.

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine tests a collection of nonblocking operations for completion. flag = true is returned if all operations associated with active handles in the array completed, or when no handle in the list is active.

Each status entry of an active handle request is set to the status of the corresponding operation. A request allocated by a nonblocking operation call is deallocated and the handle is set to MPI_REQUEST_NULL.

Each status entry of a null or inactive handle is set to empty. If one or more requests have not completed, flag = false is returned. No request is modified and the values of the status entries are undefined.

The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.

When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.

When you use this routine in a threaded application, make sure the request is tested on only one thread. The request does not have to be tested on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Errors

Invalid count
count < 0

Invalid request array

Invalid request(s)

Truncation occurred

MPI not initialized

MPI already finalized

Related Information

MPI_TEST
MPI_WAITALL

MPI_TESTANY, MPI_Testany

Purpose

Tests for the completion of any nonblocking operation.

C Synopsis

#include <mpi.h>
int MPI_Testany(int count,MPI_Request *array_of_requests,
    int *index,int *flag,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_TESTANY(INTEGER COUNT,INTEGER ARRAY_OF_REQUESTS(*),INTEGER INDEX,
     INTEGER FLAG,INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

count

is the list length (integer) (IN)

array_of_requests

is the array of requests (array of handles) (INOUT)

index

is the index of the operation that completed, or MPI_UNDEFINED if no operation completed (OUT)

flag

true if one of the operations is complete (logical) (OUT)

status

status object (status) (OUT). Note that in Fortran a single status object is an array of integers.

IERROR

is the Fortran return code. It is always the last argument.

Description

If one of the operations has completed, MPI_TESTANY returns flag = true, sets index to the index of this request in the array, and returns the status of that operation in status. If the request was allocated by a nonblocking operation, the request is deallocated and the handle is set to MPI_REQUEST_NULL.

If none of the operations has completed, it returns flag = false and returns a value of MPI_UNDEFINED in index, and status is undefined. The array can contain null or inactive handles. When the array contains no active handles, then the call returns immediately with flag = true, index = MPI_UNDEFINED, and empty status.

MPI_TESTANY(count, array_of_requests, index, flag, status) has the same effect as the execution of MPI_TEST(array_of_requests[i], flag, status), for i = 0, 1, ..., count-1, in some arbitrary order, until one call returns flag = true, or all fail.

The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.

When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.

When you use this routine in a threaded application, make sure the request is tested on only one thread. The request does not have to be tested on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Notes

The array is indexed from zero in C and from one in Fortran.

Errors

Invalid count
count < 0

Invalid request array

Invalid request(s)

Truncation occurred

MPI not initialized

MPI already finalized

Related Information

MPI_TEST
MPI_WAITANY

MPI_TESTSOME, MPI_Testsome

Purpose

Tests a collection of nonblocking operations for completion.

C Synopsis

#include <mpi.h>
int MPI_Testsome(int incount,MPI_Request *array_of_requests,
    int *outcount,int *array_of_indices,
    MPI_Status *array_of_statuses);

Fortran Synopsis

include 'mpif.h'
MPI_TESTSOME(INTEGER INCOUNT,INTEGER ARRAY_OF_REQUESTS(*),
      INTEGER OUTCOUNT,INTEGER ARRAY_OF_INDICES(*),
      INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*),INTEGER IERROR)

Parameters

incount

is the length of array_of_requests (integer) (IN)

array_of_requests

is the array of requests (array of handles) (INOUT)

outcount

is the number of completed requests (integer) (OUT)

array_of_indices

is the array of indices of operations that completed (array of integers) (OUT)

array_of_statuses

is the array of status objects for operations that completed (array of status) (OUT). Note that in Fortran a status object is itself an array.

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine tests a collection of nonblocking operations for completion. MPI_TESTSOME behaves like MPI_WAITSOME except that MPI_TESTSOME is a local operation and returns immediately. outcount = 0 is returned when no operation has completed.

When a request for a receive repeatedly appears in a list of requests passed to MPI_TESTSOME and a matching send is posted, then the receive eventually succeeds unless the send is satisfied by another receive. This fairness requirement also applies to send requests and to I/O requests.

The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.

When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.

When you use this routine in a threaded application, make sure the request is tested on only one thread. The request does not have to be tested on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
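
As an illustrative sketch (not part of the formal binding), the following loop drains completions in batches while doing other work between polls; process and do_local_work are placeholders for application code, and NREQ requests are assumed to have been posted into reqs.

#define NREQ 4
MPI_Request reqs[NREQ];        /* posted earlier */
MPI_Status  statuses[NREQ];
int indices[NREQ], outcount, done = 0, i;

while (done < NREQ) {
    MPI_Testsome(NREQ, reqs, &outcount, indices, statuses);
    for (i = 0; i < outcount; i++)
        process(indices[i]);   /* placeholder completion handler */
    done += outcount;
    do_local_work();           /* placeholder: overlap with communication */
}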

Errors

Invalid count
count < 0

Invalid request array

Invalid request(s)

Truncation occurred

MPI not initialized

MPI already finalized

Related Information

MPI_TEST
MPI_WAITSOME

MPI_TOPO_TEST, MPI_Topo_test

Purpose

Returns the type of virtual topology associated with a communicator.

C Synopsis

#include <mpi.h>
int MPI_Topo_test(MPI_Comm comm,int *status);

Fortran Synopsis

include 'mpif.h'
MPI_TOPO_TEST(INTEGER COMM,INTEGER STATUS,INTEGER IERROR)

Parameters

comm

is the communicator (handle) (IN)

status

is the topology type of communicator comm (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the type of virtual topology associated with a communicator. The output of status will be as follows:

MPI_GRAPH
graph topology

MPI_CART
cartesian topology

MPI_UNDEFINED
no topology
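
For example, a fragment might verify the topology type before calling the cartesian inquiry routines (an illustrative sketch; comm is assumed to be an existing communicator):

int topo_type;

MPI_Topo_test(comm, &topo_type);
if (topo_type == MPI_CART) {
    /* cartesian inquiry routines such as MPI_CART_GET may be used */
}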

Errors

MPI not initialized

MPI already finalized

Invalid communicator

Related Information

MPI_CART_CREATE
MPI_GRAPH_CREATE

MPI_TYPE_COMMIT, MPI_Type_commit

Purpose

Makes a datatype ready for use in communication.

C Synopsis

#include <mpi.h>
int MPI_Type_commit(MPI_Datatype *datatype);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_COMMIT(INTEGER DATATYPE,INTEGER IERROR)

Parameters

datatype

is the datatype that is to be committed (handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

A datatype object must be committed before you can use it in communication. You can use an uncommitted datatype as an argument in datatype constructors.

This routine makes a datatype ready for use in communication. The datatype is the formal description of a communication buffer. It is not the content of the buffer.

Once the datatype is committed it can be repeatedly reused to communicate the changing contents of a buffer or buffers with different starting addresses.
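
The following sketch illustrates the construct-commit-use-free life cycle (buf_a, buf_b, dest, tag, and comm are assumed to exist):

MPI_Datatype block;

MPI_Type_contiguous(1024, MPI_BYTE, &block);     /* construct */
MPI_Type_commit(&block);                         /* make usable */
MPI_Send(buf_a, 1, block, dest, tag, comm);      /* reuse with ...        */
MPI_Send(buf_b, 1, block, dest, tag, comm);      /* ... different buffers */
MPI_Type_free(&block);                           /* when no longer needed */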

Notes

Basic datatypes are precommitted. It is not an error to call MPI_TYPE_COMMIT on a type that is already committed. Types returned by MPI_TYPE_GET_CONTENTS may or may not already be committed.

Errors

Invalid datatype

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_CONTIGUOUS
MPI_TYPE_CREATE_DARRAY
MPI_TYPE_CREATE_SUBARRAY
MPI_TYPE_FREE
MPI_TYPE_GET_CONTENTS
MPI_TYPE_HINDEXED
MPI_TYPE_HVECTOR
MPI_TYPE_INDEXED
MPI_TYPE_STRUCT
MPI_TYPE_VECTOR

MPI_TYPE_CONTIGUOUS, MPI_Type_contiguous

Purpose

Returns a new datatype that represents the concatenation of count instances of oldtype.

C Synopsis

#include <mpi.h>
int MPI_Type_contiguous(int count,MPI_Datatype oldtype,
    MPI_Datatype *newtype);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_CONTIGUOUS(INTEGER COUNT,INTEGER OLDTYPE,INTEGER NEWTYPE,
      INTEGER IERROR)

Parameters

count

is the replication count (non-negative integer) (IN)

oldtype

is the old datatype (handle) (IN)

newtype

is the new datatype (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns a new datatype that represents the concatenation of count instances of oldtype. MPI_TYPE_CONTIGUOUS allows replication of a datatype into contiguous locations.
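
As an illustrative sketch, the two sends below transfer identical data; buf, dest, tag, and comm are assumed to exist:

MPI_Datatype triple;

MPI_Type_contiguous(3, MPI_DOUBLE, &triple);
MPI_Type_commit(&triple);
MPI_Send(buf, 1, triple, dest, tag, comm);      /* one element of triple */
MPI_Send(buf, 3, MPI_DOUBLE, dest, tag, comm);  /* equals three doubles  */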

Notes

newtype must be committed using MPI_TYPE_COMMIT before being used for communication.

Errors

Invalid count
count < 0

Undefined oldtype

Oldtype is MPI_LB, MPI_UB, or MPI_PACKED

Stride overflow

Extent overflow

Size overflow

Upper or lower bound overflow

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_COMMIT
MPI_TYPE_FREE
MPI_TYPE_GET_CONTENTS
MPI_TYPE_GET_ENVELOPE

MPI_TYPE_CREATE_DARRAY, MPI_Type_create_darray

Purpose

Generates the datatypes corresponding to the distribution of an ndims-dimensional array of oldtype elements onto an ndims-dimensional grid of logical tasks.

C Synopsis

#include <mpi.h>
int MPI_Type_create_darray (int size,int rank,int ndims,
    int array_of_gsizes[],int array_of_distribs[],
    int array_of_dargs[],int array_of_psizes[],
    int order,MPI_Datatype oldtype,MPI_Datatype *newtype);
    

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_CREATE_DARRAY (INTEGER SIZE,INTEGER RANK,INTEGER NDIMS,
    INTEGER ARRAY_OF_GSIZES(*),INTEGER ARRAY_OF_DISTRIBS(*),
    INTEGER ARRAY_OF_DARGS(*),INTEGER ARRAY_OF_PSIZES(*),
    INTEGER ORDER,INTEGER OLDTYPE,INTEGER NEWTYPE,INTEGER IERROR)
    

Parameters

size

is the size of the task group (positive integer)(IN)

rank

is the rank in the task group (nonnegative integer)(IN)

ndims

is the number of array dimensions as well as task grid dimensions (positive integer)(IN)

array_of_gsizes

is the number of elements of type oldtype in each dimension of the global array (array of positive integers)(IN)

array_of_distribs

is the distribution of the global array in each dimension (array of state)(IN)

array_of_dargs

is the distribution argument in each dimension of the global array (array of positive integers)(IN)

array_of_psizes

is the size of the logical grid of tasks in each dimension (array of positive integers)(IN)

order

is the array storage order flag (state)(IN)

oldtype

is the old datatype (handle)(IN)

newtype

is the new datatype (handle)(OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_TYPE_CREATE_DARRAY generates the datatypes corresponding to an HPF-like distribution of an ndims-dimensional array of oldtype elements onto an ndims-dimensional grid of logical tasks. The ordering of tasks in the task grid is assumed to be row-major. See The High Performance Fortran Handbook for more information.
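
For illustration, the sketch below describes one task's piece of a 100x100 global array of doubles, block-distributed over a 2x2 task grid; it assumes the job runs with exactly four tasks:

int rank;
int gsizes[2]   = {100, 100};
int distribs[2] = {MPI_DISTRIBUTE_BLOCK, MPI_DISTRIBUTE_BLOCK};
int dargs[2]    = {MPI_DISTRIBUTE_DFLT_DARG, MPI_DISTRIBUTE_DFLT_DARG};
int psizes[2]   = {2, 2};   /* 2x2 grid; product must equal size (4) */
MPI_Datatype filetype;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Type_create_darray(4, rank, 2, gsizes, distribs, dargs, psizes,
                       MPI_ORDER_C, MPI_DOUBLE, &filetype);
MPI_Type_commit(&filetype);
/* filetype now selects this task's block of the global array, as
   needed, for example, by MPI_FILE_SET_VIEW */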

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid group size
size must be a positive integer

Invalid rank
rank must be a nonnegative integer

Invalid dimension count
ndims must be a positive integer

Invalid array element
Each element of array_of_gsizes and array_of_psizes must be a positive integer

Invalid distribution element
Each element of array_of_distribs must be either MPI_DISTRIBUTE_BLOCK, MPI_DISTRIBUTE_CYCLIC, or MPI_DISTRIBUTE_NONE

Invalid darg element
Each element of array_of_dargs must be a positive integer or equal to MPI_DISTRIBUTE_DFLT_DARG

Invalid order
order must be either MPI_ORDER_C or MPI_ORDER_FORTRAN

MPI_DATATYPE_NULL not valid
oldtype cannot be equal to MPI_DATATYPE_NULL

Undefined datatype
oldtype is not a defined datatype

Invalid datatype
oldtype cannot be MPI_LB, MPI_UB or MPI_PACKED

Invalid grid size
The product of the elements of array_of_psizes must be equal to size

Invalid block distribution
The condition (array_of_psizes[i] * array_of_dargs[i]) >= array_of_gsizes[i] must be satisfied for all indices i between 0 and ndims-1 for which a block distribution is specified

Invalid psize element
Each element of array_of_psizes must be equal to 1 if the same element of array_of_distribs has a value of MPI_DISTRIBUTE_NONE

Stride overflow

Extent overflow

Size overflow

Upper or lower bound overflow

Related Information

MPI_TYPE_COMMIT
MPI_TYPE_FREE
MPI_TYPE_GET_CONTENTS
MPI_TYPE_GET_ENVELOPE

MPI_TYPE_CREATE_SUBARRAY, MPI_Type_create_subarray

Purpose

Returns a new datatype that represents an ndims-dimensional subarray of an ndims-dimensional array.

C Synopsis

#include <mpi.h>
int MPI_Type_create_subarray (int ndims,int array_of_sizes[],
    int array_of_subsizes[],int array_of_starts[],
    int order,MPI_Datatype oldtype,MPI_Datatype *newtype);
    

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_CREATE_SUBARRAY (INTEGER NDIMS,INTEGER ARRAY_OF_SIZES(*),
    INTEGER ARRAY_OF_SUBSIZES(*),INTEGER ARRAY_OF_STARTS(*),
    INTEGER ORDER,INTEGER OLDTYPE,INTEGER NEWTYPE,INTEGER IERROR)
    

Parameters

ndims

is the number of array dimensions (positive integer)(IN)

array_of_sizes

is the number of elements of type oldtype in each dimension of the full array (array of positive integers)(IN)

array_of_subsizes

is the number of elements of type oldtype in each dimension of the subarray (array of positive integers)(IN)

array_of_starts

is the starting coordinates of the subarray in each dimension (array of nonnegative integers)(IN)

order

is the array storage order flag (state)(IN)

oldtype

is the array element datatype (handle)(IN)

newtype

is the new datatype (handle)(OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_TYPE_CREATE_SUBARRAY creates an MPI datatype describing an ndims-dimensional subarray of an ndims-dimensional array. The subarray may be situated anywhere within the full array and may be of any nonzero size up to the size of the full array, as long as it is confined within that array.

This function facilitates the creation of filetypes with which tasks holding block-distributed pieces of an array can access a single file that contains the full array.
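
For illustration, the sketch below describes the 8x8 interior of a 10x10 local array of doubles, skipping a one-element ghost region on each side:

int sizes[2]    = {10, 10};   /* full local array            */
int subsizes[2] = {8, 8};     /* interior region             */
int starts[2]   = {1, 1};     /* skip one ghost cell per dim */
MPI_Datatype interior;

MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_C,
                         MPI_DOUBLE, &interior);
MPI_Type_commit(&interior);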

Errors

Fatal Errors:

MPI not initialized

MPI already finalized

Invalid dimension count
ndims must be a positive integer

Invalid array element
Each element of array_of_sizes and array_of_subsizes must be a positive integer, and each element of array_of_starts must be a nonnegative integer

Invalid order
order must be either MPI_ORDER_C or MPI_ORDER_FORTRAN

MPI_DATATYPE_NULL not valid
oldtype cannot be equal to MPI_DATATYPE_NULL

Undefined datatype
oldtype is not a defined datatype

Invalid datatype
oldtype cannot be MPI_LB, MPI_UB or MPI_PACKED

Invalid subarray size
Each element of array_of_subsizes cannot be greater than the same element of array_of_sizes

Invalid start element
The subarray must be fully contained within the full array.

Stride overflow

Extent overflow

Size overflow

Upper or lower bound overflow

Related Information

MPI_TYPE_COMMIT
MPI_TYPE_FREE
MPI_TYPE_GET_CONTENTS
MPI_TYPE_GET_ENVELOPE

MPI_TYPE_EXTENT, MPI_Type_extent

Purpose

Returns the extent of any defined datatype.

C Synopsis

#include <mpi.h>
int MPI_Type_extent(MPI_Datatype datatype,MPI_Aint *extent);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_EXTENT(INTEGER DATATYPE,INTEGER EXTENT,INTEGER IERROR)

Parameters

datatype

is the datatype (handle) (IN)

extent

is the datatype extent (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the extent of a datatype. The extent of a datatype is the span from the first byte to the last byte occupied by entries in this datatype and rounded up to satisfy alignment requirements.
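
For illustration, the following sketch contrasts extent with size for a strided type (the values shown assume a 4-byte int):

MPI_Datatype vtype;
MPI_Aint extent;
int size;

MPI_Type_vector(2, 1, 4, MPI_INT, &vtype);  /* ints at bytes 0 and 16 */
MPI_Type_extent(vtype, &extent);            /* 20: first to last byte */
MPI_Type_size(vtype, &size);                /* 8: data bytes only     */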

Notes

Rounding for alignment is not done when MPI_UB is used to define the datatype. Types defined with MPI_LB, MPI_UB, or with any type that itself contains MPI_LB or MPI_UB may return an extent which is not directly related to the layout of data in memory. Refer to MPI_TYPE_STRUCT for more information on MPI_LB and MPI_UB.

Errors

Invalid datatype

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_SIZE

MPI_TYPE_FREE, MPI_Type_free

Purpose

Marks a datatype for deallocation.

C Synopsis

#include <mpi.h>
int MPI_Type_free(MPI_Datatype *datatype);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_FREE(INTEGER DATATYPE,INTEGER IERROR)

Parameters

datatype

is the datatype to be freed (handle) (INOUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine marks the datatype object associated with datatype for deallocation. It sets datatype to MPI_DATATYPE_NULL. All communication currently using this datatype completes normally. Derived datatypes defined from the freed datatype are not affected.

Notes

MPI_FILE_GET_VIEW and MPI_TYPE_GET_CONTENTS both return new references or handles for existing MPI_Datatypes. Each new reference to a derived type should be freed after the reference is no longer needed. New references to named types must not be freed. You can identify a derived datatype by calling MPI_TYPE_GET_ENVELOPE and checking that the combiner is not MPI_COMBINER_NAMED. MPI cannot discard a derived MPI_Datatype if there are any references to it that have not been freed by MPI_TYPE_FREE.

Errors

Invalid datatype

Predefined datatype

Type is already free

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_COMMIT
MPI_FILE_GET_VIEW
MPI_TYPE_GET_CONTENTS
MPI_TYPE_GET_ENVELOPE

MPI_TYPE_GET_CONTENTS, MPI_Type_get_contents

Purpose

Obtains the arguments used in the creation of the datatype.

C Synopsis

#include <mpi.h>
int MPI_Type_get_contents(MPI_Datatype datatype,
    int max_integers, int max_addresses, int max_datatypes,
    int array_of_integers[],
    MPI_Aint array_of_addresses[],
    MPI_Datatype array_of_datatypes[]);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_GET_CONTENTS(INTEGER DATATYPE, INTEGER MAX_INTEGERS,
    INTEGER MAX_ADDRESSES, INTEGER MAX_DATATYPES,
    INTEGER ARRAY_OF_INTEGERS(*), INTEGER ARRAY_OF_ADDRESSES(*),
    INTEGER ARRAY_OF_DATATYPES(*), INTEGER IERROR)

Parameters

datatype

is the datatype to access (handle) (IN)

max_integers

is the number of elements in array_of_integers (non-negative integer) (IN)

max_addresses

is the number of elements in the array_of_addresses (non-negative integer) (IN)

max_datatypes

is the number of elements in array_of_datatypes (non-negative integer) (IN)

array_of_integers

contains the integer arguments used in constructing the datatype (array of integers) (OUT)

array_of_addresses

contains the address arguments used in constructing the datatype (array of integers) (OUT)

array_of_datatypes

contains the datatype arguments used in constructing the datatype (array of handles) (OUT)

If the combiner is MPI_COMBINER_NAMED, it is erroneous to call MPI_TYPE_GET_CONTENTS.

Table 4 lists the combiners and their constructor arguments, shown with their lowercase names. In the location columns, i, a, and d denote the C output arrays array_of_integers, array_of_addresses, and array_of_datatypes; I, A, and D denote the corresponding Fortran arrays. For each combiner, ni, na, and nd give the number of entries returned in each of the three arrays.

Table 4. Combiners and Constructor Arguments

MPI_COMBINER_DUP (ni=0, na=0, nd=1)
    oldtype                    d[0]                          D(1)

MPI_COMBINER_CONTIGUOUS (ni=1, na=0, nd=1)
    count                      i[0]                          I(1)
    oldtype                    d[0]                          D(1)

MPI_COMBINER_VECTOR (ni=3, na=0, nd=1)
    count                      i[0]                          I(1)
    blocklength                i[1]                          I(2)
    stride                     i[2]                          I(3)
    oldtype                    d[0]                          D(1)

MPI_COMBINER_HVECTOR, MPI_COMBINER_HVECTOR_INTEGER (ni=2, na=1, nd=1)
    count                      i[0]                          I(1)
    blocklength                i[1]                          I(2)
    stride                     a[0]                          A(1)
    oldtype                    d[0]                          D(1)

MPI_COMBINER_INDEXED (ni=2*count+1, na=0, nd=1)
    count                      i[0]                          I(1)
    array_of_blocklengths      i[1] to i[i[0]]               I(2) to I(I(1)+1)
    array_of_displacements     i[i[0]+1] to i[2*i[0]]        I(I(1)+2) to I(2*I(1)+1)
    oldtype                    d[0]                          D(1)

MPI_COMBINER_HINDEXED, MPI_COMBINER_HINDEXED_INTEGER (ni=count+1, na=count, nd=1)
    count                      i[0]                          I(1)
    array_of_blocklengths      i[1] to i[i[0]]               I(2) to I(I(1)+1)
    array_of_displacements     a[0] to a[i[0]-1]             A(1) to A(I(1))
    oldtype                    d[0]                          D(1)

MPI_COMBINER_INDEXED_BLOCK (ni=count+2, na=0, nd=1)
    count                      i[0]                          I(1)
    blocklength                i[1]                          I(2)
    array_of_displacements     i[2] to i[i[0]+1]             I(3) to I(I(1)+2)
    oldtype                    d[0]                          D(1)

MPI_COMBINER_STRUCT, MPI_COMBINER_STRUCT_INTEGER (ni=count+1, na=count, nd=count)
    count                      i[0]                          I(1)
    array_of_blocklengths      i[1] to i[i[0]]               I(2) to I(I(1)+1)
    array_of_displacements     a[0] to a[i[0]-1]             A(1) to A(I(1))
    array_of_types             d[0] to d[i[0]-1]             D(1) to D(I(1))

MPI_COMBINER_SUBARRAY (ni=3*ndims+2, na=0, nd=1)
    ndims                      i[0]                          I(1)
    array_of_sizes             i[1] to i[i[0]]               I(2) to I(I(1)+1)
    array_of_subsizes          i[i[0]+1] to i[2*i[0]]        I(I(1)+2) to I(2*I(1)+1)
    array_of_starts            i[2*i[0]+1] to i[3*i[0]]      I(2*I(1)+2) to I(3*I(1)+1)
    order                      i[3*i[0]+1]                   I(3*I(1)+2)
    oldtype                    d[0]                          D(1)

MPI_COMBINER_DARRAY (ni=4*ndims+4, na=0, nd=1)
    size                       i[0]                          I(1)
    rank                       i[1]                          I(2)
    ndims                      i[2]                          I(3)
    array_of_gsizes            i[3] to i[i[2]+2]             I(4) to I(I(3)+3)
    array_of_distribs          i[i[2]+3] to i[2*i[2]+2]      I(I(3)+4) to I(2*I(3)+3)
    array_of_dargs             i[2*i[2]+3] to i[3*i[2]+2]    I(2*I(3)+4) to I(3*I(3)+3)
    array_of_psizes            i[3*i[2]+3] to i[4*i[2]+2]    I(3*I(3)+4) to I(4*I(3)+3)
    order                      i[4*i[2]+3]                   I(4*I(3)+4)
    oldtype                    d[0]                          D(1)

MPI_COMBINER_F90_REAL, MPI_COMBINER_F90_COMPLEX (ni=2, na=0, nd=0)
    p                          i[0]                          I(1)
    r                          i[1]                          I(2)

MPI_COMBINER_F90_INTEGER (ni=1, na=0, nd=0)
    r                          i[0]                          I(1)

MPI_COMBINER_RESIZED (ni=0, na=2, nd=1)
    lb                         a[0]                          A(1)
    extent                     a[1]                          A(2)
    oldtype                    d[0]                          D(1)

Description

MPI_TYPE_GET_CONTENTS identifies the combiner and returns the arguments that were used with this combiner to create the datatype of interest. A call to MPI_TYPE_GET_CONTENTS is normally preceded by a call to MPI_TYPE_GET_ENVELOPE to discover whether the type of interest is one that can be decoded and if so, how large the output arrays must be. An MPI_COMBINER_NAMED datatype is a predefined type that may not be decoded. The datatype handles returned in array_of_datatypes can include both named and derived types. The derived types may or may not already be committed. Each entry in array_of_datatypes is a separate datatype handle that must eventually be freed if it represents a derived type.
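
The following sketch decodes a datatype that the envelope shows was built by MPI_TYPE_VECTOR; dtype is assumed to be an existing datatype handle:

int ni, na, nd, combiner;

MPI_Type_get_envelope(dtype, &ni, &na, &nd, &combiner);
if (combiner == MPI_COMBINER_VECTOR) {
    int ints[3];            /* ni == 3: count, blocklength, stride */
    MPI_Aint addrs[1];      /* na == 0: unused                     */
    MPI_Datatype types[1];  /* nd == 1: oldtype                    */
    int ni2, na2, nd2, comb2;

    MPI_Type_get_contents(dtype, 3, 0, 1, ints, addrs, types);
    /* free the returned oldtype handle only if it is derived */
    MPI_Type_get_envelope(types[0], &ni2, &na2, &nd2, &comb2);
    if (comb2 != MPI_COMBINER_NAMED)
        MPI_Type_free(&types[0]);
}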

Notes

An MPI type constructor, such as MPI_TYPE_CONTIGUOUS, creates a datatype object within MPI and gives a handle for that object to the caller. This handle represents one reference to the object. In this implementation of MPI, the MPI datatypes obtained with calls to MPI_TYPE_GET_CONTENTS are new handles for the existing datatype objects. The number of handles (references) given to the user is tracked by a reference counter in the object. MPI cannot discard a datatype object unless MPI_TYPE_FREE has been called on every handle the user has obtained.

The use of reference-counted objects is encouraged, but not mandated, by the MPI standard. Another MPI implementation may create new objects instead. The user should be aware of a side effect of the reference count approach. Suppose mytype was created by a call to MPI_TYPE_VECTOR and used so that a later call to MPI_TYPE_GET_CONTENTS returns its handle in hertype. Because both handles identify the same datatype object, attribute changes made with either handle are changes in the single object. That object will exist at least until MPI_TYPE_FREE has been called on both mytype and hertype. Freeing either handle alone will leave the object intact and the other handle will remain valid.

Errors

Invalid datatype

Predefined datatype

Maximum array size is not big enough

MPI already finalized

MPI not initialized

Related Information

MPI_TYPE_COMMIT
MPI_TYPE_FREE
MPI_TYPE_GET_ENVELOPE

MPI_TYPE_GET_ENVELOPE, MPI_Type_get_envelope

Purpose

Determines the constructor that was used to create the datatype and the amount of data that will be returned by a call to MPI_TYPE_GET_CONTENTS for the same datatype.

C Synopsis

#include <mpi.h>
int MPI_Type_get_envelope(MPI_Datatype datatype, int *num_integers,
    int *num_addresses, int *num_datatypes, int *combiner);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_GET_ENVELOPE(INTEGER DATATYPE, INTEGER NUM_INTEGERS,
    INTEGER NUM_ADDRESSES, INTEGER NUM_DATATYPES, INTEGER COMBINER,
    INTEGER IERROR)

Parameters

datatype

is the datatype to access (handle) (IN)

num_integers

is the number of input integers used in the call constructing combiner (non-negative integer) (OUT)

num_addresses

is the number of input addresses used in the call constructing combiner (non-negative integer) (OUT)

num_datatypes

is the number of input datatypes used in the call constructing combiner (non-negative integer) (OUT)

combiner

is the combiner (state) (OUT)

Table 5 lists the combiners and the calls associated with them.

Table 5. Combiners and Calls
Combiner What It Represents
MPI_COMBINER_NAMED A named, predefined datatype
MPI_COMBINER_DUP MPI_TYPE_DUP
MPI_COMBINER_CONTIGUOUS MPI_TYPE_CONTIGUOUS
MPI_COMBINER_VECTOR MPI_TYPE_VECTOR
MPI_COMBINER_HVECTOR MPI_TYPE_HVECTOR from C and in some cases Fortran or MPI_TYPE_CREATE_HVECTOR
MPI_COMBINER_HVECTOR_INTEGER MPI_TYPE_HVECTOR from Fortran
MPI_COMBINER_INDEXED MPI_TYPE_INDEXED
MPI_COMBINER_HINDEXED MPI_TYPE_HINDEXED from C and in some cases Fortran or MPI_TYPE_CREATE_HINDEXED
MPI_COMBINER_HINDEXED_INTEGER MPI_TYPE_HINDEXED from Fortran
MPI_COMBINER_INDEXED_BLOCK MPI_TYPE_CREATE_INDEXED_BLOCK
MPI_COMBINER_STRUCT MPI_TYPE_STRUCT from C and in some cases Fortran or MPI_TYPE_CREATE_STRUCT
MPI_COMBINER_STRUCT_INTEGER MPI_TYPE_STRUCT from Fortran
MPI_COMBINER_SUBARRAY MPI_TYPE_CREATE_SUBARRAY
MPI_COMBINER_DARRAY MPI_TYPE_CREATE_DARRAY
MPI_COMBINER_F90_REAL MPI_TYPE_CREATE_F90_REAL
MPI_COMBINER_F90_COMPLEX MPI_TYPE_CREATE_F90_COMPLEX
MPI_COMBINER_F90_INTEGER MPI_TYPE_CREATE_F90_INTEGER
MPI_COMBINER_RESIZED MPI_TYPE_CREATE_RESIZED

Description

MPI_TYPE_GET_ENVELOPE provides information about an unknown datatype which will allow it to be decoded if appropriate. This includes identifying the combiner used to create the unknown type and the sizes that the arrays must be if MPI_TYPE_GET_CONTENTS is to be called. MPI_TYPE_GET_ENVELOPE is also used to determine whether a datatype handle returned by MPI_TYPE_GET_CONTENTS or MPI_FILE_GET_VIEW is for a predefined, named datatype. When the combiner is MPI_COMBINER_NAMED, it is an error to call MPI_TYPE_GET_CONTENTS or MPI_TYPE_FREE with the datatype.

Errors

Invalid datatype

MPI already finalized

MPI not initialized

Related Information

MPI_TYPE_FREE
MPI_TYPE_GET_CONTENTS

MPI_TYPE_HINDEXED, MPI_Type_hindexed

Purpose

Returns a new datatype that represents count blocks. Each block is defined by an entry in array_of_blocklengths and array_of_displacements. Displacements are expressed in bytes.

C Synopsis

#include <mpi.h>
int MPI_Type_hindexed(int count,int *array_of_blocklengths,
    MPI_Aint *array_of_displacements,MPI_Datatype oldtype,
    MPI_Datatype *newtype);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_HINDEXED(INTEGER COUNT,INTEGER ARRAY_OF_BLOCKLENGTHS(*),
    INTEGER ARRAY_OF_DISPLACEMENTS(*),INTEGER OLDTYPE,INTEGER NEWTYPE,
    INTEGER IERROR)

Parameters

count

is the number of blocks and the number of entries in array_of_displacements and array_of_blocklengths (non-negative integer) (IN)

array_of_blocklengths

is the number of instances of oldtype for each block (array of non-negative integers) (IN)

array_of_displacements

is a byte displacement for each block (array of integer) (IN)

oldtype

is the old datatype (handle) (IN)

newtype

is the new datatype (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns a new datatype that represents count blocks. Each is defined by an entry in array_of_blocklengths and array_of_displacements. Displacements are expressed in bytes rather than in multiples of the oldtype extent as in MPI_TYPE_INDEXED.
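
For illustration, the sketch below builds a type with two blocks of doubles at explicit byte offsets (the offsets are arbitrary for the example):

int          blocklens[2] = {2, 1};
MPI_Aint     displs[2]    = {0, 24};   /* byte displacements */
MPI_Datatype two_blocks;

MPI_Type_hindexed(2, blocklens, displs, MPI_DOUBLE, &two_blocks);
MPI_Type_commit(&two_blocks);
/* one element: two doubles at byte 0 and one double at byte 24 */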

Notes

newtype must be committed using MPI_TYPE_COMMIT before being used for communication.

Errors

Invalid count
count < 0

Invalid blocklength
blocklength[i] < 0

Undefined oldtype

Oldtype is MPI_LB, MPI_UB or MPI_PACKED

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_COMMIT
MPI_TYPE_FREE
MPI_TYPE_GET_CONTENTS
MPI_TYPE_GET_ENVELOPE
MPI_TYPE_INDEXED

MPI_TYPE_HVECTOR, MPI_Type_hvector

Purpose

Returns a new datatype that represents equally-spaced blocks. The spacing between the start of each block is given in bytes.

C Synopsis

#include <mpi.h>
int MPI_Type_hvector(int count,int blocklength,MPI_Aint stride,
    MPI_Datatype oldtype,MPI_Datatype *newtype);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_HVECTOR(INTEGER COUNT,INTEGER BLOCKLENGTH,INTEGER STRIDE,
    INTEGER OLDTYPE,INTEGER NEWTYPE,INTEGER IERROR)

Parameters

count

is the number of blocks (non-negative integer) (IN)

blocklength

is the number of oldtype instances in each block (non-negative integer) (IN)

stride

is an integer specifying the number of bytes between the start of each block (IN)

oldtype

is the old datatype (handle) (IN)

newtype

is the new datatype (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns a new datatype that represents count equally spaced blocks. Each block is a concatenation of blocklength instances of oldtype. The origins of the blocks are spaced stride units apart where the counting unit is one byte.
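
A byte stride is useful when picking one member out of an array of structures, as in this illustrative sketch (dest, tag, and comm are assumed to exist):

struct particle { double coord[3]; int charge; };
struct particle plist[50];
MPI_Datatype charges;

/* one 'charge' from each of 50 consecutive structures; the stride
   between blocks is the structure size in bytes */
MPI_Type_hvector(50, 1, (MPI_Aint)sizeof(struct particle),
                 MPI_INT, &charges);
MPI_Type_commit(&charges);
MPI_Send(&plist[0].charge, 1, charges, dest, tag, comm);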

Notes

newtype must be committed using MPI_TYPE_COMMIT before being used for communication.

Errors

Invalid count
count < 0

Invalid blocklength
blocklength < 0

Undefined oldtype

Oldtype is MPI_LB, MPI_UB or MPI_PACKED

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_COMMIT
MPI_TYPE_FREE
MPI_TYPE_GET_CONTENTS
MPI_TYPE_GET_ENVELOPE
MPI_TYPE_VECTOR

MPI_TYPE_INDEXED, MPI_Type_indexed

Purpose

Returns a new datatype that represents count blocks. Each block is defined by an entry in array_of_blocklengths and array_of_displacements. Displacements are expressed in units of extent(oldtype).

C Synopsis

#include <mpi.h>
int MPI_Type_indexed(int count,int *array_of_blocklengths,
    int *array_of_displacements,MPI_Datatype oldtype,
    MPI_Datatype *newtype);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_INDEXED(INTEGER COUNT,INTEGER ARRAY_OF_BLOCKLENGTHS(*),
    INTEGER ARRAY_OF_DISPLACEMENTS(*),INTEGER OLDTYPE,INTEGER NEWTYPE,
    INTEGER IERROR)

Parameters

count

is the number of blocks and the number of entries in array_of_displacements and array_of_blocklengths (non-negative integer) (IN)

array_of_blocklengths

is the number of instances of oldtype in each block (array of non-negative integers) (IN)

array_of_displacements

is the displacement of each block in units of extent(oldtype) (array of integer) (IN)

oldtype

is the old datatype (handle) (IN)

newtype

is the new datatype (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns a new datatype that represents count blocks. Each is defined by an entry in array_of_blocklengths and array_of_displacements. Displacements are expressed in units of extent(oldtype).
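
For illustration, the sketch below describes the upper triangle (including the diagonal) of an N x N matrix of doubles stored in row-major order:

#define N 4
int blocklens[N], displs[N], i;
MPI_Datatype upper;

for (i = 0; i < N; i++) {
    blocklens[i] = N - i;      /* row i keeps N-i elements ... */
    displs[i]    = i * N + i;  /* ... starting at the diagonal */
}
MPI_Type_indexed(N, blocklens, displs, MPI_DOUBLE, &upper);
MPI_Type_commit(&upper);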

Notes

newtype must be committed using MPI_TYPE_COMMIT before being used for communication.

Errors

Invalid count
count < 0

Invalid blocklength
blocklength[i] < 0

Undefined oldtype

Oldtype is MPI_LB, MPI_UB or MPI_PACKED

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_COMMIT
MPI_TYPE_FREE
MPI_TYPE_GET_CONTENTS
MPI_TYPE_GET_ENVELOPE
MPI_TYPE_HINDEXED

MPI_TYPE_LB, MPI_Type_lb

Purpose

Returns the lower bound of a datatype.

C Synopsis

#include <mpi.h>
int MPI_Type_lb(MPI_Datatype datatype,int *displacement);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_LB(INTEGER DATATYPE,INTEGER DISPLACEMENT,INTEGER IERROR)

Parameters

datatype

is the datatype (handle) (IN)

displacement

is the displacement of lower bound from the origin in bytes (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the lower bound of a specific datatype.

Normally the lower bound is the offset of the lowest address byte in the datatype. Datatype constructors with explicit MPI_LB and vector constructors with negative stride can produce lb < 0. Lower bound cannot be greater than upper bound. For a type with MPI_LB in its ancestry, the value returned by MPI_TYPE_LB may not be related to the displacement of the lowest address byte. Refer to MPI_TYPE_STRUCT for more information on MPI_LB and MPI_UB.

Errors

Invalid datatype

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_UB
MPI_TYPE_STRUCT

MPI_TYPE_SIZE, MPI_Type_size

Purpose

Returns the number of bytes represented by any defined datatype.

C Synopsis

#include <mpi.h>
int MPI_Type_size(MPI_Datatype datatype,int *size);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_SIZE(INTEGER DATATYPE,INTEGER SIZE,INTEGER IERROR)

Parameters

datatype

is the datatype (handle) (IN)

size

is the datatype size (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the total number of bytes in the type signature associated with datatype. Entries with multiple occurrences in the datatype are counted.

Errors

Invalid datatype

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_EXTENT

MPI_TYPE_STRUCT, MPI_Type_struct

Purpose

Returns a new datatype that represents count blocks. Each is defined by an entry in array_of_blocklengths, array_of_displacements and array_of_types. Displacements are expressed in bytes.

C Synopsis

#include <mpi.h>
int MPI_Type_struct(int count,int *array_of_blocklengths,
    MPI_Aint *array_of_displacements,MPI_Datatype *array_of_types,
    MPI_Datatype *newtype);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_STRUCT(INTEGER COUNT,INTEGER ARRAY_OF_BLOCKLENGTHS(*),
    INTEGER ARRAY_OF_DISPLACEMENTS(*),INTEGER ARRAY_OF_TYPES(*),
    INTEGER NEWTYPE,INTEGER IERROR)

Parameters

count

is an integer specifying the number of blocks. It is also the number of entries in arrays array_of_types, array_of_displacements and array_of_blocklengths. (IN)

array_of_blocklengths

is the number of elements in each block (array of integer). That is, array_of_blocklengths(i) specifies the number of instances of type array_of_types(i) in block(i). (IN)

array_of_displacements

is the byte displacement of each block (array of integer) (IN)

array_of_types

is the datatype comprising each block. That is, block(i) is made of a concatenation of type array_of_types(i). (array of handles to datatype objects) (IN)

newtype

is the new datatype (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns a new datatype that represents count blocks. Each is defined by an entry in array_of_blocklengths, array_of_displacements and array_of_types. Displacements are expressed in bytes.

MPI_TYPE_STRUCT is the most general type constructor. It allows each block to consist of replications of different datatypes. This is the only constructor which allows MPI pseudo types MPI_LB and MPI_UB. Without these pseudo types, the extent of a datatype is the range from the first byte to the last byte rounded up as needed to meet boundary requirements. For example, if a type is made of an integer followed by 2 characters, it will still have an extent of 8 because it is padded to meet the boundary constraints of an int. This is intended to match the behavior of a compiler defining an array of such structures.

Because there may be cases in which this default behavior is not correct, MPI provides a means to set explicit upper and lower bounds which need not be directly related to the displacements of the lowest and highest bytes in the datatype. When the pseudo type MPI_UB is used, the upper bound will be the value specified as the displacement of the MPI_UB block. No rounding for alignment is done. MPI_LB can be used to set an explicit lower bound but its use does not suppress rounding. When MPI_UB is not used, the upper bound of the datatype is adjusted to make the extent a multiple of the type's most boundary constrained component.

The marker placed by an MPI_LB or MPI_UB is 'sticky'. For example, assume type A is defined with an MPI_UB at 100. Type B is defined with a type A at 0 and an MPI_UB at 50. In effect, type B has received an MPI_UB at 50 and an inherited MPI_UB at 100. Because the inherited MPI_UB is higher, it is kept in the type B definition and the MPI_UB explicitly placed at 50 is discarded.
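
The integer-plus-two-characters example described above might be constructed as follows (an illustrative sketch; the extent of 8 assumes a 4-byte int):

int          blocklens[2] = {1, 2};
MPI_Aint     displs[2]    = {0, sizeof(int)};
MPI_Datatype types[2]     = {MPI_INT, MPI_CHAR};
MPI_Datatype sampletype;
MPI_Aint     extent;

MPI_Type_struct(2, blocklens, displs, types, &sampletype);
MPI_Type_commit(&sampletype);
MPI_Type_extent(sampletype, &extent);  /* 8: six bytes of data, rounded
                                          up for int alignment */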

Notes

newtype must be committed using MPI_TYPE_COMMIT before being used for communication.

Errors

Invalid count
count < 0

Invalid blocklength
blocklength[i] < 0

Undefined oldtype in array_of_types

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_COMMIT
MPI_TYPE_FREE
MPI_TYPE_GET_CONTENTS
MPI_TYPE_GET_ENVELOPE

MPI_TYPE_UB, MPI_Type_ub

Purpose

Returns the upper bound of a datatype.

C Synopsis

#include <mpi.h>
int MPI_Type_ub(MPI_Datatype datatype,int *displacement);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_UB(INTEGER DATATYPE,INTEGER DISPLACEMENT,
INTEGER IERROR)

Parameters

datatype

is the datatype (handle) (IN)

displacement

is the displacement of upper bound from origin in bytes (integer) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine returns the upper bound of a specific datatype.

The upper bound is the displacement you use in locating the origin byte of the next instance of datatype for operations that use count and datatype. In the normal case, ub represents the displacement of the highest address byte of the datatype plus e, where e >= 0 is the smallest value that makes (ub - lb) a multiple of the boundary requirement of the most boundary constrained type in the datatype. If MPI_UB is used in a type constructor, no alignment adjustment is done, so ub is exactly as you set it.

For a type with MPI_UB in its ancestry, the value returned by MPI_TYPE_UB may not be related to the displacement of the highest address byte (with rounding). Refer to MPI_TYPE_STRUCT for more information on MPI_LB and MPI_UB.

Errors

Invalid datatype

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_LB
MPI_TYPE_STRUCT

MPI_TYPE_VECTOR, MPI_Type_vector

Purpose

Returns a new datatype that represents equally spaced blocks. The spacing between the start of each block is given in units of extent (oldtype).

C Synopsis

#include <mpi.h>
int MPI_Type_vector(int count,int blocklength,int stride,
    MPI_Datatype oldtype,MPI_Datatype *newtype);

Fortran Synopsis

include 'mpif.h'
MPI_TYPE_VECTOR(INTEGER COUNT,INTEGER BLOCKLENGTH,
    INTEGER STRIDE,INTEGER OLDTYPE,INTEGER NEWTYPE,INTEGER IERROR)

Parameters

count

is the number of blocks (non-negative integer) (IN)

blocklength

is the number of oldtype instances in each block (non-negative integer) (IN)

stride

is the number of units between the start of each block (integer) (IN)

oldtype

is the old datatype (handle) (IN)

newtype

is the new datatype (handle) (OUT)

IERROR

is the Fortran return code. It is always the last argument.

Description

This function returns a new datatype that represents count equally spaced blocks. Each block is a concatenation of blocklength instances of oldtype. The origins of the blocks are spaced stride units apart, where the counting unit is extent(oldtype). That is, the distance in bytes from one origin to the next is stride * extent(oldtype).
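
For illustration, the sketch below describes one column of a two-dimensional C array, so that a single send transfers the whole column (dest, tag, and comm are assumed to exist):

#define ROWS 6
#define COLS 8
double a[ROWS][COLS];
MPI_Datatype column;

/* ROWS blocks of one double, successive blocks COLS elements apart */
MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
MPI_Type_commit(&column);
MPI_Send(&a[0][2], 1, column, dest, tag, comm);  /* sends column 2 */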

Notes

newtype must be committed using MPI_TYPE_COMMIT before being used for communication.

Errors

Invalid count
count < 0

Invalid blocklength
blocklength < 0

Undefined oldtype

Oldtype is MPI_LB, MPI_UB or MPI_PACKED

MPI not initialized

MPI already finalized

Related Information

MPI_TYPE_COMMIT
MPI_TYPE_FREE
MPI_TYPE_GET_CONTENTS
MPI_TYPE_GET_ENVELOPE
MPI_TYPE_HVECTOR

MPI_UNPACK, MPI_Unpack

Purpose

Unpacks the message into the specified receive buffer from the specified packed buffer.

C Synopsis

#include <mpi.h>
int MPI_Unpack(void* inbuf,int insize,int *position,
    void *outbuf,int outcount,MPI_Datatype datatype,
    MPI_Comm comm);

Fortran Synopsis

include 'mpif.h'
MPI_UNPACK(CHOICE INBUF,INTEGER INSIZE,INTEGER POSITION,
     CHOICE OUTBUF,INTEGER OUTCOUNT,INTEGER DATATYPE,INTEGER COMM,
     INTEGER IERROR)

Parameters

inbuf

is the input buffer start (choice) (IN)

insize

is an integer specifying the size of input buffer in bytes (IN)

position

is an integer specifying the current packed buffer offset in bytes (INOUT)

outbuf

is the output buffer start (choice) (OUT)

outcount

is an integer specifying the number of instances of datatype to be unpacked (IN)

datatype

is the datatype of each output data item (handle) (IN)

comm

is the communicator for the packed message (handle) (IN)

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine unpacks the message specified by outbuf, outcount, and datatype from the buffer space specified by inbuf and insize. The output buffer is any receive buffer allowed in MPI_RECV. The input buffer is any contiguous storage space containing insize bytes and starting at address inbuf.

The input value of position is the beginning offset in the input buffer for the data to be unpacked. The output value of position is the offset in the input buffer following the data already unpacked; that is, the starting point for another call to MPI_UNPACK. comm is the communicator that was used to receive the packed message.
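
For illustration, the sketch below packs an int and a double on the sending side and unpacks them in the same order on the receiving side; comm is assumed to be an existing communicator, and the transfer of buffer itself, with type MPI_PACKED, is omitted:

char buffer[100];
int  position = 0;
int  n = 5;
double x = 2.5;

/* sender: pack the values into buffer */
MPI_Pack(&n, 1, MPI_INT,    buffer, 100, &position, comm);
MPI_Pack(&x, 1, MPI_DOUBLE, buffer, 100, &position, comm);

/* receiver: unpack in the same order */
position = 0;
MPI_Unpack(buffer, 100, &position, &n, 1, MPI_INT,    comm);
MPI_Unpack(buffer, 100, &position, &x, 1, MPI_DOUBLE, comm);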

Notes

In MPI_UNPACK the outcount argument specifies the actual number of items to be unpacked. The size of the corresponding message is the increment in position.

Errors

Invalid outcount
outcount < 0

Invalid datatype

Type is not committed

Invalid communicator

Inbuf too small

MPI not initialized

MPI already finalized

Related Information

MPI_PACK

MPI_WAIT, MPI_Wait

Purpose

Waits for a nonblocking operation to complete.

C Synopsis

#include <mpi.h>
int MPI_Wait(MPI_Request *request,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_WAIT(INTEGER REQUEST,INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

request

is the request to wait for (handle) (INOUT)

status

is the status object (status) (OUT). Note that in Fortran a single status object is an array of integers.

IERROR

is the Fortran return code. It is always the last argument.

Description

MPI_WAIT returns after the operation identified by request completes. If the object associated with request was created by a nonblocking operation, the object is deallocated and request is set to MPI_REQUEST_NULL. MPI_WAIT is a non-local operation.

You can call MPI_WAIT with a null or inactive request argument. The operation returns immediately. The status argument returns tag = MPI_ANY_TAG, source = MPI_ANY_SOURCE. The status argument is also internally configured so that calls to MPI_GET_COUNT and MPI_GET_ELEMENTS return count = 0. (This is called an empty status.)

Information on the completed operation is found in status. You can query the status object for a send or receive operation with a call to MPI_TEST_CANCELLED. For receive operations, you can also retrieve information from status with MPI_GET_COUNT and MPI_GET_ELEMENTS. If wildcards were used by the receive for either the source or tag, the actual source and tag can be retrieved by:

In C:
source = status.MPI_SOURCE
tag = status.MPI_TAG
In Fortran:
source = status(MPI_SOURCE)
tag = status(MPI_TAG)

The error field of MPI_Status is never modified. The success or failure is indicated by the return code only.

When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.

When you use this routine in a threaded application, make sure that the wait for a given request is done on only one thread. The wait does not have to be done on the thread that created the request. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
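
For illustration (a sketch; buf, count, tag, and comm are assumed to exist, and do_local_work is a placeholder for application code):

MPI_Request request;
MPI_Status  status;
int source;

MPI_Irecv(buf, count, MPI_DOUBLE, MPI_ANY_SOURCE, tag, comm, &request);
do_local_work();              /* overlap computation with the receive */
MPI_Wait(&request, &status);  /* request is now MPI_REQUEST_NULL      */
source = status.MPI_SOURCE;   /* actual sender of the message         */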

Errors

Invalid request handle

Truncation occurred

MPI not initialized

MPI already finalized

Develop mode error if:

Illegal buffer update

Related Information

MPI_WAITALL
MPI_WAITSOME
MPI_WAITANY
MPI_TEST

MPI_WAITALL, MPI_Waitall

Purpose

Waits for a collection of nonblocking operations to complete.

C Synopsis

#include <mpi.h>
int MPI_Waitall(int count,MPI_Request *array_of_requests,
    MPI_Status *array_of_statuses);

Fortran Synopsis

include 'mpif.h'
MPI_WAITALL(INTEGER COUNT,INTEGER ARRAY_OF_REQUESTS(*),
      INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*),INTEGER IERROR)

Parameters

count

is the list length (integer) (IN)

array_of_requests

is an array of requests of length count (array of handles) (INOUT)

array_of_statuses

is an array of status objects of length count (array of status) (OUT). Note that in Fortran a status object is itself an array.

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine blocks until all operations associated with active handles in the list complete, and returns the status of each operation. array_of_requests and array_of_statuses contain count entries.

The ith entry in array_of_statuses is set to the return status of the ith operation. Requests created by nonblocking operations are deallocated and the corresponding handles in the array are set to MPI_REQUEST_NULL. If array_of_requests contains null or inactive handles, MPI_WAITALL sets the status of each one to empty.

MPI_WAITALL(count, array_of_requests, array_of_statuses) has the same effect as the execution of MPI_WAIT(array_of_requests[i], array_of_statuses[i]) for i = 0, 1, ..., count-1, in some arbitrary order. MPI_WAITALL with an array of length one is equivalent to MPI_WAIT.

The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.

When you use this routine in a threaded application, make sure that the wait for a given request is done on only one thread. The wait does not have to be done on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
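
For illustration, a simple shift exchange might post a receive and a send and then complete both together (a sketch; rbuf, sbuf, n, left, right, tag, and comm are assumed to exist):

MPI_Request reqs[2];
MPI_Status  stats[2];

MPI_Irecv(rbuf, n, MPI_DOUBLE, left,  tag, comm, &reqs[0]);
MPI_Isend(sbuf, n, MPI_DOUBLE, right, tag, comm, &reqs[1]);
MPI_Waitall(2, reqs, stats);  /* both handles now MPI_REQUEST_NULL */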

Errors

Invalid count
count < 0

Invalid request array

Invalid request(s)

Truncation occurred

MPI not initialized

MPI already finalized

Related Information

MPI_WAIT
MPI_TESTALL

MPI_WAITANY, MPI_Waitany

Purpose

Waits for any specified nonblocking operation to complete.

C Synopsis

#include <mpi.h>
int MPI_Waitany(int count,MPI_Request *array_of_requests,
    int *index,MPI_Status *status);

Fortran Synopsis

include 'mpif.h'
MPI_WAITANY(INTEGER COUNT,INTEGER ARRAY_OF_REQUESTS(*),INTEGER INDEX,
      INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)

Parameters

count

is the list length (integer) (IN)

array_of_requests

is the array of requests (array of handles) (INOUT)

index

is the index of the handle for the operation that completed (integer) (OUT)

status

status object (status) (OUT). Note that in Fortran a single status object is an array of integers.

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine blocks until one of the operations associated with the active requests in the array has completed. If more than one operation can complete, one is arbitrarily chosen. MPI_WAITANY returns in index the index of that request in the array, and in status the status of the completed operation. When the request is allocated by a nonblocking operation, it is deallocated and the request handle is set to MPI_REQUEST_NULL.

The array_of_requests list can contain null or inactive handles. When the list has a length of zero or all entries are null or inactive, the call returns immediately with index = MPI_UNDEFINED, and an empty status.

MPI_WAITANY(count, array_of_requests, index, status) has the same effect as the execution of MPI_WAIT(array_of_requests[i], status), where i is the value returned by index. MPI_WAITANY with an array containing one active entry is equivalent to MPI_WAIT.

The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.

When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.

When you use this routine in a threaded application, make sure that the wait for a given request is done on only one thread. The wait does not have to be done on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
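
For illustration, the sketch below services replies in whatever order they arrive; the receives are assumed to have been posted into reqs, and handle_reply is a placeholder for application code:

#define NREQ 4
MPI_Request reqs[NREQ];   /* one outstanding receive per partner */
MPI_Status  status;
int index, i;

for (i = 0; i < NREQ; i++) {
    MPI_Waitany(NREQ, reqs, &index, &status);
    handle_reply(index);  /* placeholder; reqs[index] completed */
}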

Notes

In C, the array is indexed from zero and in Fortran from one.

Errors

Invalid count
count < 0

Invalid requests array

Invalid request(s)

Truncation occurred

MPI not initialized

MPI already finalized

Related Information

MPI_WAIT
MPI_TESTANY

MPI_WAITSOME, MPI_Waitsome

Purpose

Waits for at least one of a list of nonblocking operations to complete.

C Synopsis

#include <mpi.h>
int MPI_Waitsome(int incount,MPI_Request *array_of_requests,
    int *outcount,int *array_of_indices,MPI_Status *array_of_statuses);

Fortran Synopsis

include 'mpif.h'
MPI_WAITSOME(INTEGER INCOUNT,INTEGER ARRAY_OF_REQUESTS(*),INTEGER OUTCOUNT,
      INTEGER ARRAY_OF_INDICES(*),INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*),
      INTEGER IERROR)

Parameters

incount

is the length of array_of_requests, array_of_indices, and array_of_statuses (integer) (IN)

array_of_requests

is an array of requests (array of handles) (INOUT)

outcount

is the number of completed requests (integer) (OUT)

array_of_indices

is the array of indices of operations that completed (array of integers) (OUT)

array_of_statuses

is the array of status objects for operations that completed (array of status) (OUT). Note that in Fortran a status object is itself an array.

IERROR

is the Fortran return code. It is always the last argument.

Description

This routine waits for at least one of a list of nonblocking operations associated with active handles in the list to complete. The number of completed requests from the list of array_of_requests is returned in outcount. The indices of these operations are returned in the first outcount locations of the array array_of_indices.

The status for the completed operations is returned in the first outcount locations of the array array_of_statuses. When a completed request is allocated by a nonblocking operation, it is deallocated and the associated handle is set to MPI_REQUEST_NULL.

When the list contains no active handles, then the call returns immediately with outcount = MPI_UNDEFINED.

When a request for a receive repeatedly appears in a list of requests passed to MPI_WAITSOME and a matching send was posted, then the receive eventually succeeds unless the send is satisfied by another receive. This fairness requirement also applies to send requests and to I/O requests.

The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.

When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.

When you use this routine in a threaded application, make sure that the wait for a given request is done on only one thread. The wait does not have to be done on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.

Notes

The array array_of_requests is indexed from zero in C and from one in Fortran.

Errors

Invalid count
count < 0

Invalid request(s)

Invalid index array

Truncation occurred

MPI not initialized

MPI already finalized

Related Information

MPI_WAIT
MPI_TESTSOME

MPI_WTICK, MPI_Wtick

Purpose

Returns the resolution of MPI_WTIME in seconds.

C Synopsis

#include <mpi.h>
double MPI_Wtick(void);

Fortran Synopsis

include 'mpif.h'
DOUBLE PRECISION MPI_WTICK()

Parameters

None.

Description

This routine returns the resolution of MPI_WTIME in seconds, the time in seconds between successive clock ticks.

Errors

MPI not initialized

MPI already finalized

Related Information

MPI_WTIME

MPI_WTIME, MPI_Wtime

Purpose

Returns the current value of time as a floating-point value.

C Synopsis

#include <mpi.h>
double MPI_Wtime(void);

Fortran Synopsis

include 'mpif.h'
DOUBLE PRECISION MPI_WTIME()

Parameters

None.

Description

This routine returns the current value of time as a double precision floating point number of seconds. This value represents elapsed time since some point in the past. This time in the past will not change during the life of the task. You are responsible for converting the number of seconds into other units if you prefer.
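
For illustration (a sketch; do_work is a placeholder for the code being timed):

double t0, t1, elapsed;

t0 = MPI_Wtime();
do_work();
t1 = MPI_Wtime();
elapsed = t1 - t0;  /* seconds, meaningful to about MPI_WTICK resolution */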

Notes

You can use the attribute key MPI_WTIME_IS_GLOBAL to determine if the values returned by MPI_WTIME on different nodes are synchronized. See MPI_ATTR_GET for more information.

The environment variable MP_CLOCK_SOURCE allows you to control where MPI_WTIME gets its time values from. See "Using the SP Switch Clock as a Time Source" for more information.

Errors

MPI not initialized

MPI already finalized

Related Information

MPI_WTICK
MPI_ATTR_GET

