This chapter includes descriptions of the subroutines available for parallel programming. The subroutines are listed in alphabetical order. For each subroutine, a purpose, C synopsis, Fortran synopsis, description, notes, and error conditions are provided. Review the following sample subroutine before proceeding to better understand how the subroutine descriptions are structured.
Purpose
Shows how the subroutines described in this book are structured.
C Synopsis
Header file mpi.h supplies ANSI-C prototypes for every function described in the message passing subroutine section of this manual.
#include <mpi.h> int A_Sample (one or more parameters);
In the C prototype, a declaration of void * indicates that a pointer to any datatype is allowable.
Fortran Synopsis
include 'mpif.h' A_SAMPLE(ONE OR MORE PARAMETERS)
In the Fortran routines, formal parameters are described using a subroutine prototype format, even though Fortran does not support prototyping. The term CHOICE indicates that any Fortran datatype is valid.
Parameters
Argument or parameter definitions appear below:
...
Parameter types:
Description
This section contains a more detailed description of the subroutine or function.
Notes
If applicable, this section contains notes about the IBM MPI implementation and its relationship to the requirements of the MPI Standard. The IBM implementation intends to comply fully with the requirements of the MPI Standard. There are issues, however, which the Standard leaves open to the implementation's choice.
Errors
For non-file-handle errors, a single list appears here.
For errors on a file handle, up to 3 lists appear:
Non-recoverable errors are listed here.
Errors that by default return an error code to the caller appear here. These are normally recoverable errors and the error class is specified to allow you to identify the failure cause.
Errors that by default return an error code to the caller at one of the WAIT or TEST calls appear here. These are normally recoverable errors and the error class is specified to allow you to identify the failure cause.
In almost every routine, the C version is invoked as a function returning integer. In the Fortran version, the routine is called as a subroutine; that is, it has no return value. The Fortran version includes a return code parameter IERROR as the last parameter.
Related Information
This section contains a list of related functions or routines in this book.
For both C and Fortran, the Message-Passing Interface (MPI) uses the same spelling for function names. The only distinction is the capitalization. For the purpose of clarity, when referring to a function without specifying Fortran or C version, all uppercase letters are used.
Fortran refers to the Fortran 77 (F77) bindings, which are officially supported for MPI. However, the F77 bindings for MPI can be used from Fortran 90. Fortran 90 and High Performance Fortran (HPF) offer array sections and assumed-shape arrays as parameters on calls. These are not safe with MPI.
Purpose
Performs a nonblocking allgather operation.
C Synopsis
#include <mpi.h> int MPE_Iallgather(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int recvcount,MPI_Datatype recvtype,MPI_Comm comm, MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IALLGATHER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER COMM, INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_ALLGATHER. It performs the same function as MPI_ALLGATHER except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
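For example, the following sketch (a hypothetical fragment, assuming each task contributes a single integer and allvals has room for one integer per task) starts the allgather, performs unrelated local work, and then completes the request with MPI_WAIT:

#include <mpi.h>

/* Hypothetical example: gather one integer from every task in comm. */
void iallgather_example(MPI_Comm comm, int myval, int *allvals)
{
    MPI_Request request;
    MPI_Status  status;

    /* Start the nonblocking allgather. */
    MPE_Iallgather(&myval, 1, MPI_INT, allvals, 1, MPI_INT, comm, &request);

    /* ... unrelated local computation can proceed here ... */

    /* allvals is not valid until the request has been completed. */
    MPI_Wait(&request, &status);
}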
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of your applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective communication routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking allgatherv operation.
C Synopsis
#include <mpi.h> int MPE_Iallgatherv(void* sendbuf,int sendcount, MPI_Datatype sendtype,void* recvbuf,int *recvcounts, int *displs,MPI_Datatype recvtype, MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IALLGATHERV(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNTS(*),INTEGER DISPLS(*), INTEGER RECVTYPE,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_ALLGATHERV. It performs the same function as MPI_ALLGATHERV except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking allreduce operation.
C Synopsis
#include <mpi.h> int MPE_Iallreduce(void* sendbuf,void* recvbuf,int count, MPI_Datatype datatype,MPI_Op op,MPI_Comm comm, MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IALLREDUCE(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT, INTEGER DATATYPE,INTEGER OP,INTEGER COMM,INTEGER REQUEST, INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_ALLREDUCE. It performs the same function as MPI_ALLREDUCE except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking alltoall operation.
C Synopsis
#include <mpi.h> int MPE_Ialltoall(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int recvcount,MPI_Datatype recvtype,MPI_Comm comm, MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IALLTOALL(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER COMM, INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_ALLTOALL. It performs the same function as MPI_ALLTOALL except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking alltoallv operation.
C Synopsis
#include <mpi.h> int MPE_Ialltoallv(void* sendbuf,int *sendcounts,int *sdispls, MPI_Datatype sendtype,void* recvbuf,int *recvcounts,int *rdispls, MPI_Datatype recvtype,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IALLTOALLV(CHOICE SENDBUF,INTEGER SENDCOUNTS(*), INTEGER SDISPLS(*),INTEGER SENDTYPE,CHOICE RECVBUF, INTEGER RECVCOUNTS(*),INTEGER RDISPLS(*),INTEGER RECVTYPE, INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_ALLTOALLV. It performs the same function as MPI_ALLTOALLV except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Related Information
Purpose
Performs a nonblocking barrier operation.
C Synopsis
#include <mpi.h> int MPE_Ibarrier(MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IBARRIER(INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_BARRIER. It returns immediately, without blocking, but will not complete (via MPI_WAIT or MPI_TEST) until all group members have called it.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
A typical use of MPE_IBARRIER is to make a call to it, and then periodically test for completion with MPI_TEST. Completion indicates that all tasks in comm have arrived at the barrier. Until then, computation can continue.
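A sketch of this pattern follows; do_some_work is a hypothetical application-supplied routine:

#include <mpi.h>

extern void do_some_work(void);   /* hypothetical application routine */

void ibarrier_example(MPI_Comm comm)
{
    MPI_Request request;
    MPI_Status  status;
    int         done = 0;

    /* Enter the barrier without blocking. */
    MPE_Ibarrier(comm, &request);

    /* Keep computing until every task in comm has reached the barrier. */
    while (!done) {
        do_some_work();
        MPI_Test(&request, &done, &status);
    }
}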
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Related Information
Purpose
Performs a nonblocking broadcast operation.
C Synopsis
#include <mpi.h> int MPE_Ibcast(void* buffer,int count,MPI_Datatype datatype, int root,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IBCAST(CHOICE BUFFER,INTEGER COUNT,INTEGER DATATYPE,INTEGER ROOT, INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_BCAST. It performs the same function as MPI_BCAST except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking gather operation.
C Synopsis
#include <mpi.h> int MPE_Igather(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int recvcount,MPI_Datatype recvtype,int root, MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IGATHER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER ROOT, INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_GATHER. It performs the same function as MPI_GATHER except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking gatherv operation.
C Synopsis
#include <mpi.h> int MPE_Igatherv(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int *recvcounts,int *displs,MPI_Datatype recvtype, int root,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IGATHERV(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNTS(*),INTEGER DISPLS(*), INTEGER RECVTYPE,INTEGER ROOT,INTEGER COMM,INTEGER REQUEST, INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_GATHERV. It performs the same function as MPI_GATHERV except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking reduce operation.
C Synopsis
#include <mpi.h> int MPE_Ireduce(void* sendbuf,void* recvbuf,int count, MPI_Datatype datatype,MPI_Op op,int root,MPI_Comm comm, MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IREDUCE(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT, INTEGER DATATYPE,INTEGER OP,INTEGER ROOT,INTEGER COMM, INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_REDUCE. It performs the same function as MPI_REDUCE except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking reduce_scatter operation.
C Synopsis
#include <mpi.h> int MPE_Ireduce_scatter(void* sendbuf,void* recvbuf,int *recvcounts, MPI_Datatype datatype,MPI_Op op,MPI_Comm comm, MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_IREDUCE_SCATTER(CHOICE SENDBUF,CHOICE RECVBUF, INTEGER RECVCOUNTS(*),INTEGER DATATYPE,INTEGER OP, INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_REDUCE_SCATTER. It performs the same function as MPI_REDUCE_SCATTER except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking scan operation.
C Synopsis
#include <mpi.h> int MPE_Iscan(void* sendbuf,void* recvbuf,int count, MPI_Datatype datatype,MPI_Op op,MPI_Comm comm, MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_ISCAN(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT, INTEGER DATATYPE,INTEGER OP,INTEGER COMM,INTEGER REQUEST, INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_SCAN. It performs the same function as MPI_SCAN except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking scatter operation.
C Synopsis
#include <mpi.h> int MPE_Iscatter(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int recvcount,MPI_Datatype recvtype,int root, MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_ISCATTER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER ROOT, INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_SCATTER. It performs the same function as MPI_SCATTER except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking scatterv operation.
C Synopsis
#include <mpi.h> int MPE_Iscatterv(void* sendbuf,int *sendcounts,int *displs, MPI_Datatype sendtype,void* recvbuf,int recvcount, MPI_Datatype recvtype,int root,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPE_ISCATTERV(CHOICE SENDBUF,INTEGER SENDCOUNTS(*),INTEGER DISPLS(*), INTEGER SENDTYPE,CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE, INTEGER ROOT,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine is a nonblocking version of MPI_SCATTERV. It performs the same function as MPI_SCATTERV except that it returns a request handle that must be explicitly completed by using one of the MPI wait or test operations.
Notes
The MPE prefix used with this routine indicates that it is an IBM extension to the MPI standard and is not part of the standard itself. MPE routines are provided to enhance the function and the performance of user applications, but applications that use them will not be directly portable to other MPI implementations.
Nonblocking collective communication routines allow for increased efficiency and flexibility in some applications. Because these routines do not synchronize the participating tasks like blocking collective routines generally do, tasks running at different speeds do not waste time waiting for each other.
When it is expected that tasks will be reasonably synchronized, the blocking collective communication routines provided by standard MPI will commonly give better performance than the nonblocking versions.
The nonblocking collective routines can be used in conjunction with the MPI blocking collective routines and can be completed by any of the MPI wait or test functions. Use of MPI_REQUEST_FREE and MPI_CANCEL is not supported.
Beginning with Parallel Environment for AIX Version 2.4, the thread library has a limit of 7 outstanding nonblocking collective calls. A nonblocking call is considered outstanding between the time the call is made and the time the wait is completed. This restriction does not apply to the signal library. It does not apply to any call defined by the MPI standard.
Applications using nonblocking collective calls often provide their best performance when run in interrupt mode.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator are started in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Forces all tasks of an MPI job to terminate.
C Synopsis
#include <mpi.h> int MPI_Abort(MPI_Comm comm,int errorcode);
Fortran Synopsis
include 'mpif.h' MPI_ABORT(INTEGER COMM,INTEGER ERRORCODE,INTEGER IERROR)
Parameters
Description
This routine forces an MPI program to terminate all tasks in the job. The comm argument is currently not used; all tasks in the job are aborted. The low-order 8 bits of errorcode are returned as an AIX return code.
Notes
MPI_ABORT causes all tasks to exit immediately.
Errors
Purpose
Returns the address of a variable in memory.
C Synopsis
#include <mpi.h> int MPI_Address(void* location,MPI_Aint *address);
Fortran Synopsis
include 'mpif.h' MPI_ADDRESS(CHOICE LOCATION,INTEGER ADDRESS,INTEGER IERROR)
Parameters
Description
This routine returns the byte address of location.
Notes
On the IBM RS/6000 SP, this is equivalent to address= (MPI_Aint) location in C, but the MPI_ADDRESS routine is portable to machines with less straightforward addressing.
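For example, a minimal sketch that obtains the byte displacement of a structure member (a common step when constructing derived datatypes); the structure is hypothetical:

#include <mpi.h>

struct particle { int id; double x; };   /* hypothetical application type */

void address_example(void)
{
    struct particle p;
    MPI_Aint base, addr_x, displacement;

    /* Obtain the byte addresses of the structure and of one member. */
    MPI_Address(&p, &base);
    MPI_Address(&p.x, &addr_x);

    /* Displacement of member x within the structure, in bytes. */
    displacement = addr_x - base;
    (void) displacement;
}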
Errors
Related Information
Purpose
Gathers individual messages from each task in comm and distributes the resulting message to each task.
C Synopsis
#include <mpi.h> int MPI_Allgather(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int recvcount,MPI_Datatype recvtype,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_ALLGATHER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER COMM, INTEGER IERROR)
Parameters
Description
MPI_ALLGATHER is similar to MPI_GATHER except that all tasks receive the result instead of just the root.
The block of data sent from task j is received by every task and placed in the jth block of the buffer recvbuf.
The type signature associated with sendcount, sendtype at a task must be equal to the type signature associated with recvcount, recvtype at any other task.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
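For example, in the following sketch each task contributes its rank and receives the ranks of all tasks in comm:

#include <mpi.h>
#include <stdlib.h>

void allgather_example(MPI_Comm comm)
{
    int rank, ntasks, *allranks;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &ntasks);
    allranks = (int *) malloc(ntasks * sizeof(int));

    /* Every task sends one integer and receives one integer per task. */
    MPI_Allgather(&rank, 1, MPI_INT, allranks, 1, MPI_INT, comm);

    /* allranks[j] now holds the value contributed by task j. */
    free(allranks);
}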
Errors
Develop mode error if:
Related Information
Purpose
Collects individual messages from each task in comm and distributes the resulting message to all tasks. Messages can have different sizes and displacements.
C Synopsis
#include <mpi.h> int MPI_Allgatherv(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int *recvcounts,int *displs,MPI_Datatype recvtype, MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_ALLGATHERV(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNTS(*),INTEGER DISPLS(*), INTEGER RECVTYPE,INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine collects individual messages from each task in comm and distributes the resulting message to all tasks. Messages can have different sizes and displacements.
The block of data sent from task j is recvcounts[j] elements long, and is received by every task and placed in recvbuf at offset displs[j].
The type signature associated with sendcount, sendtype at task j must be equal to the type signature of recvcounts[j], recvtype at any other task.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
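For example, the following sketch has task j contribute j+1 integers; the displacements are the running sum of the preceding counts:

#include <mpi.h>
#include <stdlib.h>

void allgatherv_example(MPI_Comm comm)
{
    int rank, ntasks, i, total, *recvcounts, *displs, *sendbuf, *recvbuf;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &ntasks);

    /* In this sketch, task j contributes j+1 integers. */
    recvcounts = (int *) malloc(ntasks * sizeof(int));
    displs     = (int *) malloc(ntasks * sizeof(int));
    total = 0;
    for (i = 0; i < ntasks; i++) {
        recvcounts[i] = i + 1;
        displs[i]     = total;      /* running sum of preceding counts */
        total        += recvcounts[i];
    }

    sendbuf = (int *) malloc((rank + 1) * sizeof(int));
    recvbuf = (int *) malloc(total * sizeof(int));
    for (i = 0; i <= rank; i++)
        sendbuf[i] = rank;

    MPI_Allgatherv(sendbuf, rank + 1, MPI_INT,
                   recvbuf, recvcounts, displs, MPI_INT, comm);

    free(sendbuf); free(recvbuf); free(recvcounts); free(displs);
}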
Errors
Develop mode error if:
Related Information
Purpose
Applies a reduction operation to the vector sendbuf over the set of tasks specified by comm and places the result in recvbuf on all of the tasks in comm.
C Synopsis
#include <mpi.h> int MPI_Allreduce(void* sendbuf,void* recvbuf,int count, MPI_Datatype datatype,MPI_Op op,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_ALLREDUCE(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT, INTEGER DATATYPE,INTEGER OP,INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine applies a reduction operation to the vector sendbuf over the set of tasks specified by comm and places the result in recvbuf on all of the tasks.
This routine is similar to MPI_REDUCE except the result is returned to the receive buffer of all the group members.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
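For example, a minimal sketch that sums one double-precision value per task and returns the total to every task in comm:

#include <mpi.h>

double allreduce_example(MPI_Comm comm, double localsum)
{
    double globalsum;

    /* Every task contributes localsum; every task receives the total. */
    MPI_Allreduce(&localsum, &globalsum, 1, MPI_DOUBLE, MPI_SUM, comm);
    return globalsum;
}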
Notes
See Appendix D. "Reduction Operations".
Errors
Develop mode error if:
Related Information
Purpose
Sends a distinct message from each task to every task.
C Synopsis
#include <mpi.h> int MPI_Alltoall(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int recvcount,MPI_Datatype recvtype, MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_ALLTOALL(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER COMM, INTEGER IERROR)
Parameters
Description
MPI_ALLTOALL sends a distinct message from each task to every task.
The jth block of data sent from task i is received by task j and placed in the ith block of the buffer recvbuf.
The type signature associated with sendcount, sendtype, at a task must be equal to the type signature associated with recvcount, recvtype at any other task. This means the amount of data sent must be equal to the amount of data received, pairwise between every pair of tasks. The type maps can be different.
All arguments on all tasks are significant.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Sends a distinct message from each task to every task. Messages can have different sizes and displacements.
C Synopsis
#include <mpi.h> int MPI_Alltoallv(void* sendbuf,int *sendcounts,int *sdispls, MPI_Datatype sendtype,void* recvbuf,int *recvcounts,int *rdispls, MPI_Datatype recvtype,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_ALLTOALLV(CHOICE SENDBUF,INTEGER SENDCOUNTS(*), INTEGER SDISPLS(*),INTEGER SENDTYPE,CHOICE RECVBUF, INTEGER RECVCOUNTS(*),INTEGER RDISPLS(*),INTEGER RECVTYPE, INTEGER COMM,INTEGER IERROR)
Parameters
Description
MPI_ALLTOALLV sends a distinct message from each task to every task. Messages can have different sizes and displacements.
This routine is similar to MPI_ALLTOALL, except that MPI_ALLTOALLV gives you the flexibility to specify the location of the data for the send with sdispls and the location where the data is placed on the receive with rdispls.
The jth block of data sent from task i is sendcounts[j] elements long; it is received by task j and placed in recvbuf at offset rdispls[i]. These blocks do not all have to be the same size.
The type signature associated with sendcounts[j], sendtype at task i must be equal to the type signature associated with recvcounts[i], recvtype at task j. This means the amount of data sent must be equal to the amount of data received, pairwise between every pair of tasks. Distinct type maps between sender and receiver are allowed.
All arguments on all tasks are significant.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
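For example, in the following sketch every task sends j+1 integers to task j, so every task receives rank+1 integers from each task; the displacement arrays are built as running sums of the counts:

#include <mpi.h>
#include <stdlib.h>

void alltoallv_example(MPI_Comm comm)
{
    int rank, ntasks, i, sendtotal, recvtotal;
    int *sendcounts, *sdispls, *recvcounts, *rdispls, *sendbuf, *recvbuf;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &ntasks);

    sendcounts = (int *) malloc(ntasks * sizeof(int));
    sdispls    = (int *) malloc(ntasks * sizeof(int));
    recvcounts = (int *) malloc(ntasks * sizeof(int));
    rdispls    = (int *) malloc(ntasks * sizeof(int));

    /* Every task sends j+1 integers to task j, so every task receives
       rank+1 integers from each of the other tasks.                    */
    sendtotal = recvtotal = 0;
    for (i = 0; i < ntasks; i++) {
        sendcounts[i] = i + 1;
        sdispls[i]    = sendtotal;
        sendtotal    += sendcounts[i];
        recvcounts[i] = rank + 1;
        rdispls[i]    = recvtotal;
        recvtotal    += recvcounts[i];
    }

    sendbuf = (int *) malloc(sendtotal * sizeof(int));
    recvbuf = (int *) malloc(recvtotal * sizeof(int));
    for (i = 0; i < sendtotal; i++)
        sendbuf[i] = rank;

    MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                  recvbuf, recvcounts, rdispls, MPI_INT, comm);

    free(sendbuf); free(recvbuf);
    free(sendcounts); free(sdispls); free(recvcounts); free(rdispls);
}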
Errors
Related Information
Purpose
Removes an attribute value from a communicator.
C Synopsis
#include <mpi.h> int MPI_Attr_delete(MPI_Comm comm,int keyval);
Fortran Synopsis
include 'mpif.h' MPI_ATTR_DELETE(INTEGER COMM,INTEGER KEYVAL,INTEGER IERROR)
Parameters
Description
This routine deletes an attribute from cache by key. MPI_ATTR_DELETE also invokes the attribute delete function delete_fn specified when the keyval is created.
Errors
Related Information
Purpose
Retrieves an attribute value from a communicator.
C Synopsis
#include <mpi.h> int MPI_Attr_get(MPI_Comm comm,int keyval,void *attribute_val, int *flag);
Fortran Synopsis
include 'mpif.h' MPI_ATTR_GET(INTEGER COMM,INTEGER KEYVAL,INTEGER ATTRIBUTE_VAL, LOGICAL FLAG,INTEGER IERROR)
Parameters
Description
This function retrieves an attribute value by key. If there is no key with value keyval, the call is erroneous. However, the call is valid if there is a key value keyval, but no attribute is attached on comm for that key. In this case, the call returns flag = false.
Notes
The implementation of the MPI_ATTR_PUT and MPI_ATTR_GET involves saving a single word of information in the communicator. The languages C and Fortran have different approaches to using this capability:
In C: As the programmer, you normally define a struct which holds arbitrary "attribute" information. Before calling MPI_ATTR_PUT, you allocate some storage for the attribute structure and then call MPI_ATTR_PUT to record the address of this structure. You must assure that the structure remains intact as long as it may be useful. As the programmer, you will also declare a variable of type "pointer to attribute structure" and pass the address of this variable when calling MPI_ATTR_GET. Both MPI_ATTR_PUT and MPI_ATTR_GET take a void* parameter but this does not imply the same parameter is passed to either one.
In Fortran: MPI_ATTR_PUT records an INTEGER*4 and MPI_ATTR_GET returns the INTEGER*4. As the programmer, you may choose to encode all attribute information in this integer or maintain some kind of database that the integer can index. Either of these approaches will port to other MPI implementations.
XL Fortran has an additional feature, the POINTER type, which allows some of the same function a C programmer would use. It is described in the IBM XL Fortran Compiler V3.2 for AIX Language Reference. Use of the POINTER type will affect the program's portability.
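A minimal C sketch of the approach described above follows; my_attr is a hypothetical application-defined structure, and the key is created here with MPI_KEYVAL_CREATE using the predefined null copy and delete functions:

#include <mpi.h>
#include <stdlib.h>

/* Hypothetical application-defined attribute structure. */
struct my_attr { int counter; double threshold; };

void attr_example(MPI_Comm comm)
{
    int keyval, flag;
    struct my_attr *put_ptr, *get_ptr;

    /* Create a key; the predefined null copy and delete functions are used. */
    MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN, &keyval, NULL);

    /* Allocate the attribute structure and cache its address. */
    put_ptr = (struct my_attr *) malloc(sizeof(struct my_attr));
    put_ptr->counter = 0;
    put_ptr->threshold = 0.5;
    MPI_Attr_put(comm, keyval, put_ptr);

    /* Later, pass the address of a pointer variable to retrieve it. */
    MPI_Attr_get(comm, keyval, &get_ptr, &flag);
    if (flag)
        get_ptr->counter++;
}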
Errors
Related Information
Purpose
Stores an attribute value in a communicator.
C Synopsis
#include <mpi.h> int MPI_Attr_put(MPI_Comm comm,int keyval,void* attribute_val);
Fortran Synopsis
include 'mpif.h' MPI_ATTR_PUT(INTEGER COMM,INTEGER KEYVAL,INTEGER ATTRIBUTE_VAL, INTEGER IERROR)
Parameters
Description
This routine stores the attribute value for retrieval by MPI_ATTR_GET. Any previous value is deleted with the attribute delete_fn being called and the new value is stored. If there is no key with value keyval, the call is erroneous.
Notes
The implementation of the MPI_ATTR_PUT and MPI_ATTR_GET involves saving a single word of information in the communicator. The languages C and Fortran have different approaches to using this capability:
In C: As the programmer, you normally define a struct which holds arbitrary "attribute" information. Before calling MPI_ATTR_PUT, you allocate some storage for the attribute structure and then call MPI_ATTR_PUT to record the address of this structure. You must assure that the structure remains intact as long as it may be useful. As the programmer, you will also declare a variable of type "pointer to attribute structure" and pass the address of this variable when calling MPI_ATTR_GET. Both MPI_ATTR_PUT and MPI_ATTR_GET take a void* parameter, but this does not imply the same parameter is passed to either one.
In Fortran: MPI_ATTR_PUT records an INTEGER*4 and MPI_ATTR_GET returns the INTEGER*4. As the programmer, you may choose to encode all attribute information in this integer or maintain some kind of database that the integer can index. Either of these approaches will port to other MPI implementations.
XL Fortran has an additional feature, the POINTER type, which allows some of the same function a C programmer would use. It is described in the IBM XL Fortran Compiler V3.2 for AIX Language Reference. Use of the POINTER type will affect the program's portability.
Errors
Related Information
Purpose
Blocks each task in comm until all tasks have called it.
C Synopsis
#include <mpi.h> int MPI_Barrier(MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_BARRIER(INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine blocks until all tasks have called it. Tasks cannot exit the operation until all group members have entered.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Related Information
Purpose
Broadcasts a message from root to all tasks in comm.
C Synopsis
#include <mpi.h> int MPI_Bcast(void* buffer,int count,MPI_Datatype datatype, int root,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_BCAST(CHOICE BUFFER,INTEGER COUNT,INTEGER DATATYPE,INTEGER ROOT, INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine broadcasts a message from root to all tasks in comm. The contents of root's communication buffer are copied to all tasks on return.
The type signature of count, datatype on any task must be equal to the type signature of count, datatype at the root. This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
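For example, a minimal sketch in which task 0 broadcasts an integer to every task in comm:

#include <mpi.h>

void bcast_example(MPI_Comm comm)
{
    int nsteps = 0;
    int rank;

    MPI_Comm_rank(comm, &rank);
    if (rank == 0)
        nsteps = 100;               /* only the root's value matters */

    /* After the call, every task's nsteps holds the root's value. */
    MPI_Bcast(&nsteps, 1, MPI_INT, 0, comm);
}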
Errors
Develop mode error if:
Related Information
Purpose
Performs a blocking buffered mode send operation.
C Synopsis
#include <mpi.h> int MPI_Bsend(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_BSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST, INTEGER TAG,INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine is a blocking buffered mode send. This is a local operation. It does not depend on the occurrence of a matching receive in order to complete. If a send operation is started and no matching receive is posted, the outgoing message is buffered to allow the send call to complete.
Make sure you have enough buffer space available. An error occurs if the message must be buffered and there is insufficient buffer space.
Return from an MPI_BSEND does not guarantee the message was sent. It may remain in the buffer until a matching receive is posted. MPI_BUFFER_DETACH will block until all messages are received.
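For example, the following sketch attaches a buffer sized for one message plus the required overhead, sends the message in buffered mode, and then detaches the buffer, which blocks until the message has been transmitted; the names and message size are hypothetical:

#include <mpi.h>
#include <stdlib.h>

void bsend_example(MPI_Comm comm, int dest)
{
    double data[100];
    void  *buf, *oldbuf;
    int    i, bufsize, packsize, oldsize;

    for (i = 0; i < 100; i++)
        data[i] = (double) i;

    /* Size the buffer for one message plus the per-message overhead. */
    MPI_Pack_size(100, MPI_DOUBLE, comm, &packsize);
    bufsize = packsize + MPI_BSEND_OVERHEAD;
    buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);

    MPI_Bsend(data, 100, MPI_DOUBLE, dest, 0, comm);

    /* Detaching blocks until the buffered message has been transmitted. */
    MPI_Buffer_detach(&oldbuf, &oldsize);
    free(oldbuf);
}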
Errors
Related Information
Purpose
Creates a persistent buffered mode send request.
C Synopsis
#include <mpi.h> int MPI_Bsend_init(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_BSEND_INIT(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE, INTEGER DEST,INTEGER TAG,INTEGER COMM,INTEGER REQUEST, INTEGER IERROR)
Parameters
Description
This routine creates a persistent communication request for a buffered mode send operation. MPI_START or MPI_STARTALL must be called to activate the send.
Notes
See MPI_BSEND for additional information.
Because MPI_START initiates the communication, any error related to insufficient buffer space occurs at the MPI_START call.
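For example, a sketch that creates a persistent buffered mode send request and reuses it for several iterations; a buffer is assumed to have been attached already with MPI_BUFFER_ATTACH, and the message length and names are hypothetical:

#include <mpi.h>

void bsend_init_example(MPI_Comm comm, int dest, double *data, int niter)
{
    MPI_Request request;
    MPI_Status  status;
    int i;

    /* Create the persistent request; no communication starts yet. */
    MPI_Bsend_init(data, 100, MPI_DOUBLE, dest, 0, comm, &request);

    for (i = 0; i < niter; i++) {
        /* ... refill data for this iteration ... */
        MPI_Start(&request);        /* buffer-space errors surface here */
        MPI_Wait(&request, &status);
    }

    MPI_Request_free(&request);
}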
Errors
Related Information
Purpose
Provides MPI with a buffer to use for buffering messages sent with MPI_BSEND and MPI_IBSEND.
C Synopsis
#include <mpi.h> int MPI_Buffer_attach(void* buffer,int size);
Fortran Synopsis
include 'mpif.h' MPI_BUFFER_ATTACH(CHOICE BUFFER,INTEGER SIZE,INTEGER IERROR)
Parameters
Description
This routine provides MPI with a buffer in the user's memory that is used for buffering outgoing messages. This buffer is used only by messages sent in buffered mode, and only one buffer is attached to a task at any time.
Notes
MPI uses part of the buffer space to store information about the buffered messages. The number of bytes required by MPI for each buffered message is given by MPI_BSEND_OVERHEAD.
If a buffer is already attached, it must be detached by MPI_BUFFER_DETACH before a new buffer can be attached.
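For example, a sketch of sizing a buffer large enough to hold nmsgs outstanding buffered messages of count double-precision values each; nmsgs and count are hypothetical application values:

#include <mpi.h>
#include <stdlib.h>

void attach_example(MPI_Comm comm, int nmsgs, int count)
{
    void *buf;
    int   packsize, bufsize;

    /* Allow the packed message size plus MPI_BSEND_OVERHEAD per message. */
    MPI_Pack_size(count, MPI_DOUBLE, comm, &packsize);
    bufsize = nmsgs * (packsize + MPI_BSEND_OVERHEAD);

    buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);
}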
Errors
Related Information
Purpose
Detaches the buffer currently used for buffering messages sent with MPI_BSEND and MPI_IBSEND.
C Synopsis
#include <mpi.h> int MPI_Buffer_detach(void* buffer,int *size);
Fortran Synopsis
include 'mpif.h' MPI_BUFFER_DETACH(CHOICE BUFFER,INTEGER SIZE,INTEGER IERROR)
Parameters
Description
This routine detaches the current buffer. Blocking occurs until all messages in the active buffer are transmitted. Once this function returns, you can reuse or deallocate the space taken by the buffer. There is an implicit MPI_BUFFER_DETACH inside MPI_FINALIZE. Because a buffer detach can block, the implicit detach creates some risk that an incorrect program will hang in MPI_FINALIZE.
If there is no active buffer, MPI acts as if a buffer of size 0 is associated with the task.
Notes
It is important to detach an attached buffer before it is deallocated. If this is not done, any buffered message may be lost.
In Fortran 77, the buffer argument for MPI_BUFFER_DETACH cannot return a useful value because Fortran 77 does not support pointers. If a fully portable MPI program written in Fortran calls MPI_BUFFER_DETACH, it either passes the name of the original buffer or a throwaway temp as the buffer argument.
If a buffer was attached, this implementation of MPI returns the address of the freed buffer in the first word of the buffer argument. If the size being returned is zero to four bytes, MPI_BUFFER_DETACH will not modify the buffer argument. This implementation is harmless for a program that uses either the original buffer or a throwaway temp of at least word size as buffer. It also allows the programmer who wants to use an XL Fortran POINTER as the buffer argument to do so. Using the POINTER type will affect portability.
Errors
Related Information
Purpose
Marks a nonblocking request for cancellation.
C Synopsis
#include <mpi.h> int MPI_Cancel(MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_CANCEL(INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine marks a nonblocking request for cancellation. The cancel call is local. It returns immediately; it can return even before the communication is actually cancelled. It is necessary to complete an operation marked for cancellation by using a call to MPI_WAIT or MPI_TEST (or any other wait or test call).
You can use MPI_CANCEL to cancel a persistent request in the same way it is used for nonpersistent requests. A successful cancellation cancels the active communication, but not the request itself. After the call to MPI_CANCEL and the subsequent call to MPI_WAIT or MPI_TEST, the request becomes inactive and can be activated for a new communication. It is erroneous to cancel an inactive persistent request.
The successful cancellation of a buffered send frees the buffer space occupied by the pending message.
Either the cancellation succeeds or the operation succeeds, but not both. If a send is marked for cancellation, then either the send completes normally, in which case the message sent was received at the destination task, or the send is successfully cancelled, in which case no part of the message was received at the destination. Then, any matching receive has to be satisfied by another send. If a receive is marked for cancellation, then the receive completes normally or the receive is successfully cancelled, in which case no part of the receive buffer is altered. Then, any matching send has to be satisfied by another receive.
If the operation has been cancelled successfully, information to that effect is returned in the status argument of the operation that completes the communication, and may be retrieved by a call to MPI_TEST_CANCELLED.
Notes
Nonblocking collective communication requests cannot be cancelled. MPI_CANCEL may be called on non-blocking file operation requests. The eventual call to MPI_TEST_CANCELLED will show that the cancellation did not succeed.
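The following sketch cancels a nonblocking receive and then checks whether the cancellation took effect; the tag and wildcard source shown are illustrative assumptions.

```c
#include <mpi.h>

void cancel_example(MPI_Comm comm)
{
    int data, cancelled;
    MPI_Request req;
    MPI_Status status;

    MPI_Irecv(&data, 1, MPI_INT, MPI_ANY_SOURCE, 0, comm, &req);

    MPI_Cancel(&req);        /* local call; returns immediately */
    MPI_Wait(&req, &status); /* the marked request must still be completed */
    MPI_Test_cancelled(&status, &cancelled);

    if (!cancelled) {
        /* a matching send completed first; data holds a valid message */
    }
}
```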
Errors
Related Information
Purpose
Translates task rank in a communicator into cartesian task coordinates.
C Synopsis
#include <mpi.h> int MPI_Cart_coords(MPI_Comm comm,int rank,int maxdims,int *coords);
Fortran Synopsis
include 'mpif.h' MPI_CART_COORDS(INTEGER COMM,INTEGER RANK,INTEGER MAXDIMS, INTEGER COORDS(*),INTEGER IERROR)
Parameters
Description
This routine translates task rank in a communicator into task coordinates.
Notes
Task coordinates in a cartesian structure begin their numbering at 0. Row-major numbering is always used for the tasks in a cartesian structure.
Errors
Related Information
Purpose
Creates a communicator containing topology information.
C Synopsis
#include <mpi.h> int MPI_Cart_create(MPI_Comm comm_old,int ndims,int *dims, int *periods,int reorder,MPI_Comm *comm_cart);
Fortran Synopsis
include 'mpif.h' MPI_CART_CREATE(INTEGER COMM_OLD,INTEGER NDIMS,INTEGER DIMS(*), INTEGER PERIODS(*),INTEGER REORDER,INTEGER COMM_CART,INTEGER IERROR)
Parameters
Description
This routine creates a new communicator containing cartesian topology information defined by ndims, dims, periods and reorder. MPI_CART_CREATE returns a handle for this new communicator in comm_cart. If there are more tasks in comm than required by the grid, some tasks are returned comm_cart = MPI_COMM_NULL. comm_old must be an intracommunicator.
Notes
The reorder argument is ignored.
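As a sketch (the grid shape and periodicity are illustrative assumptions), a two-dimensional grid could be created as follows; tasks beyond the grid size receive MPI_COMM_NULL.

```c
#include <mpi.h>

void make_grid(MPI_Comm *grid_comm)
{
    int dims[2]    = {4, 3}; /* assumed 4 x 3 grid requiring 12 tasks */
    int periods[2] = {1, 0}; /* periodic in dimension 0 only */

    /* reorder is ignored by this implementation */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, grid_comm);
}
```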
Errors
Related Information
Purpose
Retrieves cartesian topology information from a communicator.
C Synopsis
#include <mpi.h> int MPI_Cart_get(MPI_Comm comm,int maxdims,int *dims,int *periods,int *coords);
Fortran Synopsis
include 'mpif.h' MPI_CART_GET(INTEGER COMM,INTEGER MAXDIMS,INTEGER DIMS(*), INTEGER PERIODS(*),INTEGER COORDS(*),INTEGER IERROR)
Parameters
Description
This routine retrieves the cartesian topology information associated with a communicator in dims, periods and coords.
Errors
Related Information
Purpose
Computes placement of tasks on the physical machine.
C Synopsis
#include <mpi.h> int MPI_Cart_map(MPI_Comm comm,int ndims,int *dims,int *periods, int *newrank);
Fortran Synopsis
include 'mpif.h' MPI_CART_MAP(INTEGER COMM,INTEGER NDIMS,INTEGER DIMS(*), INTEGER PERIODS(*),INTEGER NEWRANK,INTEGER IERROR)
Parameters
Description
MPI_CART_MAP allows MPI to compute an optimal placement for the calling task on the physical machine by reordering the tasks in comm.
Notes
No reordering is done by this function; it would serve no purpose on an SP. MPI_CART_MAP returns newrank as the original rank of the calling task if it belongs to the grid, or MPI_UNDEFINED if it does not.
Errors
Purpose
Translates task coordinates into a task rank.
C Synopsis
#include <mpi.h> int MPI_Cart_rank(MPI_Comm comm,int *coords,int *rank);
Fortran Synopsis
include 'mpif.h' MPI_CART_RANK(INTEGER COMM,INTEGER COORDS(*),INTEGER RANK, INTEGER IERROR)
Parameters
Description
This routine translates cartesian task coordinates into a task rank.
For dimension i with periods(i) = true, if the coordinate coords(i) is out of range, that is, coords(i) < 0 or coords(i) ≥ dims(i), it is shifted back to the interval 0 ≤ coords(i) < dims(i) automatically. Out-of-range coordinates are erroneous for non-periodic dimensions.
Notes
Task coordinates in a cartesian structure begin their numbering at 0. Row-major numbering is always used for the tasks in a cartesian structure.
Errors
Related Information
Purpose
Returns shifted source and destination ranks for a task.
C Synopsis
#include <mpi.h> int MPI_Cart_shift(MPI_Comm comm,int direction,int disp, int *rank_source,int *rank_dest);
Fortran Synopsis
include 'mpif.h' MPI_CART_SHIFT(INTEGER COMM,INTEGER DIRECTION,INTEGER DISP, INTEGER RANK_SOURCE,INTEGER RANK_DEST,INTEGER IERROR)
Parameters
Description
This routine shifts the local rank along a specified coordinate dimension to generate source and destination ranks.
rank_source is obtained by subtracting disp from the nth coordinate of the local task, where n is equal to direction. Similarly, rank_dest is obtained by adding disp to the nth coordinate. Coordinate dimensions (direction) are numbered starting with 0.
If the dimension specified by direction is non-periodic, off-end shifts result in the value MPI_PROC_NULL being returned for rank_source and/or rank_dest.
Notes
In C and Fortran, the coordinate is identified by counting from 0. For example, Fortran A(X,Y) and C A[x][y] both have x as direction 0.
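A minimal sketch of a shift-based exchange follows, assuming cart_comm was created with MPI_CART_CREATE; off-end neighbors on a non-periodic dimension come back as MPI_PROC_NULL, which MPI_SENDRECV accepts.

```c
#include <mpi.h>

void shift_exchange(MPI_Comm cart_comm)
{
    int src, dest;
    double send_val = 1.0, recv_val = 0.0;
    MPI_Status status;

    /* neighbors one step away along dimension 0 */
    MPI_Cart_shift(cart_comm, 0, 1, &src, &dest);
    MPI_Sendrecv(&send_val, 1, MPI_DOUBLE, dest, 0,
                 &recv_val, 1, MPI_DOUBLE, src, 0,
                 cart_comm, &status);
}
```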
Errors
Related Information
Purpose
Partitions a cartesian communicator into lower-dimensional subgroups.
C Synopsis
#include <mpi.h> int MPI_Cart_sub(MPI_Comm comm,int *remain_dims,MPI_Comm *newcomm);
Fortran Synopsis
include 'mpif.h' MPI_CART_SUB(INTEGER COMM,LOGICAL REMAIN_DIMS(*),INTEGER NEWCOMM, INTEGER IERROR)
Parameters
Description
If a cartesian topology was created with MPI_CART_CREATE, you can use the function MPI_CART_SUB to partition the communicator group into subgroups that form lower-dimensional cartesian subgrids and to build for each subgroup a communicator with the associated subgrid cartesian topology.
(This function is closely related to MPI_COMM_SPLIT.)
For example, MPI_CART_CREATE (..., comm) defined a 2 × 3 × 4 grid. Let remain_dims = (true, false, true). Then a call to:
MPI_CART_SUB(comm,remain_dims,comm_new),
creates three communicators. Each has eight tasks in a 2 × 4 cartesian topology. If remain_dims = (false, false, true), then the call to:
MPI_CART_SUB(comm,remain_dims,comm_new),
creates six non-overlapping communicators, each with four tasks in a one-dimensional cartesian topology.
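A sketch of the first case above follows, assuming comm carries the 2 × 3 × 4 topology; note that in C the remain_dims argument is an integer array.

```c
#include <mpi.h>

void split_grid(MPI_Comm comm)
{
    int remain_dims[3] = {1, 0, 1}; /* keep dimensions 0 and 2, drop dimension 1 */
    MPI_Comm comm_new;

    MPI_Cart_sub(comm, remain_dims, &comm_new);
    /* comm_new is one of three 2 x 4 communicators with eight tasks each */
}
```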
Errors
Related Information
Purpose
Retrieves the number of cartesian dimensions from a communicator.
C Synopsis
#include <mpi.h> int MPI_Cartdim_get(MPI_Comm comm,int *ndims);
Fortran Synopsis
include 'mpif.h' MPI_CARTDIM_GET(INTEGER COMM,INTEGER NDIMS,INTEGER IERROR)
Parameters
Description
This routine retrieves the number of dimensions in a cartesian topology.
Errors
Related Information
Purpose
Compares the groups and context of two communicators.
C Synopsis
#include <mpi.h> int MPI_Comm_compare(MPI_Comm comm1,MPI_Comm comm2,int *result);
Fortran Synopsis
include 'mpif.h' MPI_COMM_COMPARE(INTEGER COMM1,INTEGER COMM2,INTEGER RESULT,INTEGER IERROR)
Parameters
Description
This routine compares the groups and contexts of two communicators. The following is an explanation of each MPI_COMM_COMPARE defined value:
Errors
Related Information
Purpose
Creates a new intracommunicator with a given group.
C Synopsis
#include <mpi.h> int MPI_Comm_create(MPI_Comm comm,MPI_Group group,MPI_Comm *newcomm);
Fortran Synopsis
include 'mpif.h' MPI_COMM_CREATE(INTEGER COMM,INTEGER GROUP,INTEGER NEWCOMM, INTEGER IERROR)
Parameters
Description
MPI_COMM_CREATE is a collective function that is invoked by all tasks in the group associated with comm. This routine creates a new intracommunicator newcomm with communication group defined by group and a new context. Cached information is not propagated from comm to newcomm.
For tasks that are not in group, MPI_COMM_NULL is returned. The call is erroneous if group is not a subset of the group associated with comm. The call is executed by all tasks in comm even if they do not belong to the new group.
This call applies only to intracommunicators.
Notes
MPI_COMM_CREATE provides a way to subset a group of tasks for the purpose of separate MIMD computation with separate communication space. You can use newcomm in subsequent calls to MPI_COMM_CREATE or other communicator constructors to further subdivide a computation into parallel sub-computations.
Errors
Related Information
Purpose
Creates a new communicator that is a duplicate of an existing communicator.
C Synopsis
#include <mpi.h> int MPI_Comm_dup(MPI_Comm comm,MPI_Comm *newcomm);
Fortran Synopsis
include 'mpif.h' MPI_COMM_DUP(INTEGER COMM,INTEGER NEWCOMM,INTEGER IERROR)
Parameters
Description
MPI_COMM_DUP is a collective function that is invoked by the group associated with comm. This routine duplicates the existing communicator comm with its associated key values.
For each key value, the respective copy callback function determines the attribute value associated with this key in the new communicator. One action that a copy callback may take is to delete the attribute from the new communicator. The routine returns in newcomm a new communicator with the same group and any copied cached information, but a new context.
This call applies to both intra and inter communicators.
Notes
Use this operation to produce a duplicate communication space that has the same properties as the original communicator. This includes attributes and topologies.
This call is valid even if there are pending point to point communications involving the communicator comm.
Remember that MPI_COMM_DUP is collective on the input communicator, so it is erroneous for a thread to attempt to duplicate a communicator that is simultaneously involved in an MPI_COMM_DUP or any collective on some other thread.
Errors
Related Information
Purpose
Marks a communicator for deallocation.
C Synopsis
#include <mpi.h> int MPI_Comm_free(MPI_Comm *comm);
Fortran Synopsis
include 'mpif.h' MPI_COMM_FREE(INTEGER COMM,INTEGER IERROR)
Parameters
Description
This collective function marks either an intra or an inter communicator object for deallocation. MPI_COMM_FREE sets the handle to MPI_COMM_NULL. Actual deallocation of the communicator object occurs when active references to it have completed. The delete callback functions for all cached attributes are called in arbitrary order. The delete functions are called immediately and not deferred until deallocation.
Errors
Related Information
Purpose
Returns the group handle associated with a communicator.
C Synopsis
#include <mpi.h> int MPI_Comm_group(MPI_Comm comm,MPI_Group *group);
Fortran Synopsis
include 'mpif.h' MPI_COMM_GROUP(INTEGER COMM,INTEGER GROUP,INTEGER IERROR)
Parameters
Description
This routine returns the group handle associated with a communicator.
Notes
If comm is an intercommunicator, then group is set to the local group. To determine the remote group of an intercommunicator, use MPI_COMM_REMOTE_GROUP.
Errors
Related Information
Purpose
Returns the rank of the local task in the group associated with a communicator.
C Synopsis
#include <mpi.h> int MPI_Comm_rank(MPI_Comm comm,int *rank);
Fortran Synopsis
include 'mpif.h' MPI_COMM_RANK(INTEGER COMM,INTEGER RANK,INTEGER IERROR)
Parameters
Description
This routine returns the rank of the local task in the group associated with a communicator.
You can use this routine with MPI_COMM_SIZE to determine the amount of concurrency available for a specific job. MPI_COMM_RANK indicates the rank of the task that calls it in the range from 0...size - 1, where size is the return value of MPI_COMM_SIZE.
This routine is a shortcut to accessing the communicator's group with MPI_COMM_GROUP, computing the rank using MPI_GROUP_RANK and freeing the temporary group by using MPI_GROUP_FREE.
If comm is an intercommunicator, rank is the rank of the local task in the local group.
Errors
Related Information
Purpose
Returns the handle of the remote group of an intercommunicator.
C Synopsis
#include <mpi.h> int MPI_Comm_remote_group(MPI_Comm comm,MPI_Group *group);
Fortran Synopsis
include 'mpif.h' MPI_COMM_REMOTE_GROUP(INTEGER COMM,INTEGER GROUP,INTEGER IERROR)
Parameters
Description
This routine is a local operation that returns the handle of the remote group of an intercommunicator.
Notes
To determine the local group of an intercommunicator, use MPI_COMM_GROUP.
Errors
Related Information
Purpose
Returns the size of the remote group of an intercommunicator.
C Synopsis
#include <mpi.h> int MPI_Comm_remote_size(MPI_Comm comm,int *size);
Fortran Synopsis
include 'mpif.h' MPI_COMM_REMOTE_SIZE(INTEGER COMM,INTEGER SIZE,INTEGER IERROR)
Parameters
Description
This routine is a local operation that returns the size of the remote group of an intercommunicator.
Notes
To determine the size of the local group of an intercommunicator, use MPI_COMM_SIZE.
Errors
Related Information
Purpose
Returns the size of the group associated with a communicator.
C Synopsis
#include <mpi.h> int MPI_Comm_size(MPI_Comm comm,int *size);
Fortran Synopsis
include 'mpif.h' MPI_COMM_SIZE(INTEGER COMM,INTEGER SIZE,INTEGER IERROR)
Parameters
Description
This routine returns the size of the group associated with a communicator. It is a shortcut to accessing the communicator's group with MPI_COMM_GROUP, computing the size using MPI_GROUP_SIZE, and freeing the temporary group by using MPI_GROUP_FREE.
If comm is an intercommunicator, size will be the size of the local group. To determine the size of the remote group of an intercommunicator, use MPI_COMM_REMOTE_SIZE.
You can use this routine with MPI_COMM_RANK to determine the amount of concurrency available for a specific library or program. MPI_COMM_RANK indicates the rank of the task that calls it in the range from 0...size - 1, where size is the return value of MPI_COMM_SIZE. The rank and size information can then be used to partition work across the available tasks.
Notes
This function indicates the number of tasks in a communicator. For MPI_COMM_WORLD, it indicates the total number of tasks available.
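A minimal sketch of the rank/size partitioning pattern described above follows; the iteration count N and the round-robin distribution are illustrative assumptions.

```c
#include <mpi.h>

void partition_work(int N)
{
    int rank, size;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* round-robin distribution of N iterations across the tasks */
    for (int i = rank; i < N; i += size) {
        /* task 'rank' handles iteration i */
    }
}
```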
Errors
Related Information
Purpose
Splits a communicator into multiple communicators based on color and key.
C Synopsis
#include <mpi.h> int MPI_Comm_split(MPI_Comm comm,int color,int key,MPI_Comm *newcomm);
Fortran Synopsis
include 'mpif.h' MPI_COMM_SPLIT(INTEGER COMM,INTEGER COLOR,INTEGER KEY, INTEGER NEWCOMM,INTEGER IERROR)
Parameters
Description
MPI_COMM_SPLIT is a collective function that partitions the group associated with comm into disjoint subgroups, one for each value of color. Each subgroup contains all tasks of the same color. Within each subgroup, the tasks are ranked in the order defined by the value of the argument key. Ties are broken according to their rank in the old group. A new communicator is created for each subgroup and returned in newcomm. If a task supplies the color value MPI_UNDEFINED, newcomm returns MPI_COMM_NULL. Even though this is a collective call, each task is allowed to provide different values for color and key.
This call applies only to intracommunicators.
The value of color must be greater than or equal to zero.
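As a sketch, the communicator could be split by rank parity while preserving the original relative order within each subgroup; the color rule shown is an illustrative assumption.

```c
#include <mpi.h>

void split_by_parity(MPI_Comm *half_comm)
{
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int color = rank % 2; /* 0 = even ranks, 1 = odd ranks */
    int key   = rank;     /* preserve the original ordering within each subgroup */

    MPI_Comm_split(MPI_COMM_WORLD, color, key, half_comm);
}
```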
Errors
Related Information
Purpose
Returns the type of a communicator (intra or inter).
C Synopsis
#include <mpi.h> int MPI_Comm_test_inter(MPI_Comm comm,int *flag);
Fortran Synopsis
include 'mpif.h' MPI_COMM_TEST_INTER(INTEGER COMM,LOGICAL FLAG,INTEGER IERROR)
Parameters
Description
This routine is used to determine if a communicator is an inter or an intracommunicator.
If comm is an intercommunicator, the call returns true. If comm is an intracommunicator, the call returns false.
Notes
An intercommunicator can be used as an argument to some of the communicator access routines. However, intercommunicators cannot be used as input to some of the constructor routines for intracommunicators, such as MPI_COMM_CREATE.
Errors
Purpose
Defines a cartesian grid to balance tasks.
C Synopsis
#include <mpi.h> int MPI_Dims_create(int nnodes,int ndims,int *dims);
Fortran Synopsis
include 'mpif.h' MPI_DIMS_CREATE(INTEGER NNODES,INTEGER NDIMS,INTEGER DIMS(*), INTEGER IERROR)
Parameters
Description
This routine creates a cartesian grid with a given number of dimensions and a given number of nodes. The dimensions are constrained to be as close to each other as possible.
If dims[i] is a positive number when MPI_DIMS_CREATE is called, the routine will not modify the number of nodes in dimension i. Only those entries where dims[i]=0 are modified by the call.
Notes
MPI_DIMS_CREATE chooses dimensions so that the resulting grid is as close as possible to being an ndims-dimensional cube.
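A minimal sketch follows; zero entries in dims are free for MPI_DIMS_CREATE to choose, and the two-dimensional case shown is an illustrative assumption.

```c
#include <mpi.h>

void balanced_dims(int dims[2])
{
    int ntasks;
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    dims[0] = 0; /* let MPI choose both dimensions */
    dims[1] = 0;
    MPI_Dims_create(ntasks, 2, dims); /* for example, 12 tasks -> 4 x 3 */
}
```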
Errors
Related Information
Purpose
Registers a user-defined error handler.
C Synopsis
#include <mpi.h> int MPI_Errhandler_create(MPI_Handler_function *function, MPI_Errhandler *errhandler);
Fortran Synopsis
include 'mpif.h' MPI_ERRHANDLER_CREATE(EXTERNAL FUNCTION,INTEGER ERRHANDLER, INTEGER IERROR)
Parameters
Description
MPI_ERRHANDLER_CREATE registers the user routine function for use as an MPI error handler.
You can associate an error handler with a communicator. MPI will use the specified error handling routine for any exception that takes place during a call on this communicator. Different tasks can attach different error handlers to the same communicator. MPI calls not related to a specific communicator are considered as attached to the communicator MPI_COMM_WORLD.
Notes
The MPI standard specifies the following error handler prototype. A correct user error handler would be coded as:
void my_handler(MPI_Comm *comm, int *errcode, ...){}
The Parallel Environment for AIX implementation of MPI passes additional arguments to an error handler. The MPI standard allows this and urges an MPI implementation that does so to document the additional arguments. These additional arguments will be ignored by fully portable user error handlers. Anyone who wants to use the extra errhandler arguments can do so by using the C varargs (or stdargs) facility, but will be writing code that does not port cleanly to other MPI implementations, which may have different additional arguments.
The effective prototype for an error handler in IBM's implementation is:
typedef void (MPI_Handler_function) (MPI_Comm *comm, int *code, char *routine_name, int *flag, int *badval)
The additional arguments are:
The interpretation of badval is context-dependent, so badval is not likely to be useful to a user error handler function that cannot identify this context. The routine_name string is more likely to be useful.
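The following sketch registers a portable handler (it ignores the implementation-specific extra arguments) and attaches it to MPI_COMM_WORLD; the handler body shown is an illustrative assumption.

```c
#include <mpi.h>
#include <stdio.h>

static void my_handler(MPI_Comm *comm, int *errcode, ...)
{
    char msg[MPI_MAX_ERROR_STRING];
    int len;

    MPI_Error_string(*errcode, msg, &len);
    fprintf(stderr, "MPI error: %s\n", msg);
    MPI_Abort(*comm, *errcode); /* terminate rather than continue in an undefined state */
}

void install_handler(void)
{
    MPI_Errhandler handler;

    MPI_Errhandler_create(my_handler, &handler);
    MPI_Errhandler_set(MPI_COMM_WORLD, handler);
    MPI_Errhandler_free(&handler); /* deallocated once no communicator references it */
}
```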
Errors
Related Information
Purpose
Marks an error handler for deallocation.
C Synopsis
#include <mpi.h> int MPI_Errhandler_free(MPI_Errhandler *errhandler);
Fortran Synopsis
include 'mpif.h' MPI_ERRHANDLER_FREE(INTEGER ERRHANDLER,INTEGER IERROR)
Parameters
Description
This routine marks error handler errhandler for deallocation and sets errhandler to MPI_ERRHANDLER_NULL. Actual deallocation occurs when all communicators associated with the error handler have been deallocated.
Errors
Related Information
Purpose
Gets an error handler associated with a communicator.
C Synopsis
#include <mpi.h> int MPI_Errhandler_get(MPI_Comm comm,MPI_Errhandler *errhandler);
Fortran Synopsis
include 'mpif.h' MPI_ERRHANDLER_GET(INTEGER COMM,INTEGER ERRHANDLER,INTEGER IERROR)
Parameters
Description
This routine returns the error handler errhandler currently associated with communicator comm.
Errors
Related Information
Purpose
Associates a new error handler with a communicator.
C Synopsis
#include <mpi.h> int MPI_Errhandler_set(MPI_Comm comm,MPI_Errhandler errhandler);
Fortran Synopsis
include 'mpif.h' MPI_ERRHANDLER_SET(INTEGER COMM, INTEGER ERRHANDLER, INTEGER IERROR)
Parameters
Description
This routine associates error handler errhandler with communicator comm. The association is local.
MPI will use the specified error handling routine for any exception that takes place during a call on this communicator. Different tasks can attach different error handlers to the same communicator. MPI calls not related to a specific communicator are considered as attached to the communicator MPI_COMM_WORLD.
Notes
An error handler that does not end with the MPI job being terminated creates undefined risks. Some errors are harmless, while others are catastrophic. For example, an error detected by one member of a collective operation can result in the other members waiting indefinitely for an operation that will never occur.
It is also important to note that the MPI standard does not specify the state the MPI library should be in after an error occurs. MPI does not provide a way for users to determine how much, if any, damage has been done to the MPI state by a particular error.
The default error handler is MPI_ERRORS_ARE_FATAL, which behaves as if it contains a call to MPI_ABORT. MPI_ERRHANDLER_SET allows users to replace MPI_ERRORS_ARE_FATAL with an alternate error handler. The MPI standard provides MPI_ERRORS_RETURN, and IBM adds the non-standard MPE_ERRORS_WARN. These are pre-defined handlers that cause the error code to be returned and MPI to continue to run. Error handlers that are written by MPI users may call MPI_ABORT. If they do not abort, they too will cause MPI to deliver an error return code to the caller and continue to run.
Error handlers that let MPI return should only be used if every MPI call checks its return code. Continuing to use MPI after an error involves undefined risks. You may do cleanup after an MPI error is detected, as long as it doesn't use MPI calls. This should normally be followed by a call to MPI_ABORT.
The error Invalid error handler will be raised if errhandler is a file error handler (created with the routine MPI_FILE_CREATE_ERRHANDLER). Predefined error handlers, MPI_ERRORS_ARE_FATAL and MPI_ERRORS_RETURN, can be associated with both communicators and file handles.
Errors
Related Information
Purpose
Returns the error class for the corresponding error code.
C Synopsis
#include <mpi.h> int MPI_Error_class(int errorcode,int *errorclass);
Fortran Synopsis
include 'mpif.h' MPI_ERROR_CLASS(INTEGER ERRORCODE,INTEGER ERRORCLASS,INTEGER IERROR)
Parameters
Description
This routine returns the error class corresponding to an error code.
Table 2 lists the valid error classes for threaded and non-threaded libraries.
Table 2. MPI Error Classes: Threaded and Non-Threaded Libraries
Error Classes | Description |
---|---|
MPI_SUCCESS | No error |
MPI_ERR_BUFFER | Non-valid buffer pointer |
MPI_ERR_COUNT | Non-valid count argument |
MPI_ERR_TYPE | Non-valid datatype argument |
MPI_ERR_TAG | Non-valid tag argument |
MPI_ERR_COMM | Non-valid communicator |
MPI_ERR_RANK | Non-valid rank |
MPI_ERR_REQUEST | Non-valid request (handle) |
MPI_ERR_ROOT | Non-valid root |
MPI_ERR_GROUP | Non-valid group |
MPI_ERR_OP | Non-valid operation |
MPI_ERR_TOPOLOGY | Non-valid topology |
MPI_ERR_DIMS | Non-valid dimension argument |
MPI_ERR_ARG | Non-valid argument |
MPI_ERR_IN_STATUS | Error code is in status |
MPI_ERR_PENDING | Pending request |
MPI_ERR_TRUNCATE | Message truncated on receive |
MPI_ERR_INTERN | Internal MPI error |
MPI_ERR_OTHER | Known error not provided |
MPI_ERR_UNKNOWN | Unknown error |
MPI_ERR_LASTCODE | Last standard error code |
Table 3 lists the valid error classes for threaded libraries only.
Table 3. MPI Error Classes: Threaded Libraries Only
Error Classes | Description |
---|---|
MPI_ERR_FILE | Non-valid file handle |
MPI_ERR_NOT_SAME | Collective argument is not identical on all tasks |
MPI_ERR_AMODE | Error related to the amode passed to MPI_FILE_OPEN |
MPI_ERR_UNSUPPORTED_DATAREP | Unsupported datarep passed to MPI_FILE_SET_VIEW |
MPI_ERR_UNSUPPORTED_OPERATION | Unsupported operation, such as seeking on a file that supports sequential access only |
MPI_ERR_NO_SUCH_FILE | File does not exist |
MPI_ERR_FILE_EXISTS | File exists |
MPI_ERR_BAD_FILE | Non-valid file name (the path name is too long, for example) |
MPI_ERR_ACCESS | Permission denied |
MPI_ERR_NO_SPACE | Not enough space |
MPI_ERR_QUOTA | Quota exceeded |
MPI_ERR_READ_ONLY | Read-only file or file system |
MPI_ERR_FILE_IN_USE | File operation could not be completed because the file is currently opened by some task |
MPI_ERR_DUP_DATAREP | Conversion functions could not be registered because a previously-defined data representation was passed to MPI_REGISTER_DATAREP |
MPI_ERR_CONVERSION | An error occurred in a user-supplied data conversion function |
MPI_ERR_IO | Other I/O error |
Notes
For this implementation of MPI, refer to the IBM Parallel Environment for AIX: Messages, which provides a listing of all the error messages issued as well as the error class to which the message belongs. Be aware that the MPI standard is not explicit enough about error classes to guarantee that every implementation of MPI will use the same error class for every detectable user error.
Errors
Related Information
Purpose
Returns the error string for a given error code.
C Synopsis
#include <mpi.h> int MPI_Error_string(int errorcode,char *string, int *resultlen);
Fortran Synopsis
include 'mpif.h' MPI_ERROR_STRING(INTEGER ERRORCODE,CHARACTER STRING(*), INTEGER RESULTLEN,INTEGER IERROR)
Parameters
Description
This routine returns the error string for a given error code. The returned string is null terminated with the terminating byte not counted in resultlen.
Storage for string must be at least MPI_MAX_ERROR_STRING characters long. The number of characters actually written is returned in resultlen.
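A minimal sketch of translating a return code into its class and message follows; it assumes the failing call was made under MPI_ERRORS_RETURN so that the return code rc is meaningful.

```c
#include <mpi.h>
#include <stdio.h>

void report_error(int rc)
{
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len, errclass;

        MPI_Error_class(rc, &errclass);
        MPI_Error_string(rc, msg, &len);
        printf("error class %d: %s\n", errclass, msg);
    }
}
```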
Errors
Related Information
Purpose
Closes the file referred to by its file handle fh. It may also delete the file if the appropriate mode was set when the file was opened.
C Synopsis
#include <mpi.h> int MPI_File_close (MPI_File *fh);
Fortran Synopsis
include 'mpif.h' MPI_FILE_CLOSE(INTEGER FH,INTEGER IERROR)
Parameters
Description
MPI_FILE_CLOSE closes the file referred to by fh and deallocates associated internal data structures. This is a collective operation. The file is also deleted if MPI_MODE_DELETE_ON_CLOSE was set when the file was opened. In this situation, if other tasks have already opened the file and are still accessing it concurrently, these accesses will proceed normally, as if the file had not been deleted, until the tasks close the file. However, new open operations on the file will fail. If I/O operations are pending on fh, an error is returned to all the participating tasks, the file is neither closed nor deleted, and fh remains a valid file handle.
Notes
You are responsible for making sure all outstanding nonblocking requests and split collective operations associated with fh made by a task have completed before that task calls MPI_FILE_CLOSE.
If you call MPI_FINALIZE before all files are closed, an error will be raised on MPI_COMM_WORLD.
MPI_FILE_CLOSE deallocates the file handle object and sets fh to MPI_FILE_NULL.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Returning Errors When a File Is To Be Deleted (MPI Error Class):
Related Information
Purpose
Registers a user-defined error handler that you can associate with an open file.
C Synopsis
#include <mpi.h> int MPI_File_create_errhandler (MPI_File_errhandler_fn *function, MPI_Errhandler *errhandler);
Fortran Synopsis
include 'mpif.h' MPI_FILE_CREATE_ERRHANDLER(EXTERNAL FUNCTION,INTEGER ERRHANDLER, INTEGER IERROR)
Parameters
Description
MPI_FILE_CREATE_ERRHANDLER registers the user routine function for use as an MPI error handler that can be associated with a file handle. Once associated with a file handle, MPI uses the specified error handling routine for any exception that takes place during a call on this file handle.
Notes
Different tasks can associate different error handlers with the same file. MPI_ERRHANDLER_FREE is used to free any error handler.
The MPI standard specifies the following error handler prototype:
typedef void (MPI_File_errhandler_fn) (MPI_File *, int *, ...);
A correct user error handler would be coded as:
void my_handler(MPI_File *fh, int *errcode,...){}
The Parallel Environment for AIX implementation of MPI passes additional arguments to an error handler. The MPI standard allows this and urges an MPI implementation that does so to document the additional arguments. These additional arguments will be ignored by fully portable user error handlers. Anyone who wants to use the extra errhandler arguments can do so by using the C varargs (or stdargs) facility, but will be writing code that does not port cleanly to other MPI implementations, which may have different additional arguments.
The effective prototype for an error handler in IBM's implementation is:
typedef void (MPI_File_errhandler_fn) (MPI_File *fh, int *code, char *routine_name, int *flag, int *badval)
The additional arguments are:
The interpretation of badval is context-dependent, so badval is not likely to be useful to a user error handler function that cannot identify this context. The routine_name string is more likely to be useful.
Errors
Fatal Errors:
Related Information
Purpose
Deletes the file referred to by filename after pending operations on the file complete. New operations cannot be initiated on the file.
C Synopsis
#include <mpi.h> int MPI_File_delete (char *filename,MPI_Info info);
Fortran Synopsis
include 'mpif.h' MPI_FILE_DELETE(CHARACTER*(*) FILENAME,INTEGER INFO, INTEGER IERROR)
Parameters
Description
This routine deletes the file referred to by filename. If other tasks have already opened the file and are still accessing it concurrently, these accesses will proceed normally, as if the file had not been deleted, until the tasks close the file. However, new open operations on the file will fail. There are no hints defined for MPI_FILE_DELETE.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Retrieves the access mode specified when the file was opened.
C Synopsis
#include <mpi.h> int MPI_File_get_amode (MPI_File fh,int *amode);
Fortran Synopsis
include 'mpif.h' MPI_FILE_GET_AMODE(INTEGER FH,INTEGER AMODE,INTEGER IERROR)
Parameters
Description
MPI_FILE_GET_AMODE allows you to retrieve the access mode specified when the file referred to by fh was opened.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Retrieves the current atomicity mode in which the file is accessed.
C Synopsis
#include <mpi.h> int MPI_File_get_atomicity (MPI_File fh,int *flag);
Fortran Synopsis
include 'mpif.h' MPI_FILE_GET_ATOMICITY (INTEGER FH,LOGICAL FLAG,INTEGER IERROR)
Parameters
Description
MPI_FILE_GET_ATOMICITY returns in flag 1 if the atomic mode is enabled for the file referred to by fh, otherwise flag returns 0.
Notes
The atomic mode is set to FALSE by default when the file is first opened. In MPI-2, MPI_FILE_SET_ATOMICITY is defined as the way to set atomicity. However, it is not provided in this release.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Retrieves the error handler currently associated with a file handle.
C Synopsis
#include <mpi.h> int MPI_File_get_errhandler (MPI_File file,MPI_Errhandler *errhandler);
Fortran Synopsis
include 'mpif.h' MPI_FILE_GET_ERRHANDLER (INTEGER FILE,INTEGER ERRHANDLER, INTEGER IERROR)
Parameters
Description
If fh is MPI_FILE_NULL, then MPI_FILE_GET_ERRHANDLER returns in errhandler the default file error handler currently assigned to the calling task. If fh is a valid file handle, then MPI_FILE_GET_ERRHANDLER returns in errhandler the error handler currently associated with the file handle fh. Error handlers may be different at each task.
Notes
At MPI_INIT time, the default file error handler is MPI_ERRORS_RETURN. You can alter the default by calling the routine MPI_FILE_SET_ERRHANDLER and passing MPI_FILE_NULL as the file handle parameter. Any program that uses MPI_ERRORS_RETURN should check function return codes.
Errors
Fatal Errors:
Related Information
Purpose
Retrieves the group of tasks that opened the file.
C Synopsis
#include <mpi.h> int MPI_File_get_group (MPI_File fh,MPI_Group *group);
Fortran Synopsis
include 'mpif.h' MPI_FILE_GET_GROUP (INTEGER FH,INTEGER GROUP,INTEGER IERROR)
Parameters
Description
MPI_FILE_GET_GROUP lets you retrieve in group the group of tasks that opened the file referred to by fh. You are responsible for freeing group via MPI_GROUP_FREE.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Returns a new info object identifying the hints associated with fh.
C Synopsis
#include <mpi.h> int MPI_File_get_info (MPI_File fh,MPI_Info *info_used);
Fortran Synopsis
include 'mpif.h' MPI_FILE_GET_INFO (INTEGER FH,INTEGER INFO_USED, INTEGER IERROR)
Parameters
Description
Because no file hints are defined in this release, MPI_FILE_GET_INFO simply creates a new empty info object and returns its handle in info_used after checking for the validity of the file handle fh. You are responsible for freeing info_used via MPI_INFO_FREE.
Notes
File hints can be specified by the user through the info parameter of the routines MPI_FILE_SET_INFO, MPI_FILE_OPEN, and MPI_FILE_SET_VIEW. MPI can also assign default values to the file hints it supports when these hints are not specified by the user.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Retrieves the current file size.
C Synopsis
#include <mpi.h> int MPI_File_get_size (MPI_File fh,MPI_Offset *size);
Fortran Synopsis
include 'mpif.h' MPI_FILE_GET_SIZE (INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) SIZE, INTEGER IERROR)
Parameters
Description
MPI_FILE_GET_SIZE returns in size the current length in bytes of the open file referred to by fh.
Notes
You can alter the size of the file by calling the routine MPI_FILE_SET_SIZE. The size of the file will also be altered when a write operation to the file results in adding data beyond the current end of the file.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Retrieves the current file view.
C Synopsis
#include <mpi.h> int MPI_File_get_view (MPI_File fh,MPI_Offset *disp, MPI_Datatype *etype,MPI_Datatype *filetype,char *datarep);
Fortran Synopsis
include 'mpif.h' MPI_FILE_GET_VIEW (INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) DISP, INTEGER ETYPE,INTEGER FILETYPE,INTEGER DATAREP,INTEGER IERROR)
Parameters
Description
MPI_FILE_GET_VIEW retrieves the current view associated with the open file referred to by fh. The current view displacement is returned in disp. A reference to the current elementary datatype is returned in etype and a reference to the current file type is returned in filetype. The current data representation is returned in datarep. If etype and filetype are named types, they cannot be freed. If either one is a user-defined type, it should be freed. Use MPI_TYPE_GET_ENVELOPE to identify which types should be freed via MPI_TYPE_FREE. Freeing the MPI_Datatype reference returned by MPI_FILE_GET_VIEW invalidates only this reference.
Notes
To alter the view of the file, you can call the routine MPI_FILE_SET_VIEW.
The use of reference-counted objects is encouraged, but not mandated, by the MPI standard. Another MPI implementation may create new objects instead. The user should be aware of a side effect of the reference count approach. Suppose mytype was created by a call to MPI_TYPE_VECTOR and used so that a later call to MPI_FILE_GET_VIEW returns its handle in hertype. Because both handles identify the same datatype object, attribute changes made with either handle are changes in the single object. That object will exist at least until MPI_TYPE_FREE has been called on both mytype and hertype. Freeing either handle alone will leave the object intact and the other handle will remain valid.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
A nonblocking version of MPI_FILE_READ_AT. The call returns immediately with a request handle that you can use to check for the completion of the read operation.
C Synopsis
#include <mpi.h> int MPI_File_iread_at (MPI_File fh,MPI_Offset offset,void *buf, int count,MPI_Datatype datatype,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_FILE_IREAD_AT (INTEGER FH,INTEGER (KIND=MPI_OFFSET_KIND) OFFSET, CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER REQUEST, INTEGER IERROR)
Parameters
Description
This routine, MPI_FILE_IREAD_AT, is the nonblocking version of MPI_FILE_READ_AT and it performs the same function as MPI_FILE_READ_AT except it immediately returns in request a handle. This request handle can be used to either test or wait for the completion of the read operation or it can be used to cancel the read operation. The memory buffer buf cannot be accessed until the request has completed via a completion routine call. Completion of the request guarantees that the read operation is complete.
When MPI_FILE_IREAD_AT completes, the actual number of bytes read is stored in the completion routine's status argument. If an error occurs during the read operation, the error is returned by the completion routine through its return value or in the appropriate index of the array_of_statuses argument.
If the completion routine is associated with multiple requests, it returns when all requests complete successfully. Or, if one of the requests fails, the errorhandler associated with that request is triggered. If that is an "error return" errorhandler, each element of the array_of_statuses argument is updated to contain MPI_ERR_PENDING for each request that did not yet complete. The first error dictates the outcome of the entire completion routine whether the error is on a file request or a communication request. The order in which requests are processed is not defined.
Notes
A valid call to MPI_CANCEL on the request will return MPI_SUCCESS. The eventual call to MPI_TEST_CANCELLED on the status will show that the cancel was unsuccessful.
Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.
Passing MPI_STATUS_IGNORE for the status argument or MPI_STATUSES_IGNORE for the array_of_statuses argument in the completion routine call is not supported in this release.
If an error occurs during the read operation, the number of bytes contained in the status argument of the completion routine is meaningless.
For additional information, see MPI_FILE_READ_AT.
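A minimal sketch follows: the read is started at offset 0, overlapped with other work, and completed with MPI_WAIT; the buffer size is an illustrative assumption.

```c
#include <mpi.h>

void iread_example(MPI_File fh)
{
    int buf[1000];
    int nread;
    MPI_Request req;
    MPI_Status status;

    MPI_File_iread_at(fh, (MPI_Offset)0, buf, 1000, MPI_INT, &req);
    /* ... overlap computation with the read ... */
    MPI_Wait(&req, &status);                 /* buf is not valid until completion */
    MPI_Get_count(&status, MPI_INT, &nread); /* items actually read */
}
```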
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Error Returned By Completion Routine (MPI Error Class):
Related Information
Purpose
A nonblocking version of MPI_FILE_WRITE_AT. The call returns immediately with a request handle that you can use to check for the completion of the write operation.
C Synopsis
#include <mpi.h> int MPI_File_iwrite_at (MPI_File fh,MPI_Offset offset,void *buf, int count,MPI_Datatype datatype,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_FILE_IWRITE_AT(INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) OFFSET, CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER REQUEST, INTEGER IERROR)
Parameters
Description
This routine, MPI_FILE_IWRITE_AT, is the nonblocking version of MPI_FILE_WRITE_AT and it performs the same function as MPI_FILE_WRITE_AT except it immediately returns in request a handle. This request handle can be used to either test or wait for the completion of the write operation or it can be used to cancel the write operation. The memory buffer buf cannot be modified until the request has completed via a completion routine call (for example, MPI_WAIT, MPI_TEST, or one of the other MPI wait or test functions). Completion of the request does not guarantee that the data has been written to the storage device(s). In particular, written data may still be present in system buffers. However, it guarantees that the memory buffer can be safely reused.
When MPI_FILE_IWRITE_AT completes, the actual number of bytes written is stored in the completion routine's status argument. If an error occurs during the write operation, then the error is returned by the completion routine through its return code or in the appropriate index of the array_of_statuses argument.
If the completion routine is associated with multiple requests, it returns when all requests complete successfully. Or, if one of the requests fails, the errorhandler associated with that request is triggered. If that is an "error return" errorhandler, each element of the array_of_statuses argument is updated to contain MPI_ERR_PENDING for each request that did not yet complete. The first error dictates the outcome of the entire completion routine whether the error is on a file request or a communication request. The order in which requests are processed is not defined.
Notes
A valid call to MPI_CANCEL on the request will return MPI_SUCCESS. The eventual call to MPI_TEST_CANCELLED on the status will show that the cancel was unsuccessful.
Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.
Passing MPI_STATUS_IGNORE for the status argument or MPI_STATUSES_IGNORE for the array_of_statuses argument in the completion routine call is not supported in this release.
If an error occurs during the write operation, the number of bytes contained in the status argument of the completion routine is meaningless.
For more information, see MPI_FILE_WRITE_AT.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Errors Returned By Completion Routine (MPI Error Class):
Related Information
Purpose
Opens the file called filename.
C Synopsis
#include <mpi.h> int MPI_File_open (MPI_Comm comm,char *filename,int amode,MPI_Info info, MPI_File *fh);
Fortran Synopsis
include 'mpif.h' MPI_FILE_OPEN(INTEGER COMM,CHARACTER FILENAME(*),INTEGER AMODE, INTEGER INFO,INTEGER FH,INTEGER IERROR)
Parameters
Description
MPI_FILE_OPEN opens the file referred to by filename, sets the default view on the file, and sets the access mode amode. MPI_FILE_OPEN returns a file handle fh used for all subsequent operations on the file. The file handle fh remains valid until the file is closed (MPI_FILE_CLOSE). The default view is similar to a linear byte stream in the native representation starting at file offset 0. You can call MPI_FILE_SET_VIEW to set a different view of the file.
MPI_FILE_OPEN is a collective operation. comm must be a valid intracommunicator. Values specified for amode by all participating tasks must be identical. The program is erroneous when participating tasks do not refer to the same file through their own instances of filename.
No hints are defined in this release; therefore, info is presumed to be empty.
Notes
This implementation is targeted to the IBM Generalized Parallel File System (GPFS) for production use. It requires that a single GPFS file system be available across all tasks of the MPI job. It can also be used for development purposes on any other file system that supports the POSIX interface (AFS, DFS, JFS, or NFS), as long as the application runs on only one node or workstation.
For AFS, DFS, and NFS, MPI-IO uses file locking for all accesses by default. If other tasks on the same node share the file and also use file locking, file consistency is preserved. If the MPI_FILE_OPEN is done with mode MPI_MODE_UNIQUE_OPEN, file locking is not done.
If you call MPI_FINALIZE before all files are closed, an error will be raised on MPI_COMM_WORLD.
The following access modes (specified in amode) are supported:
In C and C++: You can use bit vector OR to combine these integer constants.
In Fortran: You can use the bit vector IOR intrinsic to combine these integers. If addition is used, each constant should only appear once.
MPI-IO depends on hidden threads that use MPI message passing. MPI-IO cannot be used with MP_SINGLE_THREAD set to yes.
The default for MP_CSS_INTERRUPT is no. If you do not override the default, MPI-IO enables interrupts while files are open. If you have forced interrupts to yes or no, MPI-IO does not alter your selection.
Parameter consistency checking is only performed if the environment variable MP_EUIDEVELOP is set to yes. If this variable is set and the amodes specified are not identical, the error Inconsistent amodes will be raised on some tasks. Similarly, if this variable is set and the file inodes associated with the file names are not identical, the error Inconsistent file inodes will be raised on some tasks. In either case, the error Consistency error occurred on another task will be raised on the other tasks.
When MPI-IO is used correctly, a file name will be represented at every task by the same file system. In one detectable error situation, a file will appear to be on different file system types. For example, a particular file could be visible to some tasks as a GPFS file and to others as NFS-mounted.
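A minimal sketch of a collective open follows; the file name is an illustrative assumption, and no hints are passed because none are defined in this release.

```c
#include <mpi.h>

void open_datafile(MPI_File *fh)
{
    int amode = MPI_MODE_CREATE | MPI_MODE_WRONLY; /* combined with bit vector OR */

    /* the GPFS path shown is an assumed example */
    MPI_File_open(MPI_COMM_WORLD, "/gpfs/scratch/output.dat",
                  amode, MPI_INFO_NULL, fh);
}
```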
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Reads a file starting at the position specified by offset.
C Synopsis
#include <mpi.h> int MPI_File_read_at (MPI_File fh,MPI_Offset offset,void *buf, int count,MPI_Datatype datatype,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_FILE_READ_AT(INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) OFFSET, CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE, INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
MPI_FILE_READ_AT attempts to read from the file referred to by fh count items of type datatype into the buffer buf, starting at the offset offset, relative to the current view. The call returns only when data is available in buf. status contains the number of bytes successfully read and accessor functions MPI_GET_COUNT and MPI_GET_ELEMENTS allow you to extract from status the number of items and the number of intrinsic MPI elements successfully read, respectively. You can check for a read beyond the end of file condition by comparing the number of items requested with the number of items actually read.
Notes
Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.
Passing MPI_STATUS_IGNORE for the status argument is not supported in this release.
If an error is raised, the number of bytes contained in the status argument is meaningless.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
A collective version of MPI_FILE_READ_AT.
C Synopsis
#include <mpi.h> int MPI_File_read_at_all (MPI_File fh,MPI_Offset offset,void *buf, int count,MPI_Datatype datatype,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_FILE_READ_AT_ALL(INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) OFFSET, CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE, INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
MPI_FILE_READ_AT_ALL is the collective version of the routine MPI_FILE_READ_AT. It has exactly the same semantics as its counterpart. The number of bytes actually read by the calling task is returned in status. The call returns when the data requested by the calling task is available in buf. The call does not wait for accesses from other tasks associated with the file handle fh to have data available in their buffers.
Notes
Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.
Passing MPI_STATUS_IGNORE for the status argument is not supported in this release.
If an error is raised, the number of bytes contained in status is meaningless.
For additional information, see MPI_FILE_READ_AT.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Associates a new error handler to a file.
C Synopsis
#include <mpi.h> int MPI_File_set_errhandler (MPI_File fh, MPI_Errhandler errhandler);
Fortran Synopsis
include 'mpif.h' MPI_FILE_SET_ERRHANDLER(INTEGER FH,INTEGER ERRHANDLER, INTEGER IERROR)
Parameters
Description
MPI_FILE_SET_ERRHANDLER associates a new error handler to a file. If fh is equal to MPI_FILE_NULL, then MPI_FILE_SET_ERRHANDLER defines the new default file error handler on the calling task to be error handler errhandler. If fh is a valid file handle, then this routine associates the error handler errhandler with the file referred to by fh.
Notes
The error Invalid error handler is raised if errhandler was created with any error handler create routine other than MPI_FILE_CREATE_ERRHANDLER. You can associate the predefined error handlers, MPI_ERRORS_ARE_FATAL and MPI_ERRORS_RETURN, as well as the implementation-specific MPE_ERRORS_WARN, with file handles.
Errors
Fatal Errors:
Related Information
Purpose
Specifies new hints for an open file.
C Synopsis
#include <mpi.h> int MPI_File_set_info (MPI_File fh,MPI_Info info);
Fortran Synopsis
include 'mpif.h' MPI_FILE_SET_INFO(INTEGER FH,INTEGER INFO,INTEGER IERROR)
Parameters
Description
MPI_FILE_SET_INFO sets any hints that the info object contains for fh. In this release, file hints are not supported, so all info objects will be empty. However, you are free to associate new hints with an open file. They will just be ignored by MPI.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Expands or truncates an open file.
C Synopsis
#include <mpi.h> int MPI_File_set_size (MPI_File fh,MPI_Offset size);
Fortran Synopsis
include 'mpif.h' MPI_FILE_SET_SIZE (INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) SIZE, INTEGER IERROR)
Parameters
Description
MPI_FILE_SET_SIZE is a collective operation that allows you to expand or truncate the open file referred to by fh. All participating tasks must specify the same value for size. If I/O operations are pending on fh, then an error is returned to the participating tasks and the file is not resized.
If size is larger than the current file size, the file length is increased to size and a read of unwritten data in the extended area returns zeros. However, file blocks are not allocated in the extended area. If size is smaller than the current file size, the file is truncated at the position defined by size. File blocks located beyond this point are de-allocated.
Notes
Note that when you specify a value for the size argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.
Parameter consistency checking is only performed if the environment variable MP_EUIDEVELOP is set to yes. If this variable is set and the sizes specified are not identical, the error Inconsistent file sizes will be raised on some tasks, and the error Consistency error occurred on another task will be raised on the other tasks.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Associates a new view with the open file.
C Synopsis
#include <mpi.h> int MPI_File_set_view (MPI_File fh,MPI_Offset disp, MPI_Datatype etype,MPI_Datatype filetype, char *datarep,MPI_Info info);
Fortran Synopsis
include 'mpif.h' MPI_FILE_SET_VIEW (INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) DISP, INTEGER ETYPE,INTEGER FILETYPE,CHARACTER DATAREP(*),INTEGER INFO, INTEGER IERROR)
Parameters
Description
MPI_FILE_SET_VIEW is a collective operation and associates a new view defined by disp, etype, filetype, and datarep with the open file referred to by fh. All participating tasks must specify the same values for datarep and the same extents for etype.
There are no further restrictions on etype and filetype except those referred to in the MPI-2 standard. No checking is performed on the validity of these datatypes. If I/O operations are pending on fh, an error is returned to the participating tasks and the new view is not associated with the file. The only data representation currently supported is native. Because file hints are not supported in this release, the info argument is ignored after its validity is checked.
Notes
Note that when you specify a value for the disp argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.
It is expected that a call to MPI_FILE_SET_VIEW will immediately follow MPI_FILE_OPEN in many instances.
Parameter consistency checking is only performed if the environment variable MP_EUIDEVELOP is set to yes. If this variable is set and the extents of the elementary datatypes specified are not identical, the error Inconsistent elementary datatypes will be raised on some tasks and the error Consistency error occurred on another task will be raised on the other tasks.
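The following sketch gives each task an interleaved view built from a vector filetype; the block length and count are illustrative assumptions, and the only supported data representation, native, is used.

```c
#include <mpi.h>

void set_interleaved_view(MPI_File fh)
{
    int rank, ntasks;
    MPI_Datatype filetype;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* 1000 rounds of one 100-double block per task, tasks interleaved */
    MPI_Type_vector(1000, 100, 100 * ntasks, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    MPI_Offset disp = (MPI_Offset)rank * 100 * sizeof(double);
    MPI_File_set_view(fh, disp, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);
    MPI_Type_free(&filetype); /* the view keeps its own reference */
}
```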
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Commits file updates of an open file to one or more storage devices.
C Synopsis
#include <mpi.h> int MPI_File_sync (MPI_File fh);
Fortran Synopsis
include 'mpif.h' MPI_FILE_SYNC (INTEGER FH,INTEGER IERROR)
Parameters
Description
MPI_FILE_SYNC is a collective operation. It forces the updates to the file referred to by fh to be propagated to the storage device(s) before it returns. If I/O operations are pending on fh, an error is returned to the participating tasks and no sync operation is performed on the file.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Writes to a file starting at the position specified by offset.
C Synopsis
#include <mpi.h> int MPI_File_write_at (MPI_File fh,MPI_Offset offset,void *buf, int count,MPI_Datatype datatype,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_FILE_WRITE_AT(INTEGER FH,INTEGER(KIND=MPI_OFFSET_KIND) OFFSET, CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE, INTEGER STATUS(MPI_STATUS_SIZE), INTEGER IERROR)
Parameters
Description
MPI_FILE_WRITE_AT attempts to write into the file referred to by fh count items of type datatype out of the buffer buf, starting at the offset offset and relative to the current view. MPI_FILE_WRITE_AT returns when it is safe to reuse buf. status contains the number of bytes successfully written and accessor functions MPI_GET_COUNT and MPI_GET_ELEMENTS allow you to extract from status the number of items and the number of intrinsic MPI elements successfully written, respectively.
Notes
Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.
Passing MPI_STATUS_IGNORE for the status argument is not supported in this release.
If an error is raised, the number of bytes contained in status is meaningless.
When the call returns, it does not necessarily mean that the write operation has completed. In particular, written data may still be in system buffers and may not have been written to storage device(s) yet. To ensure that written data is committed to the storage device(s), you must use MPI_FILE_SYNC.
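A minimal sketch follows: each task writes its own block at a rank-dependent byte offset and then commits the data with MPI_FILE_SYNC; the block layout is an illustrative assumption.

```c
#include <mpi.h>

void write_block(MPI_File fh, const double *block, int nitems)
{
    int rank;
    MPI_Status status;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Offset offset = (MPI_Offset)rank * nitems * (MPI_Offset)sizeof(double);

    MPI_File_write_at(fh, offset, (void *)block, nitems, MPI_DOUBLE, &status);
    MPI_File_sync(fh); /* collective: every task that opened fh must call it */
}
```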
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
A collective version of MPI_FILE_WRITE_AT.
C Synopsis
#include <mpi.h> int MPI_File_write_at_all (MPI_File fh,MPI_Offset offset,void *buf, int count,MPI_Datatype datatype,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_FILE_WRITE_AT_ALL (INTEGER FH, INTEGER (KIND=MPI_OFFSET_KIND) OFFSET, CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE, INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
MPI_FILE_WRITE_AT_ALL is the collective version of MPI_FILE_WRITE_AT. The number of bytes actually written by the calling task is stored in status. The call returns when the calling task can safely reuse buf. It does not wait until the buffers in other participating tasks can safely be reused.
Notes
Note that when you specify a value for the offset argument, constants of the appropriate type should be used. In Fortran, constants of type INTEGER(KIND=8) should be used, for example, 45_8.
Passing MPI_STATUS_IGNORE for the status argument is not supported in this release.
If an error is raised, the number of bytes contained in status is meaningless.
When the call returns, it does not necessarily mean that the write operation has completed. In particular, written data may still be in system buffers and may not have been written to storage device(s) yet. To ensure that written data is committed to the storage device(s), you must use MPI_FILE_SYNC.
Errors
Fatal Errors:
Returning Errors (MPI Error Class):
Related Information
Purpose
Terminates all MPI processing.
C Synopsis
#include <mpi.h> int MPI_Finalize(void);
Fortran Synopsis
include 'mpif.h' MPI_FINALIZE(INTEGER IERROR)
Parameters
Description
Make sure this routine is the last MPI call. Any MPI calls made after MPI_FINALIZE raise an error. You must be sure that all pending communications involving a task have completed before the task calls MPI_FINALIZE. You must also be sure that all files opened by the task have been closed before the task calls MPI_FINALIZE.
Although MPI_FINALIZE terminates MPI processing, it does not terminate the task. It is possible to continue with non-MPI processing after calling MPI_FINALIZE, but no other MPI calls (including MPI_INIT) can be made.
In a threaded environment both MPI_INIT and MPI_FINALIZE must be called on the same thread. MPI_FINALIZE closes the communication library and terminates the service threads. It does not affect any threads you created, other than returning an error if one subsequently makes an MPI call. If you had registered a SIGIO handler, it is restored as a signal handler; however, the SIGIO signal is blocked when MPI_FINALIZE returns. If you want to catch SIGIO after MPI_FINALIZE has been called, you should unblock it.
Notes
The MPI standard does not specify the state of MPI tasks after MPI_FINALIZE; therefore, an assumption that all tasks continue may not be portable. If MPI_BUFFER_ATTACH has been used and MPI_BUFFER_DETACH has not been called, there will be an implicit MPI_BUFFER_DETACH within MPI_FINALIZE. See MPI_BUFFER_DETACH.
Errors
Related Information
Purpose
Collects individual messages from each task in comm at the root task.
C Synopsis
#include <mpi.h> int MPI_Gather(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int recvcount,MPI_Datatype recvtype,int root, MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_GATHER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER ROOT, INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine collects individual messages from each task in comm at the root task and stores them in rank order.
The type signature of sendcount, sendtype on task i must be equal to the type signature of recvcount, recvtype at the root. This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.
The following is information regarding MPI_GATHER arguments and tasks:
Note that the argument recvcount at the root indicates the number of items it receives from each task. It is not the total number of items received.
A call where the specification of counts and types causes any location on the root to be written more than once is erroneous.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
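As an illustration, the sketch below gathers one integer from every task onto task 0; the helper name gather_ranks and the buffer handling are assumptions, not part of the MPI interface.

```c
/* Sketch: each task contributes one integer and task 0 gathers them in
   rank order.  gather_ranks() is a hypothetical helper, not part of MPI. */
#include <mpi.h>
#include <stdlib.h>

void gather_ranks(MPI_Comm comm)
{
    int rank, ntasks, myval, *all = NULL;
    const int root = 0;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &ntasks);
    myval = rank;

    if (rank == root)
        all = (int *)malloc(ntasks * sizeof(int));

    /* recvcount (1 here) is the count received from EACH task, not the total. */
    MPI_Gather(&myval, 1, MPI_INT, all, 1, MPI_INT, root, comm);

    if (rank == root)
        free(all);
}
```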
Errors
Develop mode error if:
Related Information
Purpose
Collects individual messages from each task in comm at the root task. Messages can have different sizes and displacements.
C Synopsis
#include <mpi.h> int MPI_Gatherv(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int *recvcounts,int *displs,MPI_Datatype recvtype, int root,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_GATHERV(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNTS(*),INTEGER DISPLS(*), INTEGER RECVTYPE,INTEGER ROOT,INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine collects individual messages from each task in comm at the root task and stores them in rank order. With recvcounts as an array, messages can have varying sizes, and displs allows you the flexibility of where the data is placed on the root.
The type signature of sendcount, sendtype on task i must be equal to the type signature of recvcounts[i], recvtype at the root. This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.
The following is information regarding MPI_GATHERV arguments and tasks:
A call where the specification of sizes, types and displacements causes any location on the root to be written more than once is erroneous.
Notes
Displacements are expressed as elements of type recvtype, not as bytes.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
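The sketch below is an illustrative (not normative) use of MPI_GATHERV in which task i contributes i+1 integers and the root computes recvcounts and displs accordingly; the helper name and the fixed-size send buffer are assumptions.

```c
/* Sketch: task i contributes i+1 integers; the root places each contribution
   contiguously using displacements computed from the counts. */
#include <mpi.h>
#include <stdlib.h>

void gatherv_example(MPI_Comm comm)
{
    int rank, ntasks, i, total = 0;
    int *recvcounts = NULL, *displs = NULL, *recvbuf = NULL;
    int sendbuf[32];                    /* assumes at most 32 items per task */
    const int root = 0;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &ntasks);

    for (i = 0; i <= rank; i++)         /* task i sends i+1 items */
        sendbuf[i] = rank;

    if (rank == root) {
        recvcounts = (int *)malloc(ntasks * sizeof(int));
        displs     = (int *)malloc(ntasks * sizeof(int));
        for (i = 0; i < ntasks; i++) {
            recvcounts[i] = i + 1;      /* matches what task i sends      */
            displs[i]     = total;      /* displacements are in elements  */
            total        += recvcounts[i];
        }
        recvbuf = (int *)malloc(total * sizeof(int));
    }

    MPI_Gatherv(sendbuf, rank + 1, MPI_INT,
                recvbuf, recvcounts, displs, MPI_INT, root, comm);

    if (rank == root) { free(recvcounts); free(displs); free(recvbuf); }
}
```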
Errors
Develop mode error if:
Related Information
Purpose
Returns the number of elements in a message.
C Synopsis
#include <mpi.h> int MPI_Get_count(MPI_Status *status,MPI_Datatype datatype, int *count);
Fortran Synopsis
include 'mpif.h' MPI_GET_COUNT(INTEGER STATUS(MPI_STATUS_SIZE),INTEGER DATATYPE, INTEGER COUNT,INTEGER IERROR)
Parameters
Description
This subroutine returns the number of elements in a message. The datatype argument and the argument provided by the call that set the status variable should match.
When one of the MPI wait or test calls returns status for a non-blocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.
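For illustration, the following sketch receives a message of unknown length and then uses MPI_GET_COUNT to learn how many elements actually arrived; the bound MAXLEN and the helper name are assumptions.

```c
/* Sketch: receive a message of unknown length (up to MAXLEN items) and use
   MPI_Get_count to find how many MPI_INT elements arrived. */
#include <mpi.h>
#include <stdio.h>

#define MAXLEN 100     /* illustrative upper bound on the message length */

void receive_and_count(int source, int tag, MPI_Comm comm)
{
    int        buf[MAXLEN], nitems;
    MPI_Status status;

    MPI_Recv(buf, MAXLEN, MPI_INT, source, tag, comm, &status);

    /* The datatype passed here should match the datatype used in the receive. */
    MPI_Get_count(&status, MPI_INT, &nitems);
    printf("received %d elements\n", nitems);
}
```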
Errors
Related Information
Purpose
Returns the number of basic elements in a message.
C Synopsis
#include <mpi.h> int MPI_Get_elements(MPI_Status *status,MPI_Datatype datatype, int *count);
Fortran Synopsis
include 'mpif.h' MPI_GET_ELEMENTS(INTEGER STATUS(MPI_STATUS_SIZE),INTEGER DATATYPE, INTEGER COUNT,INTEGER IERROR)
Parameters
Description
This routine returns the number of type map elements in a message. When the number of bytes does not align with the type signature, MPI_GET_ELEMENTS returns MPI_UNDEFINED. For example, given type signature (int, short, int, short) a 10 byte message would return 3 while an 8 byte message would return MPI_UNDEFINED.
When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.
Errors
Related Information
Purpose
Returns the name of the local processor.
C Synopsis
#include <mpi.h> int MPI_Get_processor_name(char *name,int *resultlen);
Fortran Synopsis
include 'mpif.h' MPI_GET_PROCESSOR_NAME(CHARACTER NAME(*),INTEGER RESULTLEN, INTEGER IERROR)
Parameters
Description
This routine returns the name of the local processor at the time of the call. The name is a character string from which it is possible to identify a specific piece of hardware. name represents storage that is at least MPI_MAX_PROCESSOR_NAME characters long and MPI_GET_PROCESSOR_NAME can write up to this many characters in name.
The actual number of characters written is returned in resultlen. The returned name is a null terminated C string with the terminating byte not counted in resultlen.
Errors
Purpose
Returns the version of the MPI standard supported in this release.
C Synopsis
#include <mpi.h> int MPI_Get_version(int *version,int *subversion);
Fortran Synopsis
include 'mpif.h' MPI_GET_VERSION(INTEGER VERSION, INTEGER SUBVERSION, INTEGER IERROR)
Parameters
Description
This routine is used to determine the version of the MPI standard supported by the MPI implementation.
There are also new symbolic constants, MPI_VERSION and MPI_SUBVERSION, provided in mpi.h and mpif.h that provide similar compile-time information.
MPI_GET_VERSION can be called before MPI_INIT.
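A minimal sketch follows; it queries the run-time version and prints the compile-time constants, and the helper name is illustrative.

```c
/* Sketch: query the supported MPI standard level; valid even before MPI_Init. */
#include <mpi.h>
#include <stdio.h>

void report_version(void)
{
    int version, subversion;

    MPI_Get_version(&version, &subversion);
    printf("MPI standard level: %d.%d\n", version, subversion);
    printf("compile-time constants: %d.%d\n", MPI_VERSION, MPI_SUBVERSION);
}
```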
Purpose
Creates a new communicator containing graph topology information.
C Synopsis
#include <mpi.h> int MPI_Graph_create(MPI_Comm comm_old,int nnodes, int *index, int *edges,int reorder,MPI_Comm *comm_graph);
Fortran Synopsis
include 'mpif.h' MPI_GRAPH_CREATE(INTEGER COMM_OLD,INTEGER NNODES,INTEGER INDEX(*), INTEGER EDGES(*),INTEGER REORDER,INTEGER COMM_GRAPH, INTEGER IERROR)
Parameters
Description
This routine creates a new communicator containing graph topology information provided by nnodes, index, edges, and reorder. MPI_GRAPH_CREATE returns the handle for this new communicator in comm_graph.
If there are more tasks in comm_old than nnodes, some tasks are returned MPI_COMM_NULL as comm_graph.
Notes
The reorder argument is currently ignored.
The following is an example showing how to define the arguments nnodes, index, and edges. Assume there are four tasks (0, 1, 2, 3) with the following adjacency matrix:
Task | Neighbors |
---|---|
0 | 1, 3 |
1 | 0 |
2 | 3 |
3 | 0, 2 |
Then the input arguments are:
Argument | Input |
---|---|
nnodes | 4 |
index | 2, 3, 4, 6 |
edges | 1, 3, 0, 3, 0, 2 |
Thus, in C, index[0] is the degree of node zero, and index[i] - index[i-1] is the degree of node i, i=1, ..., nnodes-1. The list of neighbors of node zero is stored in edges[j], for 0 ≤ j ≤ index[0]-1, and the list of neighbors of node i, i > 0, is stored in edges[j], index[i-1] ≤ j ≤ index[i]-1.
In Fortran, index(1) is the degree of node zero, and index(i+1) - index(i) is the degree of node i, i=1, ..., nnodes-1. The list of neighbors of node zero is stored in edges(j), for 1 ≤ j ≤ index(1), and the list of neighbors of node i, i > 0, is stored in edges(j), index(i)+1 ≤ j ≤ index(i+1).
Observe that because node 0 indicates that node 1 is a neighbor, node 1 must also indicate that node 0 is its neighbor. For any edge A->B, the edge B->A must also be specified.
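As an illustration, the sketch below creates the four-task graph from the table above; the helper name make_graph is not part of MPI, and reorder is passed as 0 because it is ignored in this implementation.

```c
/* Sketch: create the four-task graph described by the table above. */
#include <mpi.h>

void make_graph(MPI_Comm comm_old, MPI_Comm *comm_graph)
{
    int nnodes  = 4;
    int index[] = {2, 3, 4, 6};            /* cumulative degrees            */
    int edges[] = {1, 3, 0, 3, 0, 2};      /* neighbor lists, node 0 first  */

    MPI_Graph_create(comm_old, nnodes, index, edges, 0 /* reorder */, comm_graph);
}
```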
Errors
Related Information
Purpose
Retrieves graph topology information from a communicator.
C Synopsis
#include <mpi.h> int MPI_Graph_get(MPI_Comm comm,int maxindex,int maxedges, int *index,int *edges);
Fortran Synopsis
include 'mpif.h' MPI_GRAPH_GET(INTEGER COMM,INTEGER MAXINDEX,INTEGER MAXEDGES, INTEGER INDEX(*),INTEGER EDGES(*),INTEGER IERROR)
Parameters
Description
This routine retrieves the index and edges graph topology information associated with a communicator.
Errors
Related Information
Purpose
Computes placement of tasks on the physical machine.
C Synopsis
#include <mpi.h> int MPI_Graph_map(MPI_Comm comm,int nnodes,int *index,int *edges,int *newrank);
Fortran Synopsis
include 'mpif.h' MPI_GRAPH_MAP(INTEGER COMM,INTEGER NNODES,INTEGER INDEX(*), INTEGER EDGES(*),INTEGER NEWRANK,INTEGER IERROR)
Parameters
Description
MPI_GRAPH_MAP allows MPI to compute an optimal placement for the calling task on the physical machine by reordering the tasks in comm.
Notes
MPI_GRAPH_MAP returns newrank as the original rank of the calling task if it belongs to the graph or MPI_UNDEFINED if it does not. Currently, no reordering is done by this function.
Errors
Related Information
Purpose
Returns the neighbors of the given task.
C Synopsis
#include <mpi.h> int MPI_Graph_neighbors(MPI_Comm comm,int rank,int maxneighbors,int *neighbors);
Fortran Synopsis
include 'mpif.h' MPI_GRAPH_NEIGHBORS(INTEGER COMM,INTEGER RANK,INTEGER MAXNEIGHBORS, INTEGER NEIGHBORS(*),INTEGER IERROR)
Parameters
Description
This routine retrieves the adjacency information for a particular task.
Errors
Related Information
Purpose
Returns the number of neighbors of the given task.
C Synopsis
#include <mpi.h> int MPI_Graph_neighbors_count(MPI_Comm comm,int rank, int *neighbors);
Fortran Synopsis
include 'mpif.h' MPI_GRAPH_NEIGHBORS_COUNT(INTEGER COMM,INTEGER RANK, INTEGER NEIGHBORS,INTEGER IERROR)
Parameters
Description
This routine returns the number of neighbors of the given task.
Errors
Related Information
Purpose
Retrieves graph topology information from a communicator.
C Synopsis
#include <mpi.h> int MPI_Graphdims_get(MPI_Comm comm,int *nnodes,int *nedges);
Fortran Synopsis
include 'mpif.h' MPI_GRAPHDIMS_GET(INTEGER COMM,INTEGER NNODES,INTEGER NEDGES, INTEGER IERROR)
Parameters
Description
This routine retrieves the number of nodes and the number of edges in the graph topology associated with a communicator.
Errors
Related Information
Purpose
Compares the contents of two task groups.
C Synopsis
#include <mpi.h> int MPI_Group_compare(MPI_Group group1,MPI_Group group2, int *result);
Fortran Synopsis
include 'mpif.h' MPI_GROUP_COMPARE(INTEGER GROUP1,INTEGER GROUP2,INTEGER RESULT, INTEGER IERROR)
Parameters
Description
This routine compares the contents of two task groups and returns one of the following:
Errors
Related Information
Purpose
Creates a new group that is the difference of two existing groups.
C Synopsis
#include <mpi.h> int MPI_Group_difference(MPI_Group group1,MPI_Group group2, MPI_Group *newgroup);
Fortran Synopsis
include 'mpif.h'
MPI_GROUP_DIFFERENCE(INTEGER GROUP1,INTEGER GROUP2, INTEGER NEWGROUP,INTEGER IERROR)
Parameters
Description
This routine creates a new group that is the difference of two existing groups. The new group consists of all elements of the first group (group1) that are not in the second group (group2), and is ordered as in the first group.
Errors
Related Information
Purpose
Creates a new group by excluding selected tasks of an existing group.
C Synopsis
#include <mpi.h> int MPI_Group_excl(MPI_Group group,int n,int *ranks, MPI_Group *newgroup);
Fortran Synopsis
include 'mpif.h' MPI_GROUP_EXCL(INTEGER GROUP,INTEGER N,INTEGER RANKS(*), INTEGER NEWGROUP,INTEGER IERROR)
Parameters
Description
This routine removes selected tasks from an existing group to create a new group.
MPI_GROUP_EXCL creates a group of tasks newgroup obtained by deleting from group tasks with ranks ranks[0], ..., ranks[n-1]. The ordering of tasks in newgroup is identical to the ordering in group. Each of the n elements of ranks must be a valid rank in group and all elements must be distinct. If n = 0, then newgroup is identical to group.
Errors
Related Information
Purpose
Marks a group for deallocation.
C Synopsis
#include <mpi.h> int MPI_Group_free(MPI_Group *group);
Fortran Synopsis
include 'mpif.h' MPI_GROUP_FREE(INTEGER GROUP,INTEGER IERROR)
Parameters
Description
MPI_GROUP_FREE sets the handle group to MPI_GROUP_NULL and marks the group object for deallocation. Actual deallocation occurs only after all operations involving group are completed. Any active operation using group completes normally but no new calls with meaningful references to the freed group are possible.
Errors
Purpose
Creates a new group consisting of selected tasks from an existing group.
C Synopsis
#include <mpi.h> int MPI_Group_incl(MPI_Group group,int n,int *ranks, MPI_Group *newgroup);
Fortran Synopsis
include 'mpif.h' MPI_GROUP_INCL(INTEGER GROUP,INTEGER N,INTEGER RANKS(*), INTEGER NEWGROUP,INTEGER IERROR)
Parameters
Description
This routine creates a new group consisting of selected tasks from an existing group.
MPI_GROUP_INCL creates a group newgroup consisting of n tasks in group with ranks ranks[0], ..., ranks[n-1]. The task with rank i in newgroup is the task with rank ranks[i] in group.
Each of the n elements of ranks must be a valid rank in group and all elements must be distinct. If n = 0, then newgroup is MPI_GROUP_EMPTY. This function can be used to reorder the elements of a group.
Errors
Related Information
Purpose
Creates a new group that is the intersection of two existing groups.
C Synopsis
#include <mpi.h> int MPI_Group_intersection(MPI_Group group1,MPI_Group group2, MPI_Group *newgroup);
Fortran Synopsis
include 'mpif.h'
MPI_GROUP_INTERSECTION(INTEGER GROUP1,INTEGER GROUP2, INTEGER NEWGROUP,INTEGER IERROR)
Parameters
Description
This routine creates a new group that is the intersection of two existing groups. The new group consists of all elements of the first group (group1) that are also part of the second group (group2), and is ordered as in the first group.
Errors
Related Information
Purpose
Creates a new group by removing selected ranges of tasks from an existing group.
C Synopsis
#include <mpi.h> int MPI_Group_range_excl(MPI_Group group,int n, int ranges[][3],MPI_Group *newgroup);
Fortran Synopsis
include 'mpif.h' MPI_GROUP_RANGE_EXCL(INTEGER GROUP,INTEGER N,INTEGER RANGES(3,*), INTEGER NEWGROUP,INTEGER IERROR)
Parameters
Description
This routine creates a new group by removing selected ranges of tasks from an existing group. Each computed rank must be a valid rank in group and all computed ranks must be distinct.
The function of this routine is equivalent to expanding the array ranges to an array of the excluded ranks and passing the resulting array of ranks and other arguments to MPI_GROUP_EXCL. A call to MPI_GROUP_EXCL is equivalent to a call to MPI_GROUP_RANGE_EXCL with each rank i in ranks replaced by the triplet (i,i,1) in the argument ranges.
Errors
Related Information
Purpose
Creates a new group consisting of selected ranges of tasks from an existing group.
C Synopsis
#include <mpi.h> int MPI_Group_range_incl(MPI_Group group,int n, int ranges[][3],MPI_Group *newgroup);
Fortran Synopsis
include 'mpif.h' MPI_GROUP_RANGE_INCL(INTEGER GROUP,INTEGER N,INTEGER RANGES(3,*), INTEGER NEWGROUP,INTEGER IERROR)
Parameters
Description
This routine creates a new group consisting of selected ranges of tasks from an existing group. The function of this routine is equivalent to expanding the array of ranges to an array of the included ranks and passing the resulting array of ranks and other arguments to MPI_GROUP_INCL. A call to MPI_GROUP_INCL is equivalent to a call to MPI_GROUP_RANGE_INCL with each rank i in ranks replaced by the triplet (i,i,1) in the argument ranges.
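For illustration, the sketch below builds a subgroup of the even-ranked tasks with a single (first, last, stride) triplet; the helper name is an assumption.

```c
/* Sketch: build a group containing the even-ranked tasks of an existing
   group by using a single (first, last, stride) triplet. */
#include <mpi.h>

void even_subgroup(MPI_Group group, MPI_Group *newgroup)
{
    int size;
    int ranges[1][3];

    MPI_Group_size(group, &size);

    ranges[0][0] = 0;          /* first rank included        */
    ranges[0][1] = size - 1;   /* last rank considered       */
    ranges[0][2] = 2;          /* stride: every second rank  */

    MPI_Group_range_incl(group, 1, ranges, newgroup);
}
```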
Errors
Related Information
Purpose
Returns the rank of the local task with respect to group.
C Synopsis
#include <mpi.h> int MPI_Group_rank(MPI_Group group,int *rank);
Fortran Synopsis
include 'mpif.h' MPI_GROUP_RANK(INTEGER GROUP,INTEGER RANK,INTEGER IERROR)
Parameters
Description
This routine returns the rank of the local task with respect to group. This local operation does not require any intertask communication.
Errors
Related Information
Purpose
Returns the number of tasks in a group.
C Synopsis
#include <mpi.h> int MPI_Group_size(MPI_Group group,int *size);
Fortran Synopsis
include 'mpif.h' MPI_GROUP_SIZE(INTEGER GROUP,INTEGER SIZE,INTEGER IERROR)
Parameters
Description
This routine returns the number of tasks in a group. This is a local operation and does not require any intertask communication.
Errors
Related Information
Purpose
Converts task ranks of one group into ranks of another group.
C Synopsis
#include <mpi.h> int MPI_Group_translate_ranks(MPI_Group group1,int n, int *ranks1,MPI_Group group2,int *ranks2);
Fortran Synopsis
include 'mpif.h' MPI_GROUP_TRANSLATE_RANKS(INTEGER GROUP1, INTEGER N, INTEGER RANKS1(*),INTEGER GROUP2,INTEGER RANKS2(*),INTEGER IERROR)
Parameters
Description
This subroutine converts task ranks of one group into ranks of another group. For example, if you know the ranks of tasks in one group, you can use this function to find the ranks of tasks in another group.
Errors
Related Information
MPI_COMM_COMPARE
Purpose
Creates a new group that is the union of two existing groups.
C Synopsis
#include <mpi.h> int MPI_Group_union(MPI_Group group1,MPI_Group group2, MPI_Group *newgroup);
Fortran Synopsis
include 'mpif.h'
MPI_GROUP_UNION(INTEGER GROUP1,INTEGER GROUP2,INTEGER NEWGROUP, INTEGER IERROR)
Parameters
Description
This routine creates a new group that is the union of two existing groups. The new group consists of the elements of the first group (group1) followed by all the elements of the second group (group2) not in the first group.
Errors
Related Information
Purpose
Performs a nonblocking buffered mode send operation.
C Synopsis
#include <mpi.h> int MPI_Ibsend(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_IBSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST, INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
MPI_IBSEND starts a buffered mode, nonblocking send. The send buffer may not be modified until the request has been completed by MPI_WAIT, MPI_TEST, or one of the other MPI wait or test functions.
Notes
See MPI_BSEND for additional information.
Errors
Develop mode error if:
Related Information
Purpose
Creates a new, empty info object.
C Synopsis
#include <mpi.h> int MPI_Info_create (MPI_Info *info);
Fortran Synopsis
include 'mpif.h' MPI_INFO_CREATE (INTEGER INFO,INTEGER IERROR)
Parameters
Description
MPI_INFO_CREATE creates a new info object and returns a handle to it in the info argument.
Because this release does not recognize any key, info objects are always empty.
Errors
Fatal Errors:
Related Information
Purpose
Deletes a (key, value) pair from an info object.
C Synopsis
#include <mpi.h> int MPI_Info_delete (MPI_Info info,char *key);
Fortran Synopsis
include 'mpif.h' MPI_INFO_DELETE (INTEGER INFO,CHARACTER KEY(*), INTEGER IERROR)
Parameters
Description
MPI_INFO_DELETE deletes a pair (key, value) from info. If the key is not recognized by MPI, it is ignored and the call returns MPI_SUCCESS and has no effect on info.
Because this release does not recognize any key, this call always returns MPI_SUCCESS and has no effect on info.
Errors
Fatal Errors:
Related Information
Purpose
Duplicates an info object.
C Synopsis
#include <mpi.h> int MPI_Info_dup (MPI_Info info,MPI_Info *newinfo);
Fortran Synopsis
include 'mpif.h' MPI_INFO_DUP (INTEGER INFO,INTEGER NEWINFO,INTEGER IERROR)
Parameters
Description
MPI_INFO_DUP duplicates the info object referred to by info and returns in newinfo a handle to this newly created info object.
Because this release does not recognize any key, the new info object is empty.
Errors
Fatal Errors:
Related Information
Purpose
Frees the info object referred to by the info argument and sets it to MPI_INFO_NULL.
C Synopsis
#include <mpi.h> int MPI_Info_free (MPI_Info *info);
Fortran Synopsis
include 'mpif.h' MPI_INFO_FREE (INTEGER INFO,INTEGER IERROR)
Parameters
Description
MPI_INFO_FREE frees the info object referred to by the info argument and sets info to MPI_INFO_NULL.
Errors
Fatal Errors:
Related Information
Purpose
Retrieves the value associated with key in an info object.
C Synopsis
#include <mpi.h> int MPI_Info_get (MPI_Info info,char *key,int valuelen, char *value,int *flag);
Fortran Synopsis
include 'mpif.h' MPI_INFO_GET (INTEGER INFO,CHARACTER KEY(*),INTEGER VALUELEN, CHARACTER VALUE(*),LOGICAL FLAG,INTEGER IERROR)
Parameters
Description
MPI_INFO_GET retrieves the value associated with key in the info object referred to by info.
Because this release does not recognize any key, flag is set to false, value remains unchanged, and valuelen is ignored.
Notes
In order to determine how much space should be allocated for the value argument, call MPI_INFO_GET_VALUELEN first.
Errors
Fatal Errors:
Related Information
Purpose
Returns the number of keys defined in an info object.
C Synopsis
#include <mpi.h> int MPI_Info_get_nkeys (MPI_Info info,int *nkeys);
Fortran Synopsis
include 'mpif.h' MPI_INFO_GET_NKEYS (INTEGER INFO,INTEGER NKEYS,INTEGER IERROR)
Parameters
Description
MPI_INFO_GET_NKEYS returns in nkeys the number of keys currently defined in the info object referred to by info.
Because this release does not recognize any key, the number of keys returned is zero.
Errors
Fatal Errors:
Related Information
Purpose
Retrieves the nth key defined in an info object.
C Synopsis
#include <mpi.h> int MPI_Info_get_nthkey (MPI_Info info, int n, char *key);
Fortran Synopsis
include 'mpif.h' MPI_INFO_GET_NTHKEY (INTEGER INFO,INTEGER N,CHARACTER KEY(*), INTEGER IERROR)
Parameters
Description
MPI_INFO_GET_NTHKEY retrieves the nth key defined in the info object referred to by info. The first key defined has the rank of 0, so n must be greater than -1 but less than the number of keys returned by MPI_INFO_GET_NKEYS.
Because this release does not recognize any key, this function always raises an error.
Errors
Fatal Errors:
Related Information
Purpose
Retrieves the length of the value associated with a key of an info object.
C Synopsis
#include <mpi.h> int MPI_Info_get_valuelen (MPI_Info info,char *key,int *valuelen, int *flag);
Fortran Synopsis
include 'mpif.h' MPI_INFO_GET_VALUELEN (INTEGER INFO,CHARACTER KEY(*),INTEGER VALUELEN, LOGICAL FLAG,INTEGER IERROR)
Parameters
Description
MPI_INFO_GET_VALUELEN retrieves the length of the value associated with the key in the info object referred to by info.
Because this release does not recognize any key, flag is set to false and valuelen remains unchanged.
Notes
Use this routine prior to calling MPI_INFO_GET to determine how much space must be allocated for the value parameter of MPI_INFO_GET.
Errors
Fatal Errors:
Related Information
Purpose
Adds a pair (key, value) to an info object.
C Synopsis
#include <mpi.h> int MPI_Info_set(MPI_Info info,char *key,char *value);
Fortran Synopsis
include 'mpif.h' MPI_INFO_SET (INTEGER INFO,CHARACTER KEY(*),CHARACTER VALUE(*), INTEGER IERROR)
Parameters
Description
MPI_INFO_SET adds a recognized (key, value) pair to the info object referred to by info. When MPI_INFO_SET is called with a key which is not recognized, it behaves as a no-op.
Because this release does not recognize any key, the info object remains unchanged.
Errors
Fatal Errors:
Related Information
Purpose
Initializes MPI.
C Synopsis
#include <mpi.h> int MPI_Init(int *argc,char ***argv);
Fortran Synopsis
include 'mpif.h' MPI_INIT(INTEGER IERROR)
Parameters
Description
This routine initializes MPI. All MPI programs must call this routine before any other MPI routine (with the exception of MPI_INITIALIZED). More than one call to MPI_INIT by any task is erroneous.
Notes
argc and argv are the arguments passed to main. The IBM MPI implementation of the MPI Standard does not examine or modify these arguments when passed to MPI_INIT.
In a threaded environment, MPI_INIT needs to be called once per task, not once per thread. You do not need to call it on the main thread, but both MPI_INIT and MPI_FINALIZE must be called on the same thread.
MPI_INIT opens a local socket and binds it to a port, sends that information to POE, receives a list of destination addresses and ports, opens a socket to send to each one, verifies that communication can be established, and distributes MPI internal state to each task.
In the signal-handling library, this work is done in the initialization stub added by POE, so that the library is open when your main program is called. MPI_INIT sets a flag saying that you called it.
In the threaded library, the work of MPI_INIT is done when the function is called. The local socket is not open when your main program starts. This may affect the numbering of file descriptors, the use of the environment strings, and the treatment of stdin (the MP_HOLD_STDIN variable). If an existing non-threaded program is relinked using the threaded library, the code prior to calling MPI_INIT should be examined with these thoughts in mind.
Also for the threaded library, if you had registered a function as an AIX signal handler for the SIGIO signal at the time that MPI_INIT was called, that function will be added to the interrupt service thread and be processed as a thread function rather than as a signal handler. You'll need to set the environment variable MP_CSS_INTERRUPT=YES to get arriving packets to invoke the interrupt service thread.
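The following sketch shows the usual shape of an MPI program, with MPI_INIT first and MPI_FINALIZE last; it is illustrative only.

```c
/* Sketch: MPI_Init before any other MPI call, MPI_Finalize after all
   communication has completed and all files have been closed. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, ntasks;

    MPI_Init(&argc, &argv);            /* must precede all other MPI calls   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    printf("task %d of %d\n", rank, ntasks);

    MPI_Finalize();                    /* no MPI calls are allowed after this */
    return 0;
}
```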
Errors
Related Information
Purpose
Determines whether MPI is initialized.
C Synopsis
#include <mpi.h> int MPI_Initialized(int *flag);
Fortran Synopsis
include 'mpif.h' MPI_INITIALIZED(INTEGER FLAG,INTEGER IERROR)
Parameters
Description
This routine determines if MPI is initialized. This and MPI_GET_VERSION are the only MPI calls that can be made before MPI_INIT is called.
Notes
Because it is erroneous to call MPI_INIT more than once per task, use MPI_INITIALIZED if there is doubt as to the state of MPI.
Related Information
Purpose
Creates an intercommunicator from two intracommunicators.
C Synopsis
#include <mpi.h> int MPI_Intercomm_create(MPI_Comm local_comm,int local_leader, MPI_Comm peer_comm,int remote_leader,int tag,MPI_Comm *newintercom);
Fortran Synopsis
include 'mpif.h' MPI_INTERCOMM_CREATE(INTEGER LOCAL_COMM,INTEGER LOCAL_LEADER, INTEGER PEER_COMM,INTEGER REMOTE_LEADER,INTEGER TAG, INTEGER NEWINTERCOM,INTEGER IERROR)
Parameters
Description
This routine creates an intercommunicator from two intracommunicators and is collective over the union of the local and the remote groups. Tasks should provide identical local_comm and local_leader arguments within each group. Wildcards are not permitted for remote_leader, local_leader, and tag.
MPI_INTERCOMM_CREATE uses point-to-point communication with communicator peer_comm and tag tag between the leaders. Make sure that there are no pending communications on peer_comm that could interfere with this communication. It is recommended that you use a dedicated peer communicator, such as a duplicate of MPI_COMM_WORLD, to avoid interference on the peer communicator.
Errors
Related Information
Purpose
Creates an intracommunicator by merging the local and the remote groups of an intercommunicator.
C Synopsis
#include <mpi.h> int MPI_Intercomm_merge(MPI_Comm intercomm,int high, MPI_Comm *newintracom);
Fortran Synopsis
include 'mpif.h' MPI_INTERCOMM_MERGE(INTEGER INTERCOMM,INTEGER HIGH, INTEGER NEWINTRACOMM,INTEGER IERROR)
Parameters
Description
This routine creates an intracommunicator from the union of two groups associated with intercomm. Tasks should provide the same high value within each of the two groups. If tasks in one group provide the value high = false and tasks in the other group provide the value high = true, then the union orders the "low" group before the "high" group. If all tasks provided the same high argument, then the order of the union is arbitrary.
This call is blocking and collective within the union of the two groups.
Errors
Related Information
Purpose
Checks to see if a message matching source, tag, and comm has arrived.
C Synopsis
#include <mpi.h> int MPI_Iprobe(int source,int tag,MPI_Comm comm,int *flag, MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_IPROBE(INTEGER SOURCE,INTEGER TAG,INTEGER COMM,INTEGER FLAG, INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
This routine allows you to check for incoming messages without actually receiving them.
MPI_IPROBE(source, tag, comm, flag, status) returns flag = true when there is a message that can be received that matches the pattern specified by the arguments source, tag, and comm. The call matches the same message that would have been received by a call to MPI_RECV(..., source, tag, comm, status) executed at the same point in the program and returns in status the same values that would have been returned by MPI_RECV(). Otherwise, the call returns flag = false and leaves status undefined.
When MPI_IPROBE returns flag = true, the content of the status object can be accessed to find the source, tag and length of the probed message.
A subsequent receive executed with the same comm, and the source and tag returned in status by MPI_IPROBE receives the message that was matched by the probe, if no other intervening receive occurs after the initial probe.
source can be MPI_ANY_SOURCE and tag can be MPI_ANY_TAG. This allows you to probe messages from any source and/or with any tag, but you must provide a specific communicator with comm.
When a message is not received immediately after it is probed, the same message can be probed for several times before it is received.
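As an illustration, the sketch below polls with MPI_IPROBE, sizes the receive buffer from the probed status, and then receives the same message; it assumes a single receiving thread (see the Notes below) and the helper name is illustrative.

```c
/* Sketch: poll for a message, size the receive buffer from the probed
   status, then receive it.  Assumes a single receiving thread. */
#include <mpi.h>
#include <stdlib.h>

void probe_then_receive(int source, int tag, MPI_Comm comm)
{
    int        flag = 0, count;
    int       *buf;
    MPI_Status status;

    while (!flag)
        MPI_Iprobe(source, tag, comm, &flag, &status);

    MPI_Get_count(&status, MPI_INT, &count);
    buf = (int *)malloc(count * sizeof(int));

    /* Use the source and tag from status so the same message is matched. */
    MPI_Recv(buf, count, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
             comm, &status);
    free(buf);
}
```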
Notes
In a threaded environment, MPI_PROBE or MPI_IPROBE followed by MPI_RECV, based on the information from the probe, may not be a thread-safe operation. You must ensure that no other thread received the detected message.
An MPI_IPROBE cannot prevent a message from being cancelled successfully by the sender, making it unavailable for the MPI_RECV. Structure your program so this will not occur.
Errors
Related Information
Purpose
Performs a nonblocking receive operation.
C Synopsis
#include <mpi.h> int MPI_Irecv(void* buf,int count,MPI_Datatype datatype, int source,int tag,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_IRECV(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER SOURCE, INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine starts a nonblocking receive and returns a handle to a request object. You can later use the request to query the status of the communication or wait for it to complete.
A nonblocking receive call means the system may start writing data into the receive buffer. Once the nonblocking receive operation is called, do not access any part of the receive buffer until the receive is complete.
Notes
The message received must be less than or equal to the length of the receive buffer. If an incoming message does not fit without truncation, an overflow error occurs. If a message arrives that is shorter than the receive buffer, then only those locations corresponding to the actual message are changed. If an overflow occurs, it is flagged at the MPI_WAIT or MPI_TEST. See MPI_RECV for additional information.
Errors
Related Information
Purpose
Performs a nonblocking ready mode send operation.
C Synopsis
#include <mpi.h> int MPI_Irsend(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_IRSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST, INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
MPI_IRSEND starts a ready mode, nonblocking send. The send buffer may not be modified until the request has been completed by MPI_WAIT, MPI_TEST, or one of the other MPI wait or test functions.
Notes
See MPI_RSEND for additional information.
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking standard mode send operation.
C Synopsis
#include <mpi.h> int MPI_Isend(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_ISEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST, INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine starts a nonblocking standard mode send. The send buffer may not be modified until the request has been completed by MPI_WAIT, MPI_TEST, or one of the other MPI wait or test functions.
Notes
See MPI_SEND for additional information.
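For illustration, the sketch below pairs MPI_ISEND with MPI_IRECV between two partner tasks and completes both requests with MPI_WAITALL; the helper name is an assumption.

```c
/* Sketch: exchange one integer between two partner tasks with nonblocking
   calls, then complete both requests before touching the buffers. */
#include <mpi.h>

void exchange(int partner, MPI_Comm comm)
{
    int         sendval = 1, recvval;
    MPI_Request req[2];
    MPI_Status  stat[2];

    MPI_Irecv(&recvval, 1, MPI_INT, partner, 0, comm, &req[0]);
    MPI_Isend(&sendval, 1, MPI_INT, partner, 0, comm, &req[1]);

    /* Neither buffer may be reused or read until the requests complete. */
    MPI_Waitall(2, req, stat);
}
```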
Errors
Develop mode error if:
Related Information
Purpose
Performs a nonblocking synchronous mode send operation.
C Synopsis
#include <mpi.h> int MPI_Issend(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_ISSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST, INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
MPI_ISSEND starts a synchronous mode, nonblocking send. The send buffer may not be modified until the request has been completed by MPI_WAIT, MPI_TEST, or one of the other MPI wait or test functions.
Notes
See MPI_SSEND for additional information.
Errors
Develop mode error if:
Related Information
Purpose
Generates a new attribute key.
C Synopsis
#include <mpi.h> int MPI_Keyval_create(MPI_Copy_function *copy_fn, MPI_Delete_function *delete_fn,int *keyval, void* extra_state);
Fortran Synopsis
include 'mpif.h' MPI_KEYVAL_CREATE(EXTERNAL COPY_FN,EXTERNAL DELETE_FN, INTEGER KEYVAL,INTEGER EXTRA_STATE,INTEGER IERROR)
Parameters
Description
This routine generates a new attribute key. Keys are locally unique in a task, opaque to the user, and are explicitly stored in integers. Once allocated, keyval can be used to associate attributes and access them on any locally defined communicator. copy_fn is invoked when a communicator is duplicated by MPI_COMM_DUP. It should be of type MPI_COPY_FUNCTION, which is defined as follows:
In C:
typedef int MPI_Copy_function (MPI_Comm oldcomm,int keyval, void *extra_state,void *attribute_val_in, void *attribute_val_out,int *flag);
In Fortran:
SUBROUTINE COPY_FUNCTION(INTEGER OLDCOMM,INTEGER KEYVAL, INTEGER EXTRA_STATE,INTEGER ATTRIBUTE_VAL_IN, INTEGER ATTRIBUTE_VAL_OUT,LOGICAL FLAG,INTEGER IERROR)
You can use the predefined functions MPI_NULL_COPY_FN and MPI_DUP_FN to never copy or to always copy, respectively.
delete_fn is invoked when a communicator is deleted by MPI_COMM_FREE or when a call is made to MPI_ATTR_DELETE. A call to MPI_ATTR_PUT that overlays a previously put attribute also causes delete_fn to be called. It should be defined as follows:
In C:
typedef int MPI_Delete_function (MPI_Comm comm,int keyval, void *attribute_val, void *extra_state);
In Fortran:
SUBROUTINE DELETE_FUNCTION(INTEGER COMM,INTEGER KEYVAL, INTEGER ATTRIBUTE_VAL,INTEGER EXTRA_STATE, INTEGER IERROR)
You can use the predefined function MPI_NULL_DELETE_FN if no special handling of attribute deletions is required.
In Fortran, the value of extra_state is recorded by MPI_KEYVAL_CREATE and the callback functions should not attempt to modify this value.
The MPI standard requires that when copy_fn or delete_fn gives a return code other than MPI_SUCCESS, the MPI routine in which this occurs must fail. The standard does not suggest that the copy_fn or delete_fn return code be used as the MPI routine's return value. The standard does require that an MPI return code be in the range between MPI_SUCCESS and MPI_ERR_LASTCODE. It places no range limits on copy_fn or delete_fn return codes. For this reason, we provide a specific error code for a copy_fn failure and another for a delete_fn failure. These error codes can be found in error class MPI_ERR_OTHER. The copy_fn or the delete_fn return code is not preserved.
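The following sketch (illustrative, not normative) defines a copy function and a delete function for an attribute that is a malloc'ed integer and binds them to a new keyval; the function names are assumptions.

```c
/* Sketch: a keyval whose attribute is a malloc'ed integer; the copy function
   duplicates it on MPI_Comm_dup and the delete function frees it. */
#include <mpi.h>
#include <stdlib.h>

int my_copy_fn(MPI_Comm oldcomm, int keyval, void *extra_state,
               void *attribute_val_in, void *attribute_val_out, int *flag)
{
    int *copy = (int *)malloc(sizeof(int));

    *copy = *(int *)attribute_val_in;
    *(void **)attribute_val_out = copy;
    *flag = 1;                 /* the attribute is copied to the new communicator */
    return MPI_SUCCESS;
}

int my_delete_fn(MPI_Comm comm, int keyval, void *attribute_val,
                 void *extra_state)
{
    free(attribute_val);       /* release the attribute when it is deleted */
    return MPI_SUCCESS;
}

void make_keyval(int *keyval)
{
    MPI_Keyval_create(my_copy_fn, my_delete_fn, keyval, NULL);
}
```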
Errors
Related Information
Purpose
Marks an attribute key for deallocation.
C Synopsis
#include <mpi.h> int MPI_Keyval_free(int *keyval);
Fortran Synopsis
include 'mpif.h' MPI_KEYVAL_FREE(INTEGER KEYVAL,INTEGER IERROR)
Parameters
Description
This routine sets keyval to MPI_KEYVAL_INVALID and marks the attribute key for deallocation. You can free an attribute key that is in use because the actual deallocation occurs only when all active references to it are complete. These references, however, need to be explicitly freed. Use calls to MPI_ATTR_DELETE to free one attribute instance. To free all attribute instances associated with a communicator, use MPI_COMM_FREE.
Errors
Related Information
Purpose
Binds a user-defined reduction operation to an op handle.
C Synopsis
#include <mpi.h> int MPI_Op_create(MPI_User_function *function,int commute, MPI_Op *op);
Fortran Synopsis
include 'mpif.h' MPI_OP_CREATE(EXTERNAL FUNCTION,INTEGER COMMUTE,INTEGER OP, INTEGER IERROR)
Parameters
Description
This routine binds a user-defined reduction operation to an op handle which you can then use in MPI_REDUCE, MPI_ALLREDUCE, MPI_REDUCE_SCATTER and MPI_SCAN and their nonblocking equivalents.
The user-defined operation is assumed to be associative. If commute = true, then the operation must be both commutative and associative. If commute = false, then the order of the operation is fixed. The order is defined in ascending task rank order and begins with task zero.
function is a user-defined function. It must have the following four arguments: invec, inoutvec, len, and datatype.
The following is the ANSI-C prototype for the function:
typedef void MPI_User_function(void *invec, void *inoutvec, int *len, MPI_Datatype *datatype);
The following is the Fortran declaration for the function:
SUBROUTINE USER_FUNCTION(INVEC(*), INOUTVEC(*), LEN, TYPE) <type> INVEC(LEN), INOUTVEC(LEN) INTEGER LEN, TYPE
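As an illustration, the sketch below defines a user function that forms an element-wise maximum of absolute values for MPI_INT data and binds it to an op handle; the function names, and the restriction to MPI_INT, are assumptions.

```c
/* Sketch: a user-defined operation valid only for MPI_INT data that takes
   the element-wise maximum of absolute values. */
#include <mpi.h>
#include <stdlib.h>

void absmax_fn(void *invec, void *inoutvec, int *len, MPI_Datatype *datatype)
{
    int  i;
    int *in    = (int *)invec;
    int *inout = (int *)inoutvec;

    for (i = 0; i < *len; i++) {
        int a = abs(in[i]), b = abs(inout[i]);
        inout[i] = (a > b) ? a : b;
    }
}

void make_absmax_op(MPI_Op *op)
{
    /* commute = 1: the operation is commutative as well as associative */
    MPI_Op_create(absmax_fn, 1, op);
}
```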
Notes
See Appendix D. "Reduction Operations" for information about reduction functions.
Errors
Related Information
Purpose
Marks a user-defined reduction operation for deallocation.
C Synopsis
#include <mpi.h> int MPI_Op_free(MPI_Op *op);
Fortran Synopsis
include 'mpif.h' MPI_OP_FREE(INTEGER OP,INTEGER IERROR)
Parameters
Description
This function marks a reduction operation for deallocation and sets op to MPI_OP_NULL. Actual deallocation occurs when the operation's reference count is zero.
Errors
Related Information
Purpose
Packs the message in the specified send buffer into the specified buffer space.
C Synopsis
#include <mpi.h> int MPI_Pack(void* inbuf,int incount,MPI_Datatype datatype, void *outbuf,int outsize,int *position,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_PACK(CHOICE INBUF,INTEGER INCOUNT,INTEGER DATATYPE, CHOICE OUTBUF,INTEGER OUTSIZE,INTEGER POSITION,INTEGER COMM, INTEGER IERROR)
Parameters
Description
This routine packs the message specified by inbuf, incount, and datatype into the buffer space specified by outbuf and outsize. The input buffer is any communication buffer allowed in MPI_SEND. The output buffer is any contiguous storage space containing outsize bytes and starting at the address outbuf.
The input value of position is the beginning offset in the output buffer that will be used for packing. The output value of position is the offset in the output buffer following the locations occupied by the packed message. comm is the communicator that will be used for sending the packed message.
Errors
Related Information
Purpose
Returns the number of bytes required to hold the data.
C Synopsis
#include <mpi.h> int MPI_Pack_size(int incount,MPI_Datatype datatype, MPI_Comm comm, int *size);
Fortran Synopsis
include 'mpif.h' MPI_PACK_SIZE(INTEGER INCOUNT,INTEGER DATATYPE,INTEGER COMM, INTEGER SIZE,INTEGER IERROR)
Parameters
Description
This routine returns the number of bytes required to pack incount replications of the datatype. You can use MPI_PACK_SIZE to determine the size required for a packing buffer or to track space needed for buffered sends.
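For illustration, the sketch below uses MPI_PACK_SIZE to size a buffer and then packs an integer and a double into it; the helper name and the packed values are assumptions.

```c
/* Sketch: size a packing buffer with MPI_Pack_size, then pack an int
   followed by a double into it. */
#include <mpi.h>
#include <stdlib.h>

void pack_two(MPI_Comm comm, void **outbuf, int *packed_bytes)
{
    int    ival = 7, s1, s2, bufsize, position = 0;
    double dval = 3.14;
    void  *buf;

    MPI_Pack_size(1, MPI_INT, comm, &s1);
    MPI_Pack_size(1, MPI_DOUBLE, comm, &s2);
    bufsize = s1 + s2;                 /* upper bound on the packed size */
    buf = malloc(bufsize);

    /* position advances past each packed item */
    MPI_Pack(&ival, 1, MPI_INT, buf, bufsize, &position, comm);
    MPI_Pack(&dval, 1, MPI_DOUBLE, buf, bufsize, &position, comm);

    *outbuf = buf;
    *packed_bytes = position;          /* use with MPI_PACKED when sending */
}
```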
Errors
Related Information
Purpose
Provides a call point for controlling an independent profiling package.
C Synopsis
#include <mpi.h> int MPI_Pcontrol(const int level, ...);
Fortran Synopsis
include 'mpif.h' MPI_PCONTROL(INTEGER LEVEL, ...)
Parameters
The proper values for level and the meanings of those values are determined by the profiler being used.
Description
MPI_PCONTROL is a placeholder that allows applications to run with or without an independent profiling package without modification. MPI implementations do not use this routine and do not have any control over the implementation of the profiling code.
Calls to this routine allow a profiling package to be controlled from MPI programs. The nature of control and the arguments required are determined by the profiling package. The MPI library routine by this name returns to the caller without any action.
Notes
For each additional call level introduced by the profiling code, the global variable VT_instaddr_depth needs to be incremented so the Visualization Tool Tracing Subsystem (VT) can record where the application called the MPI message passing library routine. The VT_instaddr_depth variable is defined in /usr/lpp/ppe.vt/include/VT_mpi.h.
Errors
MPI does not report any errors for MPI_PCONTROL.
Purpose
Waits until a message matching source, tag, and comm arrives.
C Synopsis
#include <mpi.h> int MPI_Probe(int source,int tag,MPI_Comm comm,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_PROBE(INTEGER SOURCE,INTEGER TAG,INTEGER COMM, INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
MPI_PROBE behaves like MPI_IPROBE. It allows you to check for an incoming message without actually receiving it. MPI_PROBE is different in that it is a blocking call that returns only after a matching message has been found.
Notes
In a threaded environment, MPI_PROBE or MPI_IPROBE followed by MPI_RECV, based on the information from the probe, may not be a thread-safe operation. You must ensure that no other thread received the detected message.
An MPI_IPROBE cannot prevent a message from being cancelled successfully by the sender, making it unavailable for the MPI_RECV. Structure your program so this will not occur.
Errors
Related Information
Purpose
Performs a blocking receive operation.
C Synopsis
#include <mpi.h> int MPI_Recv(void* buf,int count,MPI_Datatype datatype, int source,int tag,MPI_Comm comm,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_RECV(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER SOURCE, INTEGER TAG,INTEGER COMM,INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
MPI_RECV is a blocking receive. The receive buffer is storage containing room for count consecutive elements of the type specified by datatype, starting at address buf.
The message received must be less than or equal to the length of the receive buffer. If an incoming message does not fit without truncation, an overflow error occurs. If a message arrives that is shorter than the receive buffer, then only those locations corresponding to the actual message are changed.
Errors
Related Information
Purpose
Creates a persistent receive request.
C Synopsis
#include <mpi.h> int MPI_Recv_init(void* buf,int count,MPI_Datatype datatype, int source,int tag,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_RECV_INIT(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE, INTEGER SOURCE,INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine creates a persistent communication request for a receive operation. The argument buf is marked as OUT because the user gives permission to write to the receive buffer by passing the argument to MPI_RECV_INIT.
A persistent communication request is inactive after it is created. No active communication is attached to the request.
A send or receive communication using a persistent request is initiated by the function MPI_START.
Notes
See MPI_RECV for additional information.
Errors
Related Information
Purpose
Applies a reduction operation to the vector sendbuf over the set of tasks specified by comm and places the result in recvbuf on root.
C Synopsis
#include <mpi.h> int MPI_Reduce(void* sendbuf,void* recvbuf,int count, MPI_Datatype datatype,MPI_Op op,int root,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_REDUCE(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT, INTEGER DATATYPE,INTEGER OP,INTEGER ROOT,INTEGER COMM, INTEGER IERROR)
Parameters
Description
This routine applies a reduction operation to the vector sendbuf over the set of tasks specified by comm and places the result in recvbuf on root.
Both the input and output buffers have the same number of elements with the same type. The arguments sendbuf, count, and datatype define the send or input buffer and recvbuf, count and datatype define the output buffer. MPI_REDUCE is called by all group members using the same arguments for count, datatype, op, and root. If a sequence of elements is provided to a task, then the reduce operation is executed element-wise on each entry of the sequence. Here's an example. If the operation is MPI_MAX and the send buffer contains two elements that are floating point numbers (count = 2 and datatype = MPI_FLOAT), then recvbuf(1) = global max(sendbuf(1)) and recvbuf(2) = global max(sendbuf(2)).
Users may define their own operations or use the predefined operations provided by MPI. User defined operations can be overloaded to operate on several datatypes, either basic or derived. A list of the MPI predefined operations is in this manual. Refer to Appendix D. "Reduction Operations".
The argument datatype of MPI_REDUCE must be compatible with op. For a list of predefined operations refer to Appendix I. "Predefined Datatypes".
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
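A minimal sketch follows: it sums one double per task onto task 0 with the predefined MPI_SUM operation; the helper name is illustrative.

```c
/* Sketch: sum one double per task onto task 0 with MPI_SUM. */
#include <mpi.h>

void sum_to_root(double local, double *global, MPI_Comm comm)
{
    /* All tasks pass identical count, datatype, op and root arguments.
       Only the root's recvbuf (global) receives the result. */
    MPI_Reduce(&local, global, 1, MPI_DOUBLE, MPI_SUM, 0, comm);
}
```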
Notes
See Appendix D. "Reduction Operations".
Errors
Develop mode error if:
Related Information
Purpose
Applies a reduction operation to the vector sendbuf over the set of tasks specified by comm and scatters the result according to the values in recvcounts.
C Synopsis
#include <mpi.h> int MPI_Reduce_scatter(void* sendbuf,void* recvbuf,int *recvcounts, MPI_Datatype datatype,MPI_Op op,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_REDUCE_SCATTER(CHOICE SENDBUF,CHOICE RECVBUF, INTEGER RECVCOUNTS(*),INTEGER DATATYPE,INTEGER OP, INTEGER COMM,INTEGER IERROR)
Parameters
Description
MPI_REDUCE_SCATTER first performs an element-wise reduction on a vector of count elements in the send buffer defined by sendbuf, count and datatype, where count is the sum of recvcounts[i] over all tasks i. Next, the resulting vector is split into n disjoint segments, where n is the number of members in the group. Segment i contains recvcounts[i] elements. The ith segment is sent to task i and stored in the receive buffer defined by recvbuf, recvcounts[i] and datatype.
Notes
MPI_REDUCE_SCATTER is functionally equivalent to MPI_REDUCE with count equal to the sum of recvcounts[i] followed by MPI_SCATTERV with sendcounts equal to recvcounts. When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Marks a request for deallocation.
C Synopsis
#include <mpi.h> int MPI_Request_free(MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_REQUEST_FREE(INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine marks a request object for deallocation and sets request to MPI_REQUEST_NULL. An ongoing communication associated with the request is allowed to complete before deallocation occurs.
Notes
This function marks a communication request as free. Actual deallocation occurs when the request is complete. Active receive requests and collective communication requests cannot be freed.
Errors
Related Information
Purpose
Performs a blocking ready mode send operation.
C Synopsis
#include <mpi.h> int MPI_Rsend(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_RSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST, INTEGER TAG,INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine is a blocking ready mode send. It can be started only when a matching receive is posted. If a matching receive is not posted, the operation is erroneous and its outcome is undefined.
The completion of MPI_RSEND indicates that the send buffer can be reused.
Notes
A ready send for which no receive exists produces an asynchronous error at the destination. The error is not detected at the MPI_RSEND and it returns MPI_SUCCESS.
Errors
Related Information
Purpose
Creates a persistent ready mode send request.
C Synopsis
#include <mpi.h> int MPI_Rsend_init(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_RSEND_INIT(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE, INTEGER DEST,INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
MPI_RSEND_INIT creates a persistent communication object for a ready mode send operation. MPI_START or MPI_STARTALL is used to activate the send.
Notes
See MPI_RSEND for additional information.
Errors
Related Information
Purpose
Performs a parallel prefix reduction on data distributed across a group.
C Synopsis
#include <mpi.h> int MPI_Scan(void* sendbuf,void* recvbuf,int count, MPI_Datatype datatype,MPI_Op op,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_SCAN(CHOICE SENDBUF,CHOICE RECVBUF,INTEGER COUNT, INTEGER DATATYPE,INTEGER OP,INTEGER COMM,INTEGER IERROR)
Parameters
Description
MPI_SCAN is used to perform a prefix reduction on data distributed across the group. The operation returns, in the receive buffer of the task with rank i, the reduction of the values in the send buffers of tasks with ranks 0, ..., i (inclusive). The type of operations supported, their semantics, and the restrictions on send and receive buffers are the same as for MPI_REDUCE.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Distributes individual messages from root to each task in comm.
C Synopsis
#include <mpi.h> int MPI_Scatter(void* sendbuf,int sendcount,MPI_Datatype sendtype, void* recvbuf,int recvcount,MPI_Datatype recvtype,int root, MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_SCATTER(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,INTEGER ROOT, INTEGER COMM,INTEGER IERROR)
Parameters
Description
MPI_SCATTER distributes individual messages from root to each task in comm. This routine is the inverse operation to MPI_GATHER.
The type signature associated with sendcount, sendtype at the root must be equal to the type signature associated with recvcount, recvtype at all tasks. (Type maps can be different.) This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.
The following is information regarding MPI_SCATTER arguments and tasks:
A call where the specification of counts and types causes any location on the root to be read more than once is erroneous.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Distributes individual messages from root to each task in comm. Messages can have different sizes and displacements.
C Synopsis
#include <mpi.h> int MPI_Scatterv(void* sendbuf,int *sendcounts, int *displs,MPI_Datatype sendtype,void* recvbuf, int recvcount,MPI_Datatype recvtype,int root, MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_SCATTERV(CHOICE SENDBUF,INTEGER SENDCOUNTS(*),INTEGER DISPLS(*), INTEGER SENDTYPE,CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE, INTEGER ROOT,INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine distributes individual messages from root to each task in comm. Messages can have different sizes and displacements.
With sendcounts as an array, messages can have varying sizes of data that can be sent to each task. displs allows you the flexibility of where the data can be taken from on the root.
The type signature of sendcounts[i], sendtype at the root must be equal to the type signature of recvcount, recvtype at task i. (The type maps can be different.) This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.
The following is information regarding MPI_SCATTERV arguments and tasks:
A call where the specification of sizes, types and displacements causes any location on the root to be read more than once is erroneous.
When you use this routine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Performs a blocking standard mode send operation.
C Synopsis
#include <mpi.h> int MPI_Send(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_SEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST, INTEGER TAG,INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine is a blocking standard mode send. MPI_SEND causes count elements of type datatype to be sent from buf to the task specified by dest. dest is a task rank which can be any value from 0 to n-1, where n is the number of tasks in comm.
Errors
Related Information
Purpose
Creates a persistent standard mode send request.
C Synopsis
#include <mpi.h> int MPI_Send_init(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_SEND_INIT(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST, INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine creates a persistent communication request for a standard mode send operation, and binds to it all arguments of a send operation. MPI_START or MPI_STARTALL is used to activate the send.
Notes
See MPI_SEND for additional information.
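The following sketch (illustrative only; the iteration count and peer ranks are arbitrary) shows the usual persistent-request pattern: bind the arguments once with MPI_SEND_INIT, start and complete the same request on every iteration, and free it when finished.

#include <mpi.h>

int main(int argc, char *argv[])
{
   int rank, iter, buf[100] = {0};
   MPI_Request request;
   MPI_Status  status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   if (rank == 0) {
      /* Bind the send arguments to a persistent request once ... */
      MPI_Send_init(buf, 100, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
      for (iter = 0; iter < 10; iter++) {
         /* ... then activate and complete it on each iteration.  */
         MPI_Start(&request);
         MPI_Wait(&request, &status);
      }
      MPI_Request_free(&request);
   } else if (rank == 1) {
      for (iter = 0; iter < 10; iter++)
         MPI_Recv(buf, 100, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
   }

   MPI_Finalize();
   return 0;
}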
Errors
Related Information
Purpose
Performs a blocking send and receive operation.
C Synopsis
#include <mpi.h> int MPI_Sendrecv(void* sendbuf,int sendcount,MPI_Datatype sendtype, int dest,int sendtag,void *recvbuf,int recvcount,MPI_Datatype recvtype, int source,int recvtag,MPI_Comm comm,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_SENDRECV(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE, INTEGER DEST,INTEGER SENDTAG,CHOICE RECVBUF,INTEGER RECVCOUNT, INTEGER RECVTYPE,INTEGER SOURCE,INTEGER RECVTAG,INTEGER COMM, INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
This routine is a blocking send and receive operation. Send and receive use the same communicator but can use different tags. The send and the receive buffers must be disjoint and can have different lengths and datatypes.
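A brief sketch (not from the manual): each task passes one integer around a ring, sending to its right neighbor and receiving from its left neighbor in a single call; the send and receive buffers are disjoint, as required.

#include <mpi.h>

int main(int argc, char *argv[])
{
   int rank, ntasks, right, left, sendval, recvval;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

   right   = (rank + 1) % ntasks;            /* destination */
   left    = (rank + ntasks - 1) % ntasks;   /* source      */
   sendval = rank;

   /* One call both sends and receives, avoiding the deadlock that
      two blocking MPI_SENDs posted against each other could cause. */
   MPI_Sendrecv(&sendval, 1, MPI_INT, right, 0,
                &recvval, 1, MPI_INT, left,  0,
                MPI_COMM_WORLD, &status);

   MPI_Finalize();
   return 0;
}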
Errors
Related Information
Purpose
Performs a blocking send and receive operation using a common buffer.
C Synopsis
#include <mpi.h> int MPI_Sendrecv_replace(void* buf,int count,MPI_Datatype datatype, int dest,int sendtag,int source,int recvtag,MPI_Comm comm, MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_SENDRECV_REPLACE(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE, INTEGER DEST,INTEGER SENDTAG,INTEGER SOURCE,INTEGER RECVTAG, INTEGER COMM,INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
This routine is a blocking send and receive operation using a common buffer. Send and receive use the same buffer so the message sent is replaced with the message received.
Errors
Related Information
Purpose
Performs a blocking synchronous mode send operation.
C Synopsis
#include <mpi.h> int MPI_Ssend(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_SSEND(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST, INTEGER TAG,INTEGER COMM,INTEGER IERROR)
Parameters
Description
This routine is a blocking synchronous mode send. This is a non-local operation. It can be started whether or not a matching receive was posted. However, the send will complete only when a matching receive is posted and the receive operation has started to receive the message sent by MPI_SSEND.
The completion of MPI_SSEND indicates that the send buffer is freed and also that the receiver has started executing the matching receive. If both sends and receives are blocking operations, the synchronous mode provides synchronous communication.
Errors
Related Information
Purpose
Creates a persistent synchronous mode send request.
C Synopsis
#include <mpi.h> int MPI_Ssend_init(void* buf,int count,MPI_Datatype datatype, int dest,int tag,MPI_Comm comm,MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_SSEND_INIT(CHOICE BUF,INTEGER COUNT,INTEGER DATATYPE,INTEGER DEST, INTEGER TAG,INTEGER COMM,INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
This routine creates a persistent communication object for a synchronous mode send operation. MPI_START or MPI_STARTALL can be used to activate the send.
Notes
See MPI_SSEND for additional information.
Errors
Related Information
Purpose
Activates a persistent request operation.
C Synopsis
#include <mpi.h> int MPI_Start(MPI_Request *request);
Fortran Synopsis
include 'mpif.h' MPI_START(INTEGER REQUEST,INTEGER IERROR)
Parameters
Description
MPI_START activates a persistent request operation. request is a handle returned by MPI_RECV_INIT, MPI_RSEND_INIT, MPI_SSEND_INIT, MPI_BSEND_INIT or MPI_SEND_INIT. Once the call is made, do not access the communication buffer until the operation completes.
If the request is for a send with ready mode, then a matching receive must be posted before the call is made. If the request is for a buffered send, adequate buffer space must be available.
Errors
Related Information
Purpose
Activates a collection of persistent request operations.
C Synopsis
#include <mpi.h> int MPI_Startall(int count,MPI_Request *array_of_requests);
Fortran Synopsis
include 'mpif.h' MPI_STARTALL(INTEGER COUNT,INTEGER ARRAY_OF_REQUESTS(*),INTEGER IERROR)
Parameters
Description
MPI_STARTALL starts all communications associated with request operations in array_of_requests.
A communication started with MPI_STARTALL is completed by a call to one of the MPI wait or test operations. The request becomes inactive after successful completion but is not deallocated and can be reactivated by an MPI_STARTALL. If a request is for a send with ready mode, then a matching receive must be posted before the call. If a request is for a buffered send, adequate buffer space must be available.
Errors
Related Information
Purpose
Checks to see if a nonblocking request has completed.
C Synopsis
#include <mpi.h> int MPI_Test(MPI_Request *request,int *flag,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_TEST(INTEGER REQUEST,INTEGER FLAG,INTEGER STATUS(MPI_STATUS_SIZE), INTEGER IERROR)
Parameters
Description
MPI_TEST returns flag = true if the operation identified by request is complete. The status object is set to contain information on the completed operation. The request object is deallocated and the request handle is set to MPI_REQUEST_NULL. Otherwise, flag = false and the status object is undefined. MPI_TEST is a local operation. The status object can be queried for information about the operation. (See MPI_WAIT.)
You can call MPI_TEST with a null or inactive request argument. The operation returns flag = true and empty status.
The error field of MPI_Status is never modified. The success or failure is indicated by the return code only.
When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.
When you use this routine in a threaded application, make sure the request is tested on only one thread. The request does not have to be tested on the thread that created the request. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
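An illustrative polling loop follows; do_other_work is a hypothetical local routine standing in for computation that is overlapped with the communication.

#include <mpi.h>

static void do_other_work(void) { /* placeholder for useful computation */ }

void poll_for_message(MPI_Comm comm)
{
   int flag = 0, buf;
   MPI_Request request;
   MPI_Status  status;

   /* Post a nonblocking receive, then test it between units of work. */
   MPI_Irecv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &request);

   while (!flag) {
      do_other_work();
      MPI_Test(&request, &flag, &status);   /* local; returns at once */
   }
   /* The request has completed and was set to MPI_REQUEST_NULL. */
}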
Errors
Develop mode error if:
Related Information
Purpose
Tests whether a nonblocking operation was cancelled.
C Synopsis
#include <mpi.h> int MPI_Test_cancelled(MPI_Status * status,int *flag);
Fortran Synopsis
include 'mpif.h' MPI_TEST_CANCELLED(INTEGER STATUS(MPI_STATUS_SIZE),INTEGER FLAG, INTEGER IERROR)
Parameters
Description
MPI_TEST_CANCELLED returns flag = true if the communication associated with the status object was cancelled successfully. In this case, all other fields of status (such as count or tag) are undefined. Otherwise, flag = false is returned. If a receive operation might be cancelled, you should call MPI_TEST_CANCELLED first to check if the operation was cancelled, before checking on the other fields of the return status.
Notes
In this release, nonblocking I/O operations are never cancelled successfully.
Errors
Related Information
Purpose
Tests a collection of nonblocking operations for completion.
C Synopsis
#include <mpi.h> int MPI_Testall(int count,MPI_Request *array_of_requests, int *flag,MPI_Status *array_of_statuses);
Fortran Synopsis
include 'mpif.h' MPI_TESTALL(INTEGER COUNT,INTEGER ARRAY_OF_REQUESTS(*),INTEGER FLAG, INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*),INTEGER IERROR)
Parameters
Description
This routine tests a collection of nonblocking operations for completion. flag = true is returned if all operations associated with active handles in the array completed, or when no handle in the list is active.
Each status entry of an active handle request is set to the status of the corresponding operation. A request allocated by a nonblocking operation call is deallocated and the handle is set to MPI_REQUEST_NULL.
Each status entry of a null or inactive handle is set to empty. If one or more requests have not completed, flag = false is returned. No request is modified and the values of the status entries are undefined.
The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.
When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.
When you use this routine in a threaded application, make sure the request is tested on only one thread. The request does not have to be tested on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Related Information
Purpose
Tests for the completion of any nonblocking operation.
C Synopsis
#include <mpi.h> int MPI_Testany(int count,MPI_Request *array_of_requests, int *index,int *flag,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_TESTANY(INTEGER COUNT,INTEGER ARRAY_OF_REQUESTS(*),INTEGER INDEX, INTEGER FLAG,INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
If one of the operations has completed, MPI_TESTANY returns flag = true and returns in index the index of this request in the array, and returns in status the status of the operation. If the request was allocated by a nonblocking operation, the request is deallocated and the handle is set to MPI_REQUEST_NULL.
If none of the operations has completed, it returns flag = false and returns a value of MPI_UNDEFINED in index, and status is undefined. The array can contain null or inactive handles. When the array contains no active handles, then the call returns immediately with flag = true, index = MPI_UNDEFINED, and empty status.
MPI_TESTANY(count, array_of_requests, index, flag, status) has the same effect as the execution of MPI_TEST(array_of_requests[i], flag, status), for i = 0, 1, ..., count-1, in some arbitrary order, until one call returns flag = true, or all fail.
The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.
When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.
When you use this routine in a threaded application, make sure the request is tested on only one thread. The request does not have to be tested on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Notes
The array is indexed from zero in C and from one in Fortran.
Errors
Related Information
Purpose
Tests a collection of nonblocking operations for completion.
C Synopsis
#include <mpi.h> int MPI_Testsome(int incount,MPI_Request *array_of_requests, int *outcount,int *array_of_indices, MPI_Status *array_of_statuses);
Fortran Synopsis
include 'mpif.h' MPI_TESTSOME(INTEGER INCOUNT,INTEGER ARRAY_OF_REQUESTS(*), INTEGER OUTCOUNT,INTEGER ARRAY_OF_INDICES(*), INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*),INTEGER IERROR)
Parameters
Description
This routine tests a collection of nonblocking operations for completion. MPI_TESTSOME behaves like MPI_WAITSOME except that MPI_TESTSOME is a local operation and returns immediately. outcount = 0 is returned when no operation has completed.
When a request for a receive repeatedly appears in a list of requests passed to MPI_TESTSOME and a matching send is posted, then the receive eventually succeeds unless the send is satisfied by another receive. This fairness requirement also applies to send requests and to I/O requests.
The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.
When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.
When you use this routine in a threaded application, make sure the request is tested on only one thread. The request does not have to be tested on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Related Information
Purpose
Returns the type of virtual topology associated with a communicator.
C Synopsis
#include <mpi.h> int MPI_Topo_test(MPI_Comm comm,int *status);
Fortran Synopsis
include 'mpif.h' MPI_TOPO_TEST(INTEGER COMM,INTEGER STATUS,INTEGER IERROR)
Parameters
Description
This routine returns the type of virtual topology associated with a communicator. The output of status will be as follows:
Errors
Related Information
Purpose
Makes a datatype ready for use in communication.
C Synopsis
#include <mpi.h> int MPI_Type_commit(MPI_Datatype *datatype);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_COMMIT(INTEGER DATATYPE,INTEGER IERROR)
Parameters
Description
A datatype object must be committed before you can use it in communication. You can use an uncommitted datatype as an argument in datatype constructors.
This routine makes a datatype ready for use in communication. The datatype is the formal description of a communication buffer. It is not the content of the buffer.
Once the datatype is committed it can be repeatedly reused to communicate the changing contents of a buffer or buffers with different starting addresses.
Notes
Basic datatypes are precommitted. It is not an error to call MPI_TYPE_COMMIT on a type that is already committed. Types returned by MPI_TYPE_GET_CONTENTS may or may not already be committed.
Errors
Related Information
Purpose
Returns a new datatype that represents the concatenation of count instances of oldtype.
C Synopsis
#include <mpi.h> int MPI_Type_contiguous(int count,MPI_Datatype oldtype, MPI_Datatype *newtype);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_CONTIGUOUS(INTEGER COUNT,INTEGER OLDTYPE,INTEGER NEWTYPE, INTEGER IERROR)
Parameters
Description
This routine returns a new datatype that represents the concatenation of count instances of oldtype. MPI_TYPE_CONTIGUOUS allows replication of a datatype into contiguous locations.
Notes
newtype must be committed using MPI_TYPE_COMMIT before being used for communication.
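A minimal sketch of the construct-commit-use sequence (the row length of 100 doubles and the tag are illustrative):

#include <mpi.h>

void send_row(double *row, int dest, MPI_Comm comm)
{
   MPI_Datatype rowtype;

   /* Describe 100 contiguous doubles as one datatype ...         */
   MPI_Type_contiguous(100, MPI_DOUBLE, &rowtype);
   /* ... commit it before using it in communication ...          */
   MPI_Type_commit(&rowtype);
   /* ... then a single element of rowtype carries the whole row. */
   MPI_Send(row, 1, rowtype, dest, 0, comm);

   MPI_Type_free(&rowtype);
}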
Errors
Related Information
Purpose
Generates the datatypes corresponding to the distribution of an ndims-dimensional array of oldtype elements onto an ndims-dimensional grid of logical tasks.
C Synopsis
#include <mpi.h> int MPI_Type_create_darray (int size,int rank,int ndims, int array_of_gsizes[],int array_of_distribs[], int array_of_dargs[],int array_of_psizes[], int order,MPI_Datatype oldtype,MPI_Datatype *newtype);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_CREATE_DARRAY (INTEGER SIZE,INTEGER RANK,INTEGER NDIMS, INTEGER ARRAY_OF_GSIZES(*),INTEGER ARRAY_OF_DISTRIBS(*), INTEGER ARRAY_OF_DARGS(*),INTEGER ARRAY_OF_PSIZES(*), INTEGER ORDER,INTEGER OLDTYPE,INTEGER NEWTYPE,INTEGER IERROR)
Parameters
Description
MPI_TYPE_CREATE_DARRAY generates the datatypes corresponding to an HPF-like distribution of an ndims-dimensional array of oldtype elements onto an ndims-dimensional grid of logical tasks. The ordering of tasks in the task grid is assumed to be row-major. See The High Performance Fortran Handbook for more information.
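A sketch under stated assumptions: a 100 x 100 global array of doubles is distributed in blocks over a 2 x 3 grid of 6 tasks, and the resulting datatype would typically be committed and used as a filetype when setting a file view.

#include <mpi.h>

void make_darray_type(int rank, MPI_Datatype *filetype)
{
   int gsizes[2]   = {100, 100};                  /* global array sizes  */
   int distribs[2] = {MPI_DISTRIBUTE_BLOCK,       /* block in each dim   */
                      MPI_DISTRIBUTE_BLOCK};
   int dargs[2]    = {MPI_DISTRIBUTE_DFLT_DARG,   /* default block sizes */
                      MPI_DISTRIBUTE_DFLT_DARG};
   int psizes[2]   = {2, 3};                      /* 2 x 3 task grid     */

   /* Assumes the communicator contains exactly 6 tasks. */
   MPI_Type_create_darray(6, rank, 2, gsizes, distribs, dargs, psizes,
                          MPI_ORDER_C, MPI_DOUBLE, filetype);
   MPI_Type_commit(filetype);
}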
Errors
Fatal Errors:
Related Information
Purpose
Returns a new datatype that represents an ndims-dimensional subarray of an ndims-dimensional array.
C Synopsis
#include <mpi.h> int MPI_Type_create_subarray (int ndims,int array_of_sizes[], int array_of_subsizes[],int array_of_starts[], int order,MPI_Datatype oldtype,MPI_Datatype *newtype);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_CREATE_SUBARRAY (INTEGER NDIMS,INTEGER ARRAY_OF_SUBSIZES(*), INTEGER ARRAY_OF_SIZES(*),INTEGER ARRAY_OF_STARTS(*), INTEGER ORDER,INTEGER OLDTYPE,INTEGER NEWTYPE,INTEGER IERROR)
Parameters
Description
MPI_TYPE_CREATE_SUBARRAY creates an MPI datatype describing an ndims-dimensional subarray of an ndims-dimensional array. The subarray may be situated anywhere within the full array and may be of any nonzero size up to the size of the larger array as long as it is confined within this array.
This function facilitates creating filetypes to access arrays distributed in blocks among tasks to a single file that contains the full array.
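For example (a sketch; the sizes and starting offsets are illustrative and would normally be computed from the task's position in the decomposition), a 50 x 50 block of a 100 x 100 array can be described as follows.

#include <mpi.h>

void make_subarray_type(MPI_Datatype *filetype)
{
   int sizes[2]    = {100, 100};   /* dimensions of the full array     */
   int subsizes[2] = {50, 50};     /* dimensions of this task's block  */
   int starts[2]   = {0, 50};      /* origin of the block in the array */

   MPI_Type_create_subarray(2, sizes, subsizes, starts,
                            MPI_ORDER_C, MPI_DOUBLE, filetype);
   MPI_Type_commit(filetype);
}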
Errors
Fatal Errors:
Related Information
Purpose
Returns the extent of any defined datatype.
C Synopsis
#include <mpi.h> int MPI_Type_extent(MPI_Datatype datatype,MPI_Aint *extent);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_EXTENT(INTEGER DATATYPE,INTEGER EXTENT,INTEGER IERROR)
Parameters
Description
This routine returns the extent of a datatype. The extent of a datatype is the span from the first byte to the last byte occupied by entries in this datatype and rounded up to satisfy alignment requirements.
Notes
Rounding for alignment is not done when MPI_UB is used to define the datatype. Types defined with MPI_LB, MPI_UB or with any type that itself contains MPI_LB or MPI_UB may return an extent which is not directly related to the layout of data in memory. Refer to MPI_TYPE_STRUCT for more information on MPI_LB and MPI_UB.
Errors
Related Information
Purpose
Marks a datatype for deallocation.
C Synopsis
#include <mpi.h> int MPI_Type_free(MPI_Datatype *datatype);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_FREE(INTEGER DATATYPE,INTEGER IERROR)
Parameters
Description
This routine marks the datatype object associated with datatype for deallocation. It sets datatype to MPI_DATATYPE_NULL. All communication currently using this datatype completes normally. Derived datatypes defined from the freed datatype are not affected.
Notes
MPI_FILE_GET_VIEW and MPI_TYPE_GET_CONTENTS both return new references or handles for existing MPI_Datatypes. Each new reference to a derived type should be freed after the reference is no longer needed. New references to named types must not be freed. You can identify a derived datatype by calling MPI_TYPE_GET_ENVELOPE and checking that the combiner is not MPI_COMBINER_NAMED. MPI cannot discard a derived MPI_datatype if there are any references to it that have not been freed by MPI_TYPE_FREE.
Errors
Related Information
Purpose
Obtains the arguments used in the creation of the datatype.
C Synopsis
#include <mpi.h> int MPI_Type_get_contents(MPI_Datatype datatype, int max_integers, int max_addresses, int max_datatypes, int array_of_integers[], MPI_Aint array_of_addresses[], MPI_Datatype array_of_datatypes[]);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_GET_CONTENTS(INTEGER DATATYPE, INTEGER MAX_INTEGERS, INTEGER MAX_ADDRESSES, INTEGER MAX_DATATYPES, INTEGER ARRAY_OF_INTEGERS(*), INTEGER ARRAY_OF_ADDRESSES(*), INTEGER ARRAY_OF_DATATYPES(*), INTEGER IERROR)
Parameters
If the combiner is MPI_COMBINER_NAMED, it is erroneous to call MPI_TYPE_GET_CONTENTS.
Table 4 lists the combiners and constructor arguments. The lowercase names of the arguments are shown. For each combiner, ni, na, and nd give the number of integers, addresses, and datatypes, respectively, that are returned for a datatype created with that combiner.

Table 4. Combiners and Constructor Arguments

| Combiner (ni, na, nd) | Constructor Argument | C Location | Fortran Location |
|---|---|---|---|
| MPI_COMBINER_DUP (0, 0, 1) | oldtype | d[0] | D(1) |
| MPI_COMBINER_CONTIGUOUS (1, 0, 1) | count | i[0] | I(1) |
| | oldtype | d[0] | D(1) |
| MPI_COMBINER_VECTOR (3, 0, 1) | count | i[0] | I(1) |
| | blocklength | i[1] | I(2) |
| | stride | i[2] | I(3) |
| | oldtype | d[0] | D(1) |
| MPI_COMBINER_HVECTOR, MPI_COMBINER_HVECTOR_INTEGER (2, 1, 1) | count | i[0] | I(1) |
| | blocklength | i[1] | I(2) |
| | stride | a[0] | A(1) |
| | oldtype | d[0] | D(1) |
| MPI_COMBINER_INDEXED (2*count+1, 0, 1) | count | i[0] | I(1) |
| | array_of_blocklengths | i[1] to i[i[0]] | I(2) to I(I(1)+1) |
| | array_of_displacements | i[i[0]+1] to i[2*i[0]] | I(I(1)+2) to I(2*I(1)+1) |
| | oldtype | d[0] | D(1) |
| MPI_COMBINER_HINDEXED, MPI_COMBINER_HINDEXED_INTEGER (count+1, count, 1) | count | i[0] | I(1) |
| | array_of_blocklengths | i[1] to i[i[0]] | I(2) to I(I(1)+1) |
| | array_of_displacements | a[0] to a[i[0]-1] | A(1) to A(I(1)) |
| | oldtype | d[0] | D(1) |
| MPI_COMBINER_INDEXED_BLOCK (count+2, 0, 1) | count | i[0] | I(1) |
| | blocklength | i[1] | I(2) |
| | array_of_displacements | i[2] to i[i[0]+1] | I(3) to I(I(1)+2) |
| | oldtype | d[0] | D(1) |
| MPI_COMBINER_STRUCT, MPI_COMBINER_STRUCT_INTEGER (count+1, count, count) | count | i[0] | I(1) |
| | array_of_blocklengths | i[1] to i[i[0]] | I(2) to I(I(1)+1) |
| | array_of_displacements | a[0] to a[i[0]-1] | A(1) to A(I(1)) |
| | array_of_types | d[0] to d[i[0]-1] | D(1) to D(I(1)) |
| MPI_COMBINER_SUBARRAY (3*ndims+2, 0, 1) | ndims | i[0] | I(1) |
| | array_of_sizes | i[1] to i[i[0]] | I(2) to I(I(1)+1) |
| | array_of_subsizes | i[i[0]+1] to i[2*i[0]] | I(I(1)+2) to I(2*I(1)+1) |
| | array_of_starts | i[2*i[0]+1] to i[3*i[0]] | I(2*I(1)+2) to I(3*I(1)+1) |
| | order | i[3*i[0]+1] | I(3*I(1)+2) |
| | oldtype | d[0] | D(1) |
| MPI_COMBINER_DARRAY (4*ndims+4, 0, 1) | size | i[0] | I(1) |
| | rank | i[1] | I(2) |
| | ndims | i[2] | I(3) |
| | array_of_gsizes | i[3] to i[i[2]+2] | I(4) to I(I(3)+3) |
| | array_of_distribs | i[i[2]+3] to i[2*i[2]+2] | I(I(3)+4) to I(2*I(3)+3) |
| | array_of_dargs | i[2*i[2]+3] to i[3*i[2]+2] | I(2*I(3)+4) to I(3*I(3)+3) |
| | array_of_psizes | i[3*i[2]+3] to i[4*i[2]+2] | I(3*I(3)+4) to I(4*I(3)+3) |
| | order | i[4*i[2]+3] | I(4*I(3)+4) |
| | oldtype | d[0] | D(1) |
| MPI_COMBINER_F90_REAL, MPI_COMBINER_F90_COMPLEX (2, 0, 0) | p | i[0] | I(1) |
| | r | i[1] | I(2) |
| MPI_COMBINER_F90_INTEGER (1, 0, 0) | r | i[0] | I(1) |
| MPI_COMBINER_RESIZED (0, 2, 1) | lb | a[0] | A(1) |
| | extent | a[1] | A(2) |
| | oldtype | d[0] | D(1) |
Description
MPI_TYPE_GET_CONTENTS identifies the combiner and returns the arguments that were used with this combiner to create the datatype of interest. A call to MPI_TYPE_GET_CONTENTS is normally preceded by a call to MPI_TYPE_GET_ENVELOPE to discover whether the type of interest is one that can be decoded and if so, how large the output arrays must be. An MPI_COMBINER_NAMED datatype is a predefined type that may not be decoded. The datatype handles returned in array_of_datatypes can include both named and derived types. The derived types may or may not already be committed. Each entry in array_of_datatypes is a separate datatype handle that must eventually be freed if it represents a derived type.
Notes
An MPI type constructor, such as MPI_TYPE_CONTIGUOUS, creates a datatype object within MPI and gives a handle for that object to the caller. This handle represents one reference to the object. In this implementation of MPI, the MPI datatypes obtained with calls to MPI_TYPE_GET_CONTENTS are new handles for the existing datatype objects. The number of handles (references) given to the user is tracked by a reference counter in the object. MPI cannot discard a datatype object unless MPI_TYPE_FREE has been called on every handle the user has obtained.
The use of reference-counted objects is encouraged, but not mandated, by the MPI standard. Another MPI implementation may create new objects instead. The user should be aware of a side effect of the reference count approach. Suppose mytype was created by a call to MPI_TYPE_VECTOR and used so that a later call to MPI_TYPE_GET_CONTENTS returns its handle in hertype. Because both handles identify the same datatype object, attribute changes made with either handle are changes in the single object. That object will exist at least until MPI_TYPE_FREE has been called on both mytype and hertype. Freeing either handle alone will leave the object intact and the other handle will remain valid.
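The following sketch (illustrative, not from the manual) shows the usual decode sequence: query the envelope, allocate arrays of the reported sizes, retrieve the contents, and free any derived-type handles that are returned.

#include <mpi.h>
#include <stdlib.h>

void decode_type(MPI_Datatype datatype)
{
   int ni, na, nd, combiner, i;
   int          *ints;
   MPI_Aint     *addrs;
   MPI_Datatype *types;

   MPI_Type_get_envelope(datatype, &ni, &na, &nd, &combiner);
   if (combiner == MPI_COMBINER_NAMED)
      return;                     /* predefined type; must not be decoded */

   ints  = (int *)          malloc(ni * sizeof(int));
   addrs = (MPI_Aint *)     malloc(na * sizeof(MPI_Aint));
   types = (MPI_Datatype *) malloc(nd * sizeof(MPI_Datatype));

   MPI_Type_get_contents(datatype, ni, na, nd, ints, addrs, types);

   /* ... examine combiner and the returned arguments here ... */

   /* Each returned datatype handle is a new reference; free the derived
      ones (named types must not be freed).                              */
   for (i = 0; i < nd; i++) {
      int ni2, na2, nd2, comb2;
      MPI_Type_get_envelope(types[i], &ni2, &na2, &nd2, &comb2);
      if (comb2 != MPI_COMBINER_NAMED)
         MPI_Type_free(&types[i]);
   }
   free(ints); free(addrs); free(types);
}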
Errors
Related Information
Purpose
Determines the constructor that was used to create the datatype and the amount of data that will be returned by a call to MPI_TYPE_GET_CONTENTS for the same datatype.
C Synopsis
#include <mpi.h> int MPI_Type_get_envelope(MPI_Datatype datatype, int *num_integers, int *num_addresses, int *num_datatypes, int *combiner);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_GET_ENVELOPE(INTEGER DATATYPE, INTEGER NUM_INTEGERS, INTEGER NUM_ADDRESSES, INTEGER NUM_DATATYPES, INTEGER COMBINER, INTEGER IERROR)
Parameters
Table 5 lists the combiners and the calls associated with them.
Combiner | What It Represents |
---|---|
MPI_COMBINER_NAMED | A named, predefined datatype |
MPI_COMBINER_DUP | MPI_TYPE_DUP |
MPI_COMBINER_CONTIGUOUS | MPI_TYPE_CONTIGUOUS |
MPI_COMBINER_VECTOR | MPI_TYPE_VECTOR |
MPI_COMBINER_HVECTOR | MPI_TYPE_HVECTOR from C and in some cases Fortran or MPI_TYPE_CREATE_HVECTOR |
MPI_COMBINER_HVECTOR_INTEGER | MPI_TYPE_HVECTOR from Fortran |
MPI_COMBINER_INDEXED | MPI_TYPE_INDEXED |
MPI_COMBINER_HINDEXED | MPI_TYPE_HINDEXED from C and in some cases Fortran or MPI_TYPE_CREATE_HINDEXED |
MPI_COMBINER_HINDEXED_INTEGER | MPI_TYPE_HINDEXED from Fortran |
MPI_COMBINER_INDEXED_BLOCK | MPI_TYPE_CREATE_INDEXED_BLOCK |
MPI_COMBINER_STRUCT | MPI_TYPE_STRUCT from C and in some cases Fortran or MPI_TYPE_CREATE_STRUCT |
MPI_COMBINER_STRUCT_INTEGER | MPI_TYPE_STRUCT from Fortran |
MPI_COMBINER_SUBARRAY | MPI_TYPE_CREATE_SUBARRAY |
MPI_COMBINER_DARRAY | MPI_TYPE_CREATE_DARRAY |
MPI_COMBINER_F90_REAL | MPI_TYPE_CREATE_F90_REAL |
MPI_COMBINER_F90_COMPLEX | MPI_TYPE_CREATE_F90_COMPLEX |
MPI_COMBINER_F90_INTEGER | MPI_TYPE_CREATE_F90_INTEGER |
MPI_COMBINER_RESIZED | MPI_TYPE_CREATE_RESIZED |
Description
MPI_TYPE_GET_ENVELOPE provides information about an unknown datatype which will allow it to be decoded if appropriate. This includes identifying the combiner used to create the unknown type and the sizes that the arrays must be if MPI_TYPE_GET_CONTENTS is to be called. MPI_TYPE_GET_ENVELOPE is also used to determine whether a datatype handle returned by MPI_TYPE_GET_CONTENTS or MPI_FILE_GET_VIEW is for a predefined, named datatype. When the combiner is MPI_COMBINER_NAMED, it is an error to call MPI_TYPE_GET_CONTENTS or MPI_TYPE_FREE with the datatype.
Errors
Related Information
Purpose
Returns a new datatype that represents count blocks. Each block is defined by an entry in array_of_blocklengths and array_of_displacements. Displacements are expressed in bytes.
C Synopsis
#include <mpi.h> int MPI_Type_hindexed(int count,int *array_of_blocklengths, MPI_Aint *array_of_displacements,MPI_Datatype oldtype, MPI_Datatype *newtype);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_HINDEXED(INTEGER COUNT,INTEGER ARRAY_OF_BLOCKLENGTHS(*), INTEGER ARRAY_OF_DISPLACEMENTS(*),INTEGER OLDTYPE,INTEGER NEWTYPE, INTEGER IERROR)
Parameters
Description
This routine returns a new datatype that represents count blocks. Each is defined by an entry in array_of_blocklengths and array_of_displacements. Displacements are expressed in bytes rather than in multiples of the oldtype extent as in MPI_TYPE_INDEXED.
Notes
newtype must be committed using MPI_TYPE_COMMIT before being used for communication.
Errors
Related Information
Purpose
Returns a new datatype that represents equally-spaced blocks. The spacing between the start of each block is given in bytes.
C Synopsis
#include <mpi.h> int MPI_Type_hvector(int count,int blocklength,MPI_Aint stride, MPI_Datatype oldtype,MPI_Datatype *newtype);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_HVECTOR(INTEGER COUNT,INTEGER BLOCKLENGTH,INTEGER STRIDE, INTEGER OLDTYPE,INTEGER NEWTYPE,INTEGER IERROR)
Parameters
Description
This routine returns a new datatype that represents count equally spaced blocks. Each block is a concatenation of blocklength instances of oldtype. The origins of the blocks are spaced stride units apart where the counting unit is one byte.
Notes
newtype must be committed using MPI_TYPE_COMMIT before being used for communication.
Errors
Related Information
Purpose
Returns a new datatype that represents count blocks. Each block is defined by an entry in array_of_blocklengths and array_of_displacements. Displacements are expressed in units of extent(oldtype).
C Synopsis
#include <mpi.h> int MPI_Type_indexed(int count,int *array_of_blocklengths, int *array_of_displacements,MPI_Datatype oldtype, MPI_Datatype *newtype);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_INDEXED(INTEGER COUNT,INTEGER ARRAY_OF_BLOCKLENGTHS(*), INTEGER ARRAY_OF_DISPLACEMENTS(*),INTEGER OLDTYPE,INTEGER NEWTYPE, INTEGER IERROR)
Parameters
Description
This routine returns a new datatype that represents count blocks. Each is defined by an entry in array_of_blocklengths and array_of_displacements. Displacements are expressed in units of extent(oldtype).
Notes
newtype must be committed using MPI_TYPE_COMMIT before being used for communication.
Errors
Related Information
Purpose
Returns the lower bound of a datatype.
C Synopsis
#include <mpi.h> int MPI_Type_lb(MPI_Datatype datatype,int *displacement);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_LB(INTEGER DATATYPE,INTEGER DISPLACEMENT,INTEGER IERROR)
Parameters
Description
This routine returns the lower bound of a specific datatype.
Normally the lower bound is the offset of the lowest address byte in the datatype. Datatype constructors with explicit MPI_LB and vector constructors with negative stride can produce lb < 0. Lower bound cannot be greater than upper bound. For a type with MPI_LB in its ancestry, the value returned by MPI_TYPE_LB may not be related to the displacement of the lowest address byte. Refer to MPI_TYPE_STRUCT for more information on MPI_LB and MPI_UB.
Errors
Related Information
Purpose
Returns the number of bytes represented by any defined datatype.
C Synopsis
#include <mpi.h> int MPI_Type_size(MPI_Datatype datatype,int *size);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_SIZE(INTEGER DATATYPE,INTEGER SIZE,INTEGER IERROR)
Parameters
Description
This routine returns the total number of bytes in the type signature associated with datatype. Entries with multiple occurrences in the datatype are counted.
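To illustrate how size differs from extent (a sketch, not from the manual, assuming 4-byte integers): for a vector of 3 blocks of 2 MPI_INT elements with a stride of 4, the size counts only the 6 integers present (24 bytes), while the extent spans the gaps as well (40 bytes).

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
   MPI_Datatype vtype;
   MPI_Aint     extent;
   int          size;

   MPI_Init(&argc, &argv);

   /* 3 blocks of 2 ints, with block origins 4 ints apart. */
   MPI_Type_vector(3, 2, 4, MPI_INT, &vtype);
   MPI_Type_commit(&vtype);

   MPI_Type_size(vtype, &size);       /* 6 * sizeof(int)         */
   MPI_Type_extent(vtype, &extent);   /* (2*4 + 2) * sizeof(int) */
   printf("size = %d bytes, extent = %ld bytes\n", size, (long) extent);

   MPI_Type_free(&vtype);
   MPI_Finalize();
   return 0;
}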
Errors
Related Information
Purpose
Returns a new datatype that represents count blocks. Each is defined by an entry in array_of_blocklengths, array_of_displacements and array_of_types. Displacements are expressed in bytes.
C Synopsis
#include <mpi.h> int MPI_Type_struct(int count,int *array_of_blocklengths, MPI_Aint *array_of_displacements,MPI_Datatype *array_of_types, MPI_Datatype *newtype);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_STRUCT(INTEGER COUNT,INTEGER ARRAY_OF_BLOCKLENGTHS(*), INTEGER ARRAY_OF_DISPLACEMENTS(*),INTEGER ARRAY_OF_TYPES(*), INTEGER NEWTYPE,INTEGER IERROR)
Parameters
Description
This routine returns a new datatype that represents count blocks. Each is defined by an entry in array_of_blocklengths, array_of_displacements and array_of_types. Displacements are expressed in bytes.
MPI_TYPE_STRUCT is the most general type constructor. It allows each block to consist of replications of different datatypes. This is the only constructor which allows MPI pseudo types MPI_LB and MPI_UB. Without these pseudo types, the extent of a datatype is the range from the first byte to the last byte rounded up as needed to meet boundary requirements. For example, if a type is made of an integer followed by 2 characters, it will still have an extent of 8 because it is padded to meet the boundary constraints of an int. This is intended to match the behavior of a compiler defining an array of such structures.
Because there may be cases in which this default behavior is not correct, MPI provides a means to set explicit upper and lower bounds which may not be directly related to the lowest and highest displacement datatype. When the pseudo type MPI_UB is used, the upper bound will be the value specified as the displacement of the MPI_UB block. No rounding for alignment is done. MPI_LB can be used to set an explicit lower bound but its use does not suppress rounding. When MPI_UB is not used, the upper bound of the datatype is adjusted to make the extent a multiple of the type's most boundary constrained component.
The marker placed by an MPI_LB or MPI_UB is 'sticky'. For example, assume type A is defined with an MPI_UB at 100. Type B is defined with a type A at 0 and an MPI_UB at 50. In effect, type B has received an MPI_UB at 50 and an inherited MPI_UB at 100. Because the inherited MPI_UB is higher, it is kept in the type B definition and the MPI_UB explicitly placed at 50 is discarded.
Notes
newtype must be committed using MPI_TYPE_COMMIT before being used for communication.
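A sketch of the common use: describing a C struct so that one element of the new type matches one structure in memory. The Particle layout is illustrative; displacements are computed with MPI_ADDRESS, and an explicit MPI_UB is placed at sizeof(struct Particle) so that consecutive elements line up with a C array of the struct.

#include <mpi.h>

struct Particle {
   int    id;
   double coords[3];
};

void make_particle_type(MPI_Datatype *ptype)
{
   struct Particle sample;
   int          blocklens[3] = {1, 3, 1};
   MPI_Datatype types[3]     = {MPI_INT, MPI_DOUBLE, MPI_UB};
   MPI_Aint     displs[3], base;

   /* Compute displacements relative to the start of the structure. */
   MPI_Address(&sample,           &base);
   MPI_Address(&sample.id,        &displs[0]);
   MPI_Address(&sample.coords[0], &displs[1]);
   displs[0] -= base;
   displs[1] -= base;
   displs[2]  = sizeof(struct Particle);   /* explicit upper bound */

   MPI_Type_struct(3, blocklens, displs, types, ptype);
   MPI_Type_commit(ptype);
}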
Errors
Related Information
Purpose
Returns the upper bound of a datatype.
C Synopsis
#include <mpi.h> int MPI_Type_ub(MPI_Datatype datatype,int *displacement);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_UB(INTEGER DATATYPE,INTEGER DISPLACEMENT, INTEGER IERROR)
Parameters
Description
This routine returns the upper bound of a specific datatype.
The upper bound is the displacement you use in locating the origin byte of the next instance of datatype for operations which use count and datatype. In the normal case, ub represents the displacement of the highest address byte of the datatype + e (where e >= 0 and results in (ub - lb) being a multiple of the boundary requirement for the most boundary constrained type in the datatype). If MPI_UB is used in a type constructor, no alignment adjustment is done so ub is exactly as you set it.
For a type with MPI_UB in its ancestry, the value returned by MPI_TYPE_UB may not be related to the displacement of the highest address byte (with rounding). Refer to MPI_TYPE_STRUCT for more information on MPI_LB and MPI_UB.
Errors
Related Information
Purpose
Returns a new datatype that represents equally spaced blocks. The spacing between the start of each block is given in units of extent (oldtype).
C Synopsis
#include <mpi.h> int MPI_Type_vector(int count,int blocklength,int stride, MPI_Datatype oldtype,MPI_Datatype *newtype);
Fortran Synopsis
include 'mpif.h' MPI_TYPE_VECTOR(INTEGER COUNT,INTEGER BLOCKLENGTH, INTEGER STRIDE,INTEGER OLDTYPE,INTEGER NEWTYPE,INTEGER IERROR)
Parameters
Description
This function returns a new datatype that represents count equally spaced blocks. Each block is a concatenation of blocklength instances of oldtype. The origins of the blocks are spaced stride units apart, where the counting unit is extent(oldtype). That is, the distance in bytes from one origin to the next is stride * extent(oldtype).
Notes
newtype must be committed using MPI_TYPE_COMMIT before being used for communication.
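For example (a sketch assuming a 10 x 10 matrix of doubles stored in C row-major order), one column can be described as 10 blocks of one element with a stride of 10.

#include <mpi.h>

void send_column(double matrix[10][10], int dest, MPI_Comm comm)
{
   MPI_Datatype coltype;

   /* 10 blocks of 1 double, block origins 10 doubles apart:
      one column of a 10 x 10 row-major matrix.              */
   MPI_Type_vector(10, 1, 10, MPI_DOUBLE, &coltype);
   MPI_Type_commit(&coltype);

   /* Send column 2; &matrix[0][2] is its first element. */
   MPI_Send(&matrix[0][2], 1, coltype, dest, 0, comm);

   MPI_Type_free(&coltype);
}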
Errors
Related Information
Purpose
Unpacks the message into the specified receive buffer from the specified packed buffer.
C Synopsis
#include <mpi.h> int MPI_Unpack(void* inbuf,int insize,int *position, void *outbuf,int outcount,MPI_Datatype datatype, MPI_Comm comm);
Fortran Synopsis
include 'mpif.h' MPI_UNPACK(CHOICE INBUF,INTEGER INSIZE,INTEGER POSITION, CHOICE OUTBUF,INTEGER OUTCOUNT,INTEGER DATATYPE,INTEGER COMM, INTEGER IERROR)
Parameters
Description
This routine unpacks the message specified by outbuf, outcount, and datatype from the buffer space specified by inbuf and insize. The output buffer is any receive buffer allowed in MPI_RECV. The input buffer is any contiguous storage space containing insize bytes and starting at address inbuf.
The input value of position is the beginning offset in the input buffer for the data to be unpacked. The output value of position is the offset in the input buffer following the data already unpacked. That is, the starting point for another call to MPI_UNPACK. comm is the communicator that was used to receive the packed message.
Notes
In MPI_UNPACK the outcount argument specifies the actual number of items to be unpacked. The size of the corresponding message is the increment in position.
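A short sketch of a matching pack/unpack sequence (the buffer size and message composition are illustrative): an int followed by four doubles is packed into one buffer and then unpacked in the same order, with position advancing after each call.

#include <mpi.h>

#define BUFSIZE 100

void pack_and_unpack(MPI_Comm comm)
{
   char   packbuf[BUFSIZE];
   int    position, n = 4, n_out;
   double values[4] = {1.0, 2.0, 3.0, 4.0};
   double values_out[4];

   /* Pack an int and then 4 doubles into one contiguous buffer.      */
   position = 0;
   MPI_Pack(&n,     1, MPI_INT,    packbuf, BUFSIZE, &position, comm);
   MPI_Pack(values, 4, MPI_DOUBLE, packbuf, BUFSIZE, &position, comm);
   /* packbuf could now be sent as position bytes of type MPI_PACKED. */

   /* Unpack in the same order; position advances after each call.    */
   position = 0;
   MPI_Unpack(packbuf, BUFSIZE, &position, &n_out,     1, MPI_INT,    comm);
   MPI_Unpack(packbuf, BUFSIZE, &position, values_out, 4, MPI_DOUBLE, comm);
}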
Errors
Related Information
Purpose
Waits for a nonblocking operation to complete.
C Synopsis
#include <mpi.h> int MPI_Wait(MPI_Request *request,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_WAIT(INTEGER REQUEST,INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
MPI_WAIT returns after the operation identified by request completes. If the object associated with request was created by a nonblocking operation, the object is deallocated and request is set to MPI_REQUEST_NULL. MPI_WAIT is a non-local operation.
You can call MPI_WAIT with a null or inactive request argument. The operation returns immediately. The status argument returns tag = MPI_ANY_TAG, source = MPI_ANY_SOURCE. The status argument is also internally configured so that calls to MPI_GET_COUNT and MPI_GET_ELEMENTS return count = 0. (This is called an empty status.)
Information on the completed operation is found in status. You can query the status object for a send or receive operation with a call to MPI_TEST_CANCELLED. For receive operations, you can also retrieve information from status with MPI_GET_COUNT and MPI_GET_ELEMENTS. If wildcards were used by the receive for either the source or tag, the actual source and tag can be retrieved from the MPI_SOURCE and MPI_TAG fields of status.
The error field of MPI_Status is never modified. The success or failure is indicated by the return code only.
When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.
When you use this routine in a threaded application, make sure that the wait for a given request is done on only one thread. The wait does not have to be done on the thread that created the request. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Errors
Develop mode error if:
Related Information
Purpose
Waits for a collection of nonblocking operations to complete.
C Synopsis
#include <mpi.h> int MPI_Waitall(int count,MPI_Request *array_of_requests, MPI_Status *array_of_statuses);
Fortran Synopsis
include 'mpif.h' MPI_WAITALL(INTEGER COUNT,INTEGER ARRAY_OF_REQUESTS(*), INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*),INTEGER IERROR)
Parameters
Description
This routine blocks until all operations associated with active handles in the list complete, and returns the status of each operation. array_of_requests and array_of_statuses contain count entries.
The ith entry in array_of_statuses is set to the return status of the ith operation. Requests created by nonblocking operations are deallocated and the corresponding handles in the array are set to MPI_REQUEST_NULL. If array_of_requests contains null or inactive handles, MPI_WAITALL sets the status of each one to empty.
MPI_WAITALL(count, array_of_requests, array_of_statuses) has the same effect as the execution of MPI_WAIT(array_of_requests[i], array_of_statuses[i]) for i = 0, 1, ..., count-1, in some arbitrary order. MPI_WAITALL with an array of length one is equivalent to MPI_WAIT.
The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.
When you use this routine in a threaded application, make sure that the wait for a given request is done on only one thread. The wait does not have to be done on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
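A minimal sketch: a nonblocking receive and a nonblocking send are started and then completed together with one MPI_WAITALL call (the peer ranks and tag are illustrative).

#include <mpi.h>

void exchange(int left, int right, MPI_Comm comm)
{
   int sendval = 1, recvval;
   MPI_Request requests[2];
   MPI_Status  statuses[2];

   /* Start a nonblocking receive and a nonblocking send ... */
   MPI_Irecv(&recvval, 1, MPI_INT, left,  0, comm, &requests[0]);
   MPI_Isend(&sendval, 1, MPI_INT, right, 0, comm, &requests[1]);

   /* ... then block until both operations have completed.   */
   MPI_Waitall(2, requests, statuses);
}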
Errors
Related Information
Purpose
Waits for any specified nonblocking operation to complete.
C Synopsis
#include <mpi.h> int MPI_Waitany(int count,MPI_Request *array_of_requests, int *index,MPI_Status *status);
Fortran Synopsis
include 'mpif.h' MPI_WAITANY(INTEGER COUNT,INTEGER ARRAY_OF_REQUESTS(*),INTEGER INDEX, INTEGER STATUS(MPI_STATUS_SIZE),INTEGER IERROR)
Parameters
Description
This routine blocks until one of the operations associated with the active requests in the array has completed. If more than one operation can complete, one is arbitrarily chosen. MPI_WAITANY returns in index the index of that request in the array, and in status the status of the completed operation. When the request is allocated by a nonblocking operation, it is deallocated and the request handle is set to MPI_REQUEST_NULL.
The array_of_requests list can contain null or inactive handles. When the list has a length of zero or all entries are null or inactive, the call returns immediately with index = MPI_UNDEFINED, and an empty status.
MPI_WAITANY(count, array_of_requests, index, status) has the same effect as the execution of MPI_WAIT(array_of_requests[i], status), where i is the value returned by index. MPI_WAITANY with an array containing one active entry is equivalent to MPI_WAIT.
The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.
When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.
When you use this routine in a threaded application, make sure that the wait for a given request is done on only one thread. The wait does not have to be done on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Notes
In C, the array is indexed from zero and in Fortran from one.
Errors
Related Information
Purpose
Waits for at least one of a list of nonblocking operations to complete.
C Synopsis
#include <mpi.h> int MPI_Waitsome(int incount,MPI_Request *array_of_requests, int *outcount,int *array_of_indices,MPI_Status *array_of_statuses);
Fortran Synopsis
include 'mpif.h' MPI_WAITSOME(INTEGER INCOUNT,INTEGER ARRAY_OF_REQUESTS(*),INTEGER OUTCOUNT, INTEGER ARRAY_OF_INDICES(*),INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), INTEGER IERROR)
Parameters
Description
This routine waits for at least one of the nonblocking operations associated with active handles in the list to complete. The number of completed requests from array_of_requests is returned in outcount, and the indices of these operations are returned in the first outcount locations of array_of_indices.
The status for the completed operations is returned in the first outcount locations of the array array_of_statuses. When a completed request is allocated by a nonblocking operation, it is deallocated and the associated handle is set to MPI_REQUEST_NULL.
When the list contains no active handles, then the call returns immediately with outcount = MPI_UNDEFINED.
When a request for a receive repeatedly appears in a list of requests passed to MPI_WAITSOME and a matching send was posted, then the receive eventually succeeds unless the send is satisfied by another receive. This fairness requirement also applies to send requests and to I/O requests.
The error fields are never modified unless the function gives a return code of MPI_ERR_IN_STATUS, in which case the error field of every MPI_Status is modified to reflect the result of the corresponding request.
When one of the MPI wait or test calls returns status for a nonblocking operation request and the corresponding blocking operation does not provide a status argument, the status from this wait/test does not contain meaningful source, tag or message size information.
When you use this routine in a threaded application, make sure that the wait for a given request is done on only one thread. The wait does not have to be done on the thread that created it. See Appendix G. "Programming Considerations for User Applications in POE" for more information on programming with MPI in a threaded environment.
Notes
The indices returned in array_of_indices refer to positions in array_of_requests, which are numbered from zero in C and from one in Fortran.
Errors
Related Information
Purpose
Returns the resolution of MPI_WTIME in seconds.
C Synopsis
#include <mpi.h> double MPI_Wtick(void);
Fortran Synopsis
include 'mpif.h' DOUBLE PRECISION MPI_WTICK()
Parameters
None.
Description
This routine returns the resolution of MPI_WTIME in seconds, the time in seconds between successive clock ticks.
Errors
Related Information
Purpose
Returns the current value of time as a floating-point value.
C Synopsis
#include <mpi.h> double MPI_Wtime(void);
Fortran Synopsis
include 'mpif.h' DOUBLE PRECISION MPI_WTIME()
Parameters
None.
Description
This routine returns the current value of time as a double precision floating point number of seconds. This value represents elapsed time since some point in the past. This time in the past will not change during the life of the task. You are responsible for converting the number of seconds into other units if you prefer.
Notes
You can use the attribute key MPI_WTIME_IS_GLOBAL to determine if the values returned by MPI_WTIME on different nodes are synchronized. See MPI_ATTR_GET for more information.
The environment variable MP_CLOCK_SOURCE allows you to control where MPI_WTIME gets its time values from. See "Using the SP Switch Clock as a Time Source" for more information.
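A brief sketch of elapsed-time measurement (the region being timed is a placeholder):

#include <mpi.h>
#include <stdio.h>

void time_region(void)
{
   double start, elapsed;

   start = MPI_Wtime();
   /* ... code to be timed goes here ... */
   elapsed = MPI_Wtime() - start;

   printf("elapsed = %f seconds (clock resolution %f)\n",
          elapsed, MPI_Wtick());
}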
Errors
Related Information