Message Passing Interface
Description
In parallel computing, the Message Passing Interface (MPI) is the de facto standard for implementing programs that run on multiple processors. The functions below provide access to MPI from Nelson; a minimal usage sketch follows the function list.
MPI_Allreduce - Combines values from all processes and distributes the result back to all processes.
MPI_Barrier - Blocks until all processes in the communicator have reached this routine.
MPI_Bcast - Broadcasts a message from the process with rank "root" to all other processes of the communicator.
MPI_Comm_delete - Removes an MPI_Comm object.
MPI_Comm_get_name - Returns the print name of the communicator.
MPI_Comm_object - Creates an MPI_Comm object.
MPI_Comm_rank - Determines the rank of the calling process in the communicator.
MPI_Comm_size - Determines the size of the group associated with a communicator.
MPI_Comm_split - Partitions the group that is associated with the specified communicator into a specified number of disjoint subgroups.
MPI_Comm_used - Returns the list of currently used MPI_Comm handles.
MPI_Finalize - Terminates the MPI execution environment.
MPI_Get_library_version - Returns the version number of the MPI library.
MPI_Get_processor_name - Gets the name of the processor.
MPI_Get_version - Returns the version number of MPI.
MPI_Init - Initializes the MPI execution environment.
MPI_Initialized - Indicates whether MPI_Init has been called.
MPI_Iprobe - Nonblocking test for a message.
MPI_Probe - Blocking test for a message.
MPI_Recv - Blocking receive for a message.
MPI_Reduce - Reduces values on all processes to a single value.
MPI_Send - Performs a blocking send.
MPI examples - Some Nelson MPI examples.
MPI overview - Access to MPI features from Nelson.
mpiexec - Run an MPI script.
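
Taken together, these functions follow the usual MPI workflow: initialize, obtain a communicator, query rank and size, exchange messages, synchronize, and finalize. The sketch below illustrates that workflow using only the functions listed above, launched through mpiexec. It is a minimal sketch only: the exact argument order of the Nelson wrappers (for example, whether MPI_Comm_object() returns the world communicator by default) is an assumption here, so consult each function's help page or the "MPI examples" entry for the precise signatures.

  % hello_mpi.m - launch with mpiexec so several Nelson processes execute it
  % NOTE: argument lists below are assumptions; see each function's help page.
  if ~MPI_Initialized()
    MPI_Init();
  end
  comm = MPI_Comm_object();          % world communicator (assumed default)
  my_rank = MPI_Comm_rank(comm);
  nb_ranks = MPI_Comm_size(comm);
  disp([MPI_Get_processor_name(), ': rank ', int2str(my_rank), ' of ', int2str(nb_ranks)]);

  if my_rank == 0
    % rank 0 sends a small vector to every other rank (tag 1)
    for dest = 1:nb_ranks - 1
      MPI_Send([1 2 3] * dest, dest, 1, comm);
    end
  else
    % the other ranks receive their vector from rank 0 and display it
    msg = MPI_Recv(0, 1, comm);
    disp(msg);
  end

  MPI_Barrier(comm);                 % make sure everyone is done before shutdown
  MPI_Finalize();

Run with two or more processes, rank 0 sends a vector to every other rank, and each receiving rank prints what it got before all processes synchronize and shut down.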