special communicator might be created for the collective operation so as to avoid interference with any on-going point-to-point communication at the time of the collective call. This is discussed further in Section 5.12. (End of advice to implementors.)

Many of the descriptions of the collective routines provide illustrations in terms of blocking MPI point-to-point routines. These are intended solely to indicate what data is sent or received by what process. Many of these examples are not correct MPI programs; for purposes of simplicity, they often assume infinite buffering.
5.2 Communicator Argument
The key concept of the collective functions is to have a group or groups of participating processes. The routines do not have group identifiers as explicit arguments. Instead, there is a communicator argument. Groups and communicators are discussed in full detail in Chapter 6. For the purposes of this chapter, it is sufficient to know that there are two types of communicators: intra-communicators and inter-communicators. An intracommunicator can be thought of as an identifier for a single group of processes linked with a context. An intercommunicator identifies two distinct groups of processes linked with a context.
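As a minimal illustration (a sketch, not part of the standard's example set), a program can query which kind of communicator it holds with MPI_COMM_TEST_INTER; MPI_COMM_WORLD is always an intra-communicator:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int flag;

    MPI_Init(&argc, &argv);

    /* flag is set nonzero if the communicator is an
       inter-communicator, zero if it is an intra-communicator */
    MPI_Comm_test_inter(MPI_COMM_WORLD, &flag);
    printf("MPI_COMM_WORLD is an %s-communicator\n",
           flag ? "inter" : "intra");

    MPI_Finalize();
    return 0;
}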
5.2.1 Specifics for Intracommunicator Collective Operations
All processes in the group identified by the intracommunicator must call the collective routine.

In many cases, collective communication can occur "in place" for intracommunicators, with the output buffer being identical to the input buffer. This is specified by providing a special argument value, MPI_IN_PLACE, instead of the send buffer or the receive buffer argument, depending on the operation performed.
Rationale. The "in place" operations are provided to reduce unnecessary memory motion by both the MPI implementation and by the user. Note that while the simple check of testing whether the send and receive buffers have the same address will work for some cases (e.g., MPI_ALLREDUCE), it is inadequate in others (e.g., MPI_GATHER, with root not equal to zero). Further, Fortran explicitly prohibits aliasing of arguments; the approach of using a special value to denote "in place" operation eliminates that difficulty. (End of rationale.)
Advice to users. By allowing the "in place" option, the receive buffer in many of the collective calls becomes a send-and-receive buffer. For this reason, a Fortran binding that includes INTENT must mark these as INOUT, not OUT.

Note that MPI_IN_PLACE is a special kind of value; it has the same restrictions on its use that MPI_BOTTOM has.

Some intracommunicator collective operations do not support the "in place" option (e.g., MPI_ALLTOALLV). (End of advice to users.)
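The following C program is an illustrative sketch (not taken from the standard's example set) of the "in place" option: each process places its local contribution directly in the receive buffer and passes MPI_IN_PLACE as the send buffer of MPI_ALLREDUCE.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    double sum[2];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* local contribution is written into the receive buffer itself */
    sum[0] = rank;
    sum[1] = 2.0 * rank;

    /* MPI_IN_PLACE replaces the send buffer argument; on return,
       sum contains the element-wise sum over all processes */
    MPI_Allreduce(MPI_IN_PLACE, sum, 2, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = (%g, %g)\n", sum[0], sum[1]);

    MPI_Finalize();
    return 0;
}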
5.2.2 Applying Collective Operations to Intercommunicators
To understand how collective operations apply to intercommunicators, we can view most MPI intracommunicator collective operations as fitting one of the following categories (see, for instance, [43]):
All-To-All: All processes contribute to the result. All processes receive the result.
    MPI_ALLGATHER, MPI_ALLGATHERV
    MPI_ALLTOALL, MPI_ALLTOALLV, MPI_ALLTOALLW
    MPI_ALLREDUCE, MPI_REDUCE_SCATTER
    MPI_BARRIER

All-To-One: All processes contribute to the result. One process receives the result.
    MPI_GATHER, MPI_GATHERV
    MPI_REDUCE

One-To-All: One process contributes to the result. All processes receive the result.
    MPI_BCAST
    MPI_SCATTER, MPI_SCATTERV

Other: Collective operations that do not fit into one of the above categories.
    MPI_SCAN, MPI_EXSCAN
The data movement patterns of MPI_SCAN and MPI_EXSCAN do not fit this taxonomy.

The application of collective communication to intercommunicators is best described in terms of two groups. For example, an all-to-all MPI_ALLGATHER operation can be described as collecting data from all members of one group with the result appearing in all members of the other group (see Figure 5.2). As another example, a one-to-all MPI_BCAST operation sends data from one member of one group to all members of the other group. Collective computation operations such as MPI_REDUCE_SCATTER have a similar interpretation (see Figure 5.3). For intracommunicators, these two groups are the same. For intercommunicators, these two groups are distinct. For the all-to-all operations, each such operation is described in two phases, so that it has a symmetric, full-duplex behavior.
The following collective operations also apply to intercommunicators:
MPI_BARRIER,
MPI_BCAST,
MPI_GATHER, MPI_GATHERV,
MPI_SCATTER, MPI_SCATTERV,
MPI_ALLGATHER, MPI_ALLGATHERV,
MPI_ALLTOALL, MPI_ALLTOALLV, MPI_ALLTOALLW,
MPI_ALLREDUCE, MPI_REDUCE,
MPI_REDUCE_SCATTER.
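As an illustrative sketch (the even/odd split of MPI_COMM_WORLD below is an arbitrary choice, and at least two processes are assumed), the following C program broadcasts a value from one group of an intercommunicator to the other. In the root group, the broadcasting process passes MPI_ROOT as the root argument and its peers pass MPI_PROC_NULL, while every process in the remote group passes the rank of the root within the root group.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int world_rank, color, buf = 0;
    MPI_Comm intracomm, intercomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* split the world into two groups (requires >= 2 processes) */
    color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &intracomm);

    /* local leader is local rank 0; the remote leader's rank in
       MPI_COMM_WORLD is 1 - color with this even/odd split */
    MPI_Intercomm_create(intracomm, 0, MPI_COMM_WORLD,
                         1 - color, 0, &intercomm);

    if (color == 0) {               /* the sending (root) group */
        int local_rank;
        MPI_Comm_rank(intracomm, &local_rank);
        if (local_rank == 0) {
            buf = 42;               /* data for the remote group */
            MPI_Bcast(&buf, 1, MPI_INT, MPI_ROOT, intercomm);
        } else {
            MPI_Bcast(&buf, 1, MPI_INT, MPI_PROC_NULL, intercomm);
        }
    } else {                        /* the receiving group */
        MPI_Bcast(&buf, 1, MPI_INT, 0, intercomm);
        printf("rank %d received %d\n", world_rank, buf);
    }

    MPI_Comm_free(&intercomm);
    MPI_Comm_free(&intracomm);
    MPI_Finalize();
    return 0;
}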
In C++, the bindings for these functions are in the MPI::Comm class. However, since the collective operations do not make sense on a C++ MPI::Comm (as it is neither an intercommunicator nor an intracommunicator), the functions are all pure virtual.
[Figure 5.2 (diagram showing two groups, Lcomm and Rcomm): Intercommunicator allgather. The focus of data to one process is represented, not mandated by the semantics. The two phases do allgathers in both directions.]
[Figure 5.3 (diagram showing two groups, Lcomm and Rcomm): Intercommunicator reduce-scatter. The focus of data to one process is represented, not mandated by the semantics. The two phases do reduce-scatters in both directions.]