This code is identical to the code in Example 11.2, page 344, except that a call to get has been replaced by a call to accumulate. (Note that, if map is one-to-one, then the code computes B = A(map⁻¹), which is the reverse assignment to the one computed in that previous example.) In a similar manner, we can replace the call to get in Example 11.1, page 342, by a call to accumulate, thus performing the computation with only one communication between any two processes.
11.4 Synchronization Calls
RMA communications fall into two categories:
- active target communication, where data is moved from the memory of one process to the memory of another, and both are explicitly involved in the communication. This communication pattern is similar to message passing, except that all the data transfer arguments are provided by one process, and the second process only participates in the synchronization.
- passive target communication, where data is moved from the memory of one process to the memory of another, and only the origin process is explicitly involved in the transfer. Thus, two origin processes may communicate by accessing the same location in a target window. The process that owns the target window may be distinct from the two communicating processes, in which case it does not participate explicitly in the communication. This communication paradigm is closest to a shared memory model, where shared data can be accessed by all processes, irrespective of location.
RMA communication calls with argument win must occur at a process only within an access epoch for win. Such an epoch starts with an RMA synchronization call on win; it proceeds with zero or more RMA communication calls (MPI_PUT, MPI_GET, or MPI_ACCUMULATE) on win; it completes with another synchronization call on win. This allows users to amortize one synchronization with multiple data transfers and provides implementors more flexibility in the implementation of RMA operations.
Distinct access epochs for win at the same process must be disjoint. On the other hand, epochs pertaining to different win arguments may overlap. Local operations or other MPI calls may also occur during an epoch.
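The epoch structure can be made concrete in code. The following C sketch is illustrative rather than normative: the window win, the arrays A and B, and the ring-neighbor pattern are assumptions, not text from this standard. Each process exposes A through a window and, within a single access epoch bounded by two MPI_WIN_FENCE calls (the first of the three mechanisms described below), writes one value into the window of its right neighbor.

/* Sketch of one access epoch: synchronization call, zero or more
 * RMA calls, synchronization call.  Names are illustrative. */
#include <mpi.h>

void epoch_sketch(MPI_Comm comm)
{
    int rank, size;
    double A[10] = {0}, B[10] = {0};
    MPI_Win win;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Collectively expose A as the window memory. */
    MPI_Win_create(A, 10 * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, comm, &win);

    MPI_Win_fence(0, win);             /* epoch starts                 */
    MPI_Put(&B[0], 1, MPI_DOUBLE,      /* zero or more RMA calls       */
            (rank + 1) % size, 0, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);             /* epoch completes              */

    MPI_Win_free(&win);
}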
In active target communication, a target window can be accessed by RMA operations only within an exposure epoch. Such an epoch is started and completed by RMA synchronization calls executed by the target process. Distinct exposure epochs at a process on the same window must be disjoint, but such an exposure epoch may overlap with exposure epochs on other windows or with access epochs for the same or other win arguments. There is a one-to-one matching between access epochs at origin processes and exposure epochs on target processes: RMA operations issued by an origin process for a target window will access that target window during the same exposure epoch if and only if they were issued during the same access epoch.
In passive target communication the target process does not execute RMA synchronization calls, and there is no concept of an exposure epoch.
MPI provides three synchronization mechanisms:
1. The MPI_WIN_FENCE collective synchronization call supports a simple synchronization pattern that is often used in parallel computations: namely a loosely-synchronous model, where global computation phases alternate with global communication phases. This mechanism is most useful for loosely synchronous algorithms where the graph of communicating processes changes very frequently, or where each process communicates with many others.
This call is used for active target communication. An access epoch at an origin process or an exposure epoch at a target process is started and completed by calls to MPI_WIN_FENCE. A process can access windows at all processes in the group of win during such an access epoch, and the local window can be accessed by all processes in the group of win during such an exposure epoch.
2. The four functions MPI_WIN_START, MPI_WIN_COMPLETE, MPI_WIN_POST, and MPI_WIN_WAIT can be used to restrict synchronization to the minimum: only pairs of communicating processes synchronize, and they do so only when a synchronization is needed to order correctly RMA accesses to a window with respect to local accesses to that same window. This mechanism may be more efficient when each process communicates with few (logical) neighbors, and the communication graph is fixed or changes infrequently. (A code sketch of this pattern follows this list.)
These calls are used for active target communication. An access epoch is started at the origin process by a call to MPI_WIN_START and is terminated by a call to MPI_WIN_COMPLETE. The start call has a group argument that specifies the group of target processes for that epoch. An exposure epoch is started at the target process by a call to MPI_WIN_POST and is completed by a call to MPI_WIN_WAIT. The post call has a group argument that specifies the set of origin processes for that epoch.
3. Finally, shared and exclusive locks are provided by the two functions MPI_WIN_LOCK and MPI_WIN_UNLOCK. Lock synchronization is useful for MPI applications that emulate a shared memory model via MPI calls; e.g., in a "billboard" model, where processes can, at random times, access or update different parts of the billboard. (A code sketch of this pattern follows the discussion of Figure 11.3 below.)
These two calls provide passive target communication. An access epoch is started by a call to MPI_WIN_LOCK and terminated by a call to MPI_WIN_UNLOCK. Only one target window can be accessed during that epoch with win.
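The following C sketch illustrates the second mechanism; it is not code from this standard, and the fixed roles (rank 0 as the only origin, rank 1 as the only target), the window win, and the buffer buf are assumptions made for the illustration.

/* Sketch of general active target synchronization: start/complete
 * at the origin, post/wait at the target.  Roles are assumed. */
#include <mpi.h>

void pscw_sketch(MPI_Comm comm, MPI_Win win, double *buf)
{
    int rank, peer;
    MPI_Group comm_group, peer_group;

    MPI_Comm_rank(comm, &rank);
    if (rank > 1)                       /* only ranks 0 and 1 take part */
        return;

    MPI_Comm_group(comm, &comm_group);
    peer = 1 - rank;                    /* each rank's single partner   */
    MPI_Group_incl(comm_group, 1, &peer, &peer_group);

    if (rank == 0) {                    /* origin                       */
        MPI_Win_start(peer_group, 0, win);   /* access epoch starts     */
        MPI_Put(buf, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
        MPI_Win_complete(win);               /* access epoch completes  */
    } else {                            /* target                       */
        MPI_Win_post(peer_group, 0, win);    /* exposure epoch starts   */
        MPI_Win_wait(win);                   /* completes the exposure  */
    }

    MPI_Group_free(&peer_group);
    MPI_Group_free(&comm_group);
}

Note that only the communicating pair synchronizes: processes with rank greater than 1 make no synchronization call at all.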
Figure 11.1 illustrates the general synchronization pattern for active target communication. The synchronization between post and start ensures that the put call of the origin process does not start until the target process exposes the window (with the post call); the target process will expose the window only after preceding local accesses to the window have completed. The synchronization between complete and wait ensures that the put call of the origin process completes before the window is unexposed (with the wait call). The target process will execute following local accesses to the target window only after the wait has returned.
Figure 11.1 shows operations occurring in the natural temporal order implied by the synchronizations: the post occurs before the matching start, and complete occurs before the matching wait. However, such strong synchronization is more than needed for correct ordering of window accesses. The semantics of MPI calls allow weak synchronization, as illustrated in Figure 11.2. The access to the target window is delayed until the window is exposed, after the post. However, the start may complete earlier; the put and complete may also terminate earlier, if put data is buffered by the implementation. The synchronization calls order window accesses correctly, but do not necessarily synchronize other operations. This weaker synchronization semantic allows for more efficient implementations.
[Figure 11.1: Active target communication. Dashed arrows represent synchronizations (ordering of events).]
[Figure 11.2: Active target communication, with weak synchronization. Dashed arrows represent synchronizations (ordering of events).]
[Figure 11.3: Passive target communication. Dashed arrows represent synchronizations (ordering of events).]
Figure 11.3 illustrates the general synchronization pattern for passive target communication. The first origin process communicates data to the second origin process, through the memory of the target process; the target process is not explicitly involved in the communication. The lock and unlock calls ensure that the two RMA accesses do not occur concurrently. However, they do not ensure that the put by origin 1 will precede the get by origin 2.
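A C sketch of this pattern follows. It is illustrative, not normative: the choice of rank 2 as the target, the value transferred, and in particular the barrier are assumptions. The barrier is added precisely because, as stated above, the locks alone make the two accesses non-concurrent but do not order the put before the get.

/* Sketch of the Figure 11.3 pattern: origin 0 puts into the window
 * of target 2, then origin 1 gets the value.  Assumes win was created
 * collectively and exposes memory at rank 2. */
#include <mpi.h>

void passive_sketch(MPI_Comm comm, MPI_Win win)
{
    int rank;
    double val = 3.14;

    MPI_Comm_rank(comm, &rank);

    if (rank == 0) {                                  /* first origin  */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 2, 0, win);  /* epoch starts  */
        MPI_Put(&val, 1, MPI_DOUBLE, 2, 0, 1, MPI_DOUBLE, win);
        MPI_Win_unlock(2, win);       /* put is complete at the target */
    }

    MPI_Barrier(comm);    /* orders the put before the get (see text)  */

    if (rank == 1) {                                  /* second origin */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 2, 0, win);
        MPI_Get(&val, 1, MPI_DOUBLE, 2, 0, 1, MPI_DOUBLE, win);
        MPI_Win_unlock(2, win);       /* val now holds the data put
                                         by the first origin          */
    }
}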