MPI_FILE_GET_POSITION returns, in offset, the current position of the individual file pointer in etype units relative to the current view.

Advice to users. The offset can be used in a future call to MPI_FILE_SEEK using whence = MPI_SEEK_SET to return to the current position. To set the displacement to the current file pointer position, first convert offset into an absolute byte position using MPI_FILE_GET_BYTE_OFFSET, then call MPI_FILE_SET_VIEW with the resulting displacement. (End of advice to users.)
MPI_FILE_GET_BYTE_OFFSET(fh, offset, disp)

  IN     fh        file handle (handle)
  IN     offset    offset (integer)
  OUT    disp      absolute byte position of offset (integer)

int MPI_File_get_byte_offset(MPI_File fh, MPI_Offset offset,
              MPI_Offset *disp)

MPI_FILE_GET_BYTE_OFFSET(FH, OFFSET, DISP, IERROR)
    INTEGER FH, IERROR
    INTEGER(KIND=MPI_OFFSET_KIND) OFFSET, DISP

{MPI::Offset MPI::File::Get_byte_offset(const MPI::Offset disp) const (binding deprecated, see Section 15.2) }

MPI_FILE_GET_BYTE_OFFSET converts a view-relative offset into an absolute byte position. The absolute byte position (from the beginning of the file) of offset relative to the current view of fh is returned in disp.
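As a sketch of the conversion described in the advice to users above (an illustration, not part of the standard text), the following C fragment saves the current individual file pointer position and re-installs it as the view displacement; fh, etype, and filetype are assumed to be the open file handle and the datatypes of the current view:

    MPI_Offset pos, disp;
    MPI_File_get_position(fh, &pos);          /* position in etype units */
    MPI_File_get_byte_offset(fh, pos, &disp); /* absolute byte position  */
    MPI_File_set_view(fh, disp, etype, filetype, "native", MPI_INFO_NULL);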
13.4.4 Data Access with Shared File Pointers

MPI maintains exactly one shared file pointer per collective MPI_FILE_OPEN (shared among processes in the communicator group). The current value of this pointer implicitly specifies the offset in the data access routines described in this section. These routines only use and update the shared file pointer maintained by MPI. The individual file pointers are not used nor updated.

The shared file pointer routines have the same semantics as the data access with explicit offset routines described in Section 13.4.2, page 407, with the following modifications:

- the offset is defined to be the current value of the MPI-maintained shared file pointer,
- the effect of multiple calls to shared file pointer routines is defined to behave as if the calls were serialized, and
- the use of shared file pointer routines is erroneous unless all processes use the same file view.

For the noncollective shared file pointer routines, the serialization ordering is not deterministic. The user needs to use other synchronization means to enforce a specific order.
After a shared file pointer operation is initiated, the shared file pointer is updated to point to the next etype after the last one that will be accessed. The file pointer is updated relative to the current view of the file.
Noncollective Operations
MPI_FILE_READ_SHARED(fh, buf, count, datatype, status)

  INOUT  fh        file handle (handle)
  OUT    buf       initial address of buffer (choice)
  IN     count     number of elements in buffer (integer)
  IN     datatype  datatype of each buffer element (handle)
  OUT    status    status object (Status)
int MPI_File_read_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
MPI_FILE_READ_SHARED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR

{void MPI::File::Read_shared(void* buf, int count, const MPI::Datatype& datatype, MPI::Status& status) (binding deprecated, see Section 15.2) }

{void MPI::File::Read_shared(void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section 15.2) }

MPI_FILE_READ_SHARED reads a file using the shared file pointer.
MPI_FILE_WRITE_SHARED(fh, buf, count, datatype, status)

  INOUT  fh        file handle (handle)
  IN     buf       initial address of buffer (choice)
  IN     count     number of elements in buffer (integer)
  IN     datatype  datatype of each buffer element (handle)
  OUT    status    status object (Status)
int MPI_File_write_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
MPI_FILE_WRITE_SHARED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
{void MPI::File::Write_shared(const void* buf, int count, const MPI::Datatype& datatype, MPI::Status& status) (binding deprecated, see Section 15.2) }

{void MPI::File::Write_shared(const void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section 15.2) }

MPI_FILE_WRITE_SHARED writes a file using the shared file pointer.
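As an illustration (a minimal sketch, not taken from the standard), the following complete program has every process append one fixed-size record to a common file through the shared file pointer; the file name log.txt and the record format are arbitrary choices. Because the serialization order of noncollective shared file pointer accesses is nondeterministic, the records may appear in any rank order:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        char     line[32];
        int      rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_File_open(MPI_COMM_WORLD, "log.txt",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        /* each record is exactly 21 characters long */
        snprintf(line, sizeof(line), "hello from rank %4d\n", rank);
        MPI_File_write_shared(fh, line, 21, MPI_CHAR, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }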
MPI_FILE_IREAD_SHARED(fh, buf, count, datatype, request)

  INOUT  fh        file handle (handle)
  OUT    buf       initial address of buffer (choice)
  IN     count     number of elements in buffer (integer)
  IN     datatype  datatype of each buffer element (handle)
  OUT    request   request object (handle)
int MPI_File_iread_shared(MPI_File fh, void *buf, int count,
              MPI_Datatype datatype, MPI_Request *request)

MPI_FILE_IREAD_SHARED(FH, BUF, COUNT, DATATYPE, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR

{MPI::Request MPI::File::Iread_shared(void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section 15.2) }

MPI_FILE_IREAD_SHARED is a nonblocking version of the MPI_FILE_READ_SHARED interface.
MPI_FILE_IWRITE_SHARED(fh, buf, count, datatype, request)

  INOUT  fh        file handle (handle)
  IN     buf       initial address of buffer (choice)
  IN     count     number of elements in buffer (integer)
  IN     datatype  datatype of each buffer element (handle)
  OUT    request   request object (handle)

int MPI_File_iwrite_shared(MPI_File fh, void *buf, int count,
              MPI_Datatype datatype, MPI_Request *request)

MPI_FILE_IWRITE_SHARED(FH, BUF, COUNT, DATATYPE, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR
{MPI::Request MPI::File::Iwrite_shared(const void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section 15.2) }

MPI_FILE_IWRITE_SHARED is a nonblocking version of the MPI_FILE_WRITE_SHARED interface.
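Continuing the sketch above (again an illustration, not standard text), the nonblocking variant allows the shared pointer write to overlap with computation; the access is completed with MPI_Wait:

    MPI_Request req;
    MPI_File_iwrite_shared(fh, line, 21, MPI_CHAR, &req);
    /* ... computation that does not modify line ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);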
Collective Operations
The semantics of a collective access using a shared file pointer is that the accesses to the file will be in the order determined by the ranks of the processes within the group. For each process, the location in the file at which data is accessed is the position at which the shared file pointer would be after all processes whose ranks within the group are less than that of this process had accessed their data. In addition, in order to prevent subsequent shared offset accesses by the same processes from interfering with this collective access, the call might return only after all the processes within the group have initiated their accesses. When the call returns, the shared file pointer points to the next etype accessible, according to the file view used by all processes, after the last etype requested.
Advice to users. There may be some programs in which all processes in the group need to access the file using the shared file pointer, but the program may not require that data be accessed in order of process rank. In such programs, using the shared ordered routines (e.g., MPI_FILE_WRITE_ORDERED rather than MPI_FILE_WRITE_SHARED) may enable an implementation to optimize access, improving performance. (End of advice to users.)
Advice to implementors. Accesses to the data requested by all processes do not have to be serialized. Once all processes have issued their requests, locations within the file for all accesses can be computed, and accesses can proceed independently from each other, possibly in parallel. (End of advice to implementors.)
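For instance (a hypothetical fragment, reusing fh and rank from the earlier sketch), replacing the shared write with its ordered counterpart guarantees that record i of the file comes from the process of rank i, while still permitting the implementation to carry out the accesses in parallel:

    char line[32];
    /* each record is exactly 20 characters long */
    snprintf(line, sizeof(line), "result of rank %4d\n", rank);
    MPI_File_write_ordered(fh, line, 20, MPI_CHAR, MPI_STATUS_IGNORE);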
MPI_FILE_READ_ORDERED(fh, buf, count, datatype, status)

  INOUT  fh        file handle (handle)
  OUT    buf       initial address of buffer (choice)
  IN     count     number of elements in buffer (integer)
  IN     datatype  datatype of each buffer element (handle)
  OUT    status    status object (Status)
int MPI_File_read_ordered(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
MPI_FILE_READ_ORDERED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR