|
| template<class Tbasic > |
| constexpr MPI_Datatype | DNDS::__DNDSToMPITypeInt () |
| | Map a DNDS integer type size to an MPI signed-integer datatype.
|
| |
| template<class Tbasic > |
| constexpr MPI_Datatype | DNDS::__DNDSToMPITypeFloat () |
| | Map a DNDS floating-point type size to an MPI datatype.
|
| |
| template<class T > |
| std::pair< MPI_Datatype, MPI_int > | DNDS::BasicType_To_MPIIntType_Custom () |
| | Dispatch to a user-provided CommPair / CommMult + CommType pair on T.
|
| |
| template<class T > |
| std::pair< MPI_Datatype, MPI_int > | DNDS::BasicType_To_MPIIntType () |
| | Deduce an (MPI_Datatype, count) pair that represents a T value.
|
| |
| MPI_int | DNDS::MPIWorldSize () |
| | Convenience: MPI_Comm_size(MPI_COMM_WORLD).
|
| |
| MPI_int | DNDS::MPIWorldRank () |
| | Convenience: MPI_Comm_rank(MPI_COMM_WORLD).
|
| |
| std::string | DNDS::getTimeStamp (const MPIInfo &mpi) |
| | Format a human-readable timestamp using the calling rank as context.
|
| |
| bool | DNDS::Debug::IsDebugged () |
| | Whether the current process is running under a debugger. Implemented via /proc/self/status TracerPid on Linux.
|
| |
| void | DNDS::Debug::MPIDebugHold (const MPIInfo &mpi) |
| | If isDebugging is set, block every rank in a busy-wait loop so the user can attach a debugger and inspect state. Exit by setting isDebugging = false in the debugger.
|
| |
| void | DNDS::assert_false_info_mpi (const char *expr, const char *file, int line, const std::string &info, const DNDS::MPIInfo &mpi) |
| | MPI-aware assertion-failure reporter.
|
| |
| int | DNDS::MPI::GetMPIThreadLevel () |
| | Return the MPI thread-support level the current process was initialised with.
|
| |
| MPI_int | DNDS::MPI::Init_thread (int *argc, char ***argv) |
| | Initialise MPI with thread support, honouring the DNDS_DISABLE_ASYNC_MPI environment override.
|
| |
| int | DNDS::MPI::Finalize () |
| | Release DNDSR-registered MPI resources then call MPI_Finalize.
|
| |
| MPI_int | DNDS::MPI::Bcast (void *buf, MPI_int num, MPI_Datatype type, MPI_int source_rank, MPI_Comm comm) |
| | Wrapper over MPI_Bcast (broadcast from source_rank).
|
| |
| MPI_int | DNDS::MPI::Alltoall (void *send, MPI_int sendNum, MPI_Datatype typeSend, void *recv, MPI_int recvNum, MPI_Datatype typeRecv, MPI_Comm comm) |
| | Wrapper over MPI_Alltoall (fixed per-peer count).
|
| |
| MPI_int | DNDS::MPI::Alltoallv (void *send, MPI_int *sendSizes, MPI_int *sendStarts, MPI_Datatype sendType, void *recv, MPI_int *recvSizes, MPI_int *recvStarts, MPI_Datatype recvType, MPI_Comm comm) |
| | Wrapper over MPI_Alltoallv (variable per-peer counts + displacements).
|
| |
| MPI_int | DNDS::MPI::Allreduce (const void *sendbuf, void *recvbuf, MPI_int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) |
| | Wrapper over MPI_Allreduce.
|
| |
| MPI_int | DNDS::MPI::Scan (const void *sendbuf, void *recvbuf, MPI_int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) |
| | Wrapper over MPI_Scan (inclusive prefix reduction).
|
| |
| MPI_int | DNDS::MPI::Allgather (const void *sendbuf, MPI_int sendcount, MPI_Datatype sendtype, void *recvbuf, MPI_int recvcount, MPI_Datatype recvtype, MPI_Comm comm) |
| | Wrapper over MPI_Allgather.
|
| |
| MPI_int | DNDS::MPI::Barrier (MPI_Comm comm) |
| | Wrapper over MPI_Barrier.
|
| |
| MPI_int | DNDS::MPI::BarrierLazy (MPI_Comm comm, uint64_t checkNanoSecs) |
| | Polling barrier that sleeps checkNanoSecs ns between MPI_Test calls. Reduces CPU spin when many ranks wait unevenly.
|
| |
| MPI_int | DNDS::MPI::WaitallLazy (MPI_int count, MPI_Request *reqs, MPI_Status *statuses, uint64_t checkNanoSecs=10000000) |
| | Like WaitallAuto but sleeps checkNanoSecs ns between polls.
|
| |
| MPI_int | DNDS::MPI::WaitallAuto (MPI_int count, MPI_Request *reqs, MPI_Status *statuses) |
| | Wait on an array of requests, choosing between MPI_Waitall and the lazy-poll variant based on CommStrategy settings.
|
| |
| void | DNDS::MPI::AllreduceOneReal (real &v, MPI_Op op, const MPIInfo &mpi) |
| | Single-scalar Allreduce helper for reals (in-place, count = 1).
|
| |
| void | DNDS::MPI::AllreduceOneIndex (index &v, MPI_Op op, const MPIInfo &mpi) |
| | Single-scalar Allreduce helper for indices (in-place, count = 1).
|
| |
| template<class F > |
| void | DNDS::MPISerialDo (const MPIInfo &mpi, F f) |
| | Execute f on each rank serially, in rank order.
|
| |
| bool | DNDS::MPI::isCudaAware () |
| | Runtime probe: is the current MPI implementation configured with CUDA-aware support? Affects whether arrays are transferred on-device or via the host round-trip.
|
| |
| void | DNDS::InsertCheck (const MPIInfo &mpi, const std::string &info="", const std::string &FUNCTION="", const std::string &FILE="", int LINE=-1) |
| | Barrier + annotated print used by DNDS_MPI_InsertCheck.
|
| |
MPI wrappers: MPIInfo, collective operations, type mapping, CommStrategy.
- Unit Test Coverage (test_MPI.cpp, MPI np=1,2,4)
  - MPIInfo: constructor, setWorld, operator==, field validation
  - MPIWorldSize, MPIWorldRank
  - Allreduce (MPI_SUM, MPI_MAX for real/index), AllreduceOneReal, AllreduceOneIndex
  - Scan (MPI_SUM), Allgather, Bcast, Barrier, Alltoall
  - BasicType_To_MPIIntType (scalar, std::array, Eigen::Matrix)
  - CommStrategy: get/set HIndexed/InSituPack
- Not Yet Tested
  - Alltoallv, WaitallLazy, WaitallAuto, BarrierLazy
  - MPIBufferHandler, MPITypePairHolder, MPIReqHolder (tested indirectly via ArrayTransformer)
  - MPI::ResourceRecycler, MPISerialDo, InsertCheck
  - Sub-communicators, CommStrategy functional impact on ArrayTransformer
Definition in file MPI.hpp.