DNDSR 0.1.0.dev1+gcd065ad
Distributed Numeric Data Structure for CFV
Implementations of the MPI wrapper functions declared in MPI.hpp: retry-aware Bcast/Alltoall/Alltoallv/Allreduce/Allgather/Barrier variants, lazy waits, singleton definitions, CUDA-aware probe.
#include <ctime>
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <chrono>
#include <thread>
#include "MPI.hpp"
#include "Profiling.hpp"
Namespaces | |
| namespace | DNDS |
| The host-side operators are provided as implemented. | |
| namespace | DNDS::Debug |
| namespace | DNDS::MPI |
Macros | |
| #define | __start_timer PerformanceTimer::Instance().StartTimer(PerformanceTimer::Comm) |
| #define | __stop_timer PerformanceTimer::Instance().StopTimer(PerformanceTimer::Comm) |
Functions | |
| bool | DNDS::Debug::IsDebugged () |
Whether the current process is running under a debugger. Implemented via /proc/self/status TracerPid on Linux. | |
| void | DNDS::Debug::MPIDebugHold (const MPIInfo &mpi) |
If isDebugging is set, block every rank in a busy-wait loop so the user can attach a debugger and inspect state. Exit by setting isDebugging = false in the debugger. | |
| void | DNDS::assert_false_info_mpi (const char *expr, const char *file, int line, const std::string &info, const DNDS::MPIInfo &mpi) |
| MPI-aware assertion-failure reporter. | |
| std::string | DNDS::getTimeStamp (const MPIInfo &mpi) |
| Format a human-readable timestamp using the calling rank as context. | |
| MPI_int | DNDS::MPI::Bcast (void *buf, MPI_int num, MPI_Datatype type, MPI_int source_rank, MPI_Comm comm) |
| Thin wrapper over MPI_Bcast. | |
| MPI_int | DNDS::MPI::Alltoall (void *send, MPI_int sendNum, MPI_Datatype typeSend, void *recv, MPI_int recvNum, MPI_Datatype typeRecv, MPI_Comm comm) |
Wrapper over MPI_Alltoall (fixed per-peer count). | |
| MPI_int | DNDS::MPI::Alltoallv (void *send, MPI_int *sendSizes, MPI_int *sendStarts, MPI_Datatype sendType, void *recv, MPI_int *recvSizes, MPI_int *recvStarts, MPI_Datatype recvType, MPI_Comm comm) |
Wrapper over MPI_Alltoallv (variable per-peer counts + displacements). | |
| MPI_int | DNDS::MPI::Allreduce (const void *sendbuf, void *recvbuf, MPI_int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) |
Wrapper over MPI_Allreduce. | |
| MPI_int | DNDS::MPI::Scan (const void *sendbuf, void *recvbuf, MPI_int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) |
Wrapper over MPI_Scan (inclusive prefix reduction). | |
| MPI_int | DNDS::MPI::Allgather (const void *sendbuf, MPI_int sendcount, MPI_Datatype sendtype, void *recvbuf, MPI_int recvcount, MPI_Datatype recvtype, MPI_Comm comm) |
Wrapper over MPI_Allgather. | |
| MPI_int | DNDS::MPI::Barrier (MPI_Comm comm) |
Wrapper over MPI_Barrier. | |
| MPI_int | DNDS::MPI::BarrierLazy (MPI_Comm comm, uint64_t checkNanoSecs) |
Polling barrier that sleeps checkNanoSecs ns between MPI_Test calls. Reduces CPU spin when many ranks wait unevenly. | |
| MPI_int | DNDS::MPI::WaitallLazy (MPI_int count, MPI_Request *reqs, MPI_Status *statuses, uint64_t checkNanoSecs=10000000) |
Like WaitallAuto but sleeps checkNanoSecs ns between polls. | |
| MPI_int | DNDS::MPI::WaitallAuto (MPI_int count, MPI_Request *reqs, MPI_Status *statuses) |
Wait on an array of requests, choosing between MPI_Waitall and the lazy-poll variant based on CommStrategy settings. | |
| bool | DNDS::MPI::isCudaAware () |
| Runtime probe: is the current MPI implementation configured with CUDA-aware support? Affects whether arrays are transferred on-device or via the host round-trip. | |
Variables | |
| bool | DNDS::Debug::isDebugging = false |
| Flag consulted by MPIDebugHold and assert_false_info_mpi. | |
| std::mutex | DNDS::HDF_mutex |
| Global mutex serialising host-side HDF5 calls. | |
Definition in file MPI.cpp.