DNDSR 0.1.0.dev1+gcd065ad
Distributed Numeric Data Structure for CFV
DNDS::MPI Namespace Reference

Classes

class  CommStrategy
 Process-wide singleton that selects how ArrayTransformer packs and waits for MPI messages. More...
 
class  ResourceRecycler
 Singleton that tracks and releases long-lived MPI resources at MPI_Finalize time. More...
 

Functions

MPI_int Bcast (void *buf, MPI_int num, MPI_Datatype type, MPI_int source_rank, MPI_Comm comm)
 Wrapper over MPI_Bcast (logs on error, routes through DNDSR retry logic).
 
MPI_int Alltoall (void *send, MPI_int sendNum, MPI_Datatype typeSend, void *recv, MPI_int recvNum, MPI_Datatype typeRecv, MPI_Comm comm)
 Wrapper over MPI_Alltoall (fixed per-peer count).
 
MPI_int Alltoallv (void *send, MPI_int *sendSizes, MPI_int *sendStarts, MPI_Datatype sendType, void *recv, MPI_int *recvSizes, MPI_int *recvStarts, MPI_Datatype recvType, MPI_Comm comm)
 Wrapper over MPI_Alltoallv (variable per-peer counts + displacements).
 
MPI_int Allreduce (const void *sendbuf, void *recvbuf, MPI_int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
 Wrapper over MPI_Allreduce.
 
MPI_int Scan (const void *sendbuf, void *recvbuf, MPI_int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
 Wrapper over MPI_Scan (inclusive prefix reduction).
 
MPI_int Allgather (const void *sendbuf, MPI_int sendcount, MPI_Datatype sendtype, void *recvbuf, MPI_int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
 Wrapper over MPI_Allgather.
 
MPI_int Barrier (MPI_Comm comm)
 Wrapper over MPI_Barrier.
 
MPI_int BarrierLazy (MPI_Comm comm, uint64_t checkNanoSecs)
 Polling barrier that sleeps checkNanoSecs ns between MPI_Test calls. Reduces CPU spin when many ranks wait unevenly.
 
MPI_int WaitallLazy (MPI_int count, MPI_Request *reqs, MPI_Status *statuses, uint64_t checkNanoSecs=10000000)
 Lazy-polling variant of MPI_Waitall: sleeps checkNanoSecs ns between polls (default 10000000 ns = 10 ms).
 
MPI_int WaitallAuto (MPI_int count, MPI_Request *reqs, MPI_Status *statuses)
 Wait on an array of requests, choosing between MPI_Waitall and the lazy-poll variant based on CommStrategy settings.
 
bool isCudaAware ()
 Runtime probe: is the current MPI implementation configured with CUDA-aware support? Affects whether arrays are transferred on-device or via the host round-trip.
 
int GetMPIThreadLevel ()
 Return the MPI thread-support level the current process was initialised with.
 
MPI_int Init_thread (int *argc, char ***argv)
 Initialise MPI with thread support, honouring the DNDS_DISABLE_ASYNC_MPI environment override.
 
int Finalize ()
 Release DNDSR-registered MPI resources then call MPI_Finalize.
 
void AllreduceOneReal (real &v, MPI_Op op, const MPIInfo &mpi)
 Single-scalar Allreduce helper for reals (in-place, count = 1).
 
void AllreduceOneIndex (index &v, MPI_Op op, const MPIInfo &mpi)
 Single-scalar Allreduce helper for indices (in-place, count = 1).
 
void pybind11_Init_thread (py::module_ &m)
 
void pybind11_MPI_Operations (py::module_ &m)
 

Function Documentation

◆ Allgather()

MPI_int DNDS::MPI::Allgather ( const void *  sendbuf,
MPI_int  sendcount,
MPI_Datatype  sendtype,
void *  recvbuf,
MPI_int  recvcount,
MPI_Datatype  recvtype,
MPI_Comm  comm 
)

Wrapper over MPI_Allgather.

Definition at line 230 of file MPI.cpp.


◆ Allreduce()

MPI_int DNDS::MPI::Allreduce ( const void *  sendbuf,
void *  recvbuf,
MPI_int  count,
MPI_Datatype  datatype,
MPI_Op  op,
MPI_Comm  comm 
)

Wrapper over MPI_Allreduce.

Definition at line 203 of file MPI.cpp.


◆ AllreduceOneIndex()

void DNDS::MPI::AllreduceOneIndex ( index &  v,
MPI_Op  op,
const MPIInfo &  mpi 
)
inline

Single-scalar Allreduce helper for indices (in-place, count = 1).

Definition at line 679 of file MPI.hpp.


◆ AllreduceOneReal()

void DNDS::MPI::AllreduceOneReal ( real &  v,
MPI_Op  op,
const MPIInfo &  mpi 
)
inline

Single-scalar Allreduce helper for reals (in-place, count = 1).

Definition at line 673 of file MPI.hpp.


◆ Alltoall()

MPI_int DNDS::MPI::Alltoall ( void *  send,
MPI_int  sendNum,
MPI_Datatype  typeSend,
void *  recv,
MPI_int  recvNum,
MPI_Datatype  typeRecv,
MPI_Comm  comm 
)

Wrapper over MPI_Alltoall (fixed per-peer count).

Definition at line 166 of file MPI.cpp.


◆ Alltoallv()

MPI_int DNDS::MPI::Alltoallv ( void *  send,
MPI_int *  sendSizes,
MPI_int *  sendStarts,
MPI_Datatype  sendType,
void *  recv,
MPI_int *  recvSizes,
MPI_int *  recvStarts,
MPI_Datatype  recvType,
MPI_Comm  comm 
)

Wrapper over MPI_Alltoallv (variable per-peer counts + displacements).

Definition at line 182 of file MPI.cpp.


◆ Barrier()

MPI_int DNDS::MPI::Barrier ( MPI_Comm  comm)

Wrapper over MPI_Barrier.

Definition at line 248 of file MPI.cpp.


◆ BarrierLazy()

MPI_int DNDS::MPI::BarrierLazy ( MPI_Comm  comm,
uint64_t  checkNanoSecs 
)

Polling barrier that sleeps checkNanoSecs ns between MPI_Test calls. Reduces CPU spin when many ranks wait unevenly.

Definition at line 260 of file MPI.cpp.


◆ Bcast()

MPI_int DNDS::MPI::Bcast ( void *  buf,
MPI_int  num,
MPI_Datatype  type,
MPI_int  source_rank,
MPI_Comm  comm 
)

Wrapper over MPI_Bcast that logs on error and goes through DNDSR retry logic.

Definition at line 150 of file MPI.cpp.


◆ Finalize()

int DNDS::MPI::Finalize ( )
inline

Release DNDSR-registered MPI resources then call MPI_Finalize.

Idempotent: returns immediately if MPI has already been finalised.

Definition at line 531 of file MPI.hpp.


◆ GetMPIThreadLevel()

int DNDS::MPI::GetMPIThreadLevel ( )
inline

Return the MPI thread-support level the current process was initialised with.

Definition at line 474 of file MPI.hpp.


◆ Init_thread()

MPI_int DNDS::MPI::Init_thread ( int *  argc,
char ***  argv 
)
inline

Initialise MPI with thread support, honouring the DNDS_DISABLE_ASYNC_MPI environment override.

  • No env var or value 0: request MPI_THREAD_MULTIPLE (full).
  • 1: drop to MPI_THREAD_SERIALIZED.
  • 2: drop to MPI_THREAD_FUNNELED.
  • >=3: MPI_THREAD_SINGLE.

Aborts via MPI_Abort if the provided level is lower than requested. Idempotent: if MPI is already initialised the call just queries the level.

Definition at line 495 of file MPI.hpp.


◆ isCudaAware()

bool DNDS::MPI::isCudaAware ( )

Runtime probe: is the current MPI implementation configured with CUDA-aware support? Affects whether arrays are transferred on-device or via the host round-trip.

Definition at line 298 of file MPI.cpp.


◆ pybind11_Init_thread()

void DNDS::MPI::pybind11_Init_thread ( py::module_ &  m)

Definition at line 27 of file MPI_bind.cpp.


◆ pybind11_MPI_Operations()

void DNDS::MPI::pybind11_MPI_Operations ( py::module_ &  m)

Definition at line 91 of file MPI_bind.cpp.


◆ Scan()

MPI_int DNDS::MPI::Scan ( const void *  sendbuf,
void *  recvbuf,
MPI_int  count,
MPI_Datatype  datatype,
MPI_Op  op,
MPI_Comm  comm 
)

Wrapper over MPI_Scan (inclusive prefix reduction).

Definition at line 220 of file MPI.cpp.


◆ WaitallAuto()

MPI_int DNDS::MPI::WaitallAuto ( MPI_int  count,
MPI_Request *  reqs,
MPI_Status *  statuses 
)

Wait on an array of requests, choosing between MPI_Waitall and the lazy-poll variant based on CommStrategy settings.

Definition at line 283 of file MPI.cpp.


◆ WaitallLazy()

MPI_int DNDS::MPI::WaitallLazy ( MPI_int  count,
MPI_Request *  reqs,
MPI_Status *  statuses,
uint64_t  checkNanoSecs 
)

Lazy-polling variant of MPI_Waitall: sleeps checkNanoSecs ns between polls (default 10000000 ns = 10 ms).

Definition at line 271 of file MPI.cpp.
