DNDSR 0.1.0.dev1+gcd065ad
Distributed Numeric Data Structure for CFV
SerializerH5.hpp File Reference

MPI-parallel HDF5 serializer implementing the SerializerBase interface. More...

#include "SerializerBase.hpp"
#include <hdf5.h>
#include <fstream>
#include <map>
#include <set>
Include dependency graph for SerializerH5.hpp:


Classes

class  DNDS::Serializer::SerializerH5
 MPI-parallel HDF5 serializer; all ranks collectively read/write a single .h5 file. More...
 

Namespaces

namespace  DNDS
 The host-side operators are provided as implemented.
 
namespace  DNDS::Serializer
 

Detailed Description

MPI-parallel HDF5 serializer implementing the SerializerBase interface.

Unit Test Coverage (test_Serializer.cpp, MPI np=1,2,4)
  • Scalar round-trip: WriteInt/ReadInt, WriteIndex/ReadIndex, WriteReal/ReadReal, WriteString/ReadString
  • Vector round-trip: WriteRealVector, WriteIndexVector with explicit ArrayGlobalOffset; ReadRealVector/ReadIndexVector with ArrayGlobalOffset_Unknown
  • Distributed vector: non-uniform per-rank sizes, write with explicit offset, read with both ArrayGlobalOffset_Unknown (auto-detect from ::rank_offsets) and explicit offset
  • uint8 distributed round-trip: two-pass read (nullptr size query, then read)
  • Path operations: CreatePath, GoToPath, GetCurrentPath, ListCurrentPath (groups materialized by writing content), WriteInt/ReadInt on nested paths
  • String round-trip: WriteString/ReadString with fixed-length HDF5 attributes
Collective I/O and zero-size partitions
All Read/Write vector and byte-array methods use MPI-collective HDF5 calls. Every rank must call them in the same order, even when its local element count is 0 (which happens when nGlobal < nRanks under EvenSplit).

Internally, ReadDataVector uses a two-pass pattern:

  • Pass 1 (buf == nullptr): queries dataset size and resolves the offset.
  • Pass 2 (buf != nullptr): performs the collective H5Dread.

When local size is 0, std::vector<>::data() / host_device_vector<>::data() may return nullptr, which would skip the H5Dread block (guarded by if (buf != nullptr)) and hang the other ranks. To prevent this, each Read*Vector / ReadShared*Vector caller passes a dummy stack pointer when size == 0. Callers of ReadUint8Array must do the same (see SerializerBase).

H5_ReadDataset and H5_WriteDataset accept nLocal == 0: the hyperslab selection with count == 0 selects nothing, so no data is transferred, but the rank still participates in the collective call.

Not Yet Tested
  • SetChunkAndDeflate impact, SetCollectiveRW impact
  • WriteSharedIndexVector / ReadSharedIndexVector (H5 deduplication)
  • WriteRowsizeVector / ReadRowsizeVector
  • WriteIndexVectorPerRank
  • ArrayGlobalOffset_One semantics (rank-0-only write)
  • ArrayGlobalOffset_Parts auto-offset

Definition in file SerializerH5.hpp.