DNDSR 0.1.0.dev1+gcd065ad
Distributed Numeric Data Structure for CFV
/**
 * @file test_MeshDistributedRead.cpp
 * @brief Tests for ReadSerializeAndDistribute: write mesh to H5, then
 * read it back with even-split + ParMetis repartition and verify
 * the rebuilt mesh matches the original.
 *
 * @par Test plan
 * (A) Same-np: write at np=N, read at np=N with a different Metis seed.
 *     Verifies that repartitioning produces a valid mesh.
 * (B) Cross-np: write with a rank subset [0, npWrite), read with all ranks.
 *     Verifies that reading at a different np works.
 *
 * @par Checks (each scenario)
 * - Global cell, node, bnd, and face counts match the reference.
 * - AssertOnFaces passes (face topology correct after full rebuild).
 * - face2cell entries are valid (left cell local, right cell local or ghost).
 * - node2cell non-empty for all owned nodes.
 * - cell2cellOrig globally unique (no duplicate or lost cells).
 * - Coordinate bounding box is non-degenerate.
 *
 * @par Mesh configs
 * [0] UniformSquare_10 -- 2D, 100 quad, non-periodic
 * [1] IV10_10 -- 2D, 100 cells, periodic
 * [2] IV10U_10 -- 2D, 322 cells, periodic
 * [3] NACA0012_H2 -- 2D, 20816 cells, non-periodic
 *
 * @par Bugs found and fixed during development
 *
 * 1. **H5 path navigation** (Mesh_ReadSerializeDistributed.cpp):
 *    `GoToPath("..")` appends ".." literally in the H5 serializer instead
 *    of navigating up. Fixed by saving/restoring absolute paths via
 *    `GetCurrentPath()`.
 *
 * 2. **Bnd partition from ghost cells** (Mesh_ReadSerializeDistributed.cpp):
 *    `bnd2cell(iBnd, 0)` can reference a ghost cell in the even-split
 *    partition. A ghost pull of the cell partition array is needed to
 *    resolve the target partition for bnds whose owner cell is on another
 *    rank.
 *
 * 3. **Ghost append index vs local index** (Mesh_ReadSerializeDistributed.cpp):
 *    `search_indexAppend` returns father_size + ghost_local_idx, but
 *    the bnd partition code used this directly as an index into
 *    `cellPartArrGhost` (which is 0-indexed). Fixed by subtracting
 *    `cellPartArr->Size()`.
 *
 * 4. **Ghost bnd nodes not in coord ghost layer** (Mesh.cpp, BuildGhostPrimary):
 *    Ghost bnds (pulled by BuildGhostPrimary via node2bnd) may reference
 *    nodes not in the coord ghost layer (which only covers cells' nodes).
 *    Fixed by expanding the coord ghost layer after bnd ghosting, with
 *    a collective Allreduce guard to avoid deadlock.
 *
 * 5. **AdjGlobal2LocalPrimary assertion on ghost bnds** (Mesh.cpp):
 *    Ghost bnds' parent cells may not be in the cell ghost layer (which
 *    only includes cell2cell neighbors). Relaxed the assertion to only
 *    enforce for father bnds: the semantic contract is that father bnds
 *    must have their parent cell as a local father cell.
 *
 * 6. **Serializer shared-ptr dedup address reuse** (SerializerBase/H5/JSON):
 *    The ptr_2_pth dedup map was keyed on raw void* addresses. After a
 *    temporary shared_ptr was destroyed, the allocator could reuse the
 *    address for a new shared_ptr, causing false dedup (e.g., bnd2node's
 *    pRowStart written as a ::ref to cell2node's pRowStart). Fixed by
 *    storing shared_ptr<void> to keep objects alive.
 *
 * 7. **CSR Compress before dataOffset** (ArrayTransformer.hpp, ParArray):
 *    ParArray::WriteSerializer computed dataOffset from _pRowStart before
 *    Compress() was called. For uncompressed CSR arrays (np=1 after
 *    TransferDataSerial2Global), _pRowStart was null and pRowStart was
 *    silently skipped. Fixed by calling Compress() first.
 *
 * 8. **Periodic bnd pbi filter on non-local cells** (Mesh.cpp, RecoverCell2CellAndBnd2Cell):
 *    The periodic pbi filter used cell2node.father->pLGlobalMapping->search(),
 *    which only returns father-local indices, skipping non-local cells.
 *    On even-split partitions a periodic bnd's two cells can both be on
 *    other ranks, leaving cellRecCur empty after the filter (assertion
 *    failure). Fixed by refactoring into a two-pass approach: the first pass
 *    collects candidate cells from node intersection for all bnds, then
 *    a ghost pull fetches cell2node and cell2nodePbi for ALL candidates,
 *    and the second pass applies the pbi filter using search_indexAppend on
 *    the ghost mapping. This correctly handles non-local cells on any
 *    partition.
 *
 * 9. **ctest parallel collision** (test_MeshDistributedRead.cpp):
 *    All np variants wrote H5 files to the same temp directory. When ctest
 *    ran them in parallel, file corruption occurred. Fixed by including
 *    np in the temp directory name.
 */

#define DOCTEST_CONFIG_IMPLEMENT
#include "doctest.h"

#include "Geom/Mesh.hpp"
#include "DNDS/SerializerH5.hpp"
#include <string>
#include <filesystem>
#include <algorithm>
#include <numeric>

using namespace DNDS;
using namespace DNDS::Geom;

// ---------------------------------------------------------------------------
// Config
// ---------------------------------------------------------------------------

/// One mesh scenario: input file, geometry flags, and expected global counts.
struct MeshConfig
{
    const char *name;          ///< Human-readable label.
    const char *file;          ///< Filename relative to data/mesh/.
    int dim;                   ///< Spatial dimension.
    bool periodic;             ///< Has periodic boundaries.
    tPoint translation1;       ///< Periodic translation axis 1.
    tPoint translation2;       ///< Periodic translation axis 2.
    tPoint translation3;       ///< Periodic translation axis 3.
    DNDS::index expectedCells; ///< Expected global cell count (-1 = skip check)
    DNDS::index expectedBnds;  ///< Expected global bnd count (-1 = skip check)
};

static const MeshConfig g_configs[] = {
    {"UniformSquare_10", "UniformSquare_10.cgns", 2, false,
     {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, 100, 40},
    {"IV10_10", "IV10_10.cgns", 2, true,
     {10, 0, 0}, {0, 10, 0}, {0, 0, 10}, 100, -1},
    {"IV10U_10", "IV10U_10.cgns", 2, true,
     {10, 0, 0}, {0, 10, 0}, {0, 0, 10}, 322, -1},
    {"NACA0012_H2", "NACA0012_H2.cgns", 2, false,
     {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, 20816, 484},
};
static constexpr int N_CONFIGS = sizeof(g_configs) / sizeof(g_configs[0]);

// ---------------------------------------------------------------------------
// Globals
// ---------------------------------------------------------------------------
static MPIInfo g_mpi;

/// Reference global counts recorded while building each config's mesh.
struct RefCounts
{
    DNDS::index nCellGlobal, nNodeGlobal, nBndGlobal, nFaceGlobal;
};
static RefCounts g_refCounts[N_CONFIGS];

/// Meshes rebuilt from same-np distributed read
static ssp<UnstructuredMesh> g_sameNpMesh[N_CONFIGS];

/// Meshes rebuilt from cross-np distributed read (only if np >= 2)
static ssp<UnstructuredMesh> g_crossNpMesh[N_CONFIGS];
static bool g_crossNpAvailable = false;

// H5 file paths
static std::string g_h5Same[N_CONFIGS];
static std::string g_h5Cross[N_CONFIGS];

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/// Path to a mesh file under <repo>/data/mesh/, derived by trimming four
/// path components off this source file's path.
static std::string meshPath(const std::string &name)
{
    std::string f(__FILE__);
    for (int i = 0; i < 4; i++)
    {
        auto pos = f.rfind('/');
        if (pos == std::string::npos)
            pos = f.rfind('\\');
        if (pos != std::string::npos)
            f = f.substr(0, pos);
    }
    return f + "/data/mesh/" + name;
}

static std::string tmpDir()
{
    std::string f(__FILE__);
    for (int i = 0; i < 4; i++)
    {
        auto pos = f.rfind('/');
        if (pos == std::string::npos)
            pos = f.rfind('\\');
        if (pos != std::string::npos)
            f = f.substr(0, pos);
    }
    // Include np in the directory name to avoid conflicts when ctest runs
    // multiple np variants in parallel.
    return f + "/data/tmp_test_distributed_read_np" + std::to_string(g_mpi.size);
}

static void setPeriodicIfNeeded(ssp<UnstructuredMesh> &mesh, const MeshConfig &cfg)
{
    if (cfg.periodic)
    {
        tPoint zero{0, 0, 0};
        mesh->SetPeriodicGeometry(
            cfg.translation1, zero, zero,
            cfg.translation2, zero, zero,
            cfg.translation3, zero, zero);
    }
}

/// Full pipeline rebuild after ReadSerializeAndDistribute.
static void rebuildAfterDistributedRead(ssp<UnstructuredMesh> &mesh, const MeshConfig &cfg)
{
    mesh->RecoverNode2CellAndNode2Bnd();
    mesh->RecoverCell2CellAndBnd2Cell();
    mesh->BuildGhostPrimary();
    mesh->AdjGlobal2LocalPrimary();
    mesh->AdjGlobal2LocalN2CB();

    mesh->InterpolateFace();
    mesh->AssertOnFaces();

    mesh->AdjLocal2GlobalN2CB();
    mesh->BuildGhostN2CB();
    mesh->AdjGlobal2LocalN2CB();

    if (cfg.periodic)
        mesh->RecreatePeriodicNodes();
    mesh->BuildVTKConnectivity();
}

/// Build reference mesh, record counts, and write to H5.
static void buildAndWriteRef(
    int ic, const MeshConfig &cfg, const MPIInfo &mpi, const std::string &h5Path)
{
    auto mesh = std::make_shared<UnstructuredMesh>(mpi, cfg.dim);
    UnstructuredMeshSerialRW reader(mesh, 0);
    setPeriodicIfNeeded(mesh, cfg);

    reader.ReadFromCGNSSerial(meshPath(cfg.file));
    reader.Deduplicate1to1Periodic(1e-8);
    reader.BuildCell2Cell();

    PartitionOptions pOpt;
    pOpt.metisType = "KWAY";
    pOpt.metisUfactor = 30;
    pOpt.metisSeed = 42;
    pOpt.metisNcuts = 1;
    reader.MeshPartitionCell2Cell(pOpt);
    reader.PartitionReorderToMeshCell2Cell();

    mesh->RecoverNode2CellAndNode2Bnd();
    mesh->RecoverCell2CellAndBnd2Cell();
    mesh->BuildGhostPrimary();
    mesh->AdjGlobal2LocalPrimary();
    mesh->AdjGlobal2LocalN2CB();

    mesh->InterpolateFace();
    mesh->AssertOnFaces();

    mesh->AdjLocal2GlobalN2CB();
    mesh->BuildGhostN2CB();
    mesh->AdjGlobal2LocalN2CB();

    // Record reference counts
    g_refCounts[ic].nCellGlobal = mesh->NumCellGlobal();
    g_refCounts[ic].nNodeGlobal = mesh->NumNodeGlobal();
    g_refCounts[ic].nBndGlobal = mesh->NumBndGlobal();
    g_refCounts[ic].nFaceGlobal = mesh->NumFaceGlobal();

    // Write to H5
    mesh->AdjLocal2GlobalPrimary();
    auto ser = std::make_shared<Serializer::SerializerH5>(mpi);
    ser->OpenFile(h5Path, false);
    mesh->WriteSerialize(ser, "meshPart");
    ser->CloseFile();
}

/// Write mesh to H5 using a sub-communicator of size npWrite.
/// Only ranks [0, npWrite) participate in the write.
static void buildAndWriteRefSubComm(
    int ic, const MeshConfig &cfg, int npWrite, const std::string &h5Path)
{
    int color = (g_mpi.rank < npWrite) ? 0 : MPI_UNDEFINED;
    MPI_Comm writeComm = MPI_COMM_NULL;
    MPI_Comm_split(MPI_COMM_WORLD, color, g_mpi.rank, &writeComm);

    if (writeComm != MPI_COMM_NULL)
    {
        MPIInfo writeMpi(writeComm);
        buildAndWriteRef(ic, cfg, writeMpi, h5Path);
        MPI_Comm_free(&writeComm);
    }

    // Broadcast reference counts from rank 0 (a writer) to all ranks
    MPI::Bcast(&g_refCounts[ic], sizeof(RefCounts) / sizeof(DNDS::index), DNDS_MPI_INDEX, 0, g_mpi.comm);
}

/// Read from H5 with ReadSerializeAndDistribute using the full world communicator.
static ssp<UnstructuredMesh> distributedRead(
    const MeshConfig &cfg, const std::string &h5Path)
{
    auto mesh = std::make_shared<UnstructuredMesh>(g_mpi, cfg.dim);
    setPeriodicIfNeeded(mesh, cfg);

    PartitionOptions pOpt;
    pOpt.metisType = "KWAY";
    pOpt.metisUfactor = 5;
    pOpt.metisSeed = 777; // deliberately different from write
    pOpt.metisNcuts = 1;

    auto ser = std::make_shared<Serializer::SerializerH5>(g_mpi);
    ser->OpenFile(h5Path, true);
    mesh->ReadSerializeAndDistribute(ser, "meshPart", pOpt);
    ser->CloseFile();

    rebuildAfterDistributedRead(mesh, cfg);
    return mesh;
}

// ---------------------------------------------------------------------------
// Verification helpers
// ---------------------------------------------------------------------------

/// Collect cell2cellOrig from all ranks, check global uniqueness.
static void checkOrigUnique(UnstructuredMesh &mesh, DNDS::index expectedGlobal, const MPIInfo &mpi)
{
    std::vector<DNDS::index> localOrig(mesh.NumCell());
    for (DNDS::index iC = 0; iC < mesh.NumCell(); iC++)
        localOrig[iC] = mesh.cell2cellOrig(iC, 0);

    int localCount = static_cast<int>(localOrig.size());
    std::vector<int> allCounts(mpi.size);
    MPI_Allgather(&localCount, 1, MPI_INT, allCounts.data(), 1, MPI_INT, mpi.comm);

    std::vector<int> displs(mpi.size + 1, 0);
    for (int r = 0; r < mpi.size; r++)
        displs[r + 1] = displs[r] + allCounts[r];
    int totalCount = displs[mpi.size];

    std::vector<DNDS::index> allOrig(totalCount);
    MPI_Allgatherv(localOrig.data(), localCount, DNDS_MPI_INDEX,
                   allOrig.data(), allCounts.data(), displs.data(), DNDS_MPI_INDEX, mpi.comm);

    std::sort(allOrig.begin(), allOrig.end());
    auto last = std::unique(allOrig.begin(), allOrig.end());
    CHECK(last == allOrig.end());
    CHECK(totalCount == expectedGlobal);
}

/// Compute the global coordinate bounding box.
static std::pair<Eigen::Vector3d, Eigen::Vector3d>
computeBBox(UnstructuredMesh &mesh, const MPIInfo &mpi)
{
    Eigen::Vector3d localMin = Eigen::Vector3d::Constant(1e100);
    Eigen::Vector3d localMax = Eigen::Vector3d::Constant(-1e100);
    for (DNDS::index iN = 0; iN < mesh.NumNode(); iN++)
        for (int d = 0; d < 3; d++)
        {
            localMin(d) = std::min(localMin(d), mesh.coords[iN](d));
            localMax(d) = std::max(localMax(d), mesh.coords[iN](d));
        }
    Eigen::Vector3d globalMin, globalMax;
    MPI_Allreduce(localMin.data(), globalMin.data(), 3, MPI_DOUBLE, MPI_MIN, mpi.comm);
    MPI_Allreduce(localMax.data(), globalMax.data(), 3, MPI_DOUBLE, MPI_MAX, mpi.comm);
    return {globalMin, globalMax};
}
349
350// ---------------------------------------------------------------------------
351int main(int argc, char **argv)
352{
353 MPI_Init(&argc, &argv);
354 g_mpi.setWorld();
355
356 // Create temp directory
357 if (g_mpi.rank == 0)
358 std::filesystem::create_directories(tmpDir());
359 MPI::Barrier(g_mpi.comm);
360
361 // --- (A) Same-np tests ---
362 for (int ic = 0; ic < N_CONFIGS; ic++)
363 {
364 g_h5Same[ic] = tmpDir() + "/" + g_configs[ic].name + "_same.dnds.h5";
365 if (g_mpi.rank == 0)
366 log() << "[setup] same-np: building + writing " << g_configs[ic].name << std::endl;
367 buildAndWriteRef(ic, g_configs[ic], g_mpi, g_h5Same[ic]);
368
369 if (g_mpi.rank == 0)
370 log() << "[setup] same-np: distributed read " << g_configs[ic].name << std::endl;
371 g_sameNpMesh[ic] = distributedRead(g_configs[ic], g_h5Same[ic]);
372 }
373
374 // --- (B) Cross-np tests (write with npWrite < np, read with np) ---
375 g_crossNpAvailable = (g_mpi.size >= 2);
376 if (g_crossNpAvailable)
377 {
378 int npWrite = std::max(1, g_mpi.size / 2); // write with half the ranks
379 for (int ic = 0; ic < N_CONFIGS; ic++)
380 {
381 g_h5Cross[ic] = tmpDir() + "/" + g_configs[ic].name + "_cross.dnds.h5";
382 if (g_mpi.rank == 0)
383 log() << "[setup] cross-np: building + writing " << g_configs[ic].name
384 << " with npWrite=" << npWrite << std::endl;
385 buildAndWriteRefSubComm(ic, g_configs[ic], npWrite, g_h5Cross[ic]);
386
387 MPI::Barrier(g_mpi.comm);
388 if (g_mpi.rank == 0)
389 log() << "[setup] cross-np: distributed read " << g_configs[ic].name << std::endl;
390 g_crossNpMesh[ic] = distributedRead(g_configs[ic], g_h5Cross[ic]);
391 }
392 }
393
394 // Run tests
395 doctest::Context ctx;
396 ctx.applyCommandLine(argc, argv);
397 int res = ctx.run();
398
399 // Cleanup
400 for (auto &m : g_sameNpMesh)
401 m.reset();
402 for (auto &m : g_crossNpMesh)
403 m.reset();
404 MPI::Barrier(g_mpi.comm);
405 if (g_mpi.rank == 0)
406 std::filesystem::remove_all(tmpDir());
407
408 MPI_Finalize();
409 return res;
410}

// ===========================================================================
// Same-np tests
// ===========================================================================

#define FOR_EACH_CONFIG(body)                  \
    for (int ic = 0; ic < N_CONFIGS; ic++)     \
    {                                          \
        CAPTURE(ic);                           \
        CAPTURE(g_configs[ic].name);           \
        body                                   \
    }

TEST_CASE("SameNp: global cell count")
{
    FOR_EACH_CONFIG({
        CHECK(g_sameNpMesh[ic]->NumCellGlobal() == g_refCounts[ic].nCellGlobal);
    })
}

TEST_CASE("SameNp: global node count")
{
    FOR_EACH_CONFIG({
        CHECK(g_sameNpMesh[ic]->NumNodeGlobal() == g_refCounts[ic].nNodeGlobal);
    })
}

TEST_CASE("SameNp: global bnd count")
{
    FOR_EACH_CONFIG({
        CHECK(g_sameNpMesh[ic]->NumBndGlobal() == g_refCounts[ic].nBndGlobal);
    })
}

TEST_CASE("SameNp: global face count")
{
    FOR_EACH_CONFIG({
        CHECK(g_sameNpMesh[ic]->NumFaceGlobal() == g_refCounts[ic].nFaceGlobal);
    })
}

TEST_CASE("SameNp: expected counts from config")
{
    FOR_EACH_CONFIG({
        if (g_configs[ic].expectedCells >= 0)
            CHECK(g_refCounts[ic].nCellGlobal == g_configs[ic].expectedCells);
        if (g_configs[ic].expectedBnds >= 0)
            CHECK(g_refCounts[ic].nBndGlobal == g_configs[ic].expectedBnds);
    })
}

TEST_CASE("SameNp: every rank has cells")
{
    FOR_EACH_CONFIG({
        CHECK(g_sameNpMesh[ic]->NumCell() > 0);
        // NumNode() can be 0 if all cell2node entries point to ghost nodes
        CHECK(g_sameNpMesh[ic]->NumNode() >= 0);
    })
}

TEST_CASE("SameNp: face2cell valid")
{
    FOR_EACH_CONFIG({
        auto &mesh = *g_sameNpMesh[ic];
        for (DNDS::index iF = 0; iF < mesh.NumFace(); iF++)
        {
            DNDS::index iCL = mesh.face2cell(iF, 0);
            DNDS::index iCR = mesh.face2cell(iF, 1);
            REQUIRE(iCL >= 0);
            REQUIRE(iCL < mesh.NumCell());
            if (iCR != UnInitIndex)
            {
                REQUIRE(iCR >= 0);
                REQUIRE(iCR < mesh.NumCellProc());
            }
        }
    })
}

TEST_CASE("SameNp: node2cell non-empty")
{
    FOR_EACH_CONFIG({
        auto &mesh = *g_sameNpMesh[ic];
        for (DNDS::index iN = 0; iN < mesh.NumNode(); iN++)
            CHECK(mesh.node2cell.RowSize(iN) > 0);
    })
}

TEST_CASE("SameNp: cell2cellOrig globally unique")
{
    FOR_EACH_CONFIG({
        checkOrigUnique(*g_sameNpMesh[ic], g_refCounts[ic].nCellGlobal, g_mpi);
    })
}

TEST_CASE("SameNp: coordinate bounding box is sane")
{
    FOR_EACH_CONFIG({
        auto bbox = computeBBox(*g_sameNpMesh[ic], g_mpi);
        for (int d = 0; d < g_configs[ic].dim; d++)
            CHECK(bbox.second(d) > bbox.first(d));
    })
}

// ===========================================================================
// Cross-np tests (write with fewer ranks, read with all)
// ===========================================================================

#define FOR_EACH_CROSS_CONFIG(body)            \
    if (!g_crossNpAvailable)                   \
        return;                                \
    for (int ic = 0; ic < N_CONFIGS; ic++)     \
    {                                          \
        CAPTURE(ic);                           \
        CAPTURE(g_configs[ic].name);           \
        body                                   \
    }

TEST_CASE("CrossNp: global cell count")
{
    FOR_EACH_CROSS_CONFIG({
        CHECK(g_crossNpMesh[ic]->NumCellGlobal() == g_refCounts[ic].nCellGlobal);
    })
}

TEST_CASE("CrossNp: global node count")
{
    FOR_EACH_CROSS_CONFIG({
        CHECK(g_crossNpMesh[ic]->NumNodeGlobal() == g_refCounts[ic].nNodeGlobal);
    })
}

TEST_CASE("CrossNp: global bnd count")
{
    FOR_EACH_CROSS_CONFIG({
        CHECK(g_crossNpMesh[ic]->NumBndGlobal() == g_refCounts[ic].nBndGlobal);
    })
}

TEST_CASE("CrossNp: global face count")
{
    FOR_EACH_CROSS_CONFIG({
        CHECK(g_crossNpMesh[ic]->NumFaceGlobal() == g_refCounts[ic].nFaceGlobal);
    })
}

TEST_CASE("CrossNp: every rank has cells")
{
    FOR_EACH_CROSS_CONFIG({
        CHECK(g_crossNpMesh[ic]->NumCell() > 0);
        CHECK(g_crossNpMesh[ic]->NumNode() >= 0);
    })
}

TEST_CASE("CrossNp: face2cell valid")
{
    FOR_EACH_CROSS_CONFIG({
        auto &mesh = *g_crossNpMesh[ic];
        for (DNDS::index iF = 0; iF < mesh.NumFace(); iF++)
        {
            DNDS::index iCL = mesh.face2cell(iF, 0);
            DNDS::index iCR = mesh.face2cell(iF, 1);
            REQUIRE(iCL >= 0);
            REQUIRE(iCL < mesh.NumCell());
            if (iCR != UnInitIndex)
            {
                REQUIRE(iCR >= 0);
                REQUIRE(iCR < mesh.NumCellProc());
            }
        }
    })
}

TEST_CASE("CrossNp: cell2cellOrig globally unique")
{
    FOR_EACH_CROSS_CONFIG({
        checkOrigUnique(*g_crossNpMesh[ic], g_refCounts[ic].nCellGlobal, g_mpi);
    })
}

TEST_CASE("CrossNp: node2cell non-empty")
{
    FOR_EACH_CROSS_CONFIG({
        auto &mesh = *g_crossNpMesh[ic];
        for (DNDS::index iN = 0; iN < mesh.NumNode(); iN++)
            CHECK(mesh.node2cell.RowSize(iN) > 0);
    })
}