# Building DNDSR

## Prerequisites

| Requirement | Version | Notes |
|---|---|---|
| C++ compiler | GCC 9+ / Clang 8+ | Must support C++17 |
| MPI | MPI-3 | OpenMPI or MPICH |
| CMake | >= 3.21 | |
| Python | >= 3.10 | System Python recommended (not conda) |
| Ninja | any | Optional but recommended for speed |

C++ libraries (managed via the `external/cfd_externals` submodule):
Eigen, Boost, CGAL, nlohmann_json, fmt, pybind11, HDF5, CGNS,
Metis, ParMetis, ZLIB. Optional: CUDA toolkit, SuperLU_dist.
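The table above can be checked mechanically before configuring. Below is a hypothetical helper script (not part of the repository); the tool names are taken from the prerequisites table, and the version parsing is a best-effort sketch.

```python
# check_prereqs.py -- hypothetical helper, not shipped with DNDSR.
import shutil
import subprocess
import sys

def parse_version(text: str) -> tuple:
    """Extract a dotted version, e.g. 'cmake version 3.28.1' -> (3, 28, 1)."""
    for token in text.split():
        parts = token.split(".")
        if len(parts) >= 2 and all(p.isdigit() for p in parts):
            return tuple(int(p) for p in parts)
    raise ValueError(f"no version found in {text!r}")

def check_prerequisites() -> list:
    """Return a list of human-readable problems; empty means all good."""
    problems = []
    if sys.version_info < (3, 10):
        problems.append("Python >= 3.10 required")
    for tool in ("mpicc", "mpicxx"):
        if shutil.which(tool) is None:
            problems.append(f"{tool} not found on PATH")
    if shutil.which("cmake") is None:
        problems.append("cmake not found on PATH")
    else:
        out = subprocess.run(["cmake", "--version"],
                             capture_output=True, text=True).stdout
        if parse_version(out) < (3, 21):
            problems.append("CMake >= 3.21 required")
    if shutil.which("ninja") is None:
        problems.append("ninja not found (optional, recommended for speed)")
    return problems

if __name__ == "__main__":
    for problem in check_prerequisites():
        print("WARNING:", problem)
```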
## Building External Dependencies

DNDSR requires two sets of external dependencies: binary libraries built
from the `cfd_externals` submodule, and header-only libraries shipped as a
tarball.

### Binary libraries (HDF5, CGNS, Metis, ParMetis, ZLIB)

```shell
git submodule update --init --recursive --depth=1
cd external/cfd_externals
CC=mpicc CXX=mpicxx python cfd_externals_build.py
cd ../..
```

This installs all binary libraries into `external/cfd_externals/install/`.

### Header-only libraries (Eigen, Boost, CGAL, fmt, pybind11, …)

Download the latest release tarball from GitHub and extract it into
the `external/` directory:

```shell
curl -L -o external/external_headeronlys.tar.gz \
    https://github.com/harryzhou2000/cfd_externals_headeronlys/releases/latest/download/external_headeronlys.tar.gz
cd external
tar -xzf external_headeronlys.tar.gz
cd ..
```

After extraction, directories such as `external/eigen/`,
`external/boost/`, and `external/CGAL/` should exist.
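A quick post-extraction sanity check can be sketched in Python; the directory names below are the ones mentioned above, and the helper itself is hypothetical.

```python
from pathlib import Path

# Header-only directories the text above says should exist after extraction.
EXPECTED_HEADER_DIRS = ["eigen", "boost", "CGAL"]

def missing_header_dirs(external: Path) -> list:
    """Return the expected header-only directories absent under external/."""
    return [name for name in EXPECTED_HEADER_DIRS
            if not (external / name).is_dir()]
```

For example, `missing_header_dirs(Path("external"))` should return an empty list after a successful extraction.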
## CMake Module Architecture

The build system is split into focused modules under `cmake/`. The
main `CMakeLists.txt` (~100 lines) orchestrates them in dependency
order:

| # | Module | Purpose |
|---|---|---|
| 1 | | Detect and bundle libstdc++ or libc++ for the Python package |
| 2 | | All user-facing cache options, commit recording, ccache |
| 3 | | CUDA language enable, toolkit discovery, CCCL include path |
| 4 | | LTO, MPI discovery, platform flags, OpenMP |
| 5 | | find_library/find_path, pybind11/fmt/superlu subdirs |
| 6 | | Helper functions |
| 7 | | CTest registration for doctest C++ tests and pytest |
| 8 | | Application executables |
| 9 | | Doxygen documentation target |
| 10 | | compile_commands.json post-processing, automatic stub generation |

Between modules 6 and 7, the five core library subdirectories are
added: `src/DNDS`, `src/Geom`, `src/CFV`, `src/Euler`, `src/EulerP`.
## Building C++ (Solvers and Libraries)

### Using CMake Presets

```shell
cmake --preset release-test       # Release with unit tests enabled
cmake --build --preset tests -j32 # Build all C++ unit tests
ctest --preset unit               # Run unit tests
```

Available presets (defined in `CMakePresets.json`):

| Preset | Build Type | Tests | Build Dir | Notes |
|---|---|---|---|---|
| | Release | OFF | | Minimal solver build |
| | Debug | ON | | Full debug symbols |
| | Release | ON | | Main development preset |
| | Release | ON | | Enables CUDA GPU support |
| | Release | ON | | CI (no ccache) |

Note: each preset writes to its own build directory (`build/`,
`build-debug/`, `build-cuda/`, etc.). If you switch presets, use the
corresponding directory in subsequent `cmake --build` and `ctest`
commands.
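The preset-to-directory mapping can also be listed programmatically. A small Python sketch, assuming `CMakePresets.json` follows the standard schema with `configurePresets` entries carrying a `binaryDir` field (the sample input is illustrative, not the project's actual file):

```python
import json

def preset_build_dirs(presets_json: str) -> dict:
    """Map each configure preset name to its binaryDir."""
    data = json.loads(presets_json)
    return {p["name"]: p.get("binaryDir", "")
            for p in data.get("configurePresets", [])}

# Illustrative input -- not the project's actual CMakePresets.json.
sample = '''
{
  "configurePresets": [
    {"name": "release-test", "binaryDir": "${sourceDir}/build"},
    {"name": "cuda", "binaryDir": "${sourceDir}/build-cuda"}
  ]
}
'''
print(preset_build_dirs(sample))
# {'release-test': '${sourceDir}/build', 'cuda': '${sourceDir}/build-cuda'}
```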
### Manual CMake Configuration

```shell
cmake -S . -B build -G Ninja -DDNDS_BUILD_TESTS=ON
cmake --build build -t euler -j32            # Build a solver
cmake --build build -t dnds_unit_tests -j32  # Build C++ tests
ctest --test-dir build -R dnds_ --output-on-failure
```

Letting CMake detect the system default compiler usually works; set
`CC=mpicc CXX=mpicxx` when unsure which MPI implementation CMake
will find.
### Solver Targets

Each Euler model variant generates a separate executable:

| Target | Model | Dimension |
|---|---|---|
| | Navier-Stokes | 2D/3D auto |
| | Navier-Stokes | 2D only |
| | Navier-Stokes | 3D only |
| | Spalart-Allmaras | 2D/3D |
| | Spalart-Allmaras | 3D only |
| | k-omega 2-equation | 2D/3D |
| | k-omega 2-equation | 3D only |
| | Extended | 2D/3D |
| | Extended | 3D only |
## CMake Cache Options

Key options (set via `-D<OPTION>=<VALUE>` or in a preset):

| Option | Default | Description |
|---|---|---|
| | OFF | Build C++ unit tests (doctest) |
| | OFF | Enable CUDA GPU support |
| | ON | Enable OpenMP |
| | ON | Use -O3 -g0 on core library modules |
| | OFF | Link-time optimization |
| | OFF | Use -flto=thin (Clang) |
| | OFF | Disable LTO for pybind11 modules only |
| | OFF | Use -march=native |
| | OFF | Use -funsafe-math-optimizations |
| | auto | Use ccache (auto-detected, off for pip) |
| | ON | Use -rdynamic on POSIX |
| | OFF | Generate compile_commands.json for clangd |
| | OFF | Run clang-tidy during build |
| | ON | Record git commit hash at configure time |
| | OFF | Use precompiled headers |
| | OFF | Use external BLAS in Eigen |
| | OFF | Use external LAPACK in Eigen |
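When combining several options in one configure call, the `-D<OPTION>=<VALUE>` arguments can be assembled programmatically. A minimal sketch; the helper function is hypothetical, and the option names used in the example are ones that appear elsewhere in this document:

```python
def cmake_configure_args(options: dict, source=".", build="build") -> list:
    """Build a cmake configure command line with -D<OPTION>=<VALUE> flags."""
    args = ["cmake", "-S", source, "-B", build, "-G", "Ninja"]
    for name, value in options.items():
        if isinstance(value, bool):
            value = "ON" if value else "OFF"  # CMake boolean spelling
        args.append(f"-D{name}={value}")
    return args

# DNDS_BUILD_TESTS and DNDS_USE_CUDA appear in the build commands above.
print(" ".join(cmake_configure_args({
    "DNDS_BUILD_TESTS": True,
    "DNDS_USE_CUDA": False,
})))
# cmake -S . -B build -G Ninja -DDNDS_BUILD_TESTS=ON -DDNDS_USE_CUDA=OFF
```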
## Building the Python Package

### Creating a Virtual Environment

Use the system Python (not conda) to avoid libstdc++ version conflicts:

```shell
python3.12 -m venv venv
source venv/bin/activate
pip install numpy scipy pytest pytest-mpi pytest-timeout mpi4py \
    pybind11 pybind11-stubgen scikit-build-core ninja
```

Why not conda? Conda Python binaries embed an `RPATH` pointing to conda's
bundled libstdc++, which may be too old for the compiler used to build
DNDSR. Using the system Python avoids this entirely.
### Three Ways to Build the Python Package

#### 1. In-place build (no pip install)

Build the pybind11 `.so` modules via the main CMake build, then
install them (and auto-generate type stubs) into `python/DNDSR/`:

```shell
source venv/bin/activate
cmake -S . -B build -G Ninja
cmake --build build -t dnds_pybind11 geom_pybind11 cfv_pybind11 eulerP_pybind11 -j32
cmake --install build --component py
```

Use the package by setting `PYTHONPATH`:

```shell
PYTHONPATH=python pytest test/
# or:
export PYTHONPATH=$PWD/python
python -c "from DNDSR import DNDS, Geom, CFV, EulerP"
```

After making C++ changes, rebuild and reinstall:

```shell
cmake --build build -t dnds_pybind11 geom_pybind11 cfv_pybind11 eulerP_pybind11 -j32
cmake --install build --component py  # reinstalls .so AND regenerates stubs
```
#### 2. Editable install (development with pip)

Uses scikit-build-core and builds into a separate `build_py/` directory:

```shell
source venv/bin/activate
CMAKE_BUILD_PARALLEL_LEVEL=32 pip install -e . --no-build-isolation
```

This configures and builds all four pybind11 targets, installs `.so`
files into `python/DNDSR/<Module>/_ext/`, generates `.pyi` stubs, and
registers the package as editable in the venv.

After the initial install, rebuild only the C++ parts:

```shell
cmake --build build_py -t dnds_pybind11 geom_pybind11 cfv_pybind11 eulerP_pybind11 -j32
cmake --install build_py --component py
```

#### 3. Full wheel install

```shell
CMAKE_BUILD_PARALLEL_LEVEL=32 pip install . --verbose
```

This builds a wheel with `.so` files, bundled shared libraries, and
`.pyi` stubs included.
### Controlling Build Parallelism

The default is `-j0` (all available cores). Override via environment
variable:

```shell
SKBUILD_BUILD_TOOL_ARGS="-j8" pip install -e . --no-build-isolation
```
### Pybind11 Module Targets

| Target | Module | C++ Source | Output .so location |
|---|---|---|---|
| | DNDS | | |
| | Geom | | |
| | CFV | | |
| | EulerP | | |

Each pybind11 module links against a corresponding `*_shared` library
(`dnds_shared`, `geom_shared`, `cfv_shared`, `eulerP_shared`) which
contains the compiled C++ code.
## Type Stub Generation

Type stubs (`.pyi`) provide IDE autocompletion and type checking for
the pybind11 bindings. Stubs are generated automatically during
install and are not tracked in git.

### How it works

`cmake/DndsTooling.cmake` registers an `install(CODE ...)` step on
the `py` component that runs `scripts/generate-stubs.sh` after all
`.so` files are installed. This happens in both workflows:

- `cmake --install build --component py` (in-place build)
- `pip install -e .` (scikit-build-core editable install)

The script runs `pybind11-stubgen` for each submodule (DNDS, Geom,
CFV, EulerP), writes raw output to `stubs/`, and copies `.pyi` files
into `python/DNDSR/` for PEP 561 compliance.
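In Python terms, the per-submodule stubgen step described above amounts to something like the following sketch. The real logic lives in `scripts/generate-stubs.sh`; the exact command-line flags here are assumptions.

```python
import sys

# The four DNDSR submodules the text above says get stubs.
SUBMODULES = ["DNDS", "Geom", "CFV", "EulerP"]

def stubgen_commands(out_dir: str = "stubs") -> list:
    """One pybind11-stubgen invocation per DNDSR submodule (flags assumed)."""
    return [
        [sys.executable, "-m", "pybind11_stubgen",
         f"DNDSR.{mod}", "-o", out_dir]
        for mod in SUBMODULES
    ]

for cmd in stubgen_commands():
    print(" ".join(cmd[1:]))  # skip the interpreter path for readability
```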
### Manual stub regeneration

If you only changed Python code (no C++ binding changes), you can
regenerate stubs without rebuilding:

```shell
PYTHONPATH=python ./scripts/generate-stubs.sh
```

### Stubs in wheels

The `.pyi` files under `python/DNDSR/` are included in sdist/wheel
builds via the `pyproject.toml` `sdist.include` setting. They are
generated at build time and packaged into the wheel, so end users get
type hints without running stubgen themselves.
## CUDA Support

### Enabling CUDA

```shell
cmake -S . -B build-cuda -G Ninja -DDNDS_USE_CUDA=ON
cmake --build build-cuda -t euler -j32
```

Or use the `cuda` preset:

```shell
cmake --preset cuda
cmake --build build-cuda -j32
```

### CUDA 13.1 (CCCL 3.x) Compatibility

CUDA 13.1 moved the thrust, cub, and libcudacxx headers into a `cccl/`
subdirectory under the CUDA toolkit include path. nvcc adds this
path automatically, but the host C++ compiler (g++) does not.

`DndsCudaSetup.cmake` detects `${CUDAToolkit_INCLUDE_DIRS}/cccl` and
exposes it as `DNDS_CUDA_CCCL_INCLUDE_DIR`. `DndsExternalDeps.cmake`
appends it to `DNDS_EXTERNAL_INCLUDES`, so `#include <thrust/...>`
works from both `.cu` and `.cpp` files.

This is backward compatible: on CUDA 12.x the `cccl/` path does not
exist, so nothing is added.
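The detection logic is only a few lines; the sketch below mirrors in Python what the CMake code described above does (the function name is illustrative, not from the repo).

```python
from pathlib import Path

def cccl_include_dir(cuda_include_dir: str):
    """Return the cccl/ include path on CUDA 13.x, or None on older toolkits."""
    cccl = Path(cuda_include_dir) / "cccl"
    return str(cccl) if cccl.is_dir() else None
```

On CUDA 12.x the directory is absent, so the function returns `None` and no extra include path is added, matching the backward-compatibility note above.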
### CUDA-specific targets

GPU-accelerated versions of the test apps are built when
`DNDS_USE_CUDA=ON`:

- `cuda_test`
- `array_cuda_Test`
- `array_cuda_Bench`
- `arrayDOF_test_cuda`
- `eulerP_pybind11` (Python bindings with GPU evaluator)
## Running Tests

### C++ Unit Tests

C++ tests use doctest and live under `test/cpp/`. MPI tests are
registered with CTest at np=1, np=2, and np=4 (DNDS and Geom tests
additionally at np=8).

```shell
cmake --build build -t dnds_unit_tests -j32
ctest --test-dir build -R dnds_ --output-on-failure
# Run a single test executable directly
./build/test/cpp/dnds_test_array
mpirun -np 4 ./build/test/cpp/dnds_test_mpi
```

Available test executables: `dnds_test_array`, `dnds_test_mpi`,
`dnds_test_array_transformer`, `dnds_test_array_derived`,
`dnds_test_array_dof`, `dnds_test_index_mapping`,
`dnds_test_serializer`.
### Python Tests

Python tests use pytest and live under `test/`. The root
`test/conftest.py` adds `python/` to `sys.path` so tests work with
both `PYTHONPATH=python` and `pip install -e .`.
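A root conftest that does this could look roughly like the sketch below; the actual `test/conftest.py` may differ.

```python
# Sketch of a root test/conftest.py that makes the in-repo package importable.
import sys
from pathlib import Path

def ensure_package_on_path(repo_root: Path) -> str:
    """Prepend <repo_root>/python to sys.path; return the inserted path."""
    pkg_dir = str(repo_root / "python")
    if pkg_dir not in sys.path:
        sys.path.insert(0, pkg_dir)
    return pkg_dir
```

Inside a real conftest, the call would be `ensure_package_on_path(Path(__file__).resolve().parents[1])`, since the file lives one level below the repo root. The membership check keeps the insertion idempotent when pytest loads the conftest more than once.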
```shell
pytest test/DNDS/test_basic.py -v
# With MPI
mpirun -np 4 python -m pytest test/DNDS/test_basic.py
# All tests
pytest test/ -x --timeout=120
```

CTest also registers the pytest suites when `DNDS_BUILD_TESTS=ON`:

```shell
ctest --test-dir build -R pytest_ --output-on-failure
```
## Build Mode Summary

| Mode | Command | Build Dir | Stubs |
|---|---|---|---|
| Pure C++ build | | | N/A |
| C++ unit tests | | | N/A |
| In-place Python | | | Auto-generated |
| Editable install | | | Auto-generated |
| Editable C++ rebuild | | | Auto-generated |
| Full wheel | | | Included in wheel |
## Developer Tooling

### compile_commands.json for clangd

clangd needs a `compile_commands.json` at the project root for C++
code intelligence. CMake generates one in the build directory; the
compdb tool post-processes it to include header-only translation
units.

```shell
cmake -S . -B build -DDNDS_GENERATE_COMPILE_COMMANDS=ON -G Ninja
cmake --build build -j32
cmake --build build -t process-compile-commands
```

This creates `build/compile_commands_processed.json` and symlinks it
to `compile_commands.json` at the project root.

The build system uses a shipped, modified version of compdb at
`scripts/compdb/` (invoked as `PYTHONPATH=scripts python -m compdb`).
If the shipped version is not found, it falls back to a
system-installed `compdb` executable (`pip install compdb`).
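The shipped-vs-system fallback can be sketched as follows; this is a hypothetical helper, and the actual selection happens inside the build scripts.

```python
import shutil
from pathlib import Path

def compdb_invocation(repo_root: Path):
    """Prefer the shipped scripts/compdb module; fall back to a system compdb.

    Returns (argv, extra_env) for the chosen invocation.
    """
    shipped = repo_root / "scripts" / "compdb"
    if shipped.is_dir():
        # Equivalent to: PYTHONPATH=scripts python -m compdb
        return (["python", "-m", "compdb"],
                {"PYTHONPATH": str(repo_root / "scripts")})
    system = shutil.which("compdb")
    if system is not None:
        return ([system], {})
    raise FileNotFoundError("compdb not available: pip install compdb")
```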
### Doxygen Documentation

```shell
cmake --build build -t docs
```

Output goes to `build/docs/html/`.

### CMake Utility Targets

| Target | Description |
|---|---|
| | Post-process compile_commands.json for clangd |
| | Build Doxygen HTML documentation |
| | Build all C++ unit test executables |
## pyproject.toml Configuration

The Python package is built with scikit-build-core. Key
settings in `pyproject.toml`:

| Setting | Value | Purpose |
|---|---|---|
| | | Separate from the C++ build |
| | 4 pybind11 targets | Only build Python bindings |
| | | Use Ninja generator |
| | | All cores by default |
| | | Package root |

When scikit-build-core configures CMake it sets `SKBUILD_PROJECT_NAME`,
which the build system uses to skip ccache and in-source symlink
creation (these are only relevant for the developer C++ build).