Merge pull request #8 from NOAA-GFDL/feature/tests_readme
[Feature] Tests via Github actions + README
fmalatino committed Feb 5, 2024
2 parents 64fe8e6 + 6b85261 commit 3b3b140
Showing 7 changed files with 89 additions and 5 deletions.
27 changes: 27 additions & 0 deletions .github/workflows/lint.yaml
@@ -0,0 +1,27 @@
name: "Lint"
on:
pull_request:
types: [opened, synchronize, reopened, ready_for_review, labeled, unlabeled]

jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/[email protected]
with:
submodules: 'recursive'
- name: Setup Python 3.8.12
uses: actions/[email protected]
with:
python-version: '3.8.12'
- name: Install OpenMPI for gt4py
run: |
sudo apt-get install libopenmpi-dev
- name: Install Python packages
run: |
python -m pip install --upgrade pip setuptools wheel
pip install .[develop]
- name: Run lint via pre-commit
run: |
pre-commit run --all-files
30 changes: 30 additions & 0 deletions .github/workflows/unit_tests.yaml
@@ -0,0 +1,30 @@
name: "Unit tests"
on:
pull_request:
types: [opened, synchronize, reopened, ready_for_review, labeled, unlabeled]

jobs:
all:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/[email protected]
with:
submodules: 'recursive'
- name: Setup Python
uses: actions/[email protected]
with:
python-version: '3.8.12'
- name: Install OpenMPI & Boost for gt4py
run: |
sudo apt-get install libopenmpi-dev libboost1.74-dev
- name: Install Python packages
run: |
python -m pip install --upgrade pip setuptools wheel
pip install .[test]
- name: Run serial-cpu tests
run: |
pytest -x tests
- name: Run parallel-cpu tests
run: |
pytest -x tests/mpi
4 changes: 4 additions & 0 deletions .gitignore
@@ -151,6 +151,10 @@ dmypy.json
# GT4Py
**/.gt_cache*/

# Tests
.my_cache_path/*
.my_relocated_cache_path/*

# Run outputs
plot_output/
profiling_results/
20 changes: 19 additions & 1 deletion README.md
@@ -1,3 +1,21 @@
# NOAA/NASA Domain Specific Language middleware

Use `git clone --recurse-submodule` to pull all vetted versions of the submodules used by `ndsl`
NDSL is a middleware for climate and weather modelling developed jointly by NOAA and NASA. The middleware brings together [GT4Py](https://github.com/GridTools/gt4py/) (the `cartesian` flavor), ETH CSCS's stencil DSL, and [DaCE](https://github.com/spcl/dace/), ETH SPCL's data-flow framework, both developed for high performance and portability. On top of those pillars, NDSL deploys a series of optimized APIs for common operations (halo exchange, domain decomposition, MPI, ...) and a set of bespoke optimizations for the models targeted by the middleware.

## Batteries included for FV-based models

Historically, NDSL was developed to port the FV3 dynamical core on the cube-sphere grid. The middleware therefore ships with ready-to-execute specializations for models based on the cube-sphere grid, and for FV-based models in particular.

## Quickstart

NDSL pins its `gt4py` and `dace` submodules to vetted versions; use `git clone --recurse-submodules` to pull them.

NDSL is __NOT__ available on `pypi`. The package has to be installed locally, via `pip install ./NDSL` (`-e` supported). The package has a few optional extras:

- `ndsl[test]`: installs the test packages (based on `pytest`)
- `ndsl[develop]`: installs tools for development and tests.

Tests are available via:

- `pytest -x tests`: runs the serial CPU tests (GPU as well if `cupy` is installed)
- `mpirun -np 6 pytest -x tests/mpi`: runs the parallel CPU tests (GPU as well if `cupy` is installed)
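
For orientation, the GT4Py `cartesian` DSL that the README points at looks roughly like the sketch below. This is an illustrative example against plain GT4Py rather than NDSL's own stencil wrappers; the field names, the `(8, 8, 4)` domain, and the `numpy` backend are assumptions made for the sketch.

```python
# Minimal GT4Py `cartesian` stencil sketch (illustrative only, not part of this diff).
import numpy as np
from gt4py.cartesian import gtscript
from gt4py.cartesian.gtscript import PARALLEL, Field, computation, interval
from gt4py.storage import ones, zeros

backend = "numpy"  # assumption: pure-Python backend, no compiler toolchain required
shape = (8, 8, 4)  # assumption: a small (I, J, K) domain


@gtscript.stencil(backend=backend)
def add_one(inp: Field[np.float64], out: Field[np.float64]):
    # Apply the same pointwise update over every vertical level.
    with computation(PARALLEL), interval(...):
        out = inp + 1.0


field_in = ones(shape, dtype=np.float64, backend=backend)
field_out = zeros(shape, dtype=np.float64, backend=backend)
add_one(field_in, field_out, origin=(0, 0, 0), domain=shape)
```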
2 changes: 1 addition & 1 deletion ndsl/utils.py
@@ -55,7 +55,7 @@ def is_c_contiguous(array: np.ndarray) -> bool:

def ensure_contiguous(maybe_array: Union[np.ndarray, None]) -> None:
if maybe_array is not None and not is_contiguous(maybe_array):
raise ValueError("ndarray is not contiguous")
raise BufferError("dlpack: buffer is not contiguous")


def safe_assign_array(to_array: np.ndarray, from_array: np.ndarray):
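
For context, the changed helper behaves like the self-contained sketch below: a non-contiguous array now raises `BufferError` (with a DLPack-style message) instead of `ValueError`. The `is_contiguous` stand-in here is an assumption so the example runs on its own; the real implementation lives in `ndsl/utils.py`.

```python
# Self-contained sketch of the behavior changed in ndsl/utils.py.
from typing import Union

import numpy as np


def is_contiguous(array: np.ndarray) -> bool:
    # Stand-in for ndsl.utils.is_contiguous (assumption for this sketch).
    return array.flags["C_CONTIGUOUS"] or array.flags["F_CONTIGUOUS"]


def ensure_contiguous(maybe_array: Union[np.ndarray, None]) -> None:
    if maybe_array is not None and not is_contiguous(maybe_array):
        raise BufferError("dlpack: buffer is not contiguous")


strided_view = np.ones((4, 4))[::2, :]  # neither C- nor F-contiguous
try:
    ensure_contiguous(strided_view)
except BufferError as err:
    print(err)  # -> dlpack: buffer is not contiguous
```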
2 changes: 1 addition & 1 deletion setup.py
@@ -43,7 +43,7 @@ def local_pkg(name: str, relative_path: str) -> str:
"Programming Language :: Python :: 3.9",
],
install_requires=requirements,
extras_requires=extras_requires,
extras_require=extras_requires,
name="ndsl",
license="BSD license",
packages=find_namespace_packages(include=["ndsl", "ndsl.*"]),
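
The one-word fix above matters because setuptools only recognizes the keyword `extras_require`; the misspelled `extras_requires` is dropped with an "Unknown distribution option" warning, so `pip install .[test]` and `.[develop]` would not pull in the extras. A minimal sketch of how the corrected keyword is consumed (the dependency lists are hypothetical):

```python
# Minimal setup.py sketch; the extras contents are assumptions for illustration.
from setuptools import find_namespace_packages, setup

extras_requires = {
    "test": ["pytest"],
    "develop": ["pytest", "pre-commit"],
}

setup(
    name="ndsl",
    packages=find_namespace_packages(include=["ndsl", "ndsl.*"]),
    extras_require=extras_requires,  # correct setuptools keyword, resolved by `pip install .[test]`
)
```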
9 changes: 7 additions & 2 deletions tests/dsl/test_caches.py
@@ -2,6 +2,7 @@
from gt4py.cartesian.gtscript import PARALLEL, Field, computation, interval
from gt4py.storage import empty, ones

from ndsl.comm.mpi import MPI
from ndsl.dsl.dace import orchestrate
from ndsl.dsl.dace.dace_config import DaceConfig, DaCeOrchestration
from ndsl.dsl.stencil import (
@@ -77,6 +78,9 @@ def __call__(self):
pytest.param("dace:cpu"),
],
)
@pytest.mark.skipif(
MPI is not None, reason="relocatability checked with a one-rank setup"
)
def test_relocatability_orchestration(backend):
import os
import shutil
@@ -133,15 +137,16 @@ def test_relocatability_orchestration(backend):
pytest.param("dace:cpu"),
],
)
@pytest.mark.skipif(
MPI is not None, reason="relocatability checked with a one-rank setup"
)
def test_relocatability(backend: str):
import os
import shutil

import gt4py
from gt4py.cartesian import config as gt_config

from ..mpi.mpi_comm import MPI

# Restore original dir name
gt4py.cartesian.config.cache_settings["dir_name"] = os.environ.get(
"GT_CACHE_DIR_NAME", f".gt_cache_{MPI.COMM_WORLD.Get_rank():06}"
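
The markers added above gate the relocatability tests so they only run in a one-rank (no-MPI) setup. A generic sketch of the same `skipif` pattern is shown below; the try/except import guard is an assumed stand-in for `from ndsl.comm.mpi import MPI`, which the real tests use.

```python
# Generic sketch of the skip guard added in this diff: the test executes only
# when MPI is unavailable, i.e. in a one-rank setup.
import pytest

try:
    from mpi4py import MPI  # assumption: stands in for `from ndsl.comm.mpi import MPI`
except ImportError:
    MPI = None


@pytest.mark.skipif(
    MPI is not None, reason="relocatability checked with a one-rank setup"
)
def test_runs_only_without_mpi():
    assert True
```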
