yacs.CfgNode instead of argparse.Namespace #27

Closed
wants to merge 13 commits into from
2 changes: 1 addition & 1 deletion .gitignore
@@ -2,7 +2,7 @@
ckp/
rollout/
rollouts/
wandb
wandb/
*.out
datasets
baselines
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -19,7 +19,7 @@ repos:
- id: check-yaml
- id: requirements-txt-fixer
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: 'v0.1.8'
rev: 'v0.2.2'
hooks:
- id: ruff
args: [ --fix ]
12 changes: 6 additions & 6 deletions README.md
@@ -72,7 +72,7 @@ pip install --upgrade jax[cuda12_pip]==0.4.20 -f https://storage.googleapis.com/
### MacOS
Currently, only the CPU installation works. You will need to change a few small things to get it going:
- Clone installation: in `pyproject.toml` change the torch version from `2.1.0+cpu` to `2.1.0`. Then, remove the `poetry.lock` file and run `poetry install --only main`.
- Configs: You will need to set `f64: False` and `num_workers: 0` in the `configs/` files.
- Configs: You will need to set `f32: True` and `num_workers: 0` in the `configs/` files.

Although the current [`jax-metal==0.0.5` library](https://pypi.org/project/jax-metal/) supports jax in general, there seems to be a missing feature used by `jax-md` related to padding -> see [this issue](https://github.com/google/jax/issues/16366#issuecomment-1591085071).

@@ -81,7 +81,7 @@ Although the current [`jax-metal==0.0.5` library](https://pypi.org/project/jax-m
A general tutorial is provided in the example notebook "Training GNS on the 2D Taylor Green Vortex" under `./notebooks/tutorial.ipynb` on the [LagrangeBench repository](https://github.com/tumaer/lagrangebench). The notebook covers the basics of LagrangeBench, such as loading a dataset, setting up a case, training a model from scratch and evaluating its performance.

### Running in a local clone (`main.py`)
Alternatively, experiments can also be set up with `main.py`, based on extensive YAML config files and cli arguments (check [`configs/`](configs/)). By default, the arguments have priority as: 1) passed cli arguments, 2) YAML config and 3) [`defaults.py`](lagrangebench/defaults.py) (`lagrangebench` defaults).
Alternatively, experiments can also be set up with `main.py`, based on extensive YAML config files and cli arguments (check [`configs/`](configs/)). By default, the arguments have priority as 1) passed cli arguments, 2) YAML config and 3) [`defaults.py`](lagrangebench/defaults.py) (`lagrangebench` defaults).
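For illustration, here is a minimal, self-contained sketch of that precedence written directly against the yacs API; the option names below are made up for the example and are not the actual `lagrangebench` defaults.

```python
from yacs.config import CfgNode as CN

# 3) library defaults (stand-in for lagrangebench/defaults.py; illustrative keys only)
cfg = CN()
cfg.model = CN()
cfg.model.name = "gns"
cfg.model.latent_dim = 128
cfg.optimizer = CN()
cfg.optimizer.lr_start = 1.0e-3

# 2) a YAML config overrides the defaults
yaml_cfg = CN.load_cfg("model:\n  latent_dim: 64\noptimizer:\n  lr_start: 5.0e-4\n")
cfg.merge_from_other_cfg(yaml_cfg)

# 1) CLI arguments override everything else
cfg.merge_from_list(["model.latent_dim", 32])

cfg.freeze()
print(cfg.model.latent_dim)    # 32     -> CLI wins
print(cfg.optimizer.lr_start)  # 0.0005 -> YAML wins over the default
```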

When loading a saved model with `--model_dir` the config from the checkpoint is automatically loaded and training is restarted. For more details check the [`experiments/`](experiments/) directory and the [`run.py`](experiments/run.py) file.

@@ -127,7 +127,7 @@ The datasets are hosted on Zenodo under the DOI: [10.5281/zenodo.10021925](https
### Notebooks
We provide three notebooks that show LagrangeBench functionalities, namely:
- [`tutorial.ipynb`](notebooks/tutorial.ipynb) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tumaer/lagrangebench/blob/main/notebooks/tutorial.ipynb), with a general overview of LagrangeBench library, with training and evaluation of a simple GNS model,
- [`datasets.ipynb`](notebooks/datasets.ipynb) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tumaer/lagrangebench/blob/main/notebooks/datasets.ipynb), with more details and visualizations on the datasets, and
- [`datasets.ipynb`](notebooks/datasets.ipynb) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tumaer/lagrangebench/blob/main/notebooks/datasets.ipynb), with more details and visualizations of the datasets, and
- [`gns_data.ipynb`](notebooks/gns_data.ipynb) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tumaer/lagrangebench/blob/main/notebooks/gns_data.ipynb), showing how to train models within LagrangeBench on the datasets from the paper [Learning to Simulate Complex Physics with Graph Networks](https://arxiv.org/abs/2002.09405).

## Directory structure
@@ -165,9 +165,9 @@ Welcome! We highly appreciate [Github issues](https://github.com/tumaer/lagrange
You can also chat with us on [**Discord**](https://discord.gg/Ds8jRZ78hU).

### Contributing Guideline
If you want to contribute to this repository, you will need the dev depencencies, i.e.
If you want to contribute to this repository, you will need the dev dependencies, i.e.
install the environment with `poetry install` without the ` --only main` flag.
Then, we also recommend you to install the pre-commit hooks
Then, we also recommend you install the pre-commit hooks
if you don't want to manually run `pre-commit run` before each commit. To sum up:

```bash
@@ -220,6 +220,6 @@ The associated datasets can be cited as:


### Publications
The following further publcations are based on the LagrangeBench codebase:
The following further publications are based on the LagrangeBench codebase:

1. [Learning Lagrangian Fluid Mechanics with E(3)-Equivariant Graph Neural Networks (GSI 2023)](https://arxiv.org/abs/2305.15603), A. P. Toshev, G. Galletti, J. Brandstetter, S. Adami, N. A. Adams
6 changes: 0 additions & 6 deletions configs/WaterDrop_2d/base.yaml

This file was deleted.

20 changes: 15 additions & 5 deletions configs/WaterDrop_2d/gns.yaml
@@ -1,6 +1,16 @@
extends: WaterDrop_2d/base.yaml
main:
data_dir: /tmp/datasets/WaterDrop

model: gns
num_mp_steps: 10
latent_dim: 128
lr_start: 5.e-4
model:
name: gns
num_mp_steps: 10
latent_dim: 128

optimizer:
lr_start: 5.e-4

logging:
wandb_project: waterdrop_2d

neighbors:
backend: matscipy
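
(Illustrative, not part of the diff: a hedged sketch of how a restructured config like the one above could be read into a nested `yacs.CfgNode` and accessed by attribute instead of through a flat `argparse.Namespace`; the actual lagrangebench loading code, including how the dropped `extends:` key is handled, may differ.)

```python
import yaml
from yacs.config import CfgNode as CN

# Assumes a local clone, so the config file shown above exists on disk.
with open("configs/WaterDrop_2d/gns.yaml") as f:
    cfg = CN(yaml.safe_load(f))

print(cfg.model.name)          # "gns"
print(cfg.model.latent_dim)    # 128
print(cfg.optimizer.lr_start)  # 0.0005
print(cfg.neighbors.backend)   # "matscipy"
```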
7 changes: 0 additions & 7 deletions configs/dam_2d/base.yaml

This file was deleted.

23 changes: 18 additions & 5 deletions configs/dam_2d/gns.yaml
@@ -1,6 +1,19 @@
extends: dam_2d/base.yaml
main:
data_dir: datasets/2D_DAM_5740_20kevery100

model:
name: gns
num_mp_steps: 10
latent_dim: 128

optimizer:
lr_start: 5.e-4
noise_std: 0.001

logging:
wandb_project: dam_2d

neighbors:
multiplier: 2.0


model: gns
num_mp_steps: 10
latent_dim: 128
lr_start: 5.e-4
24 changes: 18 additions & 6 deletions configs/dam_2d/segnn.yaml
@@ -1,8 +1,20 @@
extends: dam_2d/base.yaml
main:
data_dir: datasets/2D_DAM_5740_20kevery100

model: segnn
num_mp_steps: 10
latent_dim: 64
lr_start: 5.e-4
model:
name: segnn
num_mp_steps: 10
latent_dim: 64

isotropic_norm: True
train:
isotropic_norm: True

optimizer:
lr_start: 5.e-4
noise_std: 0.001

logging:
wandb_project: dam_2d

neighbors:
multiplier: 2.0
118 changes: 0 additions & 118 deletions configs/defaults.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions configs/ldc_2d/base.yaml

This file was deleted.

21 changes: 16 additions & 5 deletions configs/ldc_2d/gns.yaml
@@ -1,6 +1,17 @@
extends: ldc_2d/base.yaml
main:
data_dir: datasets/2D_LDC_2708_10kevery100

model: gns
num_mp_steps: 10
latent_dim: 128
lr_start: 5.e-4
model:
name: gns
num_mp_steps: 10
latent_dim: 128

optimizer:
lr_start: 5.e-4
noise_std: 0.001

logging:
wandb_project: ldc_2d

neighbors:
multiplier: 2.0
24 changes: 18 additions & 6 deletions configs/ldc_2d/segnn.yaml
@@ -1,8 +1,20 @@
extends: ldc_2d/base.yaml
main:
data_dir: datasets/2D_LDC_2708_10kevery100

model: segnn
num_mp_steps: 10
latent_dim: 64
lr_start: 5.e-4
model:
name: segnn
num_mp_steps: 10
latent_dim: 64

isotropic_norm: True
train:
isotropic_norm: True

optimizer:
lr_start: 5.e-4
noise_std: 0.001

logging:
wandb_project: ldc_2d

neighbors:
multiplier: 2.0
6 changes: 0 additions & 6 deletions configs/ldc_3d/base.yaml

This file was deleted.

20 changes: 15 additions & 5 deletions configs/ldc_3d/gns.yaml
@@ -1,6 +1,16 @@
extends: ldc_3d/base.yaml
main:
data_dir: datasets/3D_LDC_8160_10kevery100

model: gns
num_mp_steps: 10
latent_dim: 128
lr_start: 5.e-4
model:
name: gns
num_mp_steps: 10
latent_dim: 128

optimizer:
lr_start: 5.e-4

logging:
wandb_project: ldc_3d

neighbors:
multiplier: 2.0
23 changes: 17 additions & 6 deletions configs/ldc_3d/segnn.yaml
@@ -1,8 +1,19 @@
extends: ldc_3d/base.yaml
main:
data_dir: datasets/3D_LDC_8160_10kevery100

model: segnn
num_mp_steps: 10
latent_dim: 64
lr_start: 5.e-4
model:
name: segnn
num_mp_steps: 10
latent_dim: 64

isotropic_norm: True
train:
isotropic_norm: True

optimizer:
lr_start: 5.e-4

logging:
wandb_project: ldc_3d

neighbors:
multiplier: 2.0
4 changes: 0 additions & 4 deletions configs/rpf_2d/base.yaml

This file was deleted.
