Commit db8d3ae

deploy: 7d40b20

lbluque committed Sep 14, 2024
1 parent a502cb3 commit db8d3ae

Showing 336 changed files with 12,566 additions and 4,467 deletions.
4 changes: 0 additions & 4 deletions .buildinfo

This file was deleted.

136 changes: 71 additions & 65 deletions _downloads/5fdddbed2260616231dbf7b0d94bb665/train.txt

Large diffs are not rendered by default.

55 changes: 27 additions & 28 deletions _downloads/819e10305ddd6839cd7da05935b17060/mass-inference.txt
@@ -1,17 +1,17 @@
2024-08-14 17:37:16 (INFO): Running in non-distributed local mode
2024-08-14 17:37:16 (INFO): Setting env PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
2024-08-14 17:37:16 (INFO): Project root: /home/runner/work/fairchem/fairchem/src/fairchem
2024-08-14 17:37:17 (INFO): amp: true
2024-09-13 23:33:27 (INFO): Running in local mode without elastic launch (single gpu only)
2024-09-13 23:33:27 (INFO): Setting env PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
2024-09-13 23:33:27 (INFO): Project root: /home/runner/work/fairchem/fairchem/src/fairchem
2024-09-13 23:33:28 (INFO): amp: false
cmd:
checkpoint_dir: ./checkpoints/2024-08-14-17-38-08
commit: 8fb16d6
checkpoint_dir: /home/runner/work/fairchem/fairchem/docs/core/checkpoints/2024-09-13-23-34-24
commit: 7d40b20
identifier: ''
logs_dir: ./logs/tensorboard/2024-08-14-17-38-08
logs_dir: /home/runner/work/fairchem/fairchem/docs/core/logs/tensorboard/2024-09-13-23-34-24
print_every: 10
results_dir: ./results/2024-08-14-17-38-08
results_dir: /home/runner/work/fairchem/fairchem/docs/core/results/2024-09-13-23-34-24
seed: 0
timestamp_id: 2024-08-14-17-38-08
version: 0.1.dev1+g8fb16d6
timestamp_id: 2024-09-13-23-34-24
version: 0.1.dev1+g7d40b20
dataset: {}
evaluation_metrics:
metrics:
@@ -67,7 +67,6 @@ model:
rbf:
name: gaussian
regress_forces: true
noddp: false
optim:
batch_size: 16
clip_grad_norm: 10
@@ -114,24 +113,24 @@ test_dataset:
trainer: ocp
val_dataset: {}

2024-08-14 17:37:17 (WARNING): Could not find dataset metadata.npz files in '[PosixPath('data.db')]'
2024-08-14 17:37:17 (WARNING): Disabled BalancedBatchSampler because num_replicas=1.
2024-08-14 17:37:17 (WARNING): Failed to get data sizes, falling back to uniform partitioning. BalancedBatchSampler requires a dataset that has a metadata attributed with number of atoms.
2024-08-14 17:37:17 (INFO): rank: 0: Sampler created...
2024-08-14 17:37:17 (INFO): Created BalancedBatchSampler with sampler=<fairchem.core.common.data_parallel.StatefulDistributedSampler object at 0x7fcfd9f6c850>, batch_size=16, drop_last=False
2024-08-14 17:37:17 (INFO): Loading model: gemnet_t
2024-08-14 17:37:19 (INFO): Loaded GemNetT with 31671825 parameters.
2024-08-14 17:37:19 (WARNING): log_summary for Tensorboard not supported
2024-08-14 17:37:19 (INFO): Attemping to load user specified checkpoint at /tmp/fairchem_checkpoints/gndt_oc22_all_s2ef.pt
2024-08-14 17:37:19 (INFO): Loading checkpoint from: /tmp/fairchem_checkpoints/gndt_oc22_all_s2ef.pt
2024-08-14 17:37:20 (INFO): Overwriting scaling factors with those loaded from checkpoint. If you're generating predictions with a pretrained checkpoint, this is the correct behavior. To disable this, delete `scale_dict` from the checkpoint.
2024-08-14 17:37:20 (WARNING): Scale factor comment not found in model
2024-08-14 17:37:20 (INFO): Predicting on test.
2024-09-13 23:33:28 (INFO): Loading model: gemnet_t
2024-09-13 23:33:30 (INFO): Loaded GemNetT with 31671825 parameters.
2024-09-13 23:33:30 (WARNING): log_summary for Tensorboard not supported
2024-09-13 23:33:30 (WARNING): Could not find dataset metadata.npz files in '[PosixPath('data.db')]'
2024-09-13 23:33:30 (WARNING): Disabled BalancedBatchSampler because num_replicas=1.
2024-09-13 23:33:30 (WARNING): Failed to get data sizes, falling back to uniform partitioning. BalancedBatchSampler requires a dataset that has a metadata attributed with number of atoms.
2024-09-13 23:33:30 (INFO): rank: 0: Sampler created...
2024-09-13 23:33:30 (INFO): Created BalancedBatchSampler with sampler=<fairchem.core.common.data_parallel.StatefulDistributedSampler object at 0x7f89cc9d3d10>, batch_size=16, drop_last=False
2024-09-13 23:33:30 (INFO): Attemping to load user specified checkpoint at /tmp/fairchem_checkpoints/gndt_oc22_all_s2ef.pt
2024-09-13 23:33:30 (INFO): Loading checkpoint from: /tmp/fairchem_checkpoints/gndt_oc22_all_s2ef.pt
2024-09-13 23:33:30 (INFO): Overwriting scaling factors with those loaded from checkpoint. If you're generating predictions with a pretrained checkpoint, this is the correct behavior. To disable this, delete `scale_dict` from the checkpoint.
2024-09-13 23:33:30 (WARNING): Scale factor comment not found in model
2024-09-13 23:33:30 (INFO): Predicting on test.
device 0: 0%| | 0/3 [00:00<?, ?it/s]/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch_geometric/data/collate.py:145: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
storage = elem.storage()._new_shared(numel)
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/torch_geometric/data/collate.py:145: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
storage = elem.storage()._new_shared(numel)
device 0: 33%|███████████▋ | 1/3 [00:02<00:05, 2.85s/it]device 0: 67%|███████████████████████▎ | 2/3 [00:04<00:02, 2.14s/it]device 0: 100%|███████████████████████████████████| 3/3 [00:05<00:00, 1.85s/it]device 0: 100%|███████████████████████████████████| 3/3 [00:06<00:00, 2.01s/it]
2024-08-14 17:37:26 (INFO): Writing results to ./results/2024-08-14-17-38-08/ocp_predictions.npz
2024-08-14 17:37:26 (INFO): Total time taken: 6.164632797241211
Elapsed time = 12.3 seconds
device 0: 33%|███████████▋ | 1/3 [00:02<00:05, 2.97s/it]device 0: 67%|███████████████████████▎ | 2/3 [00:05<00:02, 2.76s/it]device 0: 100%|███████████████████████████████████| 3/3 [00:10<00:00, 3.54s/it]device 0: 100%|███████████████████████████████████| 3/3 [00:10<00:00, 3.36s/it]
2024-09-13 23:33:40 (INFO): Writing results to /home/runner/work/fairchem/fairchem/docs/core/results/2024-09-13-23-34-24/ocp_predictions.npz
2024-09-13 23:33:40 (INFO): Total time taken: 10.21991777420044
Elapsed time = 16.4 seconds
Binary and non-renderable files changed (contents not shown).
9 changes: 3 additions & 6 deletions _sources/autoapi/core/_cli/index.rst
@@ -32,7 +32,7 @@ Functions
Module Contents
---------------

.. py:class:: Runner(distributed: bool = False)
.. py:class:: Runner
Bases: :py:obj:`submitit.helpers.Checkpointable`

@@ -55,9 +55,6 @@ Module Contents



.. py:attribute:: distributed
.. py:method:: __call__(config: dict) -> None
@@ -67,9 +64,9 @@ Module Contents



.. py:function:: runner_wrapper(distributed: bool, config: dict)
.. py:function:: runner_wrapper(config: dict)
.. py:function:: main()
.. py:function:: main(args: argparse.Namespace | None = None, override_args: list[str] | None = None)
Run the main fairchem program.

5 changes: 5 additions & 0 deletions _sources/autoapi/core/common/distutils/index.rst
@@ -18,6 +18,7 @@ Attributes
.. autoapisummary::

core.common.distutils.T
core.common.distutils.DISTRIBUTED_PORT


Functions
@@ -45,6 +46,10 @@ Module Contents

.. py:data:: T
.. py:data:: DISTRIBUTED_PORT
:value: 13356


.. py:function:: os_environ_get_or_throw(x: str) -> str
.. py:function:: setup(config) -> None
3 changes: 3 additions & 0 deletions _sources/autoapi/core/common/logger/index.rst
@@ -86,6 +86,9 @@ Module Contents
.. py:attribute:: entity
.. py:attribute:: group
.. py:method:: watch(model, log_freq: int = 1000) -> None
Monitor parameters and gradients.
2 changes: 2 additions & 0 deletions _sources/autoapi/core/common/relaxation/ase_utils/index.rst
@@ -60,6 +60,8 @@ Module Contents
:value: ['energy', 'forces']


Properties calculator can handle (energy, forces, ...)


.. py:attribute:: checkpoint
:value: None
3 changes: 3 additions & 0 deletions _sources/autoapi/core/common/test_utils/index.rst
@@ -21,6 +21,7 @@ Functions
core.common.test_utils.init_env_rank_and_launch_test
core.common.test_utils.init_pg_and_rank_and_launch_test
core.common.test_utils.spawn_multi_process
core.common.test_utils.init_local_distributed_process_group


Module Contents
@@ -89,3 +90,5 @@ Module Contents
:returns: A list, l, where l[i] is the return value of test_method on rank i


.. py:function:: init_local_distributed_process_group(backend='nccl')
3 changes: 0 additions & 3 deletions _sources/autoapi/core/common/transforms/index.rst
@@ -48,7 +48,4 @@ Module Contents
.. py:method:: __repr__() -> str
Return repr(self).


22 changes: 19 additions & 3 deletions _sources/autoapi/core/common/utils/index.rst
@@ -18,6 +18,7 @@ Attributes
.. autoapisummary::

core.common.utils.DEFAULT_ENV_VARS
core.common.utils.multitask_required_keys


Classes
@@ -50,6 +51,7 @@ Functions
core.common.utils.dict_set_recursively
core.common.utils.parse_value
core.common.utils.create_dict_from_args
core.common.utils.find_relative_file_in_paths
core.common.utils.load_config
core.common.utils.build_config
core.common.utils.create_grid
@@ -74,6 +76,7 @@ Functions
core.common.utils.irreps_sum
core.common.utils.update_config
core.common.utils.get_loss_module
core.common.utils.load_model_and_weights_from_checkpoint


Module Contents
@@ -97,6 +100,8 @@ Module Contents

.. py:function:: save_checkpoint(state, checkpoint_dir: str = 'checkpoints/', checkpoint_file: str = 'checkpoint.pt') -> str
.. py:data:: multitask_required_keys
.. py:class:: Complete
.. py:method:: __call__(data)
@@ -164,9 +169,18 @@ Module Contents
Keys in different dictionary levels are separated by sep.


.. py:function:: load_config(path: str, previous_includes: list | None = None)
.. py:function:: find_relative_file_in_paths(filename, include_paths)
.. py:function:: load_config(path: str, files_previously_included: list | None = None, include_paths: list | None = None)
Load a given config with any defined imports

.. py:function:: build_config(args, args_override)
When imports are present, this function is called recursively on each import.
To prevent cyclic imports, we keep track of already-imported yml files
using files_previously_included.


.. py:function:: build_config(args, args_override, include_paths=None)
.. py:function:: create_grid(base_config, sweep_file: str)
@@ -250,7 +264,7 @@ Module Contents
.. py:function:: setup_env_vars() -> None
.. py:function:: new_trainer_context(*, config: dict[str, Any], distributed: bool = False)
.. py:function:: new_trainer_context(*, config: dict[str, Any])
.. py:function:: _resolve_scale_factor_submodule(model: torch.nn.Module, name: str)
@@ -281,3 +295,5 @@ Module Contents

.. py:function:: get_loss_module(loss_name)
.. py:function:: load_model_and_weights_from_checkpoint(checkpoint_path: str) -> torch.nn.Module
8 changes: 7 additions & 1 deletion _sources/autoapi/core/datasets/_utils/index.rst
@@ -23,11 +23,17 @@ Functions
Module Contents
---------------

.. py:function:: rename_data_object_keys(data_object: torch_geometric.data.Data, key_mapping: dict[str, str]) -> torch_geometric.data.Data
.. py:function:: rename_data_object_keys(data_object: torch_geometric.data.Data, key_mapping: dict[str, str | list[str]]) -> torch_geometric.data.Data
Rename data object keys

:param data_object: data object
:param key_mapping: dictionary specifying keys to rename and new names
    {prev_key: new_key}. new_key can also be a list of new keys, for example
    {energy: [common_energy, oc20_energy]}; this is currently required when a
    single target/label is used for multiple tasks.


2 changes: 1 addition & 1 deletion _sources/autoapi/core/datasets/index.rst
@@ -418,5 +418,5 @@ Package Contents
.. py:method:: sample_property_metadata(num_samples: int = 100)
.. py:function:: data_list_collater(data_list: list[torch_geometric.data.data.BaseData], otf_graph: bool = False) -> torch_geometric.data.data.BaseData
.. py:function:: data_list_collater(data_list: list[torch_geometric.data.data.BaseData], otf_graph: bool = False, to_dict: bool = False) -> torch_geometric.data.data.BaseData | dict[str, torch.Tensor]
2 changes: 1 addition & 1 deletion _sources/autoapi/core/datasets/lmdb_dataset/index.rst
@@ -84,5 +84,5 @@ Module Contents
.. py:method:: sample_property_metadata(num_samples: int = 100)
.. py:function:: data_list_collater(data_list: list[torch_geometric.data.data.BaseData], otf_graph: bool = False) -> torch_geometric.data.data.BaseData
.. py:function:: data_list_collater(data_list: list[torch_geometric.data.data.BaseData], otf_graph: bool = False, to_dict: bool = False) -> torch_geometric.data.data.BaseData | dict[str, torch.Tensor]
48 changes: 11 additions & 37 deletions _sources/autoapi/core/models/base/index.rst
@@ -21,7 +21,6 @@ Classes
core.models.base.GraphModelMixin
core.models.base.HeadInterface
core.models.base.BackboneInterface
core.models.base.HydraInterface
core.models.base.HydraModel


@@ -92,6 +91,9 @@ Module Contents

.. py:class:: HeadInterface
.. py:property:: use_amp
.. py:method:: forward(data: torch_geometric.data.Batch, emb: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]
:abstractmethod:

@@ -124,28 +126,9 @@ Module Contents



.. py:class:: HydraInterface
Bases: :py:obj:`abc.ABC`


Helper class that provides a standard way to create an ABC using
inheritance.


.. py:method:: get_backbone() -> BackboneInterface
:abstractmethod:



.. py:method:: get_heads() -> dict[str, HeadInterface]
:abstractmethod:
.. py:class:: HydraModel(backbone: dict | None = None, heads: dict | None = None, finetune_config: dict | None = None, otf_graph: bool = True, pass_through_head_outputs: bool = False)


.. py:class:: HydraModel(backbone: dict, heads: dict, otf_graph: bool = True)
Bases: :py:obj:`torch.nn.Module`, :py:obj:`GraphModelMixin`, :py:obj:`HydraInterface`
Bases: :py:obj:`torch.nn.Module`, :py:obj:`GraphModelMixin`


Base class for all neural network modules.
@@ -180,31 +163,22 @@ Module Contents
:vartype training: bool


.. py:attribute:: otf_graph
.. py:attribute:: backbone
.. py:attribute:: device
:value: None


.. py:attribute:: heads

.. py:attribute:: otf_graph
.. py:attribute:: backbone_model_name
.. py:attribute:: pass_through_head_outputs
.. py:attribute:: output_heads
:type: dict[str, HeadInterface]
.. py:attribute:: starting_model
:value: None

.. py:attribute:: head_names_sorted


.. py:method:: forward(data: torch_geometric.data.Batch)
.. py:method:: get_backbone() -> BackboneInterface
.. py:method:: get_heads() -> dict[str, HeadInterface]
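The `backbone` / `output_heads` / `head_names_sorted` attributes above suggest a shared-backbone, multiple-heads pattern. The following is a minimal, hypothetical sketch of that pattern, assuming plain Python classes in place of `torch.nn.Module` and lists in place of tensors; it is not fairchem's `HydraModel` implementation.

```python
# Hypothetical sketch of the backbone + output-heads pattern: the backbone
# computes shared embeddings once, and each head maps them to one output.

class Backbone:
    def forward(self, data):
        # Produce shared per-node embeddings for the batch.
        return {"node_embedding": [x * 2 for x in data]}


class EnergyHead:
    def forward(self, data, emb):
        return {"energy": sum(emb["node_embedding"])}


class ForcesHead:
    def forward(self, data, emb):
        return {"forces": emb["node_embedding"]}


class HydraModel:
    def __init__(self, backbone, heads):
        self.backbone = backbone
        self.output_heads = heads
        # Deterministic head order, as the sorted attribute above suggests.
        self.head_names_sorted = sorted(heads)

    def forward(self, data):
        emb = self.backbone.forward(data)  # computed once, shared by all heads
        out = {}
        for name in self.head_names_sorted:
            out.update(self.output_heads[name].forward(data, emb))
        return out


model = HydraModel(Backbone(), {"energy": EnergyHead(), "forces": ForcesHead()})
result = model.forward([1.0, 2.0])
```

The design point is that adding a new prediction target means adding a head, not retraining or duplicating the (typically expensive) backbone.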
9 changes: 9 additions & 0 deletions _sources/autoapi/core/models/dimenet_plus_plus/index.rst
@@ -445,4 +445,13 @@ Module Contents

.. py:method:: forward(data: torch_geometric.data.batch.Batch) -> dict[str, torch.Tensor]
Backbone forward.

:param data: Atomic systems as input
:type data: DataBatch

:returns: **embedding** -- Return backbone embeddings for the given input
:rtype: dict[str->torch.Tensor]


