This is the repository accompanying the paper "CoViS-Net: A Cooperative Visual Spatial Foundation Model for Multi-Robot Applications".
Set up Miniconda as instructed here. Run the commands below to create and activate the environment, and don't forget to source the pre_training_setup script. Creating the conda environment can take up to 30 minutes; installing the libmamba solver can speed up dependency resolution.
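Optionally, the libmamba solver can be installed and enabled before creating the environment; this is a minimal sketch assuming conda 22.11 or newer:
conda install -n base conda-libmamba-solver
conda config --set solver libmamba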
conda env create -f environment.yml
conda activate covisnet
source ./pre_training_setup.bash
Download the HM3D dataset (access requires signing up on the website):
python -m habitat_sim.utils.datasets_download --username xxx --password xxx --uids hm3d_full
Make sure the dataset was downloaded:
ls data/versioned_data/hm3d-1.0/hm3d/train | wc -l
The command should show a number higher than 800.
The dataset generation can be triggered with the following command:
./dataset_util/generate_dataset.bash 800
The number specifies the upper limit on the HM3D scenes used for data generation (0 generates only the first scene, 800 uses all training scenes). The generated datasets will be moved to the datasets folder.
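For a quick smoke test before a full run, a smaller scene limit can be passed; for example, the following generates data for the first scene only:
./dataset_util/generate_dataset.bash 0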
Download the dataset from here, move it to the datasets folder, and extract the zip.
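For reference, the move-and-extract step might look like the snippet below; covisnet_dataset.zip is a placeholder for whatever the downloaded archive is actually called:
mkdir -p datasets                                  # create the datasets folder if it does not exist
mv covisnet_dataset.zip datasets/                  # placeholder archive name; use the actual downloaded file
unzip datasets/covisnet_dataset.zip -d datasets/   # extract into the datasets folder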
After generating the dataset, update the dataset path in the data/data_dir field of train/configs/covisnet.yaml. Update the logging configuration in train/configs/logging.yaml as appropriate.
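As an illustration, the dataset entry might look like the excerpt below; the surrounding structure of the config file is assumed and the path is a placeholder:
# train/configs/covisnet.yaml (excerpt, assumed structure)
data:
  data_dir: datasets/<your_dataset>   # placeholder: point this at the generated or downloaded dataset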
To reproduce training, run the following command:
python3 -m train fit --config train/configs/covisnet.yaml --config train/configs/logging.yaml
TODO