Code for the paper "World Model-based Perception for Visual Legged Locomotion" by:
Hang Lai, Jiahang Cao, Jiafeng Xu, Hongtao Wu, Yunfeng Lin, Tao Kong, Yong Yu, Weinan Zhang
- Create a new python virtual env with python 3.6, 3.7 or 3.8 (3.8 recommended)
- Install PyTorch:

  ```bash
  pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
  ```
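Once the install finishes, a quick sanity check (a minimal sketch; it assumes only the standard `torch` API) confirms that the CUDA build is active:

```python
# Sanity check: confirm PyTorch imports and that the CUDA build can see a GPU.
try:
    import torch
    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed yet; run the pip3 command above first.")
```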
- Install Isaac Gym:
  - Download and install Isaac Gym Preview 3 (Preview 2 will not work!) from https://developer.nvidia.com/isaac-gym, then install its Python bindings:

    ```bash
    cd isaacgym/python && pip install -e .
    ```
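A minimal import check (a sketch, not part of the official install steps) verifies the bindings resolve; note that in training code `isaacgym` is typically imported before `torch`:

```python
# Minimal check that the Isaac Gym Python bindings are importable.
# (In training code, isaacgym is typically imported before torch.)
try:
    import isaacgym
    print("isaacgym imported from", isaacgym.__file__)
except ImportError as e:
    print("Isaac Gym is not installed:", e)
```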
- Install other packages:

  ```bash
  sudo apt-get install build-essential --fix-missing
  sudo apt-get install ninja-build
  pip install setuptools==59.5.0
  pip install ruamel_yaml==0.17.4
  sudo apt install libgl1-mesa-glx -y
  pip install opencv-contrib-python
  pip install -r requirements.txt
  ```
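As a final check, the snippet below (a sketch; the module names are the usual import names of the pip packages above) reports which key dependencies resolve in the current environment:

```python
import importlib.util

def available(name):
    # find_spec raises ModuleNotFoundError when a parent package is absent,
    # so treat that the same as "not installed".
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False

# Usual import names for the pip packages installed above.
for mod in ["setuptools", "ruamel.yaml", "cv2"]:
    print(f"{mod}: {'found' if available(mod) else 'missing'}")
```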
To train a policy, run:

```bash
python legged_gym/scripts/train.py --task=a1_amp --headless --sim_device=cuda:0
```

Training takes about 23 GB of GPU memory, and at least 10k iterations are recommended.
To play a trained policy, run the command below (please make sure you have trained the WMP policy first):

```bash
python legged_gym/scripts/play.py --task=a1_amp --sim_device=cuda:0 --terrain=climb
```
We thank the authors of the following projects for making their code open source:
If you find this project helpful, please consider citing our paper:
```bibtex
@article{lai2024world,
  title={World Model-based Perception for Visual Legged Locomotion},
  author={Lai, Hang and Cao, Jiahang and Xu, Jiafeng and Wu, Hongtao and Lin, Yunfeng and Kong, Tao and Yu, Yong and Zhang, Weinan},
  journal={arXiv preprint arXiv:2409.16784},
  year={2024}
}
```