
WMP

Code for the paper:

World Model-based Perception for Visual Legged Locomotion

Hang Lai, Jiahang Cao, Jiafeng Xu, Hongtao Wu, Yunfeng Lin, Tao Kong, Yong Yu, Weinan Zhang

Requirements

  1. Create a new Python virtual environment with Python 3.6, 3.7, or 3.8 (3.8 recommended)
  2. Install pytorch:
    • pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
  3. Install Isaac Gym (a quick import sanity check is sketched after this list)
  4. Install other packages:
    • sudo apt-get install build-essential --fix-missing
    • sudo apt-get install ninja-build
    • pip install setuptools==59.5.0
    • pip install ruamel_yaml==0.17.4
    • sudo apt install libgl1-mesa-glx -y
    • pip install opencv-contrib-python
    • pip install -r requirements.txt
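
Once these steps complete, the short Python snippet below can serve as a sanity check. It is not part of the repo; note that Isaac Gym must be imported before torch, or its import will fail.

# Minimal environment sanity check (not part of this repo).
# Isaac Gym must be imported before torch, or it raises an ImportError.
import isaacgym  # noqa: F401
import torch

assert torch.cuda.is_available(), "CUDA not available; check the cu117 wheels"
print("torch", torch.__version__, "cuda", torch.version.cuda)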

Training

python legged_gym/scripts/train.py --task=a1_amp --headless --sim_device=cuda:0

Training takes about 23 GB of GPU memory, and at least 10k iterations are recommended.
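
Before launching, you can confirm your GPU has headroom for that ~23 GB footprint. A minimal sketch using PyTorch's built-in memory query (assumes a CUDA-capable install):

import torch

# Free and total memory, in bytes, on the current CUDA device.
free, total = torch.cuda.mem_get_info()
print(f"free: {free / 1e9:.1f} GB of {total / 1e9:.1f} GB")

If memory is short, upstream legged_gym forks typically accept a --num_envs flag to reduce the number of parallel environments; whether this fork exposes it is not confirmed here.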

Visualization

Please make sure you have trained a WMP policy before running the script below:

python legged_gym/scripts/play.py --task=a1_amp --sim_device=cuda:0 --terrain=climb
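
The play script loads a trained checkpoint. Assuming this fork keeps the upstream legged_gym logging layout of logs/<experiment>/<run>/model_<iteration>.pt (an assumption, not stated in this README), you can verify that a checkpoint exists with:

from pathlib import Path

# Hedged sketch: find the newest checkpoint under logs/, assuming the
# upstream legged_gym layout logs/<experiment>/<run>/model_<iteration>.pt.
ckpts = sorted(Path("logs").rglob("model_*.pt"), key=lambda p: p.stat().st_mtime)
print(ckpts[-1] if ckpts else "no checkpoints found -- run train.py first")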

Acknowledgments

We thank the authors of the following projects for making their code open source:

Citation

If you find this project helpful, please consider citing our paper:

@article{lai2024world,
  title={World Model-based Perception for Visual Legged Locomotion},
  author={Lai, Hang and Cao, Jiahang and Xu, Jiafeng and Wu, Hongtao and Lin, Yunfeng and Kong, Tao and Yu, Yong and Zhang, Weinan},
  journal={arXiv preprint arXiv:2409.16784},
  year={2024}
}
