Commit

Merge pull request #527 from ARISE-Initiative/v1.5.0
V1.5.0 official release
zhuyifengzju authored Oct 29, 2024
2 parents 74981fd + 80b91fa commit 29e73bd
Showing 484 changed files with 1,943,840 additions and 6,276 deletions.
38 changes: 38 additions & 0 deletions .github/ISSUE_TEMPLATE/feature-request.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,38 @@
---
name: 🛠️ Feature Request
description: Suggest an idea to help us improve Robosuite!
title: "[Feature]: "
labels:
- "ty:feature"

body:
- type: markdown
attributes:
value: >
**Thanks for taking the time to fill out this feature request! :heart:** Please double check whether an issue
[already exists](https://github.com/ARISE-Initiative/robosuite-dev/issues) for
your feature.
We are also happy to accept contributions from our users. For more details see
[here](https://github.com/ARISE-Initiative/robosuite-dev/blob/master/CONTRIBUTING.md).
- type: textarea
attributes:
label: Description
description: |
A clear and concise description of the feature you're interested in.
value: |
<!--- Describe your feature here --->
validations:
required: true

- type: textarea
attributes:
label: Suggested Solution
description: >
Describe the solution you'd like. A clear and concise description of what you want to happen. If you have
considered alternatives, please describe them.
value: |
<!--- Describe your solution here --->
validations:
required: false
35 changes: 35 additions & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,35 @@
## What this does
Explain what this PR does. Feel free to tag your PR with the appropriate label(s).

Examples:
| Title | Label |
|----------------------|-----------------|
| Fixes #[issue] | (🐛 Bug) |
| Optimizes something | (⚡️ Performance) |
| Adds a new feature | (✨ Feature) |


## How it was tested
Explain/show how you tested your changes.

Examples:
- Added `test_something` in `tests/test_stuff.py`.
- Added `new_feature` and checked that training converges with policy X on dataset/environment Y.
- Optimized `some_function`; it now runs X times faster than before.

## How to checkout & try? (for the reviewer)
Provide a simple way for the reviewer to try out your changes.

Examples:
```bash
DATA_DIR=tests/data pytest -sx tests/test_stuff.py::test_something
```
```bash
python robosuite/scripts/train.py --some.option=true
```

## SECTION TO REMOVE BEFORE SUBMITTING YOUR PR
**Note**: Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. Try to avoid tagging more than 3 people.

**Note**: Before submitting this PR, please read the [contributor guideline](https://github.com/ARISE-Initiative/robosuite/blob/master/CONTRIBUTING.md).
14 changes: 14 additions & 0 deletions .github/workflows/pre-commit.yaml
@@ -0,0 +1,14 @@
name: pre-commit

on:
pull_request:
push:
branches: [main]

jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v3
- uses: pre-commit/[email protected]
60 changes: 60 additions & 0 deletions .github/workflows/run-tests.yaml
@@ -0,0 +1,60 @@
name: run-tests

on:
push:
branches: [ "main", "master" ]
pull_request:

permissions:
contents: read

jobs:
build:

runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v3
- name: Set up Python 3.10
uses: actions/setup-python@v3
with:
python-version: "3.10"
- name: Install dependencies
run: |
sudo apt-cache search mesa
sudo apt update
sudo apt install -y libgl1-mesa-glx libgl1-mesa-dev libosmesa6-dev python3-opengl mesa-utils
# check if OSMesa is installed
dpkg -L libosmesa6-dev
python -m pip install --upgrade pip
pip install flake8 pytest
# Install the current repo. We explicitly install mink: it belongs in requirements-extra, but it is needed for the CI tests to pass.
- name: Install robosuite
run: |
pip install -e .
if [ -f requirements.txt ]; then
pip install -r requirements.txt
pip install mink
fi
if [ -f requirements-extra.txt ]; then pip install -r requirements-extra.txt; fi
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
# Run the specified tests
# NOTE: custom environment variable MUJOCO_GL="osmesa" is used to run tests without a display
# https://github.com/google-deepmind/dm_control/issues/136
# https://github.com/ARISE-Initiative/robosuite/issues/469
- name: Test with pytest
run: |
export PYOPENGL_PLATFORM="osmesa"
export MUJOCO_GL="osmesa"
python3 tests/test_robots/test_all_robots.py
python3 tests/test_grippers/test_all_grippers.py
python3 tests/test_environments/test_all_environments.py
pytest tests/test_controllers/test_composite_controllers.py
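The headless test step above relies on `MUJOCO_GL` and `PYOPENGL_PLATFORM` being set before MuJoCo is first imported, since the rendering backend is typically fixed at import time. The same idea can be sketched in Python; the helper name below is illustrative, not part of robosuite or MuJoCo:

```python
import os

def configure_headless_rendering(backend: str = "osmesa") -> None:
    """Select a software rendering backend for MuJoCo.

    Call this before the first `import mujoco`; setting these
    variables after MuJoCo has been imported generally has no effect.
    """
    os.environ["MUJOCO_GL"] = backend
    os.environ["PYOPENGL_PLATFORM"] = backend

configure_headless_rendering()
```

The workflow achieves the same thing with `export` lines in the shell step, which is simpler in CI; the in-process variant is useful when launching tests programmatically.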
12 changes: 11 additions & 1 deletion .gitignore
@@ -18,9 +18,11 @@ eggs/
lib/
lib64/
parts/
!robosuite/controllers/parts
sdist/
var/
wheels/
!robosuite/models/assets/robots/**/wheels/
*.egg-info/
.installed.cfg
*.egg
@@ -103,8 +105,9 @@ ENV/
# mac
.DS_Store

# mujoco-key
# mujoco files
mjkey.txt
MUJOCO_LOG.TXT

.mujocomanip_temp_model.xml

@@ -115,3 +118,10 @@ mjkey.txt

# private macros
macros_private.py

robosuite_usd/

MUJOCO_LOG.TXT

# private demonstration files
robosuite/models/assets/demonstrations_private
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -5,7 +5,7 @@ repos:
- id: black
language_version: python3 # Should be a command that runs python3.6+
- repo: https://github.com/pycqa/isort
rev: 5.10.1
rev: 5.12.0
hooks:
- id: isort
name: isort (python)
9 changes: 7 additions & 2 deletions AUTHORS
@@ -10,10 +10,15 @@ Josiah Wong <[email protected]>
Ajay Mandlekar <[email protected]>
Roberto Martín-Martín <[email protected]>
Abhishek Joshi <[email protected]>
Kevin Lin <[email protected]>
Soroush Nasiriany <[email protected]>
Yifeng Zhu <[email protected]>

Past Contributors
Contributors
Zhenyu Jiang <[email protected]>
Yuqi Xie <[email protected]>
Abhiram Maddukuri <[email protected]>
You Liang Tan <[email protected]>
Jiren Zhu <[email protected]>
Jim (Linxi) Fan <[email protected]>
Orien Zeng <[email protected]>
@@ -28,4 +33,4 @@ Jonathan Booher <[email protected]>
Danfei Xu <[email protected]>
Rachel Gardner <[email protected]>
Albert Tung <[email protected]>
Divyansh Jha <[email protected]>
Divyansh Jha <[email protected]>
20 changes: 12 additions & 8 deletions README.md
@@ -6,6 +6,9 @@

-------
## Latest Updates

- [10/28/2024] **v1.5**: Added support for diverse robot embodiments (including humanoids), custom robot composition, composite controllers (including whole body controllers), more teleoperation devices, and photo-realistic rendering. [[release notes]](https://github.com/ARISE-Initiative/robosuite/releases/tag/v1.5.0)

- [11/15/2022] **v1.4**: Backend migration to DeepMind's official [MuJoCo Python binding](https://github.com/deepmind/mujoco), robot textures, and bug fixes :robot: [[release notes]](https://github.com/ARISE-Initiative/robosuite/releases/tag/v1.4.0) [[documentation]](http://robosuite.ai/docs/v1.4/)

- [10/19/2021] **v1.3**: Ray tracing and physically based rendering tools :sparkles: and access to additional vision modalities 🎥 [[video spotlight]](https://www.youtube.com/watch?v=2xesly6JrQ8) [[release notes]](https://github.com/ARISE-Initiative/robosuite/releases/tag/v1.3) [[documentation]](http://robosuite.ai/docs/v1.3/)
@@ -16,31 +19,32 @@

-------

**robosuite** is a simulation framework powered by the [MuJoCo](http://mujoco.org/) physics engine for robot learning. It also offers a suite of benchmark environments for reproducible research. The current release (v1.4) features long-term support with the official MuJoCo binding from DeepMind. This project is part of the broader [Advancing Robot Intelligence through Simulated Environments (ARISE) Initiative](https://github.com/ARISE-Initiative), with the aim of lowering the barriers of entry for cutting-edge research at the intersection of AI and Robotics.
**robosuite** is a simulation framework powered by the [MuJoCo](http://mujoco.org/) physics engine for robot learning. It also offers a suite of benchmark environments for reproducible research. The current release (v1.5) features support for diverse robot embodiments (including humanoids), custom robot composition, composite controllers (including whole body controllers), more teleoperation devices, and photo-realistic rendering. This project is part of the broader [Advancing Robot Intelligence through Simulated Environments (ARISE) Initiative](https://github.com/ARISE-Initiative), with the aim of lowering the barriers of entry for cutting-edge research at the intersection of AI and Robotics.

Data-driven algorithms, such as reinforcement learning and imitation learning, provide a powerful and generic tool in robotics. These learning paradigms, fueled by new advances in deep learning, have achieved some exciting successes in a variety of robot control problems. However, the challenges of reproducibility and the limited accessibility of robot hardware (especially during a pandemic) have impaired research progress. The overarching goal of **robosuite** is to provide researchers with:

* a standardized set of benchmarking tasks for rigorous evaluation and algorithm development;
* a modular design that offers great flexibility to design new robot simulation environments;
* a modular design that offers great flexibility in designing new robot simulation environments;
* a high-quality implementation of robot controllers and off-the-shelf learning algorithms to lower the barriers to entry.

This framework was originally developed since late 2017 by researchers in [Stanford Vision and Learning Lab](http://svl.stanford.edu) (SVL) as an internal tool for robot learning research. Now it is actively maintained and used for robotics research projects in SVL and the [UT Robot Perception and Learning Lab](http://rpl.cs.utexas.edu) (RPL). We welcome community contributions to this project. For details please check out our [contributing guidelines](CONTRIBUTING.md).
This framework was originally developed in late 2017 by researchers in the [Stanford Vision and Learning Lab](http://svl.stanford.edu) (SVL) as an internal tool for robot learning research. It is now actively maintained and used for robotics research projects in SVL, the [UT Robot Perception and Learning Lab](http://rpl.cs.utexas.edu) (RPL), and the NVIDIA [Generalist Embodied Agent Research Group](https://research.nvidia.com/labs/gear/) (GEAR). We welcome community contributions to this project. For details, please check out our [contributing guidelines](CONTRIBUTING.md).

This release of **robosuite** contains seven robot models, eight gripper models, six controller modes, and nine standardized tasks. It also offers a modular design of APIs for building new environments with procedural generation. We highlight these primary features below:
**Robosuite** offers a modular design of APIs for building new environments, robot embodiments, and robot controllers with procedural generation. We highlight these primary features below:

* **standardized tasks**: a set of standardized manipulation tasks of large diversity and varying complexity and RL benchmarking results for reproducible research;
* **procedural generation**: modular APIs for programmatically creating new environments and new tasks as combinations of robot models, arenas, and parameterized 3D objects;
* **robot controllers**: a selection of controller types to command the robots, such as joint-space velocity control, inverse kinematics control, operational space control, and 3D motion devices for teleoperation;
* **procedural generation**: modular APIs for programmatically creating new environments and new tasks as combinations of robot models, arenas, and parameterized 3D objects. Check out our repo [robosuite_models](https://github.com/ARISE-Initiative/robosuite_models) for extra robot models tailored to robosuite.
* **robot controllers**: a selection of controller types to command the robots, such as joint-space velocity control, inverse kinematics control, operational space control, and whole body control;
* **teleoperation devices**: a selection of teleoperation devices including the keyboard, SpaceMouse, and MuJoCo viewer drag-and-drop;
* **multi-modal sensors**: heterogeneous types of sensory signals, including low-level physical states, RGB cameras, depth maps, and proprioception;
* **human demonstrations**: utilities for collecting human demonstrations, replaying demonstration datasets, and leveraging demonstration data for learning. Check out our sister project [robomimic](https://arise-initiative.github.io/robomimic-web/);
* **photorealistic rendering**: integration with advanced graphics tools that provide real-time photorealistic renderings of simulated scenes.
* **photorealistic rendering**: integration with advanced graphics tools that provide real-time photorealistic renderings of simulated scenes, including support for NVIDIA Isaac Sim rendering.

## Citation
Please cite [**robosuite**](https://robosuite.ai) if you use this framework in your publications:
```bibtex
@inproceedings{robosuite2020,
title={robosuite: A Modular Simulation Framework and Benchmark for Robot Learning},
author={Yuke Zhu and Josiah Wong and Ajay Mandlekar and Roberto Mart\'{i}n-Mart\'{i}n and Abhishek Joshi and Soroush Nasiriany and Yifeng Zhu},
author={Yuke Zhu and Josiah Wong and Ajay Mandlekar and Roberto Mart\'{i}n-Mart\'{i}n and Abhishek Joshi and Soroush Nasiriany and Yifeng Zhu and Kevin Lin},
booktitle={arXiv preprint arXiv:2009.12293},
year={2020}
}
33 changes: 33 additions & 0 deletions docs/basicusage.md
@@ -0,0 +1,33 @@
# Basic Usage

## Running Standardized Environments
**robosuite** offers a set of standardized manipulation tasks for benchmarking purposes. These pre-defined environments can be easily instantiated with the `make` function. The APIs we provide to interact with our environments are simple and similar to the ones used by [OpenAI Gym](https://github.com/openai/gym/). Below is a minimalistic example of how to interact with an environment.

```python
import numpy as np
import robosuite as suite

# create environment instance
env = suite.make(
env_name="Lift", # try with other tasks like "Stack" and "Door"
robots="Panda", # try with other robots like "Sawyer" and "Jaco"
has_renderer=True,
has_offscreen_renderer=False,
use_camera_obs=False,
)

# reset the environment
env.reset()

for i in range(1000):
action = np.random.randn(*env.action_spec[0].shape)
obs, reward, done, info = env.step(action) # take action in the environment
env.render() # render on display
```

The script above creates a simulated environment with the on-screen renderer, which is useful for visualization and qualitative evaluation. The `step()` function takes an `action` as input and returns a tuple of `(obs, reward, done, info)`, where `obs` is an `OrderedDict` containing observations `[(name_string, np.array), ...]`, `reward` is the immediate reward obtained per step, `done` is a Boolean flag indicating whether the episode has terminated, and `info` is a dictionary containing additional metadata.
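The shape of that return tuple can be sketched with a stand-in environment. `DummyEnv` below is purely illustrative (it is not a robosuite class, and the observation key and dimensions are made up); it only mirrors the `(obs, reward, done, info)` contract described above:

```python
from collections import OrderedDict

import numpy as np

class DummyEnv:
    """Hypothetical stand-in mirroring the step() return contract."""

    def step(self, action):
        # obs maps observation names to numpy arrays
        obs = OrderedDict([("robot0_proprio-state", np.zeros(32))])
        reward = 0.0   # immediate reward obtained for this step
        done = False   # True once the episode has terminated
        info = {}      # auxiliary metadata
        return obs, reward, done, info

obs, reward, done, info = DummyEnv().step(np.zeros(7))
```

In a real training loop you would check `done` after each step and call `env.reset()` to start a new episode.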

Many other parameters can be configured for each environment. They provide functionalities such as headless rendering, getting pixel observations, changing camera settings, using reward shaping, and adding extra low-level observations. Please refer to [Environment](modules/environments) modules and the [Environment class](simulation/environment) APIs for further details.

Demo scripts that showcase various features of **robosuite** are available [here](demos). The purpose of each script and usage instructions can be found at the beginning of each file.

12 changes: 12 additions & 0 deletions docs/changelog.md
@@ -0,0 +1,12 @@
# Changelog



## Version 1.5.0

<div class="admonition warning">
<p class="admonition-title">Breaking API changes</p>
<div>
<ul><li>New controller design.</li></ul>
</div>
</div>
5 changes: 5 additions & 0 deletions docs/conf.py
@@ -42,6 +42,11 @@
"sphinx.ext.autodoc",
"recommonmark", # use Sphinx-1.4 or newer
"nbsphinx",
"sphinx_togglebutton",
]

myst_enable_extensions = [
"dollarmath",
]

mathjax_config = {