Fix typos #802

Open · wants to merge 4 commits into main
4 changes: 2 additions & 2 deletions docs/source/contributing/robots.md
@@ -1,12 +1,12 @@
# Robots

To add a new robot, you can follow any of the existing robots built already as templates in ManiSkill. We also highly recommend that you read through [custom robot tutorial](../user_guide/tutorials/custom_robots.md) to learn how to make new robots and tune them.
To add a new robot, you can follow any of the existing robots built already as templates in ManiSkill. We also highly recommend that you read through the [custom robot tutorial](../user_guide/tutorials/custom_robots.md) to learn how to make new robots and tune them.

ManiSkill is a supporter of open-source and we encourage you to make contributions to help grow our list of robots in simulation!

## Contributing the Robot to ManiSkill

We recommend first opening an issue on our GitHub about your interest in adding a new robot as to not conflict with others and to consolidate information. Once done our maintainers can give a go ahead.
We recommend first opening an issue on our GitHub about your interest in adding a new robot so as not to conflict with others and to consolidate information. Once done, our maintainers can give the go-ahead.

In your pull request, we ask you to do the following:
- The robot / agent class code should be placed in `mani_skill/agents/<agent_group_name>/your_robot.py`. If you want to re-use an agent class (e.g. as done with the Allegro hand robot and the Panda robot) you can create a folder that groups all the different agent classes together.
2 changes: 1 addition & 1 deletion docs/source/contributing/tasks.md
@@ -56,7 +56,7 @@ class YourEnv(BaseEnv):

Whenever possible, task code should be written in batch mode (assuming all data in and out are batched by the number of parallel environments). This generally ensures that the task is then GPU simulatable, which is of great benefit to workflows that leverage sim data collection at scale.
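
As a rough illustration of what "batch mode" means in practice, the sketch below computes success flags for every parallel environment at once with torch; the tensor names and the distance threshold are hypothetical, not ManiSkill API.

```python
import torch

def evaluate(obj_pos: torch.Tensor, goal_pos: torch.Tensor) -> dict:
    # Both inputs are batched with shape (num_envs, 3), so everything below
    # runs for all parallel environments at once, without a Python loop.
    dist = torch.linalg.norm(obj_pos - goal_pos, dim=1)  # shape (num_envs,)
    return dict(success=dist < 0.025)  # one boolean flag per environment
```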

GPU simulation also entails tuning the GPU simulation configurations. You can opt to do two ways, dynamic or fix GPU simulation configurations.
GPU simulation also entails tuning the GPU simulation configurations. You can do this in one of two ways: dynamic or fixed GPU simulation configurations.

A version of fixed configurations can be seen in `mani_skill/envs/tasks/push_cube.py` which defines the default
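
A minimal sketch of what such a fixed configuration might look like, assuming ManiSkill's `SimConfig`/`GPUMemoryConfig` dataclasses and the `_default_sim_config` property; the import paths and capacity values are illustrative only.

```python
from mani_skill.envs.sapien_env import BaseEnv  # assumed base class location
from mani_skill.utils.structs.types import GPUMemoryConfig, SimConfig

class YourEnv(BaseEnv):
    @property
    def _default_sim_config(self):
        # Fixed configuration: GPU buffer capacities are hard-coded for the
        # task rather than derived from the loaded scene contents.
        return SimConfig(
            gpu_memory_config=GPUMemoryConfig(
                found_lost_pairs_capacity=2**25,
                max_rigid_patch_count=2**18,
            )
        )
```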

@@ -2,7 +2,7 @@

This page documents code and results of benchmarking various robotics simulators on a number of dimensions. It is still a WIP as we write more fair benchmarking environments for other simulators. Given the number of factors that impact simulation speed and rendering (e.g number of objects, geometry complexity etc.) trends that appear in results in this page may not necessarily be the case on some environments.

Currently we just compare ManiSkill and [Isaac Lab](https://github.com/isaac-sim/IsaacLab) on one task, Cartpole Balancing (control). For details on benchmarking methodology see [this section](#benchmarking-detailsmethodology)
Currently we just compare ManiSkill to [Isaac Lab](https://github.com/isaac-sim/IsaacLab) on one task, Cartpole Balancing (control). For details on benchmarking methodology, see [this section](#benchmarking-detailsmethodology).

Raw benchmark results can be read from the .csv files in the [results folder on GitHub](https://github.com/haosulab/ManiSkill/blob/main/docs/source/user_guide/additional_resources/benchmarking_results). There are also plotted figures in that folder. Below we show a selection of some of the figures/results from testing on an RTX 4090. The figures are also sometimes annotated with the GPU memory usage in GB.

@@ -85,7 +85,7 @@ There is currently one benchmarked task: Cartpole Balance (classic control). Det

For simulators that use physx (like ManiSkill and Isaac Lab), for comparison we try to align as many simulation configuration parameters (like number of solver position iterations) as well as object types (e.g. collision meshes, size of objects etc.) as close as possible.

In the future we plan to benchmark other simulators using other physics engines (like Mujoco) although how to fairly do so is still WIP.
In the future we plan to benchmark other simulators using other physics engines (like Mujoco), although how to do so fairly is still WIP.

Reward functions and evaluation functions are purposely left out and not benchmarked.

4 changes: 2 additions & 2 deletions docs/source/user_guide/concepts/controllers.md
@@ -94,7 +94,7 @@ keyframe qpos of rest panda modified to kf.qpos[-4] += 0.5
robot pose modified to sapien.Pose([-0.302144, -3.72529e-09, -5.96046e-08], [0.984722, 9.31323e-10, -1.50322e-08, -0.174137])
-->

For rotation in this controller, the user specifies a delta X, Y, and Z axis rotation (in radians if not normalized) indicating how far to rotate in all those dimensions. They are processed as XYZ euler angles and converted to a quaternion internally. Inverse kinematics is then used to determine the required joint actions to achieve the desired rotation.
For rotation in this controller, the user specifies a delta X, Y, and Z axis rotation (in radians if not normalized) indicating how far to rotate in all those dimensions. They are processed as XYZ Euler angles and converted to a quaternion internally. Inverse kinematics is then used to determine the required joint actions to achieve the desired rotation.
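
A small sketch of that conversion using scipy in place of ManiSkill's internal utilities; the delta values are made up, and whether the internal convention is intrinsic or extrinsic XYZ is not asserted here.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

delta_rot = np.array([0.1, 0.0, -0.05])  # hypothetical delta X/Y/Z rotation in radians

# Interpret the action as XYZ Euler angles and convert to a quaternion that an
# IK solver could use as the desired change in end-effector orientation.
quat_xyzw = R.from_euler("XYZ", delta_rot).as_quat()  # scipy returns (x, y, z, w)
quat_wxyz = np.roll(quat_xyzw, 1)                     # reorder to (w, x, y, z)
```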

ManiSkill implements two types of rotation based control that are generally the most intuitive to understand and commonly used in real-world robots, which is rotation under one orientation aligned/positioned at another frame. In particular there are two rotation frames supported: root aligned body and body aligned body. A aligned B means rotation in the frame with the same orientation as frame A and same position as frame B. Both frames are shown below by setting the corresponding dimension in the action to > 0 and the rest to 0.

@@ -126,7 +126,7 @@ To help detail how controllers work in detail, below we explain with formulae ho

### Terminology

- fixed joint: a joint that can not be controlled. The degree of freedom (DoF) is 0.
- fixed joint: a joint that cannot be controlled. The degree of freedom (DoF) is 0.
- `qpos` ( $q$ ): controllable joint positions
- `qvel` ( $\dot{q}$ ): controllable joint velocities
- target joint position ( $\bar{q}$ ): target position of the motor which drives the joint
4 changes: 2 additions & 2 deletions docs/source/user_guide/concepts/gpu_simulation.md
@@ -11,7 +11,7 @@ The idea of sub-scenes is that reading data of e.g. actor poses is automatically
:::{figure} images/physx_scene_subscene_relationship.png
:::

SAPIEN permits sub-scenes to be located at any location you want, ManiSkill just picks the most square setup with a fixed spacing parameter. Notice that if objects in one sub-scene go beyond it's workspace, it can actually affect the other sub-scenes. This is a common bug users may face when simulating larger scenes of e.g. houses or out-door settings where the spacing parameter is set too low so objects from e.g. sub-scene-0 will contact with objects in sub-scene-1.
SAPIEN permits sub-scenes to be located anywhere you want; ManiSkill just picks the most square setup with a fixed spacing parameter. Notice that if objects in one sub-scene go beyond its workspace, they can actually affect the other sub-scenes. This is a common bug users may face when simulating larger scenes such as houses or outdoor settings: if the spacing parameter is set too low, objects from e.g. sub-scene-0 will collide with objects in sub-scene-1.
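
If you run into this, increasing the spacing is the usual fix. A hedged sketch, assuming the environment constructor accepts a `sim_config` override with a `spacing` field; the env ID and spacing value are only examples.

```python
import gymnasium as gym
import mani_skill.envs  # registers ManiSkill environments with gymnasium

# Assumed override: a larger spacing keeps objects that wander out of one
# sub-scene's workspace from reaching a neighboring sub-scene.
env = gym.make(
    "ReplicaCAD_SceneManipulation-v1",  # example large-scene task
    num_envs=16,
    sim_config=dict(spacing=50),
)
```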


## GPU Simulation Lifecycle
@@ -20,7 +20,7 @@ In ManiSkill, the gym API is adopted to create, reset, and step through environm

The `env.reset` part consists of one time reconfiguration followed by initialization:

1. Reconfiguration: Loading objects (comrpised of actors/articulations/lights) into the scene (basically spawning them in with an initial pose and not doing anything else)
1. Reconfiguration: Loading objects (comprised of actors/articulations/lights) into the scene (basically spawning them in with an initial pose and not doing anything else)
2. A call to `physx_system.gpu_init()` to initialize all GPU memory buffers and setting up all the rendering groups for parallelized rendering
3. Initializing all actors and articulations (set poses, qpos values etc.).
4. Running `physx_system.gpu_apply_*` to then save all the initialized data in step 3 to the GPU buffers to prepare for simulation
4 changes: 2 additions & 2 deletions docs/source/user_guide/concepts/observation.md
@@ -34,7 +34,7 @@ In addition to `agent` and `extra`, `sensor_data` and `sensor_param` are introdu

If the data comes from a camera sensor:
- `Color`: [H, W, 4], `torch.uint8`. RGB+Alpha values..
- `PositionSegmentation`: [H, W, 4], `torch.int16`. The first 3 dimensions stand for (x, y, z) coordinates in the OpenGL/Blender convension. The unit is millimeters. The last dimension represents segmentation ID, see the [Segmentation data section](#segmentation-data) for more details.
- `PositionSegmentation`: [H, W, 4], `torch.int16`. The first 3 dimensions stand for (x, y, z) coordinates in the OpenGL/Blender convention. The unit is millimeters. The last dimension represents the segmentation ID; see the [Segmentation data section](#segmentation-data) for more details.

- `sensor_param`: parameters of each sensor, which varies depending on type of sensor
- `{sensor_uid}`:
@@ -61,7 +61,7 @@ This observation mode has the same data format as the [sensor_data mode](#sensor
Note that this data is not scaled/normalized to [0, 1] or [-1, 1] in order to conserve memory, so if you consider to train on RGB, depth, and/or segmentation data be sure to scale your data before training on it.
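
A minimal sketch of such scaling, assuming the `sensor_data` layout described above and a camera named `base_camera`; the camera name and the millimeter depth convention are assumptions.

```python
import torch

def preprocess(obs: dict) -> dict:
    cam = obs["sensor_data"]["base_camera"]  # assumed camera uid
    rgb = cam["rgb"].float() / 255.0         # uint8 [0, 255] -> float32 [0, 1]
    depth = cam["depth"].float() / 1000.0    # assumed int16 millimeters -> meters
    return dict(rgb=rgb, depth=depth)
```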


ManiSkill by default flexibly supports different combinations of RGB, depth, and segmentation data, namely `rgb`, `depth`, `segmentation`, `rgb+depth`, `rgb+depth+segmentation`, `rgb+segmentation`, and`depth+segmentation`. (`rgbd` is a short hand for `rgb+depth`). Whichever image modality that is not chosen will not be included in the observation and conserves some memory and GPU bandwith.
ManiSkill by default flexibly supports different combinations of RGB, depth, and segmentation data, namely `rgb`, `depth`, `segmentation`, `rgb+depth`, `rgb+depth+segmentation`, `rgb+segmentation`, and `depth+segmentation` (`rgbd` is shorthand for `rgb+depth`). Any image modality that is not chosen will not be included in the observation, which conserves some memory and GPU bandwidth.
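
Selecting one of these combinations is just a matter of the `obs_mode` argument, e.g. as in the sketch below (the task ID is chosen only as an example).

```python
import gymnasium as gym
import mani_skill.envs  # registers ManiSkill environments with gymnasium

# Only RGB and segmentation are returned; depth is skipped entirely,
# which saves memory and GPU bandwidth.
env = gym.make("PickCube-v1", obs_mode="rgb+segmentation", num_envs=4)
obs, _ = env.reset(seed=0)
```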

The RGB and depth data visualized can look like below:
```{image} images/replica_cad_rgbd.png
2 changes: 1 addition & 1 deletion docs/source/user_guide/concepts/simulation_101.md
@@ -24,7 +24,7 @@ Actors are generally "singular" objects that when physically acted upon with som

In simulation actors are composed of two major elements, collision shapes and visual shapes.

**Colision Shapes:**
**Collision Shapes:**

Collision shapes define how an object will behave in simulation. A single actor can also be composed of several convex collision shapes in simulation.
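
A hedged sketch of how collision and visual shapes come together when building an actor, using SAPIEN-style builder calls as exposed through a ManiSkill scene; the method names are assumed from SAPIEN's builder API.

```python
def build_cube(scene, half_size: float = 0.02):
    builder = scene.create_actor_builder()
    # Collision shape: what the physics engine uses for contacts/dynamics.
    builder.add_box_collision(half_size=[half_size] * 3)
    # Visual shape: what cameras and the viewer actually render.
    builder.add_box_visual(half_size=[half_size] * 3)
    return builder.build(name="cube")
```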

6 changes: 3 additions & 3 deletions docs/source/user_guide/data_collection/motionplanning.md
@@ -2,11 +2,11 @@

ManiSkill provides simple tools to use motion planning to generate robot trajectories, primarily via the open-source [mplib](https://github.com/haosulab/MPlib) library. If you install ManiSkill, mplib will come installed already so no extra installation is necessary.

For an in depth tutorial on how to use more advanced features of mplib check out their documentation here: https://motion-planning-lib.readthedocs.io/latest/. Otherwise this section will cover some example code you can use and modify to generate motion planned demonstrations. The example code here is written for the Panda arm but should be modifiable to work for other robots.
For an in-depth tutorial on how to use more advanced features of mplib check out their documentation here: https://motion-planning-lib.readthedocs.io/latest/. Otherwise this section will cover some example code you can use and modify to generate motion planned demonstrations. The example code here is written for the Panda arm but can be modified to work for other robots.

## Motion Planning with Panda Arm

We provide some built in motion planning solutions for some tasks using the Panda arm at https://github.com/haosulab/ManiSkill/tree/main/mani_skill/examples/motionplanning/panda. You can run a quick demo below, which will save trajectory data as .h5 files to `demos/motionplanning/<env_id>` and optionally save videos and/or visualize with a GUI.
We provide some built-in motion planning solutions for some tasks using the Panda arm at https://github.com/haosulab/ManiSkill/tree/main/mani_skill/examples/motionplanning/panda. You can run a quick demo below, which will save trajectory data as .h5 files to `demos/motionplanning/<env_id>` and optionally save videos and/or visualize with a GUI.

```bash
python -m mani_skill.examples.motionplanning.panda.run -e "PickCube-v1" --save-video # runs headless and only saves video
@@ -28,4 +28,4 @@ For example, the PickCube-v1 task is composed of
3. close the gripper
4. move the gripper to above the goal location so the tool center point (tcp) of the gripper is at the goal

Note that while motion planning can generate and solve a wide variety of tasks, it's main limitations is that it often requires an human/engineer to tune and write, as well as being unable to generate solutions for more dynamical tasks.
Note that while motion planning can generate and solve a wide variety of tasks, its main limitation is that it often requires a human/engineer to tune and write, and it cannot generate solutions for more dynamic tasks.
6 changes: 3 additions & 3 deletions docs/source/user_guide/datasets/demos.md
@@ -1,6 +1,6 @@
# Demonstrations

We provide a command line tool to download demonstrations directly from our [Hugging Face 🤗 dataset](https://huggingface.co/datasets/haosulab/ManiSkill_Demonstrations) which are done by task ID. The tool will download the demonstration files to a folder and also a few demonstration videos visualizing what the demonstrations look like. See [Tasks](../../tasks/index.md) for a list of all supported tasks.
We provide a command line tool to download demonstrations directly from our [Hugging Face 🤗 dataset](https://huggingface.co/datasets/haosulab/ManiSkill_Demonstrations); downloads are organized by task ID. The tool will download the demonstration files to a folder and also a few demonstration videos visualizing what the demonstrations look like. See [Tasks](../../tasks/index.md) for a list of all supported tasks.

<!-- TODO: add a table here detailing the data info in detail -->
<!-- Please see our [notes](https://docs.google.com/document/d/1bBKmsR-R_7tR9LwaT1c3J26SjIWw27tWSLdHnfBR01c/edit?usp=sharing) about the details of the demonstrations. -->
@@ -15,7 +15,7 @@ python -m mani_skill.utils.download_demo all

## Format

All demonstrations for an task are saved in the HDF5 format openable by [h5py](https://github.com/h5py/h5py). Each HDF5 dataset is named `trajectory.{obs_mode}.{control_mode}.{sim_backend}.h5`, and is associated with a JSON metadata file with the same base name. Unless otherwise specified, `trajectory.h5` is short for `trajectory.none.pd_joint_pos.cpu.h5`, which contains the original demonstrations generated by the `pd_joint_pos` controller with the `none` observation mode (empty observations) in the CPU based simulation. However, there may exist demonstrations generated by other controllers. **Thus, please check the associated JSON to ensure which controller is used.**
All demonstrations for a task are saved in the HDF5 format openable by [h5py](https://github.com/h5py/h5py). Each HDF5 dataset is named `trajectory.{obs_mode}.{control_mode}.{sim_backend}.h5`, and is associated with a JSON metadata file with the same base name. Unless otherwise specified, `trajectory.h5` is short for `trajectory.none.pd_joint_pos.cpu.h5`, which contains the original demonstrations generated by the `pd_joint_pos` controller with the `none` observation mode (empty observations) in the CPU based simulation. However, there may exist demonstrations generated by other controllers. **Thus, please check the associated JSON to ensure which controller is used.**
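
A short sketch of performing that check before training on a trajectory file; the path is a placeholder, and the exact metadata keys should be verified by inspecting the JSON.

```python
import json
from pathlib import Path

import h5py

traj_path = Path("demos/PickCube-v1/trajectory.h5")  # placeholder path

with h5py.File(traj_path, "r") as f:
    print(list(f.keys()))  # individual trajectories, e.g. "traj_0", "traj_1", ...

# The JSON metadata file shares the base name and records env/controller info.
meta = json.loads(traj_path.with_suffix(".json").read_text())
print(list(meta.keys()))  # inspect to confirm which control mode was used
```
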
<!--
:::{note}
For `PickSingleYCB-v0`, `TurnFaucet-v0`, the dataset is named `{model_id}.h5` for each asset. It is due to some legacy issues, and might be changed in the future.
@@ -84,7 +84,7 @@ env_state = {
```
In the trajectory file `env_states` will be the same structure but each value/leaf in the dictionary will be a sequence of states representing the state of that particular entity in the simulation over time.

In practice it is may be more useful to use slices of the env_states data (or the observations data), which can be done with
In practice it may be more useful to use slices of the env_states data (or the observations data), which can be done with
```python
import mani_skill.trajectory.utils as trajectory_utils
env_states = trajectory_utils.dict_to_list_of_dicts(env_states)
6 changes: 3 additions & 3 deletions docs/source/user_guide/datasets/replay.md
@@ -1,6 +1,6 @@
# Replaying/Converting Trajectories

ManiSkill provides tools to not only collect/load trajectories, but to also replay trajectories and convert observations/actions.
ManiSkill provides tools to not only collect/load trajectories, but also to replay trajectories and convert observations/actions.

To replay the demonstrations (without changing the observation mode and control mode):

@@ -113,7 +113,7 @@ Since some demonstrations are collected in a non-quasi-static way (objects are n

As the replay trajectory tool is fairly complex and feature rich, we suggest a few example workflows that may be useful for various use cases

### Replaying Trajectories from One Control Mode to a Easier to Learn Control Mode
### Replaying Trajectories from One Control Mode to an Easier-to-Learn Control Mode

In machine learning workflows, it can sometimes be easier to learn from some control modes such as end-effector control ones. The example below does that exactly

@@ -126,7 +126,7 @@ python -m mani_skill.trajectory.replay_trajectory \

Note that some target control modes are difficult to convert to due to inherent differences in controllers and the behavior of the demonstrations.

### Adding rewards/observations in trajectories
### Adding rewards/observations to trajectories

To conserve memory, demonstrations are typically stored without observations and rewards. The example below shows how to add rewards and RGB observations and normalized dense rewards (assuming the environment supports dense rewards) back in. `--use-env-states` is added as a way to ensure the state/observation data replayed exactly as the original trajectory generated.
