Update README.md
README.md
@@ -7,13 +7,13 @@ tags:
---
## Dataset Description:

The Arena-GR1-Manipulation-Task dataset is a multimodal collection of trajectories generated in Isaac Lab. It supports a humanoid (GR1) manipulation task in the IsaacLab-Arena environment. Each entry provides the full context (state, vision, language, action) needed to train and evaluate generalist robot policies for a microwave-opening task.

| Dataset Name          | # Trajectories |
|-----------------------|----------------|
| GR1 Manipulation Task | 50             |

This dataset is ideal for behavior cloning, policy learning, and generalist robotic manipulation research. It has been used for post-training the GR00T N1.5 model.
This dataset is ready for commercial use.
@@ -40,7 +40,7 @@ This dataset is intended for:
- Automatic/Sensors
- Synthetic

10 human-teleoperated demonstrations are collected with a depth camera and keyboard in Isaac Lab. All 50 demos are then generated automatically from these seed demonstrations using MimicGen [1], a synthetic motion-trajectory generation framework. Each demo is generated at 50 Hz.
### Labeling Method
@@ -48,18 +48,18 @@ Not Applicable
## Dataset Format:
We provide a few dataset files, including:
- an HDF5 dataset file with 10 human-annotated demonstrations (`arena_gr1_manipulation_dataset_annotated.hdf5`)
- an HDF5 dataset file with 50 Mimic-generated demonstrations (`arena_gr1_manipulation_dataset_generated.hdf5`)
|
| 54 |
- a GR00T-Lerobot formatted dataset converted from the Mimic-generated HDF5 dataset file (`lerobot`)
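
The sketch below shows one way to peek inside either HDF5 file. It is a minimal example assuming only that `h5py` is installed; the card does not document the internal group layout, so it simply walks the tree and prints every dataset it finds.

```python
# Minimal inspection sketch, assuming only that h5py is installed.
# The internal group layout is not documented in this card, so we
# walk the file and print each dataset's shape and dtype.
import h5py

with h5py.File("arena_gr1_manipulation_dataset_generated.hdf5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    f.visititems(show)
```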
Each demo in GR00T-Lerobot datasets consists of a time-indexed sequence of the following modalities:
### Actions
- action (FP64): desired joint positions for all body joints (36 DoF)
### Observations
- observation.state (FP64): joint positions for all body joints (54 DoF)
### Task-specific
- timestamp (FP64): simulation time in seconds of each recorded data entry.
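
As a quick sanity check on these modalities, the sketch below reads one episode of the GR00T-Lerobot data with pandas. The chunked parquet path shown follows common LeRobot conventions and is an assumption, not something this card specifies.

```python
# Hypothetical episode path following common LeRobot conventions;
# requires pandas plus pyarrow for parquet support.
import pandas as pd

ep = pd.read_parquet("lerobot/data/chunk-000/episode_000000.parquet")
print(ep.columns.tolist())             # expect action, observation.state, timestamp, ...
print(ep["timestamp"].diff().mean())   # ~0.02 s per step for 50 Hz data
```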
@@ -70,7 +70,7 @@ Each demo in GR00T-Lerobot datasets consists of a time-indexed sequence of the following modalities:
### Videos
- 512 x 512 RGB videos in MP4 format from a first-person-view camera
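
If you want to confirm the video resolution programmatically, a minimal OpenCV check might look like the following; the episode filename is illustrative, not taken from this card.

```python
# Resolution check with OpenCV; the filename is illustrative.
import cv2

cap = cv2.VideoCapture("episode_000000.mp4")
ok, frame = cap.read()
if ok:
    print(frame.shape)  # expect (512, 512, 3) for 512 x 512 RGB
cap.release()
```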
In addition, a set of metadata describing the following is provided:
- `episodes.jsonl` contains a list of all the episodes in the entire dataset. Each episode contains a list of tasks and the length of the episode.
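
A minimal sketch for reading that metadata is shown below; placing `episodes.jsonl` under `lerobot/meta/` follows the usual LeRobot layout and is an assumption, not something the card states.

```python
# episodes.jsonl holds one JSON object per line; the meta/ location is assumed.
import json

with open("lerobot/meta/episodes.jsonl") as f:
    episodes = [json.loads(line) for line in f]

print(len(episodes), "episodes")   # expect 50
print(episodes[0])                 # per the card: the episode's tasks and length
```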
@@ -83,13 +83,13 @@ In addition, a set of metadata describing the following is provided,
### Record Count

#### GR1 Manipulation Task
- Number of demonstrations/trajectories: 50
- Number of RGB videos: 50
### Total Storage

5.16 GB
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
@@ -102,4 +102,4 @@ Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.
author={Mandlekar, Ajay and Nasiriany, Soroush and Wen, Bowen and Akinola, Iretiayo and Narang, Yashraj and Fan, Linxi and Zhu, Yuke and Fox, Dieter},
booktitle={7th Annual Conference on Robot Learning},
year={2023}
}