Update README.md
---
license: openrail
tags:
- robotics
- trajectory-prediction
- manipulation
- computer-vision
- time-series
pretty_name: Codatta Robotic Manipulation Trajectory
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: total_frames
    dtype: int32
  - name: annotations
    dtype: string
  - name: trajectory_image
    dtype: image
  - name: video_path
    dtype: string
  splits:
  - name: train
    num_bytes: 39054025
    num_examples: 50
  download_size: 38738419
  dataset_size: 39054025
language:
- en
size_categories:
- n<1K
---

# Codatta Robotic Manipulation Trajectory (Sample)

## Overview

This dataset contains high-quality annotated trajectories of robotic gripper manipulations. Produced by **Codatta**, it focuses on third-person views of robotic arms performing pick-and-place or manipulation tasks. The dataset is designed to train models for fine-grained control, trajectory prediction, and object interaction tasks.

The scope specifically includes third-person views (fixed camera recording the robot) while explicitly excluding first-person views (Eye-in-Hand) to ensure consistent coordinate mapping.

## Dataset Contents

Each sample in this dataset includes the raw video, a visualization of the trajectory, and a rigorous JSON annotation of keyframes and coordinate points.

### Data Fields

* **`id`** (string): Unique identifier for the trajectory sequence.
* **`total_frames`** (int32): Total number of frames in the video sequence.
* **`video_path`** (string): Path to the source MP4 video file recording the manipulation action.
* **`trajectory_image`** (image): A JPEG preview showing the overlaid trajectory path or keyframe visualization.
* **`annotations`** (string): A JSON-formatted string containing the detailed coordinate data. It contains lists of keyframes, timestamps, and 5-point coordinates for the gripper (an illustrative layout is sketched below).
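
The card describes what `annotations` contains but not its exact JSON layout, so the snippet below is only a minimal sketch of one plausible structure; every field name in it (`frame_index`, `timestamp`, `event`, `points`) is an assumption, not a documented schema.

```python
import json

# Illustrative layout only -- the field names are assumptions, not a documented schema.
example_annotations = json.dumps([
    {
        "frame_index": 0,         # assumed: frame number of the keyframe
        "timestamp": 0.0,         # assumed: keyframe time in seconds
        "event": "start_frame",   # assumed: one of the keyframe events listed below
        "points": [               # assumed: the 5-point gripper annotation as (x, y) pixels
            [412, 233], [436, 231],  # Points 1-2: fingertips
            [405, 198], [442, 196],  # Points 3-4: gripper ends
            [424, 180],              # Point 5: tiger's mouth (crossbeam center)
        ],
    }
])

keyframes = json.loads(example_annotations)
print(len(keyframes), keyframes[0]["event"])
```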

### Annotation Standards

The data follows a strict protocol to ensure precision:

**1. Keyframe Selection**

Annotations are sparse, focusing on specific keyframes defined by the following events (see the tallying sketch after this list):

* **Start Frame:** The gripper first appears on the screen.
* **End Frame:** The gripper leaves the screen.
* **Velocity Change:** Frames where the speed direction suddenly changes (marking the minimum speed point).
* **State Change:** Frames where the gripper opens or closes.
* **Contact:** The precise moment the gripper touches the object.
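
As a quick way to get a feel for the event taxonomy above, the snippet below tallies event types across the train split; it assumes the illustrative `event` field from the earlier sketch and a placeholder repository id, neither of which is guaranteed by the card.

```python
import json
from collections import Counter

from datasets import load_dataset

# "path/to/this-dataset" is a placeholder; the "event" key follows the assumed
# layout sketched under Data Fields, not a documented schema.
ds = load_dataset("path/to/this-dataset", split="train")

event_counts = Counter()
for row in ds:
    for keyframe in json.loads(row["annotations"]):
        event_counts[keyframe.get("event", "unknown")] += 1

print(event_counts)
```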

**2. The 5-Point Annotation Method**

For every annotated keyframe, the gripper is labeled with **5 specific coordinate points** to capture its pose and state accurately (see the sketch after the table):

| Point ID | Description | Location Detail |
| :--- | :--- | :--- |
| **Point 1 & 2** | **Fingertips** | Center of the bottom edge of the gripper tips. |
| **Point 3 & 4** | **Gripper Ends** | The rearmost points of the closing area (indicating the finger direction). |
| **Point 5** | **Tiger's Mouth** | The center of the crossbeam (base of the gripper). |
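
To make the table concrete, the sketch below draws the five points of each keyframe onto the bundled trajectory preview with Pillow; the `points` key, its ordering, and the repository id are assumptions carried over from the earlier sketch, not part of the official card.

```python
import json

from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("path/to/this-dataset", split="train")  # placeholder path
sample = ds[0]

# `trajectory_image` is decoded as a PIL image by the `datasets` library.
image = sample["trajectory_image"].copy()
draw = ImageDraw.Draw(image)

# Assumed ordering: fingertips (1-2), gripper ends (3-4), tiger's mouth (5),
# each given as (x, y) pixel coordinates.
for keyframe in json.loads(sample["annotations"]):
    for x, y in keyframe["points"]:
        draw.ellipse([x - 3, y - 3, x + 3, y + 3], outline="red", width=2)

image.show()
```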

**3. Quality Control**

* **Accuracy:** All datasets passed a rigorous quality assurance process with a minimum **95% accuracy rate**.
* **Occlusion Handling:** Sequences where the gripper is fully occluded or only shows a side profile without clear features are discarded.
## Key Statistics

* **Total Examples:** 50 annotated examples (Sample Dataset).
* **Language:** English (`en`).
* **Splits:** Train split available.
* **Download Size:** ~38.7 MB.
* **Dataset Size:** ~39.0 MB.

## Usage

This dataset is suitable for research and development in Embodied AI and Computer Vision. It is specifically curated to support the following downstream tasks and application scenarios:

* **Trajectory Prediction:** The high-precision coordinate data allows training models to predict the future path of a gripper based on the initial visual context (see the sketch after this list).
* **Keyframe Extraction & Event Detection:** By leveraging the labeled event types (e.g., "Contact", "Velocity Change"), models can be trained to automatically identify critical moments in long-horizon manipulation tasks.
* **Fine-Grained Robotic Control:** The 5-point annotation system provides detailed pose information, enabling Imitation Learning (IL) from human-demonstrated or teleoperated data for precise pick-and-place operations.
* **Object Interaction Analysis:** The dataset helps in understanding gripper-object relationships, specifically modeling the transition states when the gripper opens, closes, or makes contact with an object.
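
For the trajectory-prediction use case in the first bullet, one minimal way to frame the problem is to split each trajectory's keyframes into a context prefix and a target suffix; the helper below is a hypothetical sketch that leans on the assumed `timestamp` and `points` fields from the earlier layout.

```python
# Hypothetical helper: split one trajectory's keyframe records (already parsed
# from the `annotations` JSON) into a (context, target) pair. The "timestamp"
# and "points" keys follow the assumed layout above, not an official schema.
def context_target_split(keyframes, n_context=3):
    ordered = sorted(keyframes, key=lambda k: k["timestamp"])
    context = [k["points"] for k in ordered[:n_context]]   # observed gripper poses
    target = [k["points"] for k in ordered[n_context:]]    # poses the model should predict
    return context, target
```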

### Usage Example

```python
from datasets import load_dataset
import json
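
# The unchanged loading lines of the original example are not shown in this
# update; the lines below are a hedged reconstruction, and the repository id
# is a placeholder to replace with this dataset's actual path.
dataset = load_dataset("path/to/this-dataset", split="train")
sample = dataset[0]

# Preview the overlaid trajectory visualization.
sample['trajectory_image'].show()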

# Parse annotations
annotations = json.loads(sample['annotations'])
print(f"Keyframes count: {len(annotations)}")
```

## License and Open-Source Details

* **License:** This dataset is released under the **OpenRAIL** license.
|