---
license: openrail
tags:
- robotics
- trajectory-prediction
- manipulation
- computer-vision
- time-series
pretty_name: Codatta Robotic Manipulation Trajectory
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: total_frames
dtype: int32
- name: annotations
dtype: string
- name: trajectory_image
dtype: image
- name: video_path
dtype: string
splits:
- name: train
num_bytes: 39054025
num_examples: 50
download_size: 38738419
dataset_size: 39054025
language:
- en
size_categories:
- n<1K
---

# Codatta Robotic Manipulation Trajectory (Sample)

## Overview
This dataset contains high-quality annotated trajectories of robotic gripper manipulations. Produced by Codatta, it focuses on third-person views of robotic arms performing pick-and-place or manipulation tasks. The dataset is designed to train models for fine-grained control, trajectory prediction, and object interaction tasks.
The scope is limited to third-person views (a fixed camera recording the robot); first-person (eye-in-hand) views are explicitly excluded to ensure consistent coordinate mapping.
## Dataset Contents
Each sample in this dataset includes the raw video, a visualization of the trajectory, and a rigorous JSON annotation of keyframes and coordinate points.
### Data Fields

- `id` (string): Unique identifier for the trajectory sequence.
- `total_frames` (int32): Total number of frames in the video sequence.
- `video_path` (string): Path to the source MP4 video recording the manipulation action.
- `trajectory_image` (image): A JPEG preview showing the overlaid trajectory path or keyframe visualization.
- `annotations` (string): A JSON-formatted string containing the detailed coordinate data: lists of keyframes, timestamps, and the 5-point coordinates for the gripper.
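The card does not publish the exact JSON schema of `annotations`. As a rough illustration only, a single decoded entry might look like the sketch below; every key name here is an assumption for illustration, not a guarantee of the actual format, so inspect a real sample before relying on it.

```python
# Hypothetical sketch of one decoded `annotations` value after json.loads().
# All key names and coordinate values below are illustrative assumptions;
# inspect a real sample to confirm the actual schema used by the dataset.
example_keyframes = [
    {
        "frame_index": 0,
        "timestamp": 0.0,
        "event": "start",            # gripper first enters the frame
        "points": {                  # the 5-point annotation, pixel (x, y)
            "fingertip_left": [412, 318],
            "fingertip_right": [455, 322],
            "gripper_end_left": [405, 270],
            "gripper_end_right": [462, 274],
            "tigers_mouth": [433, 250],
        },
    },
    # ... one entry per annotated keyframe ...
]
```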
## Annotation Standards
The data follows a strict protocol to ensure precision:
### 1. Keyframe Selection

Annotations are sparse, focusing on specific keyframes defined by the following events (a usage sketch follows this list):
- Start Frame: The gripper first enters the frame.
- End Frame: The gripper leaves the frame.
- Velocity Change: Frames where the direction of motion changes abruptly (annotated at the point of minimum speed).
- State Change: Frames where the gripper opens or closes.
- Contact: The precise moment the gripper touches the object.
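As a rough sketch of how these event labels might be consumed, the snippet below groups a sample's keyframes by event type. The `event` field name and its values are assumptions carried over from the illustrative schema above, not guaranteed by the dataset.

```python
import json
from collections import defaultdict

def group_keyframes_by_event(annotations_json: str) -> dict:
    """Group annotated keyframes by their event label.

    Assumes each keyframe dict carries an "event" key (hypothetical name);
    adapt to the actual schema after inspecting a real sample.
    """
    keyframes = json.loads(annotations_json)
    by_event = defaultdict(list)
    for kf in keyframes:
        by_event[kf.get("event", "unknown")].append(kf)
    return dict(by_event)

# Example: count how many "contact" keyframes a sample has.
# grouped = group_keyframes_by_event(sample["annotations"])
# print(len(grouped.get("contact", [])))
```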
### 2. The 5-Point Annotation Method

For every annotated keyframe, the gripper is labeled with 5 specific coordinate points to capture its pose and state accurately:
| Point ID | Description | Location Detail |
|---|---|---|
| Point 1 & 2 | Fingertips | Center of the bottom edge of the gripper tips. |
| Point 3 & 4 | Gripper Ends | The rearmost points of the closing area (indicating the finger direction). |
| Point 5 | Tiger's Mouth | The center of the crossbeam (base of the gripper). |
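One way these five points can be used is to derive simple pose features, for example the gripper aperture (distance between the two fingertip points) and a rough approach direction (from the Tiger's Mouth toward the midpoint of the fingertips). The sketch below reuses the hypothetical point names from the earlier illustration; they are assumptions, not the published schema.

```python
import math

def gripper_features(points: dict) -> dict:
    """Derive simple pose features from the 5-point annotation.

    `points` maps hypothetical point names to [x, y] pixel coordinates;
    the key names are illustrative assumptions, not the official schema.
    """
    p1 = points["fingertip_left"]    # Point 1
    p2 = points["fingertip_right"]   # Point 2
    p5 = points["tigers_mouth"]      # Point 5

    # Aperture: distance between the two fingertip points (Points 1 & 2).
    aperture = math.dist(p1, p2)

    # Approach direction: vector from the Tiger's Mouth (Point 5)
    # to the midpoint of the fingertips.
    mid = [(p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2]
    direction = (mid[0] - p5[0], mid[1] - p5[1])

    return {"aperture_px": aperture, "approach_vector_px": direction}
```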
### 3. Quality Control
- Accuracy: All samples passed a rigorous quality-assurance process with a minimum accuracy rate of 95%.
- Occlusion Handling: Sequences in which the gripper is fully occluded, or shows only a side profile without clear features, are discarded.
## Key Statistics
- Total Examples: 50 annotated examples (Sample Dataset).
- Language: English (`en`).
- Splits: Train split available.
- Download Size: ~38.7 MB.
- Dataset Size: ~39.0 MB.
## Usage
This dataset is suitable for research and development in the field of Embodied AI and Computer Vision. It is specifically curated to support the following downstream tasks and application scenarios:
- Trajectory Prediction: The high-precision coordinate data allows models to be trained to predict the future path of a gripper from initial visual context (a minimal data-preparation sketch follows this list).
- Keyframe Extraction & Event Detection: By leveraging the labeled event types (e.g., "Contact", "Velocity Change"), models can be trained to automatically identify critical moments in long-horizon manipulation tasks.
- Fine-Grained Robotic Control: The 5-point annotation system provides detailed pose information, enabling Imitation Learning (IL) from human-demonstrated or teleoperated data for precise pick-and-place operations.
- Object Interaction Analysis: The dataset helps in understanding gripper-object relationships, specifically modeling the transition states when the gripper opens, closes, or makes contact with an object.
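For the trajectory-prediction use case above, one minimal formulation is to split each sample's keyframe sequence into an observed prefix and a future suffix to be predicted. The sketch below assumes keyframes are stored in temporal order and reuses the hypothetical `points` field name from the earlier illustration; it is a data-preparation sketch, not the dataset's prescribed pipeline.

```python
import json

def make_prediction_pair(annotations_json: str, context_len: int = 3):
    """Split an ordered keyframe sequence into (context, future targets).

    Assumes keyframes appear in temporal order and that each carries a
    "points" dict (hypothetical field name).
    """
    keyframes = json.loads(annotations_json)
    context = keyframes[:context_len]
    future = keyframes[context_len:]
    # A model would be trained to predict the future point coordinates
    # from the context keyframes (and, typically, the video frames).
    future_points = [kf.get("points") for kf in future]
    return context, future_points
```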
### Usage Example

```python
from datasets import load_dataset
import json

# Load the dataset
ds = load_dataset("Codatta/robotic-manipulation-trajectory", split="train")

# Access a sample
sample = ds[0]

# View the image
print(f"Trajectory ID: {sample['id']}")
sample['trajectory_image'].show()

# Parse annotations
annotations = json.loads(sample['annotations'])
print(f"Keyframes count: {len(annotations)}")
```
## License and Open-Source Details
- License: This dataset is released under the OpenRAIL license.