---
license: apache-2.0
task_categories:
  - robotics
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: camera_images
      list: image
    - name: depth_images
      list: image
    - name: normal_images
      list: image
    - name: frame_id
      dtype: int32
    - name: scene_id
      dtype: string
  splits:
    - name: train
      num_bytes: 3671744232.849
      num_examples: 1473
  download_size: 3336228908
  dataset_size: 3671744232.849
---

RoboTransfer-RealData

Project Page | Paper | GitHub

RoboTransfer-RealData is a real-world robotic manipulation dataset collected using the ALOHA-AgileX robot system. It was introduced as part of the paper "RoboTransfer: Geometry-Consistent Video Diffusion for Robotic Visual Policy Transfer".

The dataset contains real-world trajectories used to evaluate policy transfer from synthetic data generated by RoboTransfer, a diffusion-based framework designed for geometry-consistent robotic data synthesis.

Dataset Description

The dataset includes multi-modal visual data for robotic manipulation tasks (a loading example follows the list):

  • camera_images: RGB frames captured from the robot's camera system.
  • depth_images: Corresponding depth maps for geometric conditioning.
  • normal_images: Estimated surface normal maps.
  • frame_id: The sequential index of the frame.
  • scene_id: Identifier for specific recorded scenes.
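A minimal loading sketch using the Hugging Face `datasets` library is shown below. The repository ID `your-org/RoboTransfer-RealData` is a placeholder, not the dataset's actual Hub ID; replace it with the ID of this repository.

```python
from datasets import load_dataset

# Placeholder repository ID; substitute this dataset's actual Hub ID.
REPO_ID = "your-org/RoboTransfer-RealData"

# Load the single "train" split declared in the card's configs.
ds = load_dataset(REPO_ID, split="train")

sample = ds[0]
print(sample["scene_id"], sample["frame_id"])

# camera_images, depth_images, and normal_images are lists of PIL images.
rgb_frames = sample["camera_images"]
print(f"{len(rgb_frames)} RGB frames; first frame size: {rgb_frames[0].size}")
```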

Usage

As described in the RoboTransfer GitHub repository, you can convert the raw RGB frames from this dataset into the RoboTransfer format with geometric (depth and normal) conditioning using the following script:

script/process_real.sh
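
To run that script on a local copy of the data, one option is to download the repository files with `huggingface_hub` first. The sketch below assumes a placeholder repository ID and output directory; the exact arguments of `process_real.sh` are defined in the RoboTransfer GitHub repository.

```python
from huggingface_hub import snapshot_download

# Placeholder repository ID and local directory; replace with real values.
local_dir = snapshot_download(
    repo_id="your-org/RoboTransfer-RealData",
    repo_type="dataset",
    local_dir="./RoboTransfer-RealData",
)
print("Dataset downloaded to:", local_dir)
# Then run script/process_real.sh from the RoboTransfer GitHub repository
# on these files (see that repository for the expected arguments).
```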

Citation

If you use this dataset or the RoboTransfer framework in your research, please cite:

@misc{liu2025robotransfergeometryconsistentvideodiffusion,
      title={RoboTransfer: Geometry-Consistent Video Diffusion for Robotic Visual Policy Transfer},
      author={Liu Liu and Xiaofeng Wang and Guosheng Zhao and Keyu Li and Wenkang Qin and Jiaxiong Qiu and Zheng Zhu and Guan Huang and Zhizhong Su},
      year={2025},
      eprint={2505.23171},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.23171},
}