---
license: mit
task_categories:
  - robotics
  - video-classification
tags:
  - minecraft
  - vla
  - vision-language-action
  - gaming
  - behavioral-cloning
size_categories:
  - 1M<n<10M
---

# Minecraft VLA Stage 1: Action Pretraining Data

Vision-Language-Action training data for Minecraft, processed from OpenAI's VPT contractor dataset.

## Dataset Description

This dataset contains frame-action pairs from Minecraft gameplay, designed for training VLA models following the Lumine methodology.

### Source

- **Original**: OpenAI VPT Contractor Data (7.x subset)
- **Videos**: 17,886 videos (330 hours of early-game gameplay)
- **Task**: "Play Minecraft", with a focus on the first 30 minutes of new worlds

### Format

Each sample contains:

| Field | Type | Description |
|-------|------|-------------|
| `image` | bytes | 640x360 JPEG frame |
| `video_id` | string | Source video identifier |
| `frame_idx` | int | Frame number at 5 Hz |
| `action` | string | Lumine-format action string |
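
For reference, the table above corresponds roughly to the following `datasets.Features` definition. This is a sketch only; the exact feature types (in particular whether `image` is stored as raw bytes or as a `datasets.Image` feature) should be checked against the dataset itself:

```python
from datasets import Features, Value

# Approximate schema of one sample (illustrative; verify against the actual dataset).
features = Features({
    "image": Value("binary"),     # 640x360 JPEG-encoded frame
    "video_id": Value("string"),  # source VPT video identifier
    "frame_idx": Value("int64"),  # frame index at 5 Hz
    "action": Value("string"),    # Lumine-format action string
})
```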

### Action Format

```
<|action_start|> mouse_x mouse_y scroll ; K1 ; K2 ; K3 ; K4 <|action_end|>
```

- `mouse_x`, `mouse_y`: Mouse delta (-1000 to 1000)
- `scroll`: Hotbar scroll (always 0; VPT uses number keys instead)
- `K1` to `K4`: Key combinations, one per 50 ms chunk

Example:

```
<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>
```
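
As a concrete illustration, a minimal parser for this action string might look like the sketch below. It follows only the layout described above (header of three integers, then four `;`-separated key chunks); it is not an official parser, and the function name `parse_action` is our own:

```python
def parse_action(action: str):
    """Split a Lumine-format action string into mouse deltas, scroll, and key chunks.

    Sketch only: assumes "<|action_start|> mx my scroll ; K1 ; K2 ; K3 ; K4 <|action_end|>".
    """
    body = action.replace("<|action_start|>", "").replace("<|action_end|>", "").strip()
    head, *chunks = [part.strip() for part in body.split(";")]
    mouse_x, mouse_y, scroll = (int(v) for v in head.split())
    # Each chunk is a space-separated key combination held for one 50 ms step.
    keys = [chunk.split() if chunk else [] for chunk in chunks]
    return mouse_x, mouse_y, scroll, keys

# Example from above:
# parse_action("<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>")
# -> (45, -12, 0, [["W"], ["W", "Space"], ["W", "LMB"], ["W", "LMB"]])
```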

## Processing Details

- **Frame rate**: 5 FPS (downsampled from VPT's 20 FPS)
- **Action chunks**: 4 per frame (4 x 50 ms = 200 ms per frame; see the sketch below)
- **Filtering**: Idle frames removed, loading screens filtered out
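
To make the timing concrete, the following sketch shows how one 5 Hz frame relates to VPT's 20 FPS action stream under the scheme above. It is illustrative only; the actual preprocessing code is not part of this dataset, and the assumption that chunk indices align as `frame_idx * 4` is ours:

```python
# One frame every 200 ms (5 Hz); VPT actions arrive every 50 ms (20 FPS),
# so each frame is paired with the 4 action chunks that follow it.
VPT_FPS = 20
TARGET_HZ = 5
CHUNKS_PER_FRAME = VPT_FPS // TARGET_HZ  # 4 chunks x 50 ms = 200 ms

def chunk_indices(frame_idx: int) -> list[int]:
    """Indices of the 20 FPS action steps grouped under a given 5 Hz frame (sketch)."""
    start = frame_idx * CHUNKS_PER_FRAME
    return list(range(start, start + CHUNKS_PER_FRAME))

# chunk_indices(0) -> [0, 1, 2, 3]; chunk_indices(1) -> [4, 5, 6, 7]
```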

## Usage

```python
from datasets import load_dataset

# Streaming (recommended - no full download required)
ds = load_dataset("TESS-Computer/minecraft-vla-stage1", split="train", streaming=True)

for sample in ds:
    image = sample["image"]  # PIL Image or raw JPEG bytes
    action = sample["action"]
    # Process...
```
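
Continuing the example above: because the `image` field may come back as raw JPEG bytes when streaming, you will typically want to decode it before use. A minimal sketch, assuming the bytes are a standard JPEG payload (the helper `to_pil` is our own):

```python
import io
from PIL import Image

def to_pil(image_field):
    """Return a PIL image whether the field is already decoded or raw JPEG bytes."""
    if isinstance(image_field, Image.Image):
        return image_field
    return Image.open(io.BytesIO(image_field)).convert("RGB")

frame = to_pil(sample["image"])  # 640x360 RGB frame
```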

## Training Pipeline

This is Stage 1 of a 3-stage training pipeline:

1. **Stage 1 (this dataset)**: Action pretraining - learn the observation→action mapping
2. **Stage 2**: Instruction following - add task instructions from JARVIS-VLA
3. **Stage 3**: Reasoning - add chain-of-thought reasoning before complex actions

## Citation

If you use this dataset, please cite:

## License

MIT License. Original VPT data is released under MIT by OpenAI.