---
license: mit
task_categories:
- robotics
- video-classification
tags:
- minecraft
- vla
- vision-language-action
- gaming
- behavioral-cloning
size_categories:
- 1M<n<10M
---
# Minecraft VLA Stage 1: Action Pretraining Data
Vision-Language-Action training data for Minecraft, processed from OpenAI's VPT contractor dataset.
## Dataset Description
This dataset contains frame-action pairs from Minecraft gameplay, designed for training VLA models following the [Lumine](https://www.lumine-ai.org/) methodology.
### Source
- **Original**: [OpenAI VPT Contractor Data](https://github.com/openai/Video-Pre-Training) (7.x subset)
- **Videos**: 17,886 videos (~330 hours of early-game gameplay)
- **Task**: "Play Minecraft", with a focus on the first 30 minutes of newly created worlds
### Format
Each sample contains:
| Field | Type | Description |
|-------|------|-------------|
| `image` | bytes | 640x360 JPEG frame |
| `video_id` | string | Source video identifier |
| `frame_idx` | int | Frame number at 5 Hz |
| `action` | string | Lumine-format action string |
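Concretely, one sample looks roughly like the Python dict below. All values here are illustrative, made up for demonstration, and not taken from the dataset:

```python
# Illustrative sample record; every value below is invented for demonstration
sample = {
    "image": b"\xff\xd8\xff\xe0...",  # JPEG-encoded 640x360 frame bytes
    "video_id": "cheeky-cornflower-setter-0a1b2c3d",  # hypothetical identifier
    "frame_idx": 1500,  # at 5 Hz, frame 1500 is 300 s into the source video
    "action": "<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>",
}
```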
### Action Format
```
<|action_start|> mouse_x mouse_y scroll ; K1 ; K2 ; K3 ; K4 <|action_end|>
```
- `mouse_x`, `mouse_y`: Mouse delta (-1000 to 1000)
- `scroll`: Hotbar scroll (always 0; VPT uses number keys for hotbar selection)
- `K1` to `K4`: Key combinations, one per 50 ms chunk
**Example:**
```
<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>
```
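A minimal parser for this format might look like the sketch below; the function name and the returned field names are our own convention, not part of the dataset:

```python
def parse_action(action: str) -> dict:
    """Parse a Lumine-format action string into mouse deltas and key chunks."""
    # Strip the action markers, then split on ';' into mouse fields + 4 key chunks
    body = action.replace("<|action_start|>", "").replace("<|action_end|>", "").strip()
    parts = [p.strip() for p in body.split(";")]
    mouse_x, mouse_y, scroll = (int(v) for v in parts[0].split())
    # Remaining parts are the four 50 ms key-combination chunks (possibly empty)
    keys = [p.split() if p else [] for p in parts[1:]]
    return {"mouse_x": mouse_x, "mouse_y": mouse_y, "scroll": scroll, "keys": keys}

# parse_action("<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>")
# -> {'mouse_x': 45, 'mouse_y': -12, 'scroll': 0,
#     'keys': [['W'], ['W', 'Space'], ['W', 'LMB'], ['W', 'LMB']]}
```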
### Processing Details
- **Frame rate**: 5 FPS (downsampled from VPT's 20 FPS)
- **Action chunks**: 4 per frame (4 × 50 ms = 200 ms total)
- **Filtering**: Idle frames removed, loading screens filtered
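For concreteness, the timing arithmetic can be sketched as follows. This assumes downsampling keeps every 4th VPT frame, which is our reading of the numbers above rather than a documented guarantee:

```python
VPT_FPS = 20      # original VPT recording rate
DATASET_FPS = 5   # frame rate of this dataset
STRIDE = VPT_FPS // DATASET_FPS  # 4 VPT ticks per dataset frame

def frame_to_seconds(frame_idx: int) -> float:
    """Timestamp of a dataset frame within its source video."""
    return frame_idx / DATASET_FPS

def vpt_ticks_for_frame(frame_idx: int) -> range:
    """The four 20 FPS ticks whose actions make up this frame's 200 ms chunk."""
    start = frame_idx * STRIDE
    return range(start, start + STRIDE)
```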
## Usage
```python
from datasets import load_dataset

# Streaming (recommended - no download required)
ds = load_dataset("TESS-Computer/minecraft-vla-stage1", split="train", streaming=True)

for sample in ds:
    image = sample["image"]  # PIL Image or bytes
    action = sample["action"]
    # Process...
```
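Depending on how the `image` feature is typed, streaming may yield raw JPEG bytes rather than a decoded image. One way to normalize, assuming Pillow is installed:

```python
import io
from PIL import Image

def to_pil(image):
    """Return a PIL.Image, decoding raw JPEG bytes if necessary."""
    if isinstance(image, bytes):
        return Image.open(io.BytesIO(image))
    return image
```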
## Training Pipeline
This is Stage 1 of a 3-stage training pipeline:
1. **Stage 1** (this dataset): Action pretraining - learn observation→action mapping
2. **Stage 2**: Instruction following - add task instructions from JARVIS-VLA
3. **Stage 3**: Reasoning - add chain-of-thought before complex actions
## Citation
If you use this dataset, please cite:
- [OpenAI VPT](https://arxiv.org/abs/2206.11795) - Original contractor data
- [JARVIS-VLA](https://craftjarvis.github.io/JarvisVLA/) - Instruction annotations
- [Lumine](https://www.lumine-ai.org/) - Training methodology
## License
MIT License. Original VPT data is released under MIT by OpenAI.