---
dataset_info:
  features:
  - name: run_id
    dtype: string
  - name: frame
    dtype: int32
  - name: timestamp
    dtype: float32
  - name: image_front
    dtype: image
  - name: image_front_left
    dtype: image
  - name: image_front_right
    dtype: image
  - name: image_rear
    dtype: image
  - name: location_x
    dtype: float32
  - name: location_y
    dtype: float32
  - name: location_z
    dtype: float32
  - name: rotation_pitch
    dtype: float32
  - name: rotation_yaw
    dtype: float32
  - name: rotation_roll
    dtype: float32
  - name: velocity_x
    dtype: float32
  - name: velocity_y
    dtype: float32
  - name: velocity_z
    dtype: float32
  - name: speed_kmh
    dtype: float32
  - name: throttle
    dtype: float32
  - name: steer
    dtype: float32
  - name: brake
    dtype: float32
  - name: nearby_vehicles_50m
    dtype: int32
  - name: total_npc_vehicles
    dtype: int32
  - name: total_npc_walkers
    dtype: int32
  - name: map_name
    dtype: string
  - name: weather_cloudiness
    dtype: float32
  - name: weather_precipitation
    dtype: float32
  - name: weather_fog_density
    dtype: float32
  - name: weather_sun_altitude
    dtype: float32
  - name: vehicles_spawned
    dtype: int32
  - name: walkers_spawned
    dtype: int32
  - name: duration_seconds
    dtype: int32
  splits:
  - name: train
    num_bytes: 155077419480.6
    num_examples: 56200
  - name: validation
    num_bytes: 14948709540
    num_examples: 4800
  - name: test
    num_bytes: 17602075134
    num_examples: 7200
  download_size: 189226141844
  dataset_size: 187628204154.6
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: mit
tags:
- vision-to-control
- imitation-learning
- autonomous-driving
- multimodal
- computer-vision
- reinforcement-learning
language:
- en
pretty_name: CARLA Autopilot Image Dataset
size_categories:
- 10K<n<100K
task_categories:
- image-feature-extraction
- any-to-any
- reinforcement-learning
---
# CARLA Autopilot Images Dataset
> **Note:** A newer, extended version of this dataset is available: the 🤗 CARLA Autopilot Multimodal Dataset 🤗. It includes semantic segmentation, LiDAR, 2D bounding boxes, and additional environment metadata. Use it if your research requires multimodal signals beyond the RGB images and vehicle state/control data provided here.
This dataset contains autonomous driving data collected from the CARLA simulator using its built-in autopilot.
## Dataset Structure
- The train/val/test split is made by run, not by frame, so frames from the same run never appear in more than one split
- Total train samples: 56.2K
- Total val samples: 4.8K
- Total test samples: 7.2K
- Runs processed: 24
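Because the split is made by run, the sets of `run_id` values in the three splits should be pairwise disjoint. A minimal sanity-check sketch (the helper name is hypothetical, not part of the dataset):

```python
def splits_are_disjoint_by_run(*run_id_lists):
    """Return True if no run_id appears in more than one split.

    Each argument is an iterable of run_id values from one split.
    """
    sets = [set(ids) for ids in run_id_lists]
    # If the union is as large as the sum of the parts, nothing overlaps.
    return len(set().union(*sets)) == sum(len(s) for s in sets)
```

With the real dataset, each argument would be one split's `run_id` column, e.g. `dataset["train"]["run_id"]`.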
## Features

### Images
Multiple camera views are available depending on the run configuration:
- `image_front`: Front-facing camera view
- `image_front_left`: Front-left camera view
- `image_front_right`: Front-right camera view
- `image_rear`: Rear-facing camera view
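Since not every run records all four views, a loader may want to check which camera columns are actually populated in a given example. A small sketch, assuming missing views are stored as `None` (the helper is my own, not part of the dataset API):

```python
# The four camera columns defined in the dataset schema.
CAMERA_COLUMNS = ["image_front", "image_front_left", "image_front_right", "image_rear"]

def available_views(example):
    """Return the camera column names that hold an image in this example."""
    return [col for col in CAMERA_COLUMNS if example.get(col) is not None]
```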
### Vehicle State

- Position: `location_x`, `location_y`, `location_z`
- Orientation: `rotation_pitch`, `rotation_yaw`, `rotation_roll`
- Velocity: `velocity_x`, `velocity_y`, `velocity_z`, `speed_kmh`
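Assuming the velocity components are in m/s (CARLA's native unit), `speed_kmh` should equal the Euclidean norm of the velocity vector scaled by 3.6. A sketch of that conversion, useful as a consistency check:

```python
import math

def speed_kmh_from_velocity(vx, vy, vz):
    """Convert a velocity vector in m/s to speed in km/h."""
    return math.sqrt(vx**2 + vy**2 + vz**2) * 3.6
```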
### Vehicle Controls (Targets)

- `throttle`: Throttle input in [0.0, 1.0]
- `steer`: Steering input in [-1.0, 1.0]
- `brake`: Brake input in [0.0, 1.0]
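For imitation learning, these three fields form the regression target. A minimal sketch that extracts them from an example and clips to the documented ranges (the helper name is hypothetical):

```python
def control_targets(example):
    """Extract (throttle, steer, brake) clipped to their documented ranges."""
    throttle = min(max(example["throttle"], 0.0), 1.0)
    steer = min(max(example["steer"], -1.0), 1.0)
    brake = min(max(example["brake"], 0.0), 1.0)
    return (throttle, steer, brake)
```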
### Environment

- Traffic density: `nearby_vehicles_50m`, `total_npc_vehicles`, `total_npc_walkers`
- Weather conditions: `weather_cloudiness`, `weather_precipitation`, `weather_fog_density`, `weather_sun_altitude`
- Map information: `map_name`
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("immanuelpeter/carla-autopilot-images")

train_dataset = dataset["train"]
val_dataset = dataset["validation"]
test_dataset = dataset["test"]
```
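A vision-to-control pipeline might then pair each frame's front image with its control triplet. A hedged sketch that works on any iterable of examples, including a streaming split (the generator is my own, not part of the dataset API):

```python
def frames_to_pairs(examples):
    """Yield (front_image, (throttle, steer, brake)) pairs for imitation learning."""
    for ex in examples:
        yield ex["image_front"], (ex["throttle"], ex["steer"], ex["brake"])
```

With the real dataset, this could be applied directly, e.g. `frames_to_pairs(dataset["train"])`.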
## Citation
If you use this dataset, please cite the CARLA simulator:
```bibtex
@inproceedings{Dosovitskiy17,
  title     = {CARLA: An Open Urban Driving Simulator},
  author    = {Alexey Dosovitskiy and German Ros and Felipe Codevilla and Antonio Lopez and Vladlen Koltun},
  booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
  pages     = {1--16},
  year      = {2017}
}
```