dataset_info:
features:
- name: split
dtype: string
- name: image_id
dtype: string
- name: file_name
dtype: string
- name: image_info
struct:
- name: data_source
dtype: string
- name: file_name
dtype: string
- name: height
dtype: int64
- name: id
dtype: string
- name: width
dtype: int64
- name: caption_info
struct:
- name: caption
dtype: string
- name: caption_ann
dtype: string
- name: id
dtype: int64
- name: image_id
dtype: string
- name: label_matched
list:
- name: mask_ids
sequence: int64
- name: txt_desc
dtype: string
- name: labels
sequence: string
- name: mask_annotations
list:
- name: area
dtype: int64
- name: bbox
sequence: float64
- name: category_id
dtype: int64
- name: id
dtype: int64
- name: image_id
dtype: string
- name: iscrowd
dtype: int64
- name: segmentation
struct:
- name: counts
dtype: string
- name: size
sequence: int64
- name: thing_or_stuff
dtype: string
- name: categories
list:
- name: id
dtype: int64
- name: name
dtype: string
splits:
- name: train
num_bytes: 29443350
num_examples: 2070
- name: val
num_bytes: 4782919
num_examples: 420
- name: test
num_bytes: 10976834
num_examples: 980
download_size: 25273455
dataset_size: 45203103
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
PanoCaps (PANORAMA): Panoptic grounded captioning via mask-guided refinement
PanoCaps is a unified dataset for panoptic grounded captioning. A model must generate a full-scene caption and ground every mentioned entity (things and stuff) with pixel-level masks.
Every caption:
- Is human-written
- Covers the entire visible scene
- Contains rich open-vocabulary descriptions beyond category labels
- Includes inline grounding tags referring to segmentation masks
- Supports one-to-many and many-to-one text → mask mappings
This makes PanoCaps suitable for training and evaluating vision-language models requiring both detailed scene understanding and fine-grained spatial grounding.
The repository includes:
- Raw annotations in JSON format (annotations/) → best for training & evaluation
- A processed Hugging Face dataset → best for visualization & inspection
This dataset is intended exclusively for research and non-commercial use.
Dataset Details
Dataset Description
This benchmark supports panoptic grounded captioning: a task requiring models to generate long-form, descriptive captions for the entire scene and link all mentioned entities (things and stuff) to pixel-level masks. Masks follow standard COCO-style panoptic annotations.
The dataset comprises 3,470 images with a total of 34K panoptic regions, averaging ~9 grounded entities per image. The human-written captions are designed for maximum quality and detail:
- Comprehensive: Covers the entire visible scene.
- Open-Vocabulary: Entity descriptions extend beyond simple category labels.
- Fully Grounded: Uses in-text markers and explicit mapping structures (label_matched) to link text spans to masks, ensuring >99% of regions are grounded.
Images
Images are not included in this repository.
To use the dataset, download the original images from the source datasets:
| Dataset | Data Download Link | Associated Publication |
|---|---|---|
| ADE20K | ADE20K Download | ADE20K Paper |
| COCONut | COCONut GitHub | COCONut Paper |
| VIPSeg | VIPSeg GitHub | VIPSeg Paper |
The JSON annotations reference these images through consistent file_name and id fields.
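For example, a minimal path-resolution sketch; the local directory layout below is an assumption about your own setup, not something this dataset prescribes:

```python
import os

# Hypothetical local layout: one folder per source dataset (adjust to your setup).
IMAGE_ROOTS = {
    "ADE20K": "images/ade20k",
    "COCONut": "images/coconut",
    "VIPSeg": "images/vipseg",
}

def resolve_image_path(image_info: dict) -> str:
    """Build a local image path from an images[*] entry via data_source + file_name."""
    root = IMAGE_ROOTS[image_info["data_source"]]
    return os.path.join(root, image_info["file_name"])

# Example with the entry shown later in this card:
# resolve_image_path({"file_name": "00000006.jpg", "data_source": "ADE20K"})
# -> "images/ade20k/00000006.jpg"
```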
Repository Structure
Show Repository Structure
PanoCaps/
│
├── annotations/
│   ├── test_caption.json
│   ├── test_mask.json
│   ├── train_caption.json
│   ├── train_mask.json
│   ├── val_caption.json
│   └── val_mask.json
├── data/          (parquet/HF version)
└── README.md
Recommended Usage
This dataset is provided in two complementary formats:
1. Hugging Face Dataset Format (recommended for inspection & visualization)
The train, val, and test splits uploaded to the Hugging Face Hub combine captioning and panoptic mask information into a single unified entry per image. This format is ideal for browsing samples interactively in the Dataset Viewer or for quick experimentation.
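For instance, a minimal loading sketch; the repository ID below is a placeholder, so substitute the actual Hub ID of this dataset:

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with the actual Hub path of this dataset.
ds = load_dataset("ORG/PanoCaps", split="val")

sample = ds[0]
print(sample["image_id"], sample["image_info"]["data_source"])
print(sample["caption_info"]["caption"][:120])      # clean natural-language caption
print(sample["caption_info"]["label_matched"][0])   # first text-span -> mask_ids mapping
print(len(sample["mask_annotations"]), "panoptic regions")
```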
2. Original COCO-Style JSON Format (recommended for training & evaluation)
Raw annotations are provided under annotations/ as pairs of Caption files and Mask files (e.g., train_caption.json / train_mask.json).
These follow the original COCO-style structure and are best suited for:
- Model training
- Model evaluation
- Direct integration into COCO-based pipelines
Caption and mask files can be matched using the shared image_id / id fields in images[*] and annotations[*].
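A minimal sketch of that join, assuming (as in the examples below) that the mask_ids listed in label_matched refer to the id field of the mask annotations belonging to the same image:

```python
import json
from collections import defaultdict

# Load one split's caption/mask pair from annotations/.
with open("annotations/val_caption.json") as f:
    captions = json.load(f)
with open("annotations/val_mask.json") as f:
    masks = json.load(f)

# Index panoptic regions by image_id, then by region id.
regions_by_image = defaultdict(dict)
for ann in masks["annotations"]:
    regions_by_image[ann["image_id"]][ann["id"]] = ann

# Join each caption with the masks it grounds.
for cap in captions["annotations"]:
    regions = regions_by_image[cap["image_id"]]
    for match in cap["label_matched"]:
        linked = [regions[i] for i in match["mask_ids"] if i in regions]
        # match["txt_desc"] is the text span describing the regions in `linked`
```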
Detailed COCO Format
Show Caption File Example (Structure + Single Entry)
{
"annotations": [
{
"caption": "The image shows a small, brightly lit bathroom dominated by a white tiled wall...",
// Clean natural-language caption
"caption_ann": "The image shows a small, brightly lit bathroom dominated by a <0:white tiled wall>...",
// Caption with grounded <mask_id:text> references
"label_matched": [
{ "mask_ids": [0], "txt_desc": "white tiled wall" },
{ "mask_ids": [5], "txt_desc": "white bathtub with chrome faucets" }
// ...
],
// Mapping text spans → one or more mask IDs
// Masks may appear multiple times with different descriptions
"id": 0,
// Caption annotation ID
"image_id": "00000006",
// Matches the images[*].id field
"labels": ["wall", "floor", "ceiling", "window", "curtain", "tub", "sink"]
// All unique semantic labels from the original annotations
}
],
"images": [
{
"file_name": "00000006.jpg",
// Image filename
"height": 973,
"width": 512,
// Image resolution
"id": "00000006",
// Image identifier (matches annotation.image_id)
"data_source": "ADE20K"
// Image source
}
]
}
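The inline <mask_id:text> markers in caption_ann can be recovered with a regular expression. A minimal sketch, assuming the tag syntax shown above (integer ids, optionally comma-separated, followed by a colon and the text span):

```python
import re

# Matches tags like "<0:white tiled wall>"; comma-separated ids are allowed as a
# conservative assumption for tags that reference several masks at once.
TAG_RE = re.compile(r"<([\d,\s]+):([^>]+)>")

def parse_caption_ann(caption_ann: str):
    """Return (mask_ids, text_span) pairs extracted from a grounded caption."""
    spans = []
    for ids, text in TAG_RE.findall(caption_ann):
        spans.append(([int(i) for i in ids.split(",")], text.strip()))
    return spans

print(parse_caption_ann("... dominated by a <0:white tiled wall>..."))
# -> [([0], 'white tiled wall')]
```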
Show Mask File Example (Structure + Single Entry)
{
"annotations": [
{
"id": 0,
// Unique ID of this panoptic region
"image_id": "00000006",
// Links this region to the image and caption (matches images[*].id and caption image_id)
"category_id": 100,
// Semantic category ID (from the original annotations)
"segmentation": {
"size": [973, 512],
// Height and width of the full image (needed to decode the RLE mask)
"counts": "d1`1Zk0P2C=C<D=C=C6J=..."
// RLE-encoded mask in COCO panoptic format
},
"area": 214858,
// Number of pixels covered by this segment
"bbox": [0.0, 0.0, 511.0, 760.0],
// COCO-format bounding box [x, y, width, height]
"iscrowd": 0,
// 0 for normal segment, 1 if this region is a crowd
"thing_or_stuff": "stuff"
// Whether this region is an object-like "thing" or background-like "stuff"
}
],
"images": [
{
"file_name": "00000006.jpg",
// Image file name (in the original dataset)
"height": 973,
"width": 512,
// Image resolution
"id": "00000006"
// Image identifier (matches annotations[*].image_id and caption image_id)
"data_source": "ADE20K"
// Image source
}
],
"categories": [
{
"id": 1,
// Category ID (referenced by annotations[*].category_id)
"name": "object"
// Human-readable category name
}
]
}
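To turn a segmentation entry into a binary mask, the pycocotools RLE decoder can typically be used. A minimal sketch, assuming counts is a compressed COCO RLE string as the format above suggests:

```python
import numpy as np
from pycocotools import mask as mask_utils  # pip install pycocotools

def decode_segment(segmentation: dict) -> np.ndarray:
    """Decode a COCO-style RLE segmentation into an H x W binary mask."""
    rle = {
        "size": segmentation["size"],                      # [height, width]
        "counts": segmentation["counts"].encode("utf-8"),  # pycocotools expects bytes
    }
    return mask_utils.decode(rle)  # uint8 array of shape (height, width)

# Example with a mask-file annotation `ann`:
# m = decode_segment(ann["segmentation"])
# m.sum() should equal ann["area"] if the counts are compressed COCO RLE as assumed
```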
Curation and Annotation Details
PanoCaps was built to overcome the limitations of prior grounded captioning datasets (e.g., auto-generated captions, limited vocabulary, and incomplete grounding). Our goal was to create a resource where captions describe every meaningful region using open-vocabulary language, with explicit grounding for each referenced entity. The creation process involved four stages:
- Image Selection: A diverse subset of images was curated from ADE20K, COCONut, and VIPSeg to ensure visual quality and suitability for dense grounding.
- Captioning: Professional annotators wrote long-form, fine-grained scene descriptions, highlighting attributes, relationships, and all visible entities.
- Grounding: Annotators tagged textual references with <mask_id:description> markers and produced label_matched structures that map text spans to one or more segmentation masks.
- Validation: A second QC stage verified the correctness of grounding IDs, completeness of region coverage, and annotation consistency.
Data Producers: The base panoptic masks were sourced from the original datasets (ADE20K, COCONut, VIPSeg). However, all captions and grounding annotations were created specifically for PanoCaps by paid professional annotators following internal guidelines.
License (Research Only)
Because this repository merges, normalizes, and redistributes content from already existing datasets, the combined dataset is provided strictly for research and non-commercial use. Commercial use is not permitted. Users must comply with the licenses of each original source dataset.
Citation
If you find our work useful for your research, please consider citing our paper:
@article{YOUR_CITATION_HERE,
title={Your Title},
author={Your Name},
year={2024}
}