---
dataset_info:
  features:
  - name: split
    dtype: string
  - name: image_id
    dtype: string
  - name: file_name
    dtype: string
  - name: image_info
    struct:
    - name: data_source
      dtype: string
    - name: file_name
      dtype: string
    - name: height
      dtype: int64
    - name: id
      dtype: string
    - name: width
      dtype: int64
  - name: caption_info
    struct:
    - name: caption
      dtype: string
    - name: caption_ann
      dtype: string
    - name: id
      dtype: int64
    - name: image_id
      dtype: string
    - name: label_matched
      list:
      - name: mask_ids
        sequence: int64
      - name: txt_desc
        dtype: string
    - name: labels
      sequence: string
  - name: mask_annotations
    list:
    - name: area
      dtype: int64
    - name: bbox
      sequence: float64
    - name: category_id
      dtype: int64
    - name: id
      dtype: int64
    - name: image_id
      dtype: string
    - name: iscrowd
      dtype: int64
    - name: segmentation
      struct:
      - name: counts
        dtype: string
      - name: size
        sequence: int64
    - name: thing_or_stuff
      dtype: string
  - name: categories
    list:
    - name: id
      dtype: int64
    - name: name
      dtype: string
  splits:
  - name: train
    num_bytes: 29443350
    num_examples: 2070
  - name: val
    num_bytes: 4782919
    num_examples: 420
  - name: test
    num_bytes: 10976834
    num_examples: 980
  download_size: 25273455
  dataset_size: 45203103
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
---

# PanoCaps (PANORAMA): Panoptic grounded captioning via mask-guided refinement

[Paper]()
[Code](https://github.com/sarapieri/panorama_grounding)
[Dataset](https://huggingface.co/datasets/HuggingSara/PanoCaps)
[Project Page](https://www.di.ens.fr/willow/research/panorama/)

<p align="center">
  <img src="https://www.di.ens.fr/willow/research/panorama/resources/panorama_teaser.jpg"
       width="100%"
       alt="Panorama teaser image" />
</p>

PanoCaps is a unified dataset for **panoptic grounded captioning**. A model must generate a full-scene caption and ground every mentioned entity (things and stuff) with pixel-level masks.

Every caption:
- Is **human-written**
- Covers the **entire visible scene**
- Contains **rich open-vocabulary descriptions** beyond category labels
- Includes **inline grounding tags** referring to segmentation masks
- Supports **one-to-many** and **many-to-one** text ↔ mask mappings

This makes PanoCaps suitable for training and evaluating **vision–language models** that require both detailed scene understanding and fine-grained spatial grounding.

The repository includes:

1. **Raw annotations** in JSON format (`annotations/`) → best for **training & evaluation**
2. **A processed Hugging Face dataset** → best for **visualization & inspection**

This dataset is intended **exclusively for research and non-commercial use**.

## Dataset Details

### Dataset Description

This benchmark supports **panoptic grounded captioning**: a task requiring models to generate long-form, descriptive captions of the entire scene and link all mentioned entities (things and stuff) to pixel-level masks. Masks follow standard **COCO-style panoptic annotations**.

The dataset comprises **3,470 images** with a total of **34K panoptic regions**, averaging **~9 grounded entities per image**. The human-written captions are designed for maximum quality and detail:

* **Comprehensive:** Covers the entire visible scene.
* **Open-Vocabulary:** Entity descriptions extend beyond simple category labels.
* **Fully Grounded:** Uses in-text markers and explicit mapping structures (`label_matched`) to link text spans to masks, ensuring **>99% of regions are grounded**.

### Images

**Images are *not* included** in this repository.

To use the dataset, download the original images from the source datasets:

| Dataset | Data Download Link | Associated Publication |
|---------|--------------------|------------------------|
| ADE20K  | [ADE20K Download](https://groups.csail.mit.edu/vision/datasets/ADE20K/) | [ADE20K Paper](https://arxiv.org/abs/1608.05442) |
| COCONut | [COCONut GitHub](https://github.com/bytedance/coconut_cvpr2024) | [COCONut Paper](https://arxiv.org/abs/2404.08639) |
| VIPSeg  | [VIPSeg GitHub](https://github.com/VIPSeg-Dataset/VIPSeg-Dataset/) | [VIPSeg Paper](https://openaccess.thecvf.com/content/CVPR2022/html/Miao_Large-Scale_Video_Panoptic_Segmentation_in_the_Wild_A_Benchmark_CVPR_2022_paper.html) |

The JSON annotations reference these images by consistent `file_name` and `id` fields.
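
Since the images live in the source datasets, resolving an `images[*]` entry to a local file is left to the user. A hedged sketch, assuming illustrative local directories per `data_source` (the paths below are placeholders, not part of the dataset):

```python
from pathlib import Path

# Illustrative local layout; adjust to wherever you downloaded each source dataset.
IMAGE_ROOTS = {
    "ADE20K": Path("images/ade20k"),
    "COCONut": Path("images/coconut"),
    "VIPSeg": Path("images/vipseg"),
}

def image_path(image_entry: dict) -> Path:
    """Resolve an images[*] entry to a local file via data_source + file_name."""
    return IMAGE_ROOTS[image_entry["data_source"]] / image_entry["file_name"]
```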

### Repository Structure

<details>
<summary>Show Repository Structure</summary>
<pre>
PanoCaps/
│
├── 📁 annotations/
│   ├── 📄 test_caption.json
│   ├── 📄 test_mask.json
│   ├── 📄 train_caption.json
│   ├── 📄 train_mask.json
│   ├── 📄 val_caption.json
│   └── 📄 val_mask.json
├── 📁 data/   (parquet/HF version)
└── 📄 README.md
</pre>
</details>

### Recommended Usage

This dataset is provided in two complementary formats:

#### 1. Hugging Face Dataset Format (recommended for inspection & visualization)

The `train`, `val`, and `test` splits uploaded to the Hugging Face Hub combine **captioning** and **panoptic mask** information into a **single unified entry per image**. This format is ideal for browsing samples interactively in the Dataset Viewer or for quick experimentation.
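
For example, a minimal loading sketch with the `datasets` library; field names follow the schema in the YAML header above:

```python
from datasets import load_dataset

# Load the unified Hub version of PanoCaps (one entry per image).
ds = load_dataset("HuggingSara/PanoCaps", split="val")

sample = ds[0]
print(sample["image_id"], sample["image_info"]["data_source"])
print(sample["caption_info"]["caption"][:120])            # clean caption
print(len(sample["mask_annotations"]), "panoptic regions")
```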

#### 2. Original COCO-Style JSON Format (recommended for training & evaluation)

Raw annotations are provided under `annotations/` as pairs of caption files and mask files (e.g., `train_caption.json` / `train_mask.json`).

These follow the original COCO-style structure and are best suited for:
- Model training
- Model evaluation
- Direct integration into COCO-based pipelines

Caption and mask files can be matched using the shared `image_id` / `id` fields in `images[*]` and `annotations[*]`.
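
For example, a minimal sketch of pairing the two files for one split, using only the documented `image_id` fields:

```python
import json
from collections import defaultdict

# Load one split's caption and mask files.
with open("annotations/val_caption.json") as f:
    captions = json.load(f)
with open("annotations/val_mask.json") as f:
    masks = json.load(f)

# Group panoptic regions by the image they belong to.
regions_by_image = defaultdict(list)
for ann in masks["annotations"]:
    regions_by_image[ann["image_id"]].append(ann)

# Pair each caption with its image's regions via the shared image_id.
for cap in captions["annotations"]:
    regions = regions_by_image[cap["image_id"]]
    print(cap["image_id"], len(regions), "regions:", cap["caption"][:80])
```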

### Detailed COCO Format

<details>
<summary>Show Caption File Example (Structure + Single Entry)</summary>

```javascript
{
  "annotations": [
    {
      "caption": "The image shows a small, brightly lit bathroom dominated by a white tiled wall...",
      // Clean natural-language caption
      "caption_ann": "The image shows a small, brightly lit bathroom dominated by a <0:white tiled wall>...",
      // Caption with grounded <mask_id:text> references
      "label_matched": [
        { "mask_ids": [0], "txt_desc": "white tiled wall" },
        { "mask_ids": [5], "txt_desc": "white bathtub with chrome faucets" }
        // ...
      ],
      // Mapping text spans → one or more mask IDs
      // Masks may appear multiple times with different descriptions
      "id": 0,
      // Caption annotation ID
      "image_id": "00000006",
      // Matches the images[*].id field
      "labels": ["wall", "floor", "ceiling", "window", "curtain", "tub", "sink"]
      // All unique semantic labels from the original annotations
    }
  ],
  "images": [
    {
      "file_name": "00000006.jpg",
      // Image filename
      "height": 973,
      "width": 512,
      // Image resolution
      "id": "00000006",
      // Image identifier (matches annotation.image_id)
      "data_source": "ADE20K"
      // Image source
    }
  ]
}
```

</details>
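
As an aside, the inline tags in `caption_ann` can be recovered with a simple regex. This is an illustrative sketch (not part of the dataset tooling), assuming each tag carries a single mask ID as in the example above; `label_matched` remains the authoritative mapping for one-to-many cases:

```python
import re

# Illustrative helper: recover the inline <mask_id:text> grounding tags
# from a caption_ann string.
TAG = re.compile(r"<(\d+):([^>]+)>")

caption_ann = ("The image shows a small, brightly lit bathroom "
               "dominated by a <0:white tiled wall>...")

for mask_id, span in TAG.findall(caption_ann):
    print(int(mask_id), "->", span)  # 0 -> white tiled wall
```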

<details>
<summary>Show Mask File Example (Structure + Single Entry)</summary>

```javascript
{
  "annotations": [
    {
      "id": 0,
      // Unique ID of this panoptic region
      "image_id": "00000006",
      // Links this region to the image and caption (matches images[*].id and caption image_id)
      "category_id": 100,
      // Semantic category ID (from the original annotations)
      "segmentation": {
        "size": [973, 512],
        // Height and width of the full image (needed to decode the RLE mask)
        "counts": "d1`1Zk0P2C=C<D=C=C6J=..."
        // RLE-encoded mask in COCO panoptic format
      },
      "area": 214858,
      // Number of pixels covered by this segment
      "bbox": [0.0, 0.0, 511.0, 760.0],
      // COCO-format bounding box [x, y, width, height]
      "iscrowd": 0,
      // 0 for a normal segment, 1 if this region is a crowd
      "thing_or_stuff": "stuff"
      // Whether this region is an object-like "thing" or background-like "stuff"
    }
  ],
  "images": [
    {
      "file_name": "00000006.jpg",
      // Image file name (in the original dataset)
      "height": 973,
      "width": 512,
      // Image resolution
      "id": "00000006",
      // Image identifier (matches annotations[*].image_id and caption image_id)
      "data_source": "ADE20K"
      // Image source
    }
  ],
  "categories": [
    {
      "id": 1,
      // Category ID (referenced by annotations[*].category_id)
      "name": "object"
      // Human-readable category name
    }
  ]
}
```

</details>
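
To turn a region's `segmentation` into a binary mask, a minimal sketch using `pycocotools`, assuming the `counts` field is a standard compressed COCO RLE string (which `pycocotools` expects as bytes):

```python
import numpy as np
from pycocotools import mask as mask_utils

def decode_region(segmentation: dict) -> np.ndarray:
    """Decode one region's RLE segmentation into a binary H x W mask."""
    rle = {
        "size": segmentation["size"],               # [height, width]
        "counts": segmentation["counts"].encode(),  # pycocotools expects bytes
    }
    return mask_utils.decode(rle)  # uint8 array; 1 inside the region

# e.g., for one entry of val_mask.json loaded as `masks`:
# binary = decode_region(masks["annotations"][0]["segmentation"])
# print(binary.shape, int(binary.sum()))  # (H, W), pixel count ≈ "area"
```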

---

## Curation and Annotation Details

PanoCaps was built to overcome the limitations of prior grounded captioning datasets (e.g., auto-generated captions, limited vocabulary, and incomplete grounding). Our goal was to create a resource where captions describe every meaningful region using open-vocabulary language, with explicit grounding for each referenced entity.

The creation process involved four stages:

1. **Image Selection:** A diverse subset of images was curated from ADE20K, COCONut, and VIPSeg to ensure visual quality and suitability for dense grounding.
2. **Captioning:** Professional annotators wrote long-form, fine-grained scene descriptions highlighting attributes, relationships, and all visible entities.
3. **Grounding:** Annotators tagged textual references with `<mask_id:description>` markers and produced `label_matched` structures that map text spans to one or more segmentation masks.
4. **Validation:** A second QC stage verified the correctness of grounding IDs, completeness of region coverage, and annotation consistency.

**Data Producers:** The base panoptic masks were sourced from the original datasets (ADE20K, COCONut, VIPSeg). However, all **captions and grounding annotations** were created specifically for PanoCaps by paid professional annotators following internal guidelines.

---

## License (Research Only)

Because this repository merges, normalizes, and redistributes content from existing datasets, the combined dataset is provided **strictly for research and non-commercial use**.

Commercial use is **not permitted**. Users must comply with the licenses of each original source dataset.

---

## Citation

If you find our work useful for your research, please consider citing our [paper]():

```
@article{YOUR_CITATION_HERE,
  title={Your Title},
  author={Your Name},
  year={2024}
}
```