---
dataset_info:
  features:
  - name: split
    dtype: string
  - name: image_id
    dtype: string
  - name: file_name
    dtype: string
  - name: image_info
    struct:
    - name: data_source
      dtype: string
    - name: file_name
      dtype: string
    - name: height
      dtype: int64
    - name: id
      dtype: string
    - name: width
      dtype: int64
  - name: caption_info
    struct:
    - name: caption
      dtype: string
    - name: caption_ann
      dtype: string
    - name: id
      dtype: int64
    - name: image_id
      dtype: string
    - name: label_matched
      list:
      - name: mask_ids
        sequence: int64
      - name: txt_desc
        dtype: string
    - name: labels
      sequence: string
  - name: mask_annotations
    list:
    - name: area
      dtype: int64
    - name: bbox
      sequence: float64
    - name: category_id
      dtype: int64
    - name: id
      dtype: int64
    - name: image_id
      dtype: string
    - name: iscrowd
      dtype: int64
    - name: segmentation
      struct:
      - name: counts
        dtype: string
      - name: size
        sequence: int64
    - name: thing_or_stuff
      dtype: string
  - name: categories
    list:
    - name: id
      dtype: int64
    - name: name
      dtype: string
  splits:
  - name: train
    num_bytes: 29443350
    num_examples: 2070
  - name: val
    num_bytes: 4782919
    num_examples: 420
  - name: test
    num_bytes: 10976834
    num_examples: 980
  download_size: 25273455
  dataset_size: 45203103
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
---
# PanoCaps (PANORAMA): Panoptic grounded captioning via mask-guided refinement
[Code](https://github.com/sarapieri/panorama_grounding)
[Dataset](https://huggingface.co/datasets/HuggingSara/PanoCaps)
[Project Page](https://www.di.ens.fr/willow/research/panorama/)
<p align="center">
<img src="https://www.di.ens.fr/willow/research/panorama/resources/panorama_teaser.jpg"
width="100%"
alt="Panorama teaser image" />
</p>
PanoCaps is a unified dataset for **panoptic grounded captioning**. A model must generate a full-scene caption and ground every mentioned entity (things and stuff) with pixel-level masks.
Every caption:
- Is **human-written**
- Covers the **entire visible scene**
- Contains **rich open-vocabulary descriptions** beyond category labels
- Includes **inline grounding tags** referring to segmentation masks
- Supports **one-to-many** and **many-to-one** text ↔ mask mappings
This makes PanoCaps suitable for training and evaluating **vision-language models** requiring both detailed scene understanding and fine-grained spatial grounding.
The repository includes:
1. **Raw annotations** in JSON format (`annotations/`), best for **training & evaluation**
2. **A processed Hugging Face dataset**, best for **visualization & inspection**
This dataset is intended **exclusively for research and non-commercial use**.
## Dataset Details
### Dataset Description
This benchmark supports **panoptic grounded captioning**: a task requiring models to generate long-form, descriptive captions for the entire scene and link all mentioned entities (things and stuff) to pixel-level masks. Masks follow standard **COCO-style panoptic annotations**.
The dataset comprises **3,470 images** with a total of **34K panoptic regions**, averaging **~9 grounded entities per image**. The human-written captions are designed for maximum quality and detail:
* **Comprehensive:** Covers the entire visible scene.
* **Open-Vocabulary:** Entity descriptions extend beyond simple category labels.
* **Fully Grounded:** Uses in-text markers and explicit mapping structures (`label_matched`) to link text spans to masks, ensuring **>99% of regions are grounded**.
### Images
**Images are *not* included** in this repository.
To use the dataset, download the original images from the source datasets:
| Dataset | Data Download Link | Associated Publication |
|---------|---------------------------------------------|-----------------------------------------|
| ADE20K | [ADE20K Download](https://groups.csail.mit.edu/vision/datasets/ADE20K/) | [ADE20K Paper](https://arxiv.org/abs/1608.05442) |
| COCONut | [COCONut GitHub](https://github.com/bytedance/coconut_cvpr2024) | [COCONut Paper](https://arxiv.org/abs/2404.08639) |
| VIPSeg | [VIPSeg GitHub](https://github.com/VIPSeg-Dataset/VIPSeg-Dataset/) | [VIPSeg Paper](https://openaccess.thecvf.com/content/CVPR2022/html/Miao_Large-Scale_Video_Panoptic_Segmentation_in_the_Wild_A_Benchmark_CVPR_2022_paper.html) |
The JSON annotations reference these images by consistent `file_name` and `id`.
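For example, once the source images are downloaded, each record's local path can be resolved from its `data_source` and `file_name`. A minimal sketch, assuming a hypothetical local layout with one folder per source dataset:

```python
import os

# Hypothetical local layout (adjust to wherever you placed the downloaded images).
IMAGE_ROOTS = {
    "ADE20K": "images/ade20k",
    "COCONut": "images/coconut",
    "VIPSeg": "images/vipseg",
}

def resolve_image_path(image_info: dict) -> str:
    """Return the local path of an image record from its data_source and file_name."""
    return os.path.join(IMAGE_ROOTS[image_info["data_source"]], image_info["file_name"])
```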
### Repository Structure
<details>
<summary>Show Repository Structure</summary>
<pre>
PanoCaps/
│
├── 📁 annotations/
│   ├── 📄 test_caption.json
│   ├── 📄 test_mask.json
│   ├── 📄 train_caption.json
│   ├── 📄 train_mask.json
│   ├── 📄 val_caption.json
│   └── 📄 val_mask.json
├── 📁 data/ (parquet/HF version)
└── 📄 README.md
</pre>
</details>
### Recommended Usage
This dataset is provided in two complementary formats:
### **1. Hugging Face Dataset Format (recommended for inspection & visualization)**
The `train`, `val`, and `test` splits uploaded to the Hugging Face Hub combine **captioning** and **panoptic mask** information into a **single unified entry per image**. This format is ideal for browsing samples interactively in the Dataset Viewer or quick experimentation.
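A minimal sketch of loading this unified format for inspection, assuming the `datasets` library (field names follow the schema above):

```python
from datasets import load_dataset

# Annotations only: images themselves must be downloaded from the source datasets.
ds = load_dataset("HuggingSara/PanoCaps", split="val")

sample = ds[0]
print(sample["image_id"], sample["image_info"]["data_source"])
print(sample["caption_info"]["caption"][:120])              # clean caption
print(len(sample["mask_annotations"]), "panoptic regions")
for match in sample["caption_info"]["label_matched"]:
    print(match["mask_ids"], "->", match["txt_desc"])       # mask IDs -> grounded text span
```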
### **2. Original COCO-Style JSON Format (recommended for training & evaluation)**
Raw annotations are provided under `annotations/` as pairs of Caption files and Mask files (e.g., `train_caption.json` / `train_mask.json`).
These follow the original COCO-style structure and are best suited for:
- Model training
- Model evaluation
- Direct integration into COCO-based pipelines
Caption and mask files can be matched using the shared `image_id` / `id` fields in `images[*]` and `annotations[*]`.
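As a sketch, a split's caption and mask files can be joined on those fields like this:

```python
import json
from collections import defaultdict

with open("annotations/val_caption.json") as f:
    captions = json.load(f)
with open("annotations/val_mask.json") as f:
    masks = json.load(f)

# Group panoptic regions by the image they belong to.
masks_by_image = defaultdict(list)
for ann in masks["annotations"]:
    masks_by_image[ann["image_id"]].append(ann)

# Pair each caption annotation with its image entry and its regions.
images_by_id = {img["id"]: img for img in captions["images"]}
for cap in captions["annotations"]:
    image = images_by_id[cap["image_id"]]
    regions = masks_by_image[cap["image_id"]]
    print(image["file_name"], len(regions), "regions,",
          len(cap["label_matched"]), "grounded text spans")
```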
### Detailed COCO Format
<details>
<summary>Show Caption File Example (Structure + Single Entry)</summary>
```javascript
{
"annotations": [
{
"caption": "The image shows a small, brightly lit bathroom dominated by a white tiled wall...",
// Clean natural-language caption
"caption_ann": "The image shows a small, brightly lit bathroom dominated by a <0:white tiled wall>...",
// Caption with grounded <mask_id:text> references
"label_matched": [
{ "mask_ids": [0], "txt_desc": "white tiled wall" },
{ "mask_ids": [5], "txt_desc": "white bathtub with chrome faucets" }
// ...
],
// Mapping text spans → one or more mask IDs
// Masks may appear multiple times with different descriptions
"id": 0,
// Caption annotation ID
"image_id": "00000006",
// Matches the images[*].id field
"labels": ["wall", "floor", "ceiling", "window", "curtain", "tub", "sink"]
// All unique semantic labels from the original annotations
}
],
"images": [
{
"file_name": "00000006.jpg",
// Image filename
"height": 973,
"width": 512,
// Image resolution
"id": "00000006",
// Image identifier (matches annotation.image_id)
"data_source": "ADE20K"
// Image source
}
]
}
```
</details>
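The inline grounding tags in `caption_ann` can be pulled out with a small regex. This sketch assumes single-ID tags of the form `<mask_id:text>`, as in the example above; `label_matched` remains the authoritative text-to-mask mapping, including one-to-many cases.

```python
import re

# Matches inline grounding tags such as "<0:white tiled wall>".
TAG_PATTERN = re.compile(r"<(\d+):([^>]+)>")

def extract_groundings(caption_ann: str) -> list[tuple[int, str]]:
    """Return (mask_id, text span) pairs found in an annotated caption."""
    return [(int(m.group(1)), m.group(2)) for m in TAG_PATTERN.finditer(caption_ann)]

def strip_tags(caption_ann: str) -> str:
    """Recover the plain caption by keeping only the text inside each tag."""
    return TAG_PATTERN.sub(lambda m: m.group(2), caption_ann)

example = "The image shows a bathroom dominated by a <0:white tiled wall>."
print(extract_groundings(example))   # [(0, 'white tiled wall')]
print(strip_tags(example))           # ...dominated by a white tiled wall.
```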
<details>
<summary>Show Mask File Example (Structure + Single Entry)</summary>
```javascript
{
"annotations": [
{
"id": 0,
// Unique ID of this panoptic region
"image_id": "00000006",
// Links this region to the image and caption (matches images[*].id and caption image_id)
"category_id": 100,
// Semantic category ID (from the original annotations)
"segmentation": {
"size": [973, 512],
// Height and width of the full image (needed to decode the RLE mask)
"counts": "d1`1Zk0P2C=C<D=C=C6J=..."
// RLE-encoded mask in COCO panoptic format
},
"area": 214858,
// Number of pixels covered by this segment
"bbox": [0.0, 0.0, 511.0, 760.0],
// COCO-format bounding box [x, y, width, height]
"iscrowd": 0,
// 0 for normal segment, 1 if this region is a crowd
"thing_or_stuff": "stuff"
// Whether this region is an object-like "thing" or background-like "stuff"
}
],
"images": [
{
"file_name": "00000006.jpg",
// Image file name (in the original dataset)
"height": 973,
"width": 512,
// Image resolution
"id": "00000006"
// Image identifier (matches annotations[*].image_id and caption image_id)
"data_source": "ADE20K"
// Image source
}
],
"categories": [
{
"id": 1,
// Category ID (referenced by annotations[*].category_id)
"name": "object"
// Human-readable category name
}
]
}
```
</details>
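A hedged sketch of decoding one segment into a binary mask, assuming `pycocotools` is installed and that `counts` is a compressed RLE string as shown above:

```python
import numpy as np
from pycocotools import mask as mask_utils

def decode_segment(mask_annotation: dict) -> np.ndarray:
    """Decode a COCO-style RLE segmentation into a binary (height, width) mask."""
    seg = mask_annotation["segmentation"]
    rle = {
        "size": seg["size"],                      # [height, width] of the full image
        "counts": seg["counts"].encode("utf-8"),  # pycocotools expects bytes counts
    }
    return mask_utils.decode(rle)

# Sanity check: the decoded mask should cover exactly `area` pixels.
# binary_mask = decode_segment(masks["annotations"][0])
# assert int(binary_mask.sum()) == masks["annotations"][0]["area"]
```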
---
## Curation and Annotation Details
PanoCaps was built to overcome the limitations of prior grounded captioning datasets (e.g., auto-generated captions, limited vocabulary, and incomplete grounding). Our goal was to create a resource where captions describe every meaningful region using open-vocabulary language, with explicit grounding for each referenced entity.
The creation process involved four stages:
1. **Image Selection:** A diverse subset of images was curated from ADE20K, COCONut, and VIPSeg to ensure visual quality and suitability for dense grounding.
2. **Captioning:** Professional annotators wrote long-form, fine-grained scene descriptions, highlighting attributes, relationships, and all visible entities.
3. **Grounding:** Annotators tagged textual references with `<mask_id:description>` markers and produced **label_matched** structures that map text spans to one or more segmentation masks.
4. **Validation:** A second QC stage verified the correctness of grounding IDs, completeness of region coverage, and annotation consistency.
**Data Producers:** The base panoptic masks were sourced from the original datasets (ADE20K, COCONut, VIPSeg). However, all **captions and grounding annotations** were created specifically for PanoCaps by paid professional annotators following internal guidelines.
---
## License (Research Only)
Because this repository merges, normalizes, and redistributes content from existing source datasets, the combined dataset is provided **strictly for research and non-commercial use**.
Commercial use is **not permitted**. Users must comply with the licenses of each original source dataset.
---
## Citation
If you find our work useful for your research, please consider citing our [paper]():
```
@article{YOUR_CITATION_HERE,
title={Your Title},
author={Your Name},
year={2024}
}
```