Update README.md

README.md
path: "val.parquet"
---

<h1 align="center">DocVQA 2026</h1>
<h3 align="center">ICDAR2026 Competition on Multimodal Reasoning over Documents in Multiple Domains</h3>

<p align="center">
  <a href="https://huggingface.co/datasets/VLR-CVC/DocVQA-2026">
    <img src="https://img.shields.io/badge/🤗_Hugging_Face-Dataset-blue.svg" alt="Hugging Face Dataset">
  </a>
</p>

Building upon previous DocVQA benchmarks, this evaluation dataset introduces challenging reasoning questions over a diverse collection of documents spanning eight domains: business reports, scientific papers, slides, posters, maps, comics, infographics, and engineering drawings.

By expanding coverage to new document domains and introducing richer question types, this benchmark seeks to push the boundaries of multimodal reasoning and promote the development of more general, robust document understanding models.

## Load & Inspect the Data

```python
from datasets import load_dataset

# 1. Load the dataset
dataset = load_dataset("VLR-CVC/DocVQA-2026", split="val")

# 2. Access a single sample (one document)
sample = dataset[5]

doc_id = sample["doc_id"]
category = sample["doc_category"]
print(f"Document ID: {doc_id} ({category})")

# 3. Access images
# 'document' is a list of PIL Images (one for each page)
images = sample["document"]
print(f"Number of pages: {len(images)}")
images[0].show()

# 4. Access questions and answers
questions = sample["questions"]
answers = sample["answers"]

# 5. Visualize Q&A pairs for a document
for q, a in zip(questions, answers):
    print("-" * 50)
    print(f"Question: {q['question']}")
    print(f"Answer: {a['answer']}")
print("-" * 50)
```

## Structure of a Sample

<details>
<summary><b>Click to expand the sample structure</b></summary>

```python
{
    'doc_id': 'comics_1',
    'doc_category': 'comics',
    'document': [
        <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1240x1754 at 0x7F...>,
        ...
        <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1240x1754 at 0x7F...>
    ],
    'questions': [
        {
            'question_id': 'comics_1_q1',
            'question': "How many times do people get in the head in Nyoka and the Witch Doctor's Madness?"
        }
    ],
    'answers': [
        {
            'question_id': 'comics_1_q1',
            'answer': '4'
        }
    ]
}
```

</details>
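
Since `questions` and `answers` are parallel lists that both carry a `question_id`, joining on the id is more robust than relying on list order. A minimal sketch, reusing the `dataset` object loaded above:

```python
def qa_pairs(sample):
    """Yield (question_id, question, answer) triples joined on question_id."""
    answers_by_id = {a["question_id"]: a["answer"] for a in sample["answers"]}
    for q in sample["questions"]:
        yield q["question_id"], q["question"], answers_by_id.get(q["question_id"])

for qid, question, answer in qa_pairs(dataset[5]):
    print(f"[{qid}] {question} -> {answer}")
```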

## Results

<p align="center">
  <img src="./assets/results_chart.jpg" alt="DocVQA 2026 Results Chart" width="80%">
  <br>
  <em>Figure 1: Performance comparison across domains.</em>
</p>
|
| 107 |
+
<table>
|
| 108 |
+
<thead>
|
| 109 |
+
<tr>
|
| 110 |
+
<th align="left">Category</th>
|
| 111 |
+
<th align="center">Gemini 3 Pro Preview</th>
|
| 112 |
+
<th align="center">GPT-5.2</th>
|
| 113 |
+
<th align="center">Gemini 3 Flash Preview</th>
|
| 114 |
+
<th align="center">GPT-5 Mini</th>
|
| 115 |
+
</tr>
|
| 116 |
+
</thead>
|
| 117 |
+
<tbody>
|
| 118 |
+
<tr>
|
| 119 |
+
<td align="left"><b>Overall Accuracy</b></td>
|
| 120 |
+
<td align="center"><b>0.375</b></td>
|
| 121 |
+
<td align="center">0.350</td>
|
| 122 |
+
<td align="center">0.3375</td>
|
| 123 |
+
<td align="center">0.225</td>
|
| 124 |
+
</tr>
|
| 125 |
+
<tr>
|
| 126 |
+
<td align="left">Business Report</td>
|
| 127 |
+
<td align="center">0.400</td>
|
| 128 |
+
<td align="center"><b>0.600</b></td>
|
| 129 |
+
<td align="center">0.200</td>
|
| 130 |
+
<td align="center">0.300</td>
|
| 131 |
+
</tr>
|
| 132 |
+
<tr>
|
| 133 |
+
<td align="left">Comics</td>
|
| 134 |
+
<td align="center">0.300</td>
|
| 135 |
+
<td align="center">0.200</td>
|
| 136 |
+
<td align="center"><b>0.400</b></td>
|
| 137 |
+
<td align="center">0.100</td>
|
| 138 |
+
</tr>
|
| 139 |
+
<tr>
|
| 140 |
+
<td align="left">Engineering Drawing</td>
|
| 141 |
+
<td align="center">0.300</td>
|
| 142 |
+
<td align="center">0.300</td>
|
| 143 |
+
<td align="center"><b>0.500</b></td>
|
| 144 |
+
<td align="center">0.200</td>
|
| 145 |
+
</tr>
|
| 146 |
+
<tr>
|
| 147 |
+
<td align="left">Infographics</td>
|
| 148 |
+
<td align="center"><b>0.700</b></td>
|
| 149 |
+
<td align="center">0.600</td>
|
| 150 |
+
<td align="center">0.500</td>
|
| 151 |
+
<td align="center">0.500</td>
|
| 152 |
+
</tr>
|
| 153 |
+
<tr>
|
| 154 |
+
<td align="left">Maps</td>
|
| 155 |
+
<td align="center">0.000</td>
|
| 156 |
+
<td align="center"><b>0.200</b></td>
|
| 157 |
+
<td align="center">0.000</td>
|
| 158 |
+
<td align="center">0.100</td>
|
| 159 |
+
</tr>
|
| 160 |
+
<tr>
|
| 161 |
+
<td align="left">Science Paper</td>
|
| 162 |
+
<td align="center">0.300</td>
|
| 163 |
+
<td align="center">0.400</td>
|
| 164 |
+
<td align="center"><b>0.500</b></td>
|
| 165 |
+
<td align="center">0.100</td>
|
| 166 |
+
</tr>
|
| 167 |
+
<tr>
|
| 168 |
+
<td align="left">Science Poster</td>
|
| 169 |
+
<td align="center"><b>0.300</b></td>
|
| 170 |
+
<td align="center">0.000</td>
|
| 171 |
+
<td align="center">0.200</td>
|
| 172 |
+
<td align="center">0.000</td>
|
| 173 |
+
</tr>
|
| 174 |
+
<tr>
|
| 175 |
+
<td align="left">Slide</td>
|
| 176 |
+
<td align="center"><b>0.700</b></td>
|
| 177 |
+
<td align="center">0.500</td>
|
| 178 |
+
<td align="center">0.400</td>
|
| 179 |
+
<td align="center">0.500</td>
|
| 180 |
+
</tr>
|
| 181 |
+
</tbody>
|
| 182 |
+
</table>
|
| 183 |
+
</div>
|
| 184 |
+

> [!NOTE]
> **Evaluation Parameters:**
> * **GPT models:** "High thinking" enabled, temperature set to `1.0`.
> * **Gemini models:** "High thinking" enabled, temperature set to `0.0`.
> * **Overall Accuracy** is the unweighted mean of the eight per-category scores.

> [!WARNING]
> **API Constraints:** All models were evaluated via their respective APIs. If a sample fails because the input files are too large, the result counts as a failure. For example, the file-input limit for OpenAI models is 50 MB, and several comics in this dataset exceed that threshold.
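
If you run your own API evaluation, it can help to estimate a document's encoded payload before sending it. A minimal sketch, assuming the PIL images from the loading snippet above; the 50 MB figure is the OpenAI limit cited in the warning, and the exact accounting (raw vs. base64 bytes, per file vs. per request) depends on the provider:

```python
import base64
import io

OPENAI_FILE_LIMIT_MB = 50  # limit cited above; assumed to apply to the encoded payload

def encoded_size_mb(images):
    """Approximate base64-encoded size of a document's pages, in MB."""
    total = 0
    for img in images:
        buf = io.BytesIO()
        img.save(buf, format="PNG")
        total += len(base64.b64encode(buf.getvalue()))
    return total / (1024 * 1024)

sample = dataset[5]
size = encoded_size_mb(sample["document"])
if size > OPENAI_FILE_LIMIT_MB:
    print(f"{sample['doc_id']}: ~{size:.1f} MB encoded; likely rejected by the API")
```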

## Dataset Structure

The dataset consists of:
1. **Images:** High-resolution PNG renders of document pages located in the `images/` directory.
2. **Annotations:** A Parquet file (`val.parquet`) containing the questions, answers, and references to the image paths.
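
The annotations can also be read directly, without the `datasets` library. A minimal sketch; the exact column layout of `val.parquet` is not documented here, so inspect it before relying on any field:

```python
import pandas as pd

# Load the raw annotation table and inspect its layout first;
# the column set is an assumption to verify on your copy.
df = pd.read_parquet("val.parquet")
print(df.columns.tolist())
print(df.head())
```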