---
task_categories:
- visual-question-answering
- document-question-answering
language:
- en
tags:
- multimodal
- benchmark
- document-understanding
configs:
- config_name: default
  data_files:
  - split: val
    path: val.parquet
---
# DocVQA 2026 | ICDAR2026 Competition on Multimodal Reasoning over Documents in Multiple Domains
Building upon previous DocVQA benchmarks, this evaluation dataset introduces challenging reasoning questions over a diverse collection of documents spanning eight domains: business reports, scientific papers, slides, posters, maps, comics, infographics, and engineering drawings.
By expanding coverage to new document domains and introducing richer question types, this benchmark seeks to push the boundaries of multimodal reasoning and promote the development of more general, robust document understanding models.
## 🏆 Competition Hosting & Test Set
The official DocVQA 2026 competition is hosted on the Robust Reading Competition (RRC) platform, which provides the standardized framework for our leaderboards, submissions, and result tracking.
Test Set Status: Coming Soon! For the time being, please use the provided validation set and the evaluation code.
Join the Challenge on the RRC Platform
## Load & Inspect the Data
```python
from datasets import load_dataset

# 1. Load the dataset
dataset = load_dataset("VLR-CVC/DocVQA-2026", split="val")

# 2. Access a single sample (one document)
sample = dataset[0]
doc_id = sample["doc_id"]
category = sample["doc_category"]
print(f"Document ID: {doc_id} ({category})")

# 3. Access Images
# 'document' is a list of PIL Images (one for each page)
images = sample["document"]
print(f"Number of pages: {len(images)}")
images[0].show()

# 4. Access Questions and Answers
questions = sample["questions"]
answers = sample["answers"]

# 5. Visualize Q&A pairs for a document
for q, q_id, a in zip(questions["question"], questions["question_id"], answers["answer"]):
    print("-" * 50)
    print(f"Question ID: {q_id}")
    print(f"Question: {q}")
    print(f"Answer: {a}")
print("-" * 50)
```
## Structure of a Sample
```json
{
  "doc_id": "maps_2",
  "doc_category": "maps",
  "preview": "<image>",
  "document": [
    "<image>"
  ],
  "questions": {
    "question_id": [
      "maps_2_q1",
      "maps_2_q2",
      "maps_2_q3",
      "maps_2_q4",
      "maps_2_q5"
    ],
    "question": [
      "By which kind of road are Colchester and Yantic connected?",
      "Which is the most populated town in the E-10 coordinates?",
      "What is the milage between Taunton and Dedham? Do not provide the unit.",
      "From Worcester I take highway 140 towards Taunton, I take the second macadam & gravel road that I encounter, continuing on that road, what town do I reach?",
      "If I follow highway 109 from Pittsfield to Northampton, how many towns do I cross (without counting start and ending location)?"
    ]
  },
  "answers": {
    "question_id": [
      "maps_2_q1",
      "maps_2_q2",
      "maps_2_q3",
      "maps_2_q4",
      "maps_2_q5"
    ],
    "answer": [
      "Macadam & Gravel",
      "Wareham",
      "27",
      "Woonsocket",
      "7"
    ]
  }
}
```
## Results
Figure 1: Performance comparison across domains.
| Category | Gemini 3 Pro Preview | GPT-5.2 | Gemini 3 Flash Preview | GPT-5 Mini |
|---|---|---|---|---|
| Overall Accuracy | 0.375 | 0.350 | 0.3375 | 0.225 |
| Business Report | 0.400 | 0.600 | 0.200 | 0.300 |
| Comics | 0.300 | 0.200 | 0.400 | 0.100 |
| Engineering Drawing | 0.300 | 0.300 | 0.500 | 0.200 |
| Infographics | 0.700 | 0.600 | 0.500 | 0.500 |
| Maps | 0.000 | 0.200 | 0.000 | 0.100 |
| Science Paper | 0.300 | 0.400 | 0.500 | 0.100 |
| Science Poster | 0.300 | 0.000 | 0.200 | 0.000 |
| Slide | 0.700 | 0.500 | 0.400 | 0.500 |
Evaluation Parameters:
- GPT Models: "High thinking" enabled, temperature set to `1.0`.
- Gemini Models: "High thinking" enabled, temperature set to `0.0`.

API Constraints:
> All models were evaluated via their respective APIs. If a sample fails because the input files are too large, the result counts as a failure. For example, the file input limit for OpenAI models is 50 MB, and several comics in this dataset surpass that threshold.
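If you plan to reproduce the API baselines, it can help to estimate a document's payload size before sending it. The sketch below is a rough, unofficial check: the `document_size_mb` helper is our own, it re-encodes each page as PNG and sums the byte counts, so the estimate may differ from what a given API actually measures.

```python
import io

def document_size_mb(sample):
    """Rough estimate of the total PNG payload of a document, in megabytes."""
    total_bytes = 0
    for page in sample["document"]:  # list of PIL Images
        buffer = io.BytesIO()
        page.save(buffer, format="PNG")
        total_bytes += buffer.getbuffer().nbytes
    return total_bytes / (1024 * 1024)

# Flag documents that may exceed a 50 MB upload limit
for sample in dataset:
    size_mb = document_size_mb(sample)
    if size_mb > 50:
        print(f"{sample['doc_id']}: ~{size_mb:.1f} MB, may be rejected by the API")
```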
## 📝 Submission Guidelines & Formatting Rules
To ensure fair and accurate evaluation across all participants, submissions are evaluated using automated metrics. Therefore, all model outputs must strictly adhere to the following formatting rules:
- Source Adherence: Only provide answers found directly within the document. If the question is unanswerable given the provided image, the response must be exactly `Unknown`.
- Multiple Answers: List multiple answers in their order of appearance, separated by a comma and a single space. Do not use the word "and". (Example: `Answer A, Answer B`)
- Numbers & Units: Convert units to their standardized abbreviations (e.g., use `kg` instead of "kilograms", `m` instead of "meters"). Always place a single space between the number and the unit. (Example: `50 kg`, `10 USD`)
- Percentages: Attach the `%` symbol directly to the number with no space. (Example: `50%`)
- Dates: Convert all dates to the standardized `YYYY-MM-DD` format. (Example: "Jan 1st 24" becomes `2024-01-01`)
- Decimals: Use a single period (`.`) as a decimal separator, never a comma. (Example: `3.14`)
- Thousands Separator: Do not use commas to separate large numbers. (Example: `1000`, not `1,000`)
- No Filler Text: Output only the requested data. Do not frame your answer in full sentences (e.g., avoid "The answer is...").
Final Output Format: When generating the final extracted data, your system must prefix the response with the following exact phrasing:

```
FINAL ANSWER: [Your formatted answer]
```
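As an illustration only (not the official evaluator), the snippet below shows one way to pull an answer out of the mandatory `FINAL ANSWER:` prefix and lightly normalize it toward the rules above. The `extract_final_answer` helper and the specific normalization steps are our own choices; the official pipeline may behave differently.

```python
import re

def extract_final_answer(model_output: str) -> str:
    """Extract the text after 'FINAL ANSWER:' and apply a few of the formatting rules."""
    match = re.search(r"FINAL ANSWER:\s*(.+)", model_output,
                      flags=re.IGNORECASE | re.DOTALL)
    if match is None:
        return "Unknown"                                # no well-formed answer found
    answer = match.group(1).strip()
    answer = answer.rstrip(".")                         # no trailing sentence punctuation
    answer = re.sub(r"\s*,\s*", ", ", answer)           # comma + single space between answers
    answer = re.sub(r"(\d)\s+%", r"\1%", answer)        # attach '%' directly to the number
    return answer

print(extract_final_answer("FINAL ANSWER: 50 kg"))      # -> '50 kg'
print(extract_final_answer("I could not find it."))     # -> 'Unknown'
```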
## Evaluation Code & Baselines
To ensure consistency and fairness, all submissions are evaluated using our official automated evaluation pipeline. This pipeline handles the extraction of your model's answers and applies both strict formatting checks (for numbers, dates, and units) and relaxed text matching (ANLS) for text-based answers.
You can find the complete, ready-to-use evaluation script in our official GitHub repository: 👉 VLR-CVC/DocVQA2026 GitHub Repository
What you will find in the repository:
- The Evaluator Script: The core logic used to parse your model's outputs and calculate the final scores. You can use this script to test and evaluate your predictions locally before making an official submission.
- The Baseline Master Prompt: We have included the exact prompt structure (`get_evaluation_prompt()`) used for our baseline experiments. This prompt is heavily engineered to enforce the competition's mandatory reasoning protocols and strict output formatting.
We highly recommend reviewing both the evaluation script and the Master Prompt. You are welcome to use the provided prompt out-of-the-box or adapt it to better guide your own custom models!
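For reference, here is a minimal sketch of the ANLS-style relaxed matching mentioned above: normalized Levenshtein similarity, zeroed below a threshold (0.5 is a common choice). The function names and exact normalization are our own; the script in the official repository is the authoritative implementation and may differ in details.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                   # deletion
                            curr[j - 1] + 1,               # insertion
                            prev[j - 1] + (ca != cb)))     # substitution
        prev = curr
    return prev[-1]

def anls(prediction: str, ground_truth: str, threshold: float = 0.5) -> float:
    """Normalized Levenshtein similarity, zeroed below the threshold."""
    p, g = prediction.strip().lower(), ground_truth.strip().lower()
    if not p and not g:
        return 1.0
    similarity = 1.0 - levenshtein(p, g) / max(len(p), len(g))
    return similarity if similarity >= threshold else 0.0

print(anls("Macadam & Gravel", "macadam & gravel"))  # -> 1.0
```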
## Dataset Structure
The dataset consists of:
- Images: High-resolution PNG renders of document pages located in the `images/` directory.
- Annotations: A Parquet file (`val.parquet`) containing the questions, answers, and references to the image paths.
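If you prefer to work with the raw annotations rather than the `datasets` loader, the Parquet file can be read directly. The sketch below uses pandas and assumes the column layout follows the sample structure shown above; the exact schema may differ.

```python
import pandas as pd

# Read the raw annotation table (requires pandas with a Parquet engine, e.g. pyarrow)
df = pd.read_parquet("val.parquet")

print(df.columns.tolist())              # inspect the actual schema
print(df["doc_category"].value_counts())  # documents per domain, if the column exists
```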