---
license: apache-2.0
task_categories:
- image-text-to-text
tags:
- multimodal
- visual-question-answering
- spatial-reasoning
- reinforcement-learning
- transit-maps
language:
- en
---
# ReasonMap-Plus Dataset
This repository hosts the `ReasonMap-Plus` dataset, an extended dataset introduced in the paper [RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning](https://huggingface.co/papers/2510.02240).
## Paper Abstract
Fine-grained visual reasoning remains a core challenge for multimodal large language models (MLLMs). The recently introduced ReasonMap highlights this gap by showing that even advanced MLLMs struggle with spatial reasoning in structured and information-rich settings such as transit maps, a task of clear practical and scientific importance. However, standard reinforcement learning (RL) on such tasks is impeded by sparse rewards and unstable optimization. To address this, we first construct ReasonMap-Plus, an extended dataset that introduces dense reward signals through Visual Question Answering (VQA) tasks, enabling effective cold-start training of fine-grained visual understanding skills. Next, we propose RewardMap, a multi-stage RL framework designed to improve both visual understanding and reasoning capabilities of MLLMs. RewardMap incorporates two key designs. First, we introduce a difficulty-aware reward design that incorporates detail rewards, directly tackling the sparse rewards while providing richer supervision. Second, we propose a multi-stage RL scheme that bootstraps training from simple perception to complex reasoning tasks, offering a more effective cold-start strategy than conventional Supervised Fine-Tuning (SFT). Experiments on ReasonMap and ReasonMap-Plus demonstrate that each component of RewardMap contributes to consistent performance gains, while their combination yields the best results. Moreover, models trained with RewardMap achieve an average improvement of 3.47% across 6 benchmarks spanning spatial reasoning, fine-grained visual reasoning, and general tasks beyond transit maps, underscoring enhanced visual understanding and reasoning capabilities.
## Dataset Overview
`ReasonMap-Plus` addresses the core challenge of fine-grained visual reasoning for multimodal large language models (MLLMs). It extends the original `ReasonMap` dataset by introducing dense reward signals through Visual Question Answering (VQA) tasks, enabling effective cold-start training of fine-grained visual understanding skills. This dataset is crucial for the `RewardMap` framework, which aims to improve both visual understanding and reasoning capabilities of MLLMs in structured and information-rich settings like transit maps.
The dataset includes `ReasonMap-Plus` for evaluation and `ReasonMap-Train` for `RewardMap` training.
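For a quick look at the evaluation data, a minimal sketch using the `datasets` library is shown below. The repository id `FSCCS/ReasonMap-Plus` and the split name are assumptions based on this card; if the repository is not in a layout that `datasets` can parse directly, use the download script described in the Sample Usage section instead.
```python
# Minimal sketch (assumptions: repo id "FSCCS/ReasonMap-Plus" and a "test" split;
# adjust both to match the actual dataset layout on the Hub).
from datasets import load_dataset

reasonmap_plus = load_dataset("FSCCS/ReasonMap-Plus", split="test")
print(reasonmap_plus)     # column names and number of rows
print(reasonmap_plus[0])  # one VQA example
```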
## Links
- **Project Page:** [https://fscdc.github.io/RewardMap](https://fscdc.github.io/RewardMap)
- **Code Repository:** [https://github.com/fscdc/RewardMap](https://github.com/fscdc/RewardMap)
<p align="center">
  <img src="https://github.com/fscdc/RewardMap/raw/main/assets/rewardmap.svg" width="95%" alt="RewardMap Framework Overview" />
</p>
## Sample Usage
To get started with the RewardMap project and utilize the ReasonMap-Plus dataset, follow the steps below.
### 1. Install dependencies
If you run into any issues during installation, please feel free to open an issue and we will do our best to help you.
```bash
pip install -r requirements.txt
```
### 2. Download the dataset
<p align="center">
  <img src="https://github.com/fscdc/RewardMap/raw/main/assets/overview_dataset.svg" width="95%" alt="Dataset Overview" />
</p>
You can download `ReasonMap-Plus` (for evaluation) and `ReasonMap-Train` (for RewardMap training) from Hugging Face, or by running the following command:
```bash
python utils/download_dataset.py
```
Then, put the data under the folder `data`.
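As an alternative to the download script, the snippet below sketches how the data could be fetched directly with `huggingface_hub`. The repository ids used here are assumptions; replace them with the actual Hub paths if they differ.
```python
# Minimal sketch using huggingface_hub; the repo ids below are assumptions.
from huggingface_hub import snapshot_download

# ReasonMap-Plus (evaluation) -> ./data/ReasonMap-Plus
snapshot_download(
    repo_id="FSCCS/ReasonMap-Plus",   # assumed repo id
    repo_type="dataset",
    local_dir="data/ReasonMap-Plus",
)

# ReasonMap-Train (RewardMap training) -> ./data/ReasonMap-Train
snapshot_download(
    repo_id="FSCCS/ReasonMap-Train",  # assumed repo id
    repo_type="dataset",
    local_dir="data/ReasonMap-Train",
)
```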
### 3. Data Format Example
The data will be converted into a format like the following:
```json
{
  "conversations": [
    {
      "from": "human",
      "value": "<image> Please solve the multiple choice problem and put your answer (one of ABCD) in one \"\\boxed{}\". According to the subway map, how many intermediate stops are there between Danube Station and Ibn Battuta Station (except for these two stops)?\nA) 8\nB) 1\nC) 25\nD) 12"
    },
    {
      "from": "gpt",
      "value": "B"
    }
  ],
  "images": [
    "./maps/united_arab_emirates/dubai.png"
  ]
}
```
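For reference, here is a small sketch of how such an entry could be consumed. The file name `data/train.json` and the top-level list-of-entries layout are assumptions about how the converted data is stored.
```python
# Minimal sketch: read converted entries and pull out the question, answer,
# and image path. The file name "data/train.json" and the list-of-entries
# layout are assumptions about the converted format.
import json

with open("data/train.json", "r", encoding="utf-8") as f:
    entries = json.load(f)

example = entries[0]
question = next(t["value"] for t in example["conversations"] if t["from"] == "human")
answer = next(t["value"] for t in example["conversations"] if t["from"] == "gpt")
image_path = example["images"][0]

print(question)    # prompt containing the <image> placeholder
print(answer)      # ground-truth choice, e.g. "B"
print(image_path)  # relative path to the transit-map image
```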
## Citation
If you find this work useful in your research, please consider citing our paper:
```bibtex
@article{feng2025rewardmap,
title={RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning},
author={Feng, Sicheng and Tuo, Kaiwen and Wang, Song and Kong, Lingdong and Zhu, Jianke and Wang, Huan},
journal={arXiv preprint arXiv:2510.02240},
year={2025}
}
```