i2vec committed · Commit 9462edd · verified · Parent(s): 329b3ae

Update README.md

Files changed (1): README.md (+63 −3)
README.md (updated):
---
license: mit
---
# MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval

[![arXiv](https://img.shields.io/badge/arXiv-2506.12364-b31b1b.svg)](https://arxiv.org/abs/2506.12364)
[![Hugging Face](https://img.shields.io/badge/huggingface-MMR5-yellow.svg)](https://huggingface.co/i2vec/MM-R5)
[![Github](https://img.shields.io/badge/Github-MMR5-black.svg)](https://github.com/i2vec/MM-R5)
****

# 📖 Table of Contents
- [📖 Table of Contents](#-table-of-contents)
- [📒 News](#-news)
- [📖 Introduction](#-introduction)
- [📊 Results](#-results)
- [🚀 Getting Started](#-getting-started)
- [🏷️ License](#️-license)
- [🖋️ Citation](#️-citation)
- [❤️ Acknowledgements](#️-acknowledgements)

# 📒 News
- **2025-06-20**: Our model [MM-R5](https://huggingface.co/i2vec/MM-R5) is now publicly available on Hugging Face!
- **2025-06-14**: Our publication [MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval](https://arxiv.org/abs/2506.12364) is now available!

# 📖 Introduction
We introduce **MM-R5**, a novel *Multimodal Reasoning-Enhanced ReRanker* designed to improve document retrieval in complex, multimodal settings. Unlike traditional rerankers that treat candidates as isolated inputs, MM-R5 incorporates explicit chain-of-thought reasoning across textual, visual, and structural modalities to better assess relevance. The model follows a two-stage training paradigm: during the supervised fine-tuning (SFT) stage, it is trained to produce structured reasoning chains over multimodal content. To support this, we design a principled data construction method that generates high-quality reasoning traces aligned with retrieval intent, enabling the model to learn interpretable and effective decision paths. In the second stage, reinforcement learning is applied to further optimize reranking performance using carefully designed reward functions, including task-specific ranking accuracy and output format validity. This combination of reasoning supervision and reward-driven optimization allows MM-R5 to deliver both accurate and interpretable reranking decisions. Experiments on the MMDocIR benchmark show that MM-R5 achieves state-of-the-art top-k retrieval performance, outperforming strong unimodal and large-scale multimodal baselines in complex document understanding scenarios.

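To make the reinforcement-learning stage more concrete, below is a minimal sketch of a combined reward of the kind described above. The `<think>...</think><answer>...</answer>` tag template, the recall@k-style ranking score, and the equal weighting are illustrative assumptions for this sketch, not the exact reward functions used in the paper.

```python
import re

# Hypothetical reward components for the RL stage (names, template, and
# weighting are illustrative assumptions, not the paper's exact design).

def format_reward(response: str) -> float:
    """1.0 if the response follows the assumed <think>...</think><answer>...</answer>
    template (output format validity), 0.0 otherwise."""
    pattern = r"<think>.+?</think>\s*<answer>.+?</answer>"
    return 1.0 if re.fullmatch(pattern, response.strip(), flags=re.DOTALL) else 0.0

def ranking_reward(predicted_order: list[int], relevant: set[int], k: int = 5) -> float:
    """Recall@k-style score: fraction of ground-truth relevant candidates
    that appear in the top-k of the predicted ranking."""
    if not relevant:
        return 0.0
    top_k = set(predicted_order[:k])
    return len(relevant & top_k) / len(relevant)

def total_reward(response: str, predicted_order: list[int], relevant: set[int],
                 alpha: float = 0.5) -> float:
    """Weighted sum of format validity and ranking accuracy."""
    return alpha * format_reward(response) + (1 - alpha) * ranking_reward(predicted_order, relevant)

# Toy example: candidate 2 is the only relevant page and is ranked first.
response = "<think>Page 2 contains the table the query asks about.</think><answer>[2, 0, 1]</answer>"
print(total_reward(response, predicted_order=[2, 0, 1], relevant={2}))  # 1.0
```
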
# 🚀 Getting Started
You can find the `QueryReranker` helper [here](https://github.com/i2vec/MM-R5/blob/main/examples/reranker.py) in the GitHub repository and use it as follows:
```python
from reranker import QueryReranker

# Load the MM-R5 checkpoint from the Hugging Face Hub
reranker = QueryReranker("i2vec/MM-R5")

query = "What is the financial benefit of the partnership?"
image_list = [
    "/path/to/images/image1.png",
    "/path/to/images/image2.png",
    "/path/to/images/image3.png",
    "/path/to/images/image4.png",
    "/path/to/images/image5.png"
]

# Rerank the candidate page images by relevance to the query
predicted_order = reranker.rerank(query, image_list)

print(f"Query: {query}")
print(f"Reranked order: {predicted_order}")
```

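In a full retrieval pipeline, MM-R5 is typically applied as a second-stage reranker over candidates returned by a first-stage retriever. The sketch below shows one way to wire this up; `retrieve_top_k` and the corpus paths are hypothetical placeholders, and it assumes `rerank` returns indices into the candidate list, as the example above suggests.

```python
from reranker import QueryReranker  # same helper as in the snippet above

def retrieve_top_k(query: str, corpus: list[str], k: int = 5) -> list[str]:
    """Hypothetical first-stage retriever; in practice this would be an
    embedding-based visual document retriever. Here it just truncates."""
    return corpus[:k]

reranker = QueryReranker("i2vec/MM-R5")

corpus = [f"/path/to/pages/page_{i}.png" for i in range(100)]
query = "What is the financial benefit of the partnership?"

candidates = retrieve_top_k(query, corpus, k=5)   # coarse candidate pages
order = reranker.rerank(query, candidates)        # MM-R5 second-stage reranking
reranked = [candidates[i] for i in order]         # assumes `order` holds indices into `candidates`

print(reranked)
```
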
# 🖋️ Citation
If you use MM-R5 in your research, please cite our paper:
```bibtex
@article{xu2025mm,
  title={MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval},
  author={Xu, Mingjun and Dong, Jinhan and Hou, Jue and Wang, Zehui and Li, Sihang and Gao, Zhifeng and Zhong, Renxin and Cai, Hengxing},
  journal={arXiv preprint arXiv:2506.12364},
  year={2025}
}
```