3v324v23 committed
Commit 6636348 · 1 Parent(s): 22a7cb0

Add initial dataset

Files changed (4)
  1. README.md +68 -3
  2. assets/icon.png +3 -0
  3. assets/pipeline.png +3 -0
  4. vlm_router_data.tar.gz +3 -0
README.md CHANGED
@@ -1,3 +1,68 @@
- ---
- license: apache-2.0
- ---
+ <div align="center">
+
+ <p align="center">
+ <img src="assets/icon.png" width="220" alt="VL-RouterBench logo" />
+ </p>
+
+ ### VL-RouterBench: A Benchmark for Vision–Language Model Routing
+
+ [![arXiv](https://img.shields.io/badge/arXiv-2512.23562-red.svg)](https://arxiv.org/abs/2512.23562)
+ [![GitHub](https://img.shields.io/badge/GitHub-Repository-black.svg)](https://github.com/K1nght/VL-RouterBench)
+
+ </div>
+
+ ## Overview
+
+ This repository provides a clean, reproducible implementation of **VL-RouterBench**, a benchmark and toolkit for **routing across a pool of Vision–Language Models (VLMs)** under both **performance** and **performance–cost** objectives.
+
+ <p align="center">
+ <img src="assets/pipeline.png" width="900" alt="VL-RouterBench pipeline" />
+ </p>
+
+ ## 📦 Data Preparation
+
+ VL-RouterBench converts [**VLMEvalKit**](https://github.com/open-compass/VLMEvalKit) outputs into a unified routing benchmark.
+
+ To make data setup easier, we provide a pre-packaged archive **`vlm_router_data.tar.gz`** that contains everything needed to run the pipeline. You can download it from any of the following channels and extract it under the repo root:
+
+ - **Google Drive**: [vlm_router_data.tar.gz](https://drive.google.com/file/d/1Va18MW8nJqvatxDXQDQq0t9NAqr93hMg/view?usp=sharing)
+ - **Baidu Netdisk**: [vlm_router_data.tar.gz](https://pan.baidu.com/s/1D_P8YwY_E5kDA5dUB-ovng) (code: xb1s)
+ - **Hugging Face**: [vlm_router_data.tar.gz](https://huggingface.co/datasets/KinghtH/VL-RouterBench)
+
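The archive is large (about 6.8 GB, per the Git LFS pointer in this commit), so it is worth verifying the download before extracting. A minimal sketch in Python; the expected digest is the `oid sha256` from the LFS pointer, and the helper name `sha256_of` is ours, not part of the repo:

```python
import hashlib

# Expected checksum from the Git LFS pointer of vlm_router_data.tar.gz
EXPECTED_SHA256 = "0d39aeaf6b3a309396b05e0e0516a8ef69619cb11cb4221e8b83baac086012e6"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a (potentially multi-GB) file in 1 MiB chunks to keep memory flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage, once the download has finished:
# assert sha256_of("vlm_router_data.tar.gz") == EXPECTED_SHA256
```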
+ After downloading, extract it as:
+
+ ```bash
+ tar -xzf vlm_router_data.tar.gz
+ ```
+
+ By default, the pipeline expects the following directories (relative to the repo root):
+
+ ```
+ vlm_router_data/
+   VLMEvalKit_evaluation/   # required (for is_correct / evaluation)
+   VLMEvalKit_inference/    # required for accurate output-token counting (Step 2)
+   TSV_images/              # optional (for TSV-packed image datasets)
+ ```
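Since it is easy to extract one directory level off, a quick sanity check that the required directories landed where the pipeline expects them can save a confusing failure later. A small sketch based on the layout above; the helper name `check_layout` is ours, not part of the repo:

```python
from pathlib import Path

# Subdirectories the pipeline expects under vlm_router_data/ (see layout above).
REQUIRED = ("VLMEvalKit_evaluation", "VLMEvalKit_inference")
OPTIONAL = ("TSV_images",)

def check_layout(root: str = "vlm_router_data") -> list:
    """Return the list of required subdirectories that are missing under root."""
    base = Path(root)
    return [d for d in REQUIRED if not (base / d).is_dir()]

if __name__ == "__main__":
    missing = check_layout()
    if missing:
        print("Missing required directories:", missing)
    else:
        print("Layout looks good.")
```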
+
+ Notes:
+ - **`VLMEvalKit_evaluation/`** is used by Steps 1 & 4 (it contains the correctness signals).
+ - **`VLMEvalKit_inference/`** is used by Step 2 (it contains the real model outputs used to count output tokens).
+ - **`TSV_images/`** is used by the routers during training and inference to make routing decisions.
+
+ ## 📝 Citation
+
+ If you find this benchmark useful, please cite:
+
+ ```bibtex
+ @misc{huang2025vlrouterbenchbenchmarkvisionlanguagemodel,
+   title={VL-RouterBench: A Benchmark for Vision-Language Model Routing},
+   author={Zhehao Huang and Baijiong Lin and Jingyuan Zhang and Jingying Wang and Yuhang Liu and Ning Lu and Tao Li and Xiaolin Huang},
+   year={2025},
+   eprint={2512.23562},
+   archivePrefix={arXiv},
+   primaryClass={cs.LG},
+   url={https://arxiv.org/abs/2512.23562},
+ }
+ ```
+
+ ---
assets/icon.png ADDED

Git LFS Details

  • SHA256: c0ecc17f3583b27ba7fda19bdf02f434ba072b0560853bd71c570a690e34298f
  • Pointer size: 131 Bytes
  • Size of remote file: 264 kB
assets/pipeline.png ADDED

Git LFS Details

  • SHA256: 9c01047389b3de9adec04369f6bed9afcb1401162021e7ce8b8b755d64af61f8
  • Pointer size: 132 Bytes
  • Size of remote file: 1.29 MB
vlm_router_data.tar.gz ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d39aeaf6b3a309396b05e0e0516a8ef69619cb11cb4221e8b83baac086012e6
+ size 6815073070