Improve dataset card: Add paper link and task category
This PR enhances the dataset card for MMCricBench by:
- Adding a direct link to the research paper, [Mind the (Language) Gap: Towards Probing Numerical and Cross-Lingual Limits of LVLMs](https://huggingface.co/papers/2508.17334), in the introductory section.
- Including the `image-text-to-text` task category in the metadata, providing a more general classification for this VQA dataset.
- Removing the redundant link to the dataset's own Hugging Face page, which is implicitly the current page.
README.md (CHANGED)

```diff
@@ -1,15 +1,16 @@
 ---
+language:
+- en
+- hi
 license: cc-by-nc-sa-4.0
+size_categories:
+- 1K<n<10K
 task_categories:
 - table-question-answering
 - visual-question-answering
-language:
-- en
-- hi
+- image-text-to-text
 tags:
 - cricket
-size_categories:
-- 1K<n<10K
 configs:
 - config_name: default
   data_files:
@@ -45,10 +46,9 @@ dataset_info:
 # MMCricBench 🏏
 **Multimodal Cricket Scorecard Benchmark for VQA**
 
-
-
-**Dataset:** https://huggingface.co/datasets/DIALab/MMCricBench
+This repository contains the dataset for the paper [Mind the (Language) Gap: Towards Probing Numerical and Cross-Lingual Limits of LVLMs](https://huggingface.co/papers/2508.17334).
 
+MMCricBench evaluates **Large Vision-Language Models (LVLMs)** on **numerical reasoning**, **cross-lingual understanding**, and **multi-image reasoning** over semi-structured cricket scorecard images. It includes English and Hindi scorecards; all questions/answers are in English.
 
 ---
 
@@ -129,4 +129,4 @@
 *Numbers are exact-match accuracy (higher is better). For C1/C2/C3 breakdowns, see Table 3 (single-image) and Table 5 (multi-image) in the paper.*
 
 ## Contact
-For questions or issues, please open a discussion on the dataset page or email **Abhirama Subramanyam** at penamakuri.1@iitj.ac.in
+For questions or issues, please open a discussion on the dataset page or email **Abhirama Subramanyam** at penamakuri.1@iitj.ac.in
```