Formats: parquet
Languages: English
Size: < 1K
ArXiv: arXiv:2404.13591
Tags: multi-modal-qa, geometry-qa, abstract-reasoning, geometry-reasoning, visual-puzzle, non-verbal-reasoning
License: apache-2.0
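Since the viewer lists parquet under Formats, the dataset can also be pulled straight from the Hub with the `datasets` library. A minimal loading sketch, assuming `datasets` is installed and using a hypothetical repo id (the actual Hub id is not shown on this page):

```python
# "user/marvel" is a HYPOTHETICAL repo id -- substitute the dataset's
# actual Hugging Face Hub id; the split name is likewise assumed.
from datasets import load_dataset

ds = load_dataset("user/marvel", split="train")
print(ds.column_names)  # should include the fields documented in the card
print(ds[0]["avr_question"], "->", ds[0]["answer"])
```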
Commit 88ea1b1 · Parent(s): 49f9bb4
Kexuan Sun committed: Update README.md and answer labels

Files changed:
- README.md (+19 −1)
- marvel_illustration.jpeg (+3 −0)
- marvel_labels.jsonl (+0 −0)
README.md
CHANGED

@@ -1,5 +1,7 @@
 ---
 license: apache-2.0
+paperswithcode_id: marvel
+pretty_name: MARVEL (Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning)
 task_categories:
 - visual-question-answering
 language:
@@ -12,6 +14,7 @@ size_categories:
 ### Dataset Description
 
 MARVEL is a new comprehensive benchmark dataset that evaluates multi-modal large language models' abstract reasoning abilities in six patterns across five different task configurations, revealing significant performance gaps between humans and SoTA MLLMs.
+
 
 ### Dataset Sources [optional]
 
@@ -27,6 +30,21 @@ Evaluations for multi-modal large language models' abstract reasoning abilities.
 
 The directory **images** keeps all images, and the file **marvel_labels.jsonl** provides annotations and explanations for all questions.
 
+### Fields
+
+- **id** is the ID of the question
+- **pattern** is the high-level pattern category of the question
+- **task_configuration** is the task configuration of the question
+- **avr_question** is the text of the AVR question
+- **answer** is the answer to the AVR question
+- **explanation** is the textual reasoning process to answer the question
+- **f_perception_question** is the fine-grained perception question
+- **f_perception_answer** is the answer to the fine-grained perception question
+- **f_perception_distractor** is the distractor of the fine-grained perception question
+- **c_perception_question_tuple** is a list of coarse-grained perception questions
+- **c_perception_answer_tuple** is a list of answers to the coarse-grained perception questions
+- **file** is the path to the image of the question
+
 ## Citation [optional]
 
 **BibTeX:**
@@ -37,4 +55,4 @@ The directory **images** keeps all images, and the file **marvel_labels.jsonl**
 journal={arXiv preprint arXiv:2404.13591},
 year={2024}
 }
-```
+```
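The **Fields** list added above documents the per-question schema of **marvel_labels.jsonl**. As a rough usage sketch, assuming the standard JSON-Lines layout of one JSON object per line (the card documents the field names but not the exact file encoding), the annotations can be read and paired with their images like this:

```python
import json
from pathlib import Path

# Layout per the card: an `images/` directory plus `marvel_labels.jsonl`.
# DATA_ROOT is an assumed local path -- point it at your downloaded copy.
DATA_ROOT = Path(".")
LABELS = DATA_ROOT / "marvel_labels.jsonl"

def load_labels(path):
    """Yield one annotation dict per non-empty JSONL line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

for record in load_labels(LABELS):
    # Field names are taken from the card's "Fields" list.
    image_path = DATA_ROOT / record["file"]   # path to the puzzle image
    print(record["id"], record["pattern"], record["task_configuration"])
    print(record["avr_question"], "->", record["answer"])
    break  # inspect only the first record
```

Per the field descriptions, the two `*_tuple` fields hold lists (coarse-grained perception questions and their answers), so they can be iterated directly.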
marvel_illustration.jpeg
ADDED (stored with Git LFS)

marvel_labels.jsonl
CHANGED
The diff for this file is too large to render. See the raw diff.