Update README.md

README.md

We introduce MileBench, a pioneering benchmark designed to test the **M**ult**I**modal **L**ong-cont**E**xt capabilities of MLLMs.
This benchmark comprises not only multimodal long contexts, but also multiple tasks requiring both comprehension and generation.
We establish two distinct evaluation sets, diagnostic and realistic, to systematically assess MLLMs’ long-context adaptation capacity and their ability to complete tasks in long-context scenarios.
<img src="./images/MileBench.png" width="600" alt="MileBench" align="center" />
To construct our evaluation sets, we gather 6,440 multimodal long-context samples from 21 pre-existing or self-constructed datasets, with an average of 15.2 images and 422.3 words each, as depicted in the figure, and we categorize them into their respective subsets.
<center class="half">
  <img src="./images/stat2.png" width="300" alt="stat2"/><img src="./images/stat1.png" width="300" alt="stat1"/>
</center>
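As a minimal sketch of how such per-sample averages are computed, the snippet below assumes a hypothetical manifest layout in which each sample is a dict with an `"images"` list and a `"text"` string; the actual MileBench data files may use different field names.

```python
from statistics import mean

def dataset_stats(samples):
    """Return (average images per sample, average words per sample).

    `samples`: list of dicts with hypothetical keys "images" (list of
    image paths) and "text" (the textual context of the sample).
    """
    avg_images = mean(len(s["images"]) for s in samples)
    avg_words = mean(len(s["text"].split()) for s in samples)
    return avg_images, avg_words

# Tiny synthetic example (not real MileBench data):
samples = [
    {"images": ["a.png"] * 14, "text": "word " * 400},
    {"images": ["b.png"] * 16, "text": "word " * 440},
]
print(dataset_stats(samples))  # (15.0, 420.0)
```

The reported figures (15.2 images, 422.3 words) are these averages taken over all 6,440 samples.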
## How to use?