Update README.md

<!-- contents with emoji -->

## News

🔥[2025-12]: Our MMSI-Video-Bench has been integrated into [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).

🔥[2025-12]: We released our paper, benchmark, and evaluation codes.

## Data Details

All of our data is available on [Hugging Face](https://huggingface.co/datasets/rbler/MMSI-Video-Bench) and includes the following components:

🎥 **Video Data** (`videos.zip`): Contains the video clip (.mp4) for each sample. This archive is optional and generally not required for most models.

🎥 **Frame Data** (`frames.zip`): Contains the frames (.jpg) extracted from each sample's video at the **base sampling rate**, a rate chosen so that no key information is lost during sampling. Each frame file is named `{timestamp}_frame_{base_interval}_{image_id}` (e.g., `00:06.00_frame_1.50_4`), where the timestamp, also shown in the **top-left corner** of the frame, gives its **capture time in the original recording**.

🖼️ **Reference Image Data** (`ref_images.zip`): Contains the auxiliary images referenced in the questions for each sample.
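
A minimal download sketch, assuming the archives above sit at the root of the dataset repo (adjust paths to your setup):

```python
# Hypothetical download-and-extract helper for the MMSI-Video-Bench archives.
# Assumes videos.zip / frames.zip / ref_images.zip sit at the dataset repo root.
import zipfile
from pathlib import Path

from huggingface_hub import snapshot_download

repo_root = Path(snapshot_download(repo_id="rbler/MMSI-Video-Bench", repo_type="dataset"))

out_dir = Path("mmsi_video_bench")
out_dir.mkdir(exist_ok=True)

# videos.zip is optional for most models; frames + reference images usually suffice.
for archive in ("frames.zip", "ref_images.zip"):
    with zipfile.ZipFile(repo_root / archive) as zf:
        zf.extractall(out_dir)
```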

📄 **Text Annotation** (`mmsivideo.json`): This file contains the annotation information for MMSI-Video-Bench. All time references in the questions correspond to capture times in the original recording and **align with** the timestamp shown on each frame. Key fields include:

```
{
    "ref_images": [Paths to auxiliary images referenced in the question, ...],
    "video_list": [
        {
            "path": Video clip file path,
            "start": Timestamp (in seconds) of the first frame of the video clip in the original recording,
            "end": Timestamp (in seconds) of the last frame of the video clip in the original recording,
            "base_fps": Base sampling rate
        },
        ...
    ],
    "frames_list": [[Paths to frames sampled at the base sampling rate, ...], ...],
    "system_prompt": "...",
    "task_prompt": Task-specific prompt,
    "user_prompt": Question text, with <video> as a placeholder for video and <image> for auxiliary images,
    "format_prompt": Output format requirements,
    "ground_truth": Correct answer
}
```
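
As an illustration of how the timing information fits together, the sketch below parses the capture time out of a frame filename such as `00:06.00_frame_1.50_4.jpg`; the `mm:ss.ss` reading of the timestamp and the `base_interval = 1 / base_fps` relation are our interpretation of the naming scheme above, not an official spec.

```python
import re

# Hypothetical parser for frame filenames of the assumed form
# "{mm:ss.ss}_frame_{base_interval}_{image_id}.jpg".
FRAME_RE = re.compile(r"(\d+):(\d+\.\d+)_frame_([\d.]+)_(\d+)")

def frame_capture_time(frame_path: str) -> float:
    """Return the capture time (seconds in the original recording) encoded in the name."""
    name = frame_path.rsplit("/", 1)[-1]
    match = FRAME_RE.match(name)
    if match is None:
        raise ValueError(f"unexpected frame filename: {name}")
    minutes, seconds, _interval, _index = match.groups()
    # Under our reading, this equals the clip's "start" plus image_id * base_interval,
    # with base_interval = 1 / base_fps from the sample's video_list entry.
    return int(minutes) * 60 + float(seconds)

print(frame_capture_time("question_0004/00:06.00_frame_1.50_4.jpg"))  # 6.0
```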

Unless otherwise specified, the model input consists of `system_prompt + task_prompt + user_prompt + format_prompt`.
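
A minimal sketch of that default composition, assuming `mmsivideo.json` parses to a list of sample dicts with the fields shown above (the top-level layout is an assumption):

```python
import json

# Load the annotations (assumed here to parse to a list of sample dicts).
with open("mmsivideo.json", "r", encoding="utf-8") as f:
    samples = json.load(f)

sample = samples[0]

# Default text input: system + task + user + format prompts, concatenated in order.
text_input = (
    sample["system_prompt"]
    + sample["task_prompt"]
    + sample["user_prompt"]
    + sample["format_prompt"]
)

# Visual inputs referenced by the <video> / <image> placeholders in user_prompt.
frame_paths = [p for segment in sample["frames_list"] for p in segment]
ref_image_paths = sample.get("ref_images", [])

print(text_input[:120])
print(len(frame_paths), "frames,", len(ref_image_paths), "reference images")
```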

## Evaluation

Please refer to the evaluation guidelines in our [GitHub repo](https://github.com/InternRobotics/MMSI-Video-Bench).

## Leaderboard