---
license: apache-2.0
viewer: false
---

# MMEB-V2 (Massive Multimodal Embedding Benchmark)

[**Website**](https://tiger-ai-lab.github.io/VLM2Vec/) | [**Github**](https://github.com/TIGER-AI-Lab/VLM2Vec) | [**🏆Leaderboard**](https://huggingface.co/spaces/TIGER-Lab/MMEB) | [**📖MMEB-V2/VLM2Vec-V2 Paper**](https://arxiv.org/abs/2507.04590) | [**📖MMEB-V1/VLM2Vec-V1 Paper**](https://arxiv.org/abs/2410.05160)

## Introduction

Building upon our original [**MMEB**](https://arxiv.org/abs/2410.05160), **MMEB-V2** expands the evaluation scope to include five new tasks: four video-based tasks (Video Retrieval, Moment Retrieval, Video Classification, and Video Question Answering) and one task focused on visual documents, Visual Document Retrieval. This comprehensive suite enables robust evaluation of multimodal embedding models across static, temporal, and structured visual data settings.

**This Hugging Face repository contains the raw video files used in MMEB-V2. Typically, extracted video frames are all you need, but we also release the raw video files here in case they are needed.**

**Please check the [main data repository](https://huggingface.co/datasets/TIGER-Lab/MMEB-V2) for instructions on all MMEB-V2–related data.**
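
As a minimal sketch of fetching the raw videos locally with `huggingface_hub` (the `repo_id` below is a placeholder for this repository's ID, and `local_dir` is just an example path):

```python
# Minimal sketch: download the raw video files from this dataset repository.
# NOTE: "TIGER-Lab/MMEB-V2-Video" is a placeholder repo_id -- substitute the
# actual ID of this repository. The local_dir path is likewise an example.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TIGER-Lab/MMEB-V2-Video",  # placeholder: replace with this repo's ID
    repo_type="dataset",                # the raw videos are hosted as a dataset repo
    local_dir="./mmeb_v2_videos",       # example destination directory
)
```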