This repository is publicly accessible, but you must agree to share your contact information and accept the following conditions to access its files and content.
Explicit consent is given by Intelligent Interaction Group for academic research only. The rights to the annotations of the MER dataset belong to Intelligent Interaction Group. No legal claims of any kind can be derived from accepting and using the database. Intelligent Interaction Group is not liable for any damage resulting from receiving or using the database or any other files it provides. The licensee is not permitted to hand over the database, or any other files containing information derived from it (such as labelling files), to third parties, nor to modify the database, without the express written consent of Intelligent Interaction Group.
# MER2024: Multimodal Emotion Recognition Challenge Dataset
MER2024 is a large-scale multimodal dataset released as part of the MER24 Challenge@IJCAI, which aims to advance the field of robust and practical multimodal emotion recognition. It builds upon the MER23 and MRAC23 datasets presented at ACM Multimedia, expanding both the data volume and task diversity to better reflect real-world challenges.
## Challenge Background
Multimodal emotion recognition (MER) seeks to analyze human emotional states by integrating multiple modalities such as audio, visual, and text. While most prior research relies on fully labeled benchmarks and clean data, MER2024 targets real-world robustness and data sparsity by introducing:
- Noisy and partially labeled data.
- Open-vocabulary label generation.
- Large-scale human-centric unlabeled videos.
We continue to host the MER Challenge series to bring together the research community and promote applications of affective computing in health, education, entertainment, and beyond.
## Tracks & Tasks
### Track 1: MER-SEMI (Semi-supervised Emotion Recognition)
Due to the high cost of manual labeling, this track encourages participants to explore semi-supervised or unsupervised learning by utilizing a large set of unlabeled videos alongside a smaller labeled training set.
- **Task:** Predict one of six emotion classes for all unlabeled samples (see the sketch below).
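For intuition, the sketch below shows one common semi-supervised strategy, confidence-thresholded pseudo-labeling, applied to this track's setup. It is a minimal illustration, not the official baseline; `model`, `unlabeled_loader`, and the 0.9 threshold are hypothetical placeholders.

```python
# Minimal pseudo-labeling sketch for MER-SEMI (illustrative only).
# `model` and `unlabeled_loader` are hypothetical placeholders; the
# official baseline code lives in the MERTools repository linked below.
import torch
import torch.nn.functional as F

CONF_THRESHOLD = 0.9  # keep only high-confidence predictions as pseudo-labels

@torch.no_grad()
def pseudo_label(model, unlabeled_loader, device="cpu"):
    """Assign one of the six emotion classes to confidently predicted clips."""
    model.eval()
    kept = []
    for features in unlabeled_loader:  # e.g. fused audio/visual/text features
        probs = F.softmax(model(features.to(device)), dim=-1)
        conf, preds = probs.max(dim=-1)
        mask = conf >= CONF_THRESHOLD
        if mask.any():
            kept.append((features[mask].cpu(), preds[mask].cpu()))
    return kept  # merged with the labeled set for another training round
```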
### Track 2: MER-NOISE (Noise-Robust Emotion Recognition)
Emotion recognition in the wild is challenged by noisy audio and blurred images. This track evaluates model robustness under such corruption, including additive audio noise and visual blur.
- **Task:** Develop models that perform well on noisy samples without knowing which samples are noisy (an augmentation sketch follows).
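Because the corrupted samples are not marked, a common countermeasure is to train with synthetic corruptions that mimic the track's perturbations. A minimal sketch, assuming waveforms as NumPy arrays and frames as PIL images; the SNR and blur-radius values are arbitrary choices, not challenge settings:

```python
# Illustrative augmentations mirroring the MER-NOISE corruptions:
# additive audio noise and visual blur. Parameter values are arbitrary.
import numpy as np
from PIL import Image, ImageFilter

def add_white_noise(waveform: np.ndarray, snr_db: float = 10.0) -> np.ndarray:
    """Add white Gaussian noise at a target signal-to-noise ratio (in dB)."""
    signal_power = float(np.mean(waveform ** 2))
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
    return waveform + noise

def blur_frame(frame: Image.Image, radius: float = 2.0) -> Image.Image:
    """Apply Gaussian blur to a single video frame."""
    return frame.filter(ImageFilter.GaussianBlur(radius=radius))
```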
### Track 3: MER-OV (Open-Vocabulary Emotion Recognition)
Conventional MER tasks restrict emotion labels to a fixed set. This track lifts that restriction and allows free-form emotional descriptions, encouraging models to produce open-vocabulary labels that reflect nuanced emotional states.
- **Task:** Generate emotion descriptions in any category and with any number of words per video (see the scoring sketch below).
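To make the evaluation idea concrete, the sketch below scores a free-form prediction against reference labels by simple set overlap. This is not the official MER-OV metric (see the challenge paper and baseline code for that); it only illustrates comparing variable-length label sets instead of a single fixed class.

```python
# Illustrative set-overlap scoring for open-vocabulary emotion labels.
# NOT the official MER-OV metric; it only shows how variable-length
# label sets can be compared instead of a single fixed class.
def set_overlap(predicted: list[str], reference: list[str]) -> tuple[float, float]:
    """Return (precision, recall) of predicted labels against the references."""
    pred = {p.strip().lower() for p in predicted}
    ref = {r.strip().lower() for r in reference}
    if not pred or not ref:
        return 0.0, 0.0
    hits = len(pred & ref)
    return hits / len(pred), hits / len(ref)

print(set_overlap(["happy", "relaxed"], ["happy", "excited"]))  # (0.5, 0.5)
```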
## Dataset Structure
### Dataset Statistics
| Subset | # Samples |
|---|---|
| Train & Val | 5,030 |
| Unlabeled Data | 115,595 |
### Train & Val Data

| File | Description |
|---|---|
| `video-labeled.zip` | 5,030 labeled video clips |
| `label-transcription.csv` | Subtitles (Chinese & English) of the labeled videos |
| `label-disdim.csv` | Discrete emotion labels for all 5,030 samples |
| `final-EMER-reason.csv` | Emotion-related descriptions (332 samples) |
| `final-openset-chinese.csv` | Chinese open-vocabulary emotion labels from baselines |
| `final-openset-english.csv` | English open-vocabulary emotion labels from baselines |
| `reference-semi.csv` | Ground-truth test labels for the MER-SEMI track |
| `reference-noise.csv` | Ground-truth test labels for the MER-NOISE track |
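Once access is granted, the labeled split can be assembled from these CSVs; a minimal pandas sketch is shown below. The column name `name` used for the join is an assumption about the CSV schema, so inspect the files first.

```python
# Sketch of joining the labeled split from the CSVs above.
# The join column "name" is an assumed schema detail; verify it against
# the actual files once your access request has been approved.
import pandas as pd

labels = pd.read_csv("label-disdim.csv")            # discrete emotion labels
subtitles = pd.read_csv("label-transcription.csv")  # Chinese & English subtitles

merged = labels.merge(subtitles, on="name", how="left")
print(merged.head())
print(f"{len(merged)} labeled samples")  # expected: 5,030
```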
### Unlabeled Data

| File | Description |
|---|---|
| `video-unlabeled-with-test2noise.zip` | 115,595 unlabeled video clips used across all challenge tracks |
| `unlabeled-with-test2noise-subtitle_csv.zip` | Pre-extracted audio and subtitles from the 115,595 unlabeled videos |
## Dataset Access
- Academic use only: Access to the dataset requires signing an EULA (End-User License Agreement).
- No redistribution: Uploading the dataset to any public platform or modifying it is prohibited.
- Approval required: After submitting the EULA on Hugging Face, access will be granted upon approval.
## Access Instructions
After your access request has been approved, download instructions will be provided in the file `README_AFTER_APPROVAL.md`.
Please carefully review the LICENSE and EULA requirements before submitting your access request.
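The authoritative download steps are in `README_AFTER_APPROVAL.md`; for orientation only, gated Hugging Face datasets are typically fetched as below once access is granted. The `repo_id` shown is an assumption, so substitute the identifier given in the approval instructions.

```python
# Typical gated-dataset download via huggingface_hub (illustrative only).
# Requires a prior `huggingface-cli login`; the repo_id is an assumption.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="MERChallenge/MER2024",  # hypothetical id; confirm after approval
    repo_type="dataset",
)
print(local_dir)  # local path containing the dataset files
```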
## Citation
If you use MER2024 in your research or publication, please cite the following paper:
@inproceedings{lian2024mer,
title={MER 2024: Semi-supervised learning, noise robustness, and open-vocabulary multimodal emotion recognition},
author={Lian, Zheng and Sun, Haiyang and Sun, Licai and Wen, Zhuofan and Zhang, Siyuan and Chen, Shun and Gu, Hao and Zhao, Jinming and Ma, Ziyang and Chen, Xie and others},
booktitle={Proceedings of the 2nd International Workshop on Multimodal and Responsible Affective Computing},
pages={41--48},
year={2024}
}
## Contact
For any questions, collaborations, or issues, please reach out through the MER2024 website or the baseline repository listed below.
## Related Links
- MER2024 Website: https://zeroqiaoba.github.io/MER2024-website/
- Baseline & Tools: https://github.com/zeroQiaoba/MERTools/tree/master/MER2024