
MTMEUR

Beyond Emotion Recognition: A Multi-Turn Multimodal Emotion Understanding and Reasoning Benchmark

We have released the dataset on Hugging Face! To download it, please click here: MindIntLab/MTMEUR
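As a quick sanity check, the dataset can be pulled directly with the Hugging Face `datasets` library. The sketch below uses only the repo id given above; the available configurations, splits, and column names are not stated in this README, so it simply prints whatever `load_dataset` discovers.

```python
# Minimal sketch: load MTMEUR from the Hugging Face Hub.
# Assumes the `datasets` library is installed (pip install datasets).
from datasets import load_dataset

dataset = load_dataset("MindIntLab/MTMEUR")
print(dataset)  # inspect the available splits and column names
```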

Introduction

Multimodal large language models (MLLMs) have been widely applied across various fields due to their powerful perceptual and reasoning capabilities. In psychology, these models hold promise for a deeper understanding of human emotions and behaviors. However, recent research has focused primarily on enhancing their emotion recognition abilities, leaving largely untapped their substantial potential in emotion reasoning, which is crucial for improving the naturalness and effectiveness of human-machine interaction. Therefore, in this paper, we introduce a multi-turn multimodal emotion understanding and reasoning (MTMEUR) benchmark.

Figure: Our proposed pipeline for generating data.

Quick Start

1. Clone this project locally:

```bash
git clone https://github.com/MACLAB-HFUT/MTMEUR.git
```

2. Set up the environment:

```bash
conda create -n MTMEUR python=3.10
conda activate MTMEUR
pip install -r requirements.txt
```

3. Run the evaluation script:

```bash
python evaluation.py
```
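For orientation, here is a hedged sketch of what an evaluation loop over the benchmark might look like. It is not the repository's `evaluation.py`: the `model_predict` function, the `"answer"` field, and the `"test"` split name are hypothetical placeholders, and the actual metrics and schema should be taken from the code and the dataset card.

```python
from datasets import load_dataset

def model_predict(sample):
    # Placeholder: query your MLLM with the sample's multi-turn
    # dialogue and visual context, then return its answer string.
    # This function is not part of the repository.
    raise NotImplementedError

def evaluate(split="test"):
    # Hypothetical exact-match accuracy; the real metrics live in
    # evaluation.py. The split name and "answer" field are assumptions.
    data = load_dataset("MindIntLab/MTMEUR", split=split)
    correct = sum(model_predict(s) == s["answer"] for s in data)
    return correct / len(data)
```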

Citation

If this work is helpful, please cite it as:

```bibtex
@misc{hu2025emotionrecognitionmultiturnmultimodal,
      title={Beyond Emotion Recognition: A Multi-Turn Multimodal Emotion Understanding and Reasoning Benchmark},
      author={Jinpeng Hu and Hongchang Shi and Chongyuan Dai and Zhuo Li and Peipei Song and Meng Wang},
      year={2025},
      eprint={2508.16859},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.16859},
}
```