
MTCAE-DFER: Multi-Task Cascaded Autoencoder for Dynamic Facial Expression Recognition (IEEE IJCB 2025)

Peihao Xiang, Kaida Wu, and Ou Bai
HCPS Laboratory, Department of Electrical and Computer Engineering, Florida International University


Official TensorFlow implementation and ViT Decoder Module code for MTCAE-DFER: Multi-Task Cascaded Autoencoder for Dynamic Facial Expression Recognition.

Note: The provided .ipynb notebook is only a simple example. In addition, the VideoMAE encoder should be pre-trained using the MAE-DFER method; this repository does not provide the pre-trained model.

Overview

This paper expands the cascaded network branch of the autoencoder-based multi-task learning (MTL) framework for dynamic facial expression recognition, namely Multi-Task Cascaded Autoencoder for Dynamic Facial Expression Recognition (MTCAE-DFER). MTCAE-DFER builds a plug-and-play cascaded decoder module based on the Vision Transformer (ViT) architecture, employing the Transformer decoder concept to reconstruct the multi-head attention module. The decoder output from the previous task serves as the query (Q), representing local dynamic features, while the Video Masked Autoencoder (VideoMAE) shared encoder output acts as both the key (K) and value (V), representing global dynamic features. This setup facilitates interaction between global and local dynamic features across related tasks. Additionally, this approach aims to alleviate the overfitting of complex large models. We utilize an autoencoder-based multi-task cascaded learning approach to explore the impact of dynamic face detection and dynamic face landmark detection on dynamic facial expression recognition, which enhances the model's generalization ability. Extensive ablation experiments and comparisons with state-of-the-art (SOTA) methods on various public dynamic facial expression recognition datasets demonstrate the robustness of the MTCAE-DFER model and the effectiveness of the global-local dynamic feature interaction among related tasks.
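The cross-attention interaction described above can be sketched as a Keras layer. This is a minimal illustration only: the layer name, dimensions, pre-norm layout, and MLP ratio are assumptions for readability, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers


class CascadedViTDecoderBlock(layers.Layer):
    """Hypothetical sketch of the cascaded ViT decoder block.

    The previous task's decoder output supplies the query (local dynamic
    features); the shared VideoMAE encoder output supplies the key and
    value (global dynamic features).
    """

    def __init__(self, embed_dim=512, num_heads=8, mlp_ratio=4, **kwargs):
        super().__init__(**kwargs)
        self.norm_q = layers.LayerNormalization(epsilon=1e-6)
        self.norm_kv = layers.LayerNormalization(epsilon=1e-6)
        self.cross_attn = layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=embed_dim // num_heads)
        self.norm_mlp = layers.LayerNormalization(epsilon=1e-6)
        self.mlp = tf.keras.Sequential([
            layers.Dense(embed_dim * mlp_ratio, activation="gelu"),
            layers.Dense(embed_dim),
        ])

    def call(self, prev_decoder_out, shared_encoder_out):
        # Cross-attention: Q from the previous task's decoder,
        # K and V from the shared VideoMAE encoder.
        q = self.norm_q(prev_decoder_out)
        kv = self.norm_kv(shared_encoder_out)
        x = prev_decoder_out + self.cross_attn(query=q, key=kv, value=kv)
        # Standard Transformer feed-forward sub-layer with residual.
        return x + self.mlp(self.norm_mlp(x))
```

The output keeps the query's token count, so each task's decoder can pass its result straight to the next task's decoder as the new query.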


Fig. 1 Illustration of the frameworks.

The differences between the following four frameworks: (a) Autoencoder-Based Single-Task Learning Framework, (b) Autoencoder-Based Non-Fully Shared Multi-Task Learning Framework, (c) Autoencoder-Based Fully Shared Multi-Task Learning Framework and (d) Our Autoencoder-Based Multi-Task Cascaded Learning Framework.


Fig. 2 MTCAE-DFER Model Structure.

Implementation details


Fig. 3 Implementation details of MTCAE-DFER.
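As a rough illustration of how the three task decoders are chained (dynamic face detection, then face landmarks, then expression recognition) behind one shared feature input, the sketch below wires up a functional Keras model. All names, dimensions, and the simplified single-attention-block decoders are hypothetical stand-ins for the actual modules, not the repository's implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers


def build_cascade(num_tokens=196, embed_dim=256, num_classes=7):
    # Shared VideoMAE encoder output (global dynamic features, K/V)
    # and an initial query sequence for the first task.
    shared = layers.Input((num_tokens, embed_dim), name="videomae_features")
    queries = layers.Input((num_tokens, embed_dim), name="initial_queries")

    def decoder(q, name):
        # Minimal stand-in for a ViT decoder: cross-attention with Q from
        # the previous task and K/V from the shared encoder features.
        attn = layers.MultiHeadAttention(4, embed_dim // 4, name=f"{name}_attn")
        return layers.LayerNormalization(name=f"{name}_norm")(q + attn(q, shared))

    det = decoder(queries, "detection")   # task 1: dynamic face detection
    lmk = decoder(det, "landmark")        # task 2: dynamic face landmarks
    exp = decoder(lmk, "expression")      # task 3: expression recognition
    logits = layers.Dense(num_classes, name="expression_head")(
        layers.GlobalAveragePooling1D()(exp))
    return tf.keras.Model([shared, queries], [det, lmk, logits])
```

Each decoder reuses the same shared features as key/value while its query carries the previous task's local features forward, which is the cascade shown in Fig. 2.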

Main Results

Ablation Study


RAVDESS

Results on RAVDESS.

CREMA-D

Results on CREMA-D.

MEAD

Results on MEAD.

Contact

If you have any questions, please feel free to reach out to me at pxian001@fiu.edu.

Acknowledgments

This project is built upon VideoMAE and MAE-DFER. Thanks to their authors for the great codebases.

In addition, this project is inspired by MTFormer and MNC.

License

This project is under the MIT License. See LICENSE for details.

Citation

If you find this repository helpful, please consider citing our work:

@misc{xiang2024mtcaedfermultitaskcascadedautoencoder,
      title={MTCAE-DFER: Multi-Task Cascaded Autoencoder for Dynamic Facial Expression Recognition}, 
      author={Peihao Xiang and Kaida Wu and Chaohao Lin and Ou Bai},
      year={2024},
      eprint={2412.18988},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.18988}, 
}
@INPROCEEDINGS{11410713,
  author={Xiang, Peihao and Wu, Kaida and Bai, Ou},
  booktitle={2025 IEEE International Joint Conference on Biometrics (IJCB)}, 
  title={MTCAE-DFER: Multi-Task Cascaded Autoencoder for Dynamic Facial Expression Recognition}, 
  year={2025},
  volume={},
  number={},
  pages={1-9},
  keywords={Face recognition;Facial expressions;Autoencoders;Multitasking;Transformers;Feature extraction;Robustness;Decoding;Face detection;Videos},
  doi={10.1109/IJCB65343.2025.11410713}}

About

[IEEE IJCB 2025 Poster] TensorFlow code implementation of "MTCAE-DFER: Multi-Task Cascaded Autoencoder for Dynamic Facial Expression Recognition"
