SEDL: Learning Emotion Dynamics from Facial Representations Using Self-Supervised Approaches

Authors

  • Amol P. Chaudhari, Nitin B. Pawar, Pravin B. Mali

Keywords

Self-Supervised Learning, Emotion Recognition, Facial Expression Analysis, Behavior Prediction, Children with Intellectual Disabilities

Abstract

Facial Expression Recognition (FER) is an essential component of affective computing, with significant applications in healthcare, behavioral analysis, and assistive technologies for neurodiverse individuals and individuals with intellectual disabilities. Despite considerable progress, traditional machine learning and supervised deep learning approaches are often constrained by their dependence on large labeled datasets and their limited ability to capture the temporal dynamics of emotional expressions. To address these challenges, this paper proposes a novel Self-Supervised Emotion Dynamics Learning (SEDL) framework that integrates contrastive self-supervised learning with temporal emotion progression modeling. The proposed approach learns meaningful feature representations from unlabeled facial images while simultaneously capturing the evolution of emotional states over time. This combination enhances the model’s ability to generalize across diverse, real-world conditions. The framework is evaluated on a dataset of facial expressions from neurodiverse individuals, demonstrating its applicability in practical and sensitive environments. Comparative analysis with traditional machine learning, supervised deep learning, and self-supervised approaches indicates that the proposed method provides improved performance and robustness. Overall, the SEDL framework offers a scalable and efficient solution for emotion recognition, addressing key limitations of existing FER systems. It has strong potential for deployment in real-time applications such as behavioral monitoring, mental health assessment, and intelligent assistive systems.
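The contrastive self-supervised component described in the abstract follows the SimCLR family of methods, in which two augmented views of the same face are pulled together in embedding space while views of different faces are pushed apart. The paper does not publish code, so the following is only a minimal NumPy sketch of the standard NT-Xent (normalized temperature-scaled cross-entropy) objective; the function name, array shapes, and temperature value are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of embeddings.

    z1, z2 : (N, D) arrays holding embeddings of two augmented views
             of the same N facial images; row i of z1 and row i of z2
             form a positive pair, all other rows are negatives.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = z @ z.T / temperature                         # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                      # mask self-similarity
    n = z1.shape[0]
    # the positive partner of row i is row i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # row-wise log-softmax, then pick out the positive-pair entries
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss drives the positive-pair similarity toward its maximum relative to all negatives in the batch; when the two views of an image map to identical embeddings, the loss is markedly lower than for unrelated embeddings, which is the training signal that lets SEDL learn from unlabeled facial images.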

Published

31.01.2022

How to Cite

Amol P. Chaudhari. (2022). SEDL: Learning Emotion Dynamics from Facial Representations Using Self-Supervised Approaches. International Journal of Intelligent Systems and Applications in Engineering, 10(1s), 460–. Retrieved from https://www.ijisae.org/index.php/IJISAE/article/view/8125

Section

Research Article