Development of Real-Time Emotion Detection in Faces Using a Deep Learning Approach
Keywords:
Facial Emotion Recognition, Deep Learning Recognition, Neural Network, Image Recognition
Abstract
The rapid growth of artificial intelligence has significantly impacted the technology world. Traditional algorithms often fall short of real-time requirements, whereas machine learning and deep learning methods have achieved great success in applications such as classification, recommendation systems, and pattern recognition. Emotions play a vital role in shaping human thoughts, behaviours, and feelings. By leveraging deep learning, an emotion recognition system can be developed and applied with high accuracy in areas such as feedback analysis and face unlocking. This work focuses on building a Deep Convolutional Neural Network (DCNN) model to classify five human facial emotions. The model is trained, tested, and validated on a manually collected image dataset with the aim of accurately recognizing and classifying emotional expressions. The proposed CNN model, streamlined to three convolutional layers, achieves 77% accuracy on the FER-2013 face dataset, underscoring its potential for practical applications that require a nuanced understanding of human emotions. In addition, when tested on a locally collected face dataset, the model reaches a higher accuracy of 97.33%.
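The architecture described above (a compact network with three convolutional layers classifying five emotions from 48×48 grayscale FER-2013-style inputs) could be sketched as follows. This is a minimal illustration, not the authors' exact configuration: the filter counts, kernel sizes, and the dense head are assumptions chosen for a typical FER-2013 pipeline.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Illustrative three-convolutional-layer network for five-class
    facial emotion recognition on 48x48 grayscale inputs (FER-2013 size)."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # conv block 1
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # conv block 2
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), # conv block 3
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),                  # one logit per emotion
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = EmotionCNN()
logits = model(torch.randn(4, 1, 48, 48))  # a batch of four face crops
```

Training would pair this with a cross-entropy loss over the five emotion labels; the exact augmentation and optimizer settings used in the paper are not specified here.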
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.