Facial Emotion Recognition in Occluded Images Using Attention-Based CNN

Authors

Keywords:

Facial expression recognition, Occlusion, CNN, Channel-Spatial attention

Abstract

Facial expression recognition (FER) is widely used in various applications, yet few studies address its effectiveness under occlusion conditions. Occlusions can obscure critical facial features, leading to the loss of valuable expression information and degrading recognition performance. This study enhances the robustness of FER models by integrating both channel and spatial attention mechanisms into a convolutional neural network (CNN). The attention module improves feature extraction by selectively focusing on visible facial regions while compensating for missing information, thereby enhancing recognition performance on occluded facial images. The proposed model is evaluated on both synthetic and real-world occlusion datasets, including RAF, FED-RO, CK+, JAFFE, FER2013, and AffectNet, demonstrating its robustness across different occlusion scenarios. Experimental results show that the proposed model achieves an accuracy of 66%, outperforming several state-of-the-art methods. Additionally, cross-dataset evaluation and k-fold validation confirm the model’s generalization across different datasets and occlusion patterns, further supporting its reliability in real-world applications. The results demonstrate that attention-based CNNs effectively mitigate occlusion effects and improve emotion classification.
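To illustrate the kind of channel-spatial attention the abstract refers to, the sketch below shows a minimal CBAM-style block in PyTorch. It is an assumption-based illustration only: the class names, reduction ratio, kernel size, and placement within the CNN are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a channel-spatial attention block (CBAM-style).
# All names and hyperparameters are illustrative; the paper's exact design may differ.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Pool spatially, score each channel, then rescale the feature map.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool across channels, score each location, then rescale the feature map,
        # so unoccluded facial regions can receive higher weights.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class AttentionBlock(nn.Module):
    """Channel attention followed by spatial attention, applied to CNN features."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

# Example usage: insert after a convolutional stage, e.g. 64-channel feature maps.
feats = torch.randn(8, 64, 56, 56)
out = AttentionBlock(64)(feats)  # same shape as feats
```

In this style of design, the block is typically inserted after one or more convolutional stages of the backbone, leaving the rest of the classifier unchanged.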

Published

2025-05-20

Issue

Section

Articles

How to Cite

Facial Emotion Recognition in Occluded Images Using Attention-Based CNN. (2025). Journal of Intelligence Technology and Innovation, 3(2), 40-55. https://www.itip-submit.com/index.php/JITI/article/view/112