Journal
MULTIMEDIA SYSTEMS
Volume -, Issue -, Pages -
Publisher
SPRINGER
DOI: 10.1007/s00530-023-01145-3
This article proposes a method called LGNMNet, which combines the Lite General Network and MagFace CNN models, to predict the probability that video frames belong to micro-expression intervals. The experimental results show that this method achieves state-of-the-art performance in spotting micro-expressions in long videos.
Facial expressions, especially spontaneous micro-expressions, are an intuitive reflection of human emotions and have attracted considerable attention alongside recent rapid advances in computer vision. Micro-expressions are small in amplitude and short in duration, and they often appear together with macro-expressions, making micro-expression spotting in long videos a challenging task. In this article, we propose an intersection-over-minimum labelling method combined with a Lite General Network and MagFace CNN (LGNMNet) model to predict the probability that video frames belong to a micro-expression interval, which balances easy and difficult samples to improve the learning effect of the training process. Experimental results show that our method achieves state-of-the-art performance in spotting micro-expressions in long videos on both the CAS(ME)² and SAMM-LV datasets (with F1-scores of 0.2474 and 0.2555, respectively). Additionally, a new pair-merge approach that combines nearby detected apex frames into micro-expression intervals in the post-processing stage has been devised and analysed, providing a feasible solution for the task of macro- and micro-expression spotting in long videos.
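The abstract mentions two computational ingredients: an intersection-over-minimum (IoM) measure for labelling frame intervals, and a pair-merge step that turns nearby detected apex frames into candidate micro-expression intervals. The paper's exact definitions are not given here, so the following is only a minimal sketch under common assumptions: IoM taken as intersection length divided by the shorter interval's length, and a greedy pairing of consecutive apex frames within a hypothetical `max_gap` threshold (not a value from the paper).

```python
def interval_iom(a, b):
    """Intersection over Minimum of two inclusive frame intervals (start, end).

    Assumed definition: |A ∩ B| / min(|A|, |B|), in frames.
    """
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    return inter / min(a[1] - a[0] + 1, b[1] - b[0] + 1)


def merge_apex_pairs(apex_frames, max_gap=13):
    """Greedily pair consecutive detected apex frames that lie within
    max_gap frames of each other into candidate intervals.

    max_gap is a hypothetical illustration parameter, not from the paper.
    """
    frames = sorted(apex_frames)
    intervals = []
    i = 0
    while i < len(frames) - 1:
        if frames[i + 1] - frames[i] <= max_gap:
            intervals.append((frames[i], frames[i + 1]))
            i += 2  # both apexes consumed by this pair
        else:
            i += 1  # isolated apex, move on
    return intervals
```

For example, two 10-frame intervals overlapping on 5 frames give an IoM of 0.5, and apex frames at 3, 10, and 40 yield the single candidate interval (3, 10) under the gap threshold above.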