Subspace learning for facial expression recognition: an overview and a new perspective

Cigdem Turan, The Hong Kong Polytechnic University, Hong Kong, and Technical University of Darmstadt, Germany (cigdem.turan@connect.polyu.hk); Rui Zhao, The Hong Kong Polytechnic University, Hong Kong; Kin-Man Lam, The Hong Kong Polytechnic University, Hong Kong; Xiangjian He, Computer Science, School of Electrical and Data Engineering, University of Technology Sydney, Australia
 
Suggested Citation
Cigdem Turan, Rui Zhao, Kin-Man Lam and Xiangjian He (2021), "Subspace learning for facial expression recognition: an overview and a new perspective", APSIPA Transactions on Signal and Information Processing: Vol. 10: No. 1, e1. http://dx.doi.org/10.1017/ATSIP.2020.27

Publication Date: 14 Jan 2021
© 2021 Cigdem Turan, Rui Zhao, Kin-Man Lam and Xiangjian He
 
Keywords
Subspace learning, Facial expression recognition, Deep learning
 

Open Access

This article is published under the terms of the Creative Commons Attribution licence.


In this article:
I. INTRODUCTION 
II. AN OVERVIEW OF SUBSPACE LEARNING 
III. SOFT LOCALITY PRESERVING MAP 
IV. FEATURE DESCRIPTORS AND GENERATION 
V. EXPERIMENTAL SET-UP AND RESULTS 
VI. CONCLUSION 

Abstract

For image recognition, an extensive number of subspace-learning methods have been proposed to overcome the high dimensionality of the features being used. In this paper, we first give an overview of the most popular and state-of-the-art subspace-learning methods, and then present a novel manifold-learning method, named the soft locality preserving map (SLPM). SLPM aims to control the level of spread of the different classes, which is closely connected to the generalizability of the learned subspace. We also review how manifold-learning methods can be extended to deep learning by formulating their objectives as loss functions for training, and we further reformulate SLPM as a soft locality preserving (SLP) loss. These loss functions are applied as additional regularization terms in the training of deep neural networks. We evaluate these subspace-learning methods, as well as their deep-learning extensions, on facial expression recognition. Experiments on four commonly used databases show that SLPM effectively reduces the dimensionality of the feature vectors and enhances the discriminative power of the extracted features. Moreover, experimental results demonstrate that the deep features learned with SLP regularization achieve better discriminability and generalizability for facial expression recognition.
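To illustrate the general idea of applying a locality-preserving term as an additional regularizer on deep features, the following is a minimal PyTorch-style sketch. It is not the authors' exact SLP formulation; the function names (slp_style_loss, total_loss) and the hyper-parameters margin and lam are illustrative assumptions. The sketch pulls same-class samples in a mini-batch together in feature space, pushes different-class samples apart up to a margin, and adds this term to the usual cross-entropy loss.

```python
# Minimal sketch of a locality-preserving regularizer on deep features.
# Assumptions: PyTorch; features of shape (batch, dim); integer class labels;
# each mini-batch contains both repeated labels and more than one class.
import torch
import torch.nn.functional as F

def slp_style_loss(features, labels, margin=1.0):
    """Pull same-class samples together and push different-class
    samples apart (up to a margin) within the mini-batch."""
    dists = torch.cdist(features, features, p=2)            # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)       # same-class mask, shape (N, N)
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    pull = dists[same & ~eye].pow(2).mean()                 # intra-class compactness
    push = F.relu(margin - dists[~same]).pow(2).mean()      # inter-class separation
    return pull + push

def total_loss(logits, features, labels, lam=0.1):
    # Cross-entropy for classification plus the locality term as regularization.
    return F.cross_entropy(logits, labels) + lam * slp_style_loss(features, labels)
```

In this sketch, lam trades off classification accuracy against the compactness and separation of the learned feature space, playing the role of the regularization weight on the locality-preserving term.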

DOI: 10.1017/ATSIP.2020.27