Automatic facial expression recognition (FER) from non-frontal views is a challenging research topic which has recently started to attract the attention of the research community. Pose variations are difficult to tackle, and many face analysis methods require sophisticated normalization and initialization procedures. Head-pose-invariant facial expression recognition therefore remains an open issue for traditional methods. In this paper, we propose a novel approach for pose-invariant FER based on pose-robust features learned by two deep learning methods, a principal component analysis network (PCANet) and a convolutional neural network (CNN); we call the resulting framework PRP-CNN. In the first stage, unlabeled frontal face images are used to learn features with PCANet. In the second stage, these features serve as the regression target of a CNN, which learns a feature mapping between frontal and non-frontal faces. We then describe non-frontal face images using the descriptions generated by this mapping, obtaining unified descriptors for face images in arbitrary poses. Finally, the pose-robust features are used to train a single classifier for FER, instead of training a separate model for each specific pose. Our method as a whole requires no pose or landmark annotation and can recognize facial expressions across a wide range of head orientations. Extensive experiments on two public databases show that our framework yields substantial improvements in facial expression analysis.
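The first stage described above, learning convolutional filters from unlabeled frontal faces via PCA on local patches, can be sketched as follows. This is a minimal, hypothetical simplification of a PCANet-style first stage, not the authors' implementation: the function name, patch size, filter count, and the use of non-overlapping mean-removed patches are all illustrative assumptions.

```python
import numpy as np

def learn_pcanet_filters(images, k=8, num_filters=4):
    """Sketch of a PCANet-style first stage: the leading eigenvectors of
    the covariance of mean-removed k x k patches become conv filters.
    (Simplified, illustrative; not the paper's exact procedure.)"""
    patches = []
    for img in images:
        h, w = img.shape
        # Non-overlapping patches for brevity; PCANet uses dense patches.
        for i in range(0, h - k + 1, k):
            for j in range(0, w - k + 1, k):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())  # remove the patch mean
    X = np.stack(patches)                     # (num_patches, k*k)
    cov = X.T @ X / len(X)                    # patch covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:num_filters]]
    return top.T.reshape(num_filters, k, k)   # each row -> one k x k filter

# Toy usage on 10 random 32x32 "frontal face" images.
rng = np.random.default_rng(0)
imgs = [rng.standard_normal((32, 32)) for _ in range(10)]
filters = learn_pcanet_filters(imgs, k=8, num_filters=4)
print(filters.shape)  # (4, 8, 8)
```

Responses of faces to these filters would then serve as the regression target for the second-stage CNN that maps non-frontal inputs to the frontal feature space.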