Monday, March 23, 2015 - 02:30 pm
Swearingen, 3A75
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Candidate: Ping Liu
Advisor: Dr. Yan Tong

Abstract

Characterized by various configurations of facial muscular movements, facial expression is the most natural and powerful means of human communication. A robust and accurate facial expression recognition system, capable of automatically recognizing facial activities in given images or videos, has applications in a wide range of areas. However, developing such an automatic system poses several challenges.

As a standard pattern recognition problem, facial expression recognition (FER) comprises three major training modules: feature learning/extraction, feature selection, and classifier construction. This research aims to improve the first two modules individually and, furthermore, to integrate all three in an iterative, unified way to enhance the final recognition performance.

To improve feature learning, novel log-transformed sparse coding features with a spatial pyramid structure are proposed to characterize nonrigid facial muscular movements in the presence of head movements. To select the features most discriminative for recognizing facial expressions, a kernel-based framework is proposed to choose facial regions and analyze their contributions to different target expressions. Moreover, to unify the three training stages, a model combining deep learning and boosting theory is proposed, capable of learning the hierarchical underlying patterns that describe the given images and of automatically selecting the most important facial regions for facial expression analysis.

The proposed methods have been validated on public databases, including a spontaneous facial expression database collected under realistic conditions with face pose variations and occlusions.
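To give a flavor of the feature-learning idea, the following is a minimal, illustrative sketch (not the dissertation's actual pipeline) of log-transformed sparse coding with spatial pyramid max-pooling: each local descriptor is sparse-coded over a dictionary via ISTA, the codes are log-transformed, and they are max-pooled over pyramid grid cells. All function names, dimensions, and parameters here are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(x, D, lam=0.1, n_iter=50):
    """Encode descriptor x over dictionary D (atoms in columns) with ISTA."""
    L = np.linalg.norm(D.T @ D, 2)  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)                               # gradient step
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0)  # soft threshold
    return a

def log_spm_feature(codes, xy, levels=(1, 2)):
    """Log-transform sparse codes, then max-pool over a spatial pyramid.

    codes: (n, k) sparse codes for n local descriptors
    xy:    (n, 2) descriptor positions normalized to [0, 1)
    """
    logc = np.log1p(np.abs(codes))   # log transform tames heavy-tailed codes
    pooled = []
    for g in levels:                 # each pyramid level is a g x g grid
        cell = np.minimum((xy * g).astype(int), g - 1)
        idx = cell[:, 0] * g + cell[:, 1]
        for c in range(g * g):
            members = logc[idx == c]
            pooled.append(members.max(axis=0) if len(members)
                          else np.zeros(codes.shape[1]))
    return np.concatenate(pooled)

# Toy demo: 40 random 16-d local descriptors, 32-atom dictionary
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)
desc = rng.standard_normal((40, 16))
xy = rng.random((40, 2))
codes = np.array([sparse_code(x, D) for x in desc])
feat = log_spm_feature(codes, xy)
print(feat.shape)  # (1 + 4) pyramid cells x 32 atoms = (160,)
```

The log transform compresses the dynamic range of the sparse codes before pooling, which is one common motivation for such a transformation; the pyramid pooling retains coarse spatial layout, relevant when head movements shift facial regions.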
The experimental results show that the proposed methods yield significant improvements in recognizing facial expressions. Moreover, the significant performance improvement in cross-database validation demonstrates the strong generalizability of the proposed methods.
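The region-selection side of the boosting idea can be sketched as follows. This is an AdaBoost-style toy, not the dissertation's actual deep-learning/boosting model: each facial region contributes one scalar feature, weak learners are decision stumps over those features, and the regions chosen by boosting are treated as the most important ones. All names and the data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def stump_error(f, y, w):
    """Best (error, threshold, polarity) of a 1-d decision stump on feature f."""
    best = (1.0, 0.0, 1)
    for t in np.unique(f):
        for pol in (1, -1):
            pred = np.where(pol * (f - t) > 0, 1, -1)
            err = w[pred != y].sum()
            if err < best[0]:
                best = (err, t, pol)
    return best

def boost_select_regions(X, y, n_rounds=5):
    """AdaBoost over per-region features; returns the chosen region indices.

    X: (n_samples, n_regions), one scalar feature per facial region
    y: labels in {-1, +1} (e.g. target expression present or absent)
    """
    n, m = X.shape
    w = np.full(n, 1.0 / n)            # uniform sample weights to start
    chosen = []
    for _ in range(n_rounds):
        errs = [stump_error(X[:, j], y, w) for j in range(m)]
        j = int(np.argmin([e[0] for e in errs]))   # most informative region
        err, t, pol = errs[j]
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)             # reweight hard samples
        w /= w.sum()
        chosen.append(j)
    return chosen

# Toy data: "region" 2 carries the expression signal, the rest are noise
n = 200
y = rng.choice([-1, 1], size=n)
X = rng.standard_normal((n, 6))
X[:, 2] += 1.5 * y                     # informative region
regions = boost_select_regions(X, y)
print(regions)
```

On this toy data, boosting repeatedly picks the informative region, illustrating how the weak-learner selection itself acts as feature (region) selection, one way boosting and classifier training can be unified in a single loop.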