WSEAS Transactions on Signal Processing
Print ISSN: 1790-5052, E-ISSN: 2224-3488
Volume 10, 2014
Multimodal Emotion Recognition Integrating Affective Speech with Facial Expression
Abstract: In recent years, emotion recognition has attracted extensive interest in signal processing, artificial intelligence, and pattern recognition due to its potential applications in human-computer interaction (HCI). Most previously published work in the field performs emotion recognition using either affective speech or facial expression alone. However, affective speech and facial expression are the two principal channels of human emotion expression, as they are the most natural and efficient means for human beings to communicate their emotions and intentions. In this paper, we develop a multimodal emotion recognition system that integrates affective speech with facial expression and investigate the performance of multimodal emotion recognition with fusion at the feature level and at the decision level. After extracting acoustic features and facial features related to human emotion expression, the popular support vector machine (SVM) classifier is employed to perform emotion classification. Experimental results on the benchmark eNTERFACE’05 emotional database indicate that the proposed multimodal approach integrating affective speech with facial expression achieves clearly superior performance to either single-modality approach, i.e., speech emotion recognition or facial expression recognition alone. The best accuracy, obtained with the product rule at the decision-level fusion, reaches 67.44%.
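The decision-level fusion with the product rule mentioned in the abstract can be sketched as follows: each modality's classifier emits per-class posterior probabilities, which are multiplied class-wise and renormalized before taking the arg-max. This is a minimal illustration, not the authors' implementation; the probability values and the six-class ordering are hypothetical placeholders.

```python
import numpy as np

# Hypothetical per-class posteriors from two independently trained
# classifiers (e.g., SVMs with probability outputs) for one test sample.
# Assumed class order: [anger, disgust, fear, happiness, sadness, surprise].
p_speech = np.array([0.10, 0.05, 0.15, 0.40, 0.20, 0.10])
p_face   = np.array([0.05, 0.10, 0.10, 0.50, 0.15, 0.10])

def product_rule(*posteriors):
    """Decision-level fusion: multiply per-class posteriors across
    modalities, renormalize, and return (fused posteriors, winning class)."""
    fused = np.prod(np.vstack(posteriors), axis=0)
    fused /= fused.sum()
    return fused, int(np.argmax(fused))

fused, label = product_rule(p_speech, p_face)
print(label)  # index 3, i.e., "happiness" under the assumed class order
```

Feature-level fusion would instead concatenate the acoustic and facial feature vectors before training a single SVM; the product rule operates only on the classifiers' outputs, so each modality can be trained and tuned independently.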
Keywords: Multimodal emotion recognition, affective speech, facial expression, support vector machines, speech emotion recognition, facial expression recognition
Pages: 526-537
WSEAS Transactions on Signal Processing, ISSN / E-ISSN: 1790-5052 / 2224-3488, Volume 10, 2014, Art. #54