Open Science Research Excellence

Open Science Index

Commenced in January 2007. Frequency: Monthly. Edition: International. Publications Count: 29212.

Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features
Emotion recognition is a challenging problem that remains open from the perspective of both intelligent systems and psychology. In this paper, both voice features and facial features are used to build an emotion recognition system. Support Vector Machine classifiers are trained on data extracted from video recordings. The results obtained for emotion recognition are reported, and the validity and expressiveness of different emotions are discussed. Classifiers built from facial data only, voice data only, and the combination of both are compared. The need for a better fusion of the information from facial expressions and voice data is argued.
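The comparison described in the abstract can be sketched as follows: train one SVM per modality (voice-only, face-only) and one on the concatenated features, then compare held-out accuracy. This is a minimal illustration, not the authors' implementation; the feature counts are hypothetical and random data stands in for features that would really come from tools such as PRAAT (voice) and a facial-landmark SDK.

```python
# Minimal sketch of the modality comparison: three SVM classifiers trained on
# voice-only, face-only, and combined feature vectors. Random data stands in
# for real acoustic and facial features so the pipeline itself is runnable.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples, n_voice, n_face = 200, 20, 30   # hypothetical feature counts
X_voice = rng.normal(size=(n_samples, n_voice))
X_face = rng.normal(size=(n_samples, n_face))
y = rng.integers(0, 6, size=n_samples)     # six basic emotion labels

def evaluate(X, y):
    """Train an RBF-kernel SVM and report accuracy on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0
    )
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

acc = {
    "voice": evaluate(X_voice, y),
    "face": evaluate(X_face, y),
    # simple feature-level fusion: concatenate the two feature vectors
    "both": evaluate(np.hstack([X_voice, X_face]), y),
}
print(acc)
```

On real features the "both" classifier would be expected to benefit from complementary cues across modalities; with random data here the scores only demonstrate the mechanics of the comparison.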
Digital Object Identifier (DOI):


[1] H. Gunes, B. Schuller, M. Pantic, and R. Cowie, “Emotion representation, analysis and synthesis in continuous space: A survey”, Automatic Face & Gesture Recognition and Workshops, Santa Barbara, 2011, pp. 827-834.
[2] R. Plutchik, “The Nature of Emotions”, American Scientist, vol. 89, July-August, 2001, pp. 344-350.
[3] P. Ekman, W.V. Friesen and P. Ellsworth, “Emotion in the human face: Guidelines for research and an integration of findings”, New York: Pergamon Press, 1972.
[4] A. Metallinou, S. Lee and S. Narayanan, “Audio-Visual Emotion Recognition Using Gaussian Mixture Models for Face and Voice”, Tenth IEEE International Symposium on Multimedia, Berkeley, CA, 2008, pp. 250-257.
[5] D. Chen, D. Jiang, I. Ravyse and H. Sahli, “Audio-Visual Emotion Recognition Based on a DBN Model with Constrained Asynchrony”, Fifth International Conference on Image and Graphics, 2009, pp. 912-916.
[6] V. Kirandziska and N. Ackovska, “Human-robot interaction based on human emotions extracted from speech”, in Proc. of TELFOR, Belgrade, Serbia, 2012, pp. 1381-1384.
[7] V. Kirandziska and N. Ackovska, “Effects and usage of emotion aware robots that perceive human voice”, IADIS Multi Conference Computer Science and Information Systems, Prague, Czech Republic, 2013.
[8] L. Malatesta, J. Murray, A. Raouzaiou, A. Hiolle, L. Cañamero and K. Karpouzis, “Emotion Modeling and Facial Affect Recognition in Human-Computer and Human-Robot Interaction”, Image, Video and Multimedia Systems Lab, National Technical University of Athens, and Adaptive Systems Research Group, School of Computer Science, University of Hertfordshire, 2009.
[9] Y. Miyakoshi and S. Kato, “Facial emotion detection considering partial occlusion of face using Bayesian network”, IEEE Symposium on Computer and Informatics, 2011, pp. 96-101.
[10] T. Vogt, E. André and J. Wagner, “Automatic Recognition of Emotions from Speech: A Review of the Literature and Recommendations for Practical Realization”, Affect and Emotion in HCI, Springer-Verlag Berlin Heidelberg, LNCS 4868, 2008, pp. 75-91.
[11] O. Kwon, K. Chan, J. Hao and T. Lee, “Emotion Recognition by Speech Signals”. Proc. of Eurospeech, Geneva, September, 2003, pp. 125-128.
[12] P. Ekman, W.V. Friesen and J.C. Hager, “Facial Action Coding System Investigator’s Guide”, 2002.
[13] K. Ko and K. Sim, “Development of the Facial Feature Extraction and Emotion Recognition Method based on ASM and Bayesian Network”, FUZZ-IEEE, Korea, 2009.
[14] K. R. Scherer, “Vocal affect expression: A review and a model for future research”, Psychological Bulletin, vol. 99, 1986, pp.143-165.
[15] K.R. Scherer, R. Banse, H.G. Wallbott and T. Goldbeck, “Vocal Cues in Emotion Encoding and Decoding”, Motivation and Emotion, 1991, pp. 123-148.
[16] Luxand Inc., Luxand SDK, online (accessed 2015).
[17] P. Boersma and D. Weenink, “PRAAT: doing phonetics by computer” (Version 5.1.05), 2009 (accessed 2015).
[18] V. Kirandziska and N. Ackovska, “Sound features used in emotion classification”, The 9th International Conference for Informatics and Information Technology, Bitola, Macedonia, 2012, pp. 91-95.
[19] V. Kirandziska and N. Ackovska, “Finding Important Sound Features for Emotion Evaluation Classification”, IEEE Region 8 Conference EuroCon, Zagreb, Croatia, 2013.
[20] O. Martin, I. Kotsia, B. Macq and I. Pitas, “The eNTERFACE'05 Audio-Visual Emotion Database”, 2006.
[21] R Development Core Team, R: A language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria, 2008. URL (accessed 2015).
Vol:13 No:01 2019