Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features
 H. Gunes, B. Schuller, M. Pantic and R. Cowie, “Emotion representation, analysis and synthesis in continuous space: A survey”, IEEE Int. Conf. on Automatic Face & Gesture Recognition and Workshops, Santa Barbara, 2011, pp. 827-834.
 R. Plutchik, “The Nature of Emotions”, American Scientist, vol. 89, July-August 2001, pp. 344-350.
 P. Ekman, W.V. Friesen and P. Ellsworth, “Emotion in the human face: Guidelines for research and an integration of findings”, New York: Pergamon Press, 1972.
 A. Metallinou, S. Lee and S. Narayanan, “Audio-Visual Emotion Recognition Using Gaussian Mixture Models for Face and Voice”, Tenth IEEE International Symposium on Multimedia, Berkeley, CA, 2008, pp. 250-257.
 D. Chen, D. Jiang, I. Ravyse and H. Sahli, “Audio-Visual Emotion Recognition Based on a DBN Model with Constrained Asynchrony”, Fifth International Conference on Image and Graphics, 2009, pp. 912-916.
 V. Kirandziska and N. Ackovska, “Human-robot interaction based on human emotions extracted from speech”, in Proc. of TELFOR, Belgrade, Serbia, 2012, pp. 1381-1384.
 V. Kirandziska and N. Ackovska, “Effects and usage of emotion aware robots that perceive human voice”, IADIS Multi Conference Computer Science and Information Systems, Prague, Czech Republic, 2013.
 L. Malatesta, J. Murray, A. Raouzaiou, A. Hiolle, L. Cañamero and K. Karpouzis, “Emotion Modeling and Facial Affect Recognition in Human-Computer and Human-Robot Interaction”, Image, Video and Multimedia Systems Lab, National Technical University of Athens, and Adaptive Systems Research Group, School of Computer Science, University of Hertfordshire, 2009.
 Y. Miyakoshi and S. Kato, “Facial emotion detection considering partial occlusion of face using Bayesian network”, IEEE Symposium on Computer and Informatics, 2011, pp. 96-101.
 T. Vogt, E. André and J. Wagner, “Automatic Recognition of Emotions from Speech: A Review of the Literature and Recommendations for Practical Realization”, Affect and Emotion in HCI, LNCS 4868, Springer-Verlag, Berlin Heidelberg, 2008, pp. 75-91.
 O. Kwon, K. Chan, J. Hao and T. Lee, “Emotion Recognition by Speech Signals”, Proc. of Eurospeech, Geneva, September 2003, pp. 125-128.
 P. Ekman, W.V. Friesen and J.C. Hager, “Facial Action Coding System Investigator’s Guide”, 2002.
 K. Ko and K. Sim, “Development of the Facial Feature Extraction and Emotion Recognition Method Based on ASM and Bayesian Network”, FUZZ-IEEE, Korea, 2009.
 K. R. Scherer, “Vocal affect expression: A review and a model for future research”, Psychological Bulletin, vol. 99, 1986, pp.143-165.
 K.R. Scherer, R. Banse, H.G. Wallbott and T. Goldbeck, “Vocal Cues in Emotion Encoding and Decoding”, Motivation and Emotion, 1991, pp. 123-148.
 Luxand Inc., “Luxand FaceSDK”, https://www.luxand.com/facesdk/ (accessed 2015).
 P. Boersma and D. Weenink, “PRAAT: doing phonetics by computer” (Version 5.1.05), 2009, http://www.praat.org/ (accessed 2015).
 V. Kirandziska and N. Ackovska, “Sound features used in emotion classification”, The 9th International Conference for Informatics and Information Technology, Bitola, Macedonia, 2012, pp. 91-95.
 V. Kirandziska and N. Ackovska, “Finding Important Sound Features for Emotion Evaluation Classification”, IEEE Region 8 Conference EuroCon, Zagreb, Croatia, 2013.
 M. Kotsia et al., “The eNTERFACE’05 Audio-Visual Emotion Database”, 2006.
 R Development Core Team, “R: A Language and Environment for Statistical Computing”, R Foundation for Statistical Computing, Vienna, Austria, 2008, http://www.R-project.org (accessed 2015).