Speech emotion recognition in emotional feedback for Human-Robot Interaction
2015 (English). In: International Journal of Advanced Research in Artificial Intelligence (IJARAI), ISSN 2165-4050, E-ISSN 2165-4069, Vol. 4, no 2, pp. 20-27. Article in journal (Refereed). Published.
For robots to plan their actions autonomously and interact with people, recognizing human emotions is crucial. For most humans, nonverbal cues such as pitch, loudness, spectrum, and speech rate are efficient carriers of emotion. The acoustic features of a spoken voice likely contain crucial information about the emotional state of the speaker; within this framework, a machine might use such properties of sound to recognize emotions. This work evaluated six different kinds of classifiers for predicting six basic universal emotions from non-verbal features of human speech. The classification techniques used information from audio files extracted from the eNTERFACE05 audio-visual emotion database. The information gain computed by a decision tree was also used to choose the most significant speech features from a set of acoustic features commonly extracted in emotion analysis. The classifiers were evaluated both with the full proposed feature set and with the features selected by the decision tree. With this feature selection, every compared classifier improved in both global accuracy and recall. The best performance was obtained with Support Vector Machine and BayesNet.
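The pipeline described in the abstract (rank acoustic features by information gain, keep the top-scoring ones, then compare classifier accuracy with and without the selection) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's code: the synthetic data stands in for the eNTERFACE05 acoustic features, scikit-learn's mutual-information scorer stands in for the decision tree's information-gain ranking, and the feature count `k=8` is an arbitrary choice.

```python
# Illustrative sketch only: data, scorer, and k are stand-ins, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for acoustic features (pitch, loudness, spectral stats, speech rate, ...)
# across six emotion classes.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

# Rank features by their information shared with the emotion label and keep the top 8
# (a proxy for the decision-tree information-gain selection in the paper).
selector = SelectKBest(mutual_info_classif, k=8)

# Compare an SVM trained on all features vs. on the selected subset.
clf_all = make_pipeline(StandardScaler(), SVC())
clf_sel = make_pipeline(StandardScaler(), selector, SVC())

acc_all = cross_val_score(clf_all, X, y, cv=5).mean()
acc_sel = cross_val_score(clf_sel, X, y, cv=5).mean()
print(f"accuracy, all features: {acc_all:.3f}  selected features: {acc_sel:.3f}")
```

On the paper's real data the selection improved accuracy and recall for every classifier; on arbitrary synthetic data this need not hold, so the sketch only shows the comparison mechanics.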
Keywords: Affective Computing, Detection of Emotional Information, Machine Learning, Speech Emotion Recognition
Research subject: Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:su:diva-122139
DOI: 10.14569/IJARAI.2015.040204
OAI: oai:DiVA.org:su-122139
DiVA: diva2:865111