A Content and Knowledge Management System Supporting Emotion Detection from Speech

Binh Vu, Mikel deVelasco, PM McKevitt, RR Bond, Robin Turkington, Frederick Booth, Maurice Mulvenna, Michael Fuchs, Matthias Hemmje

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Emotion recognition has recently attracted much attention in both industrial and academic research as it can be applied in many areas, from education to national security. In healthcare, emotion detection plays a key role, as emotional state is an indicator of depression and mental illness. Much research in this area focuses on extracting emotion-related features from images of the human face. Nevertheless, there are many other sources that can identify a person's emotion. In the context of MENHIR, an EU-funded R&D project that applies Affective Computing to support people in their mental health, a new emotion-recognition system based on speech is being developed. However, this system requires comprehensive data-management support in order to manage its input data and analysis results. As a result, a cloud-based, high-performance, scalable, and accessible ecosystem for supporting speech-based emotion detection is currently being developed and is discussed here.
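The abstract does not describe the feature pipeline itself. As a rough illustration of what speech-based emotion detection commonly involves, the sketch below extracts MFCC summary features with librosa and fits a generic SVM classifier. The function names, the label set, and the choice of MFCC features and an SVM are assumptions made here for illustration; they are not taken from the MENHIR system described in the paper.

```python
# Illustrative sketch only: derives MFCC summary features from speech
# recordings and classifies emotional state with an off-the-shelf SVM.
# File paths, label set, and model choice are assumptions, not the
# MENHIR pipeline.
import numpy as np
import librosa
from sklearn.svm import SVC

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # hypothetical label set


def extract_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Load a speech recording and return a fixed-length MFCC summary vector."""
    signal, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    # Summarise over time so every clip yields a vector of the same size.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def train(paths: list[str], labels: list[str]) -> SVC:
    """Fit a simple SVM emotion classifier on MFCC summary features."""
    X = np.stack([extract_features(p) for p in paths])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, labels)
    return clf
```

In a data-management ecosystem such as the one outlined in the abstract, feature vectors and classifier outputs like these would be the "analysis results" that need to be stored and made accessible alongside the original recordings.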
Original language: English
Title of host publication: Proceedings of International Workshop on Spoken Dialog Systems Technology 2020
Subtitle of host publication: IWSDS 2020
Place of publication: Madrid, Spain
Publisher: Springer Cham
Number of pages: 10
Publication status: Published - 18 May 2020

