Abstract
Audio presentation is an important modality in virtual storytelling. In this paper we present our work on audio presentation in our intelligent multimodal storytelling system, CONFUCIUS, which automatically generates 3D animation, speech, and non-speech audio from natural language sentences. We provide an overview of the system and describe speech and non-speech audio in virtual storytelling using linguistic approaches. We discuss several issues in auditory display, such as its relation to verb and adjective ontology, concepts and modalities, and media allocation. Finally, we conclude that introducing linguistic knowledge enables more intelligent virtual storytelling, especially in audio presentation.
Original language | English |
---|---|
Title of host publication | Unknown Host Publication |
Editors | E. Brazil |
Place of Publication | Limerick, Ireland |
Publisher | University of Limerick |
Pages | 358-363 |
Number of pages | 6 |
ISBN (Print) | 1-874653-81X |
Publication status | Published (in print/issue) - Jul 2005 |
Event | Proc. of the 11th International Conference on Auditory Display (ICAD-05) - University of Limerick, Limerick, Ireland. Duration: 1 Jul 2005 → … |
Conference
Conference | Proc. of the 11th International Conference on Auditory Display (ICAD-05) |
---|---|
Period | 1/07/05 → … |