The Design of a Smart City Sonification System Using a Conceptual Blending and Musical Framework, Web Audio and Deep Learning Techniques

Stephen Roddy, Brian Bridges

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

This paper describes an auditory display system for smart city data from Dublin City, Ireland. It introduces the different layers of the system and outlines how they operate individually and interact with one another. The system uses a deep learning model, a variational autoencoder, to generate musical content that represents data points. Further data-to-sound mappings are introduced via parameter mapping sonification techniques during sound synthesis and post-processing. Conceptual blending theory and music theory provide the frameworks that govern the design of the system. The paper ends with a discussion of the design process that contextualizes the contribution, highlighting the interdisciplinary nature of the project, which spans data analytics, music composition and human-computer interaction.
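To illustrate the parameter mapping layer the abstract describes, the sketch below shows parameter mapping sonification with the Web Audio API, on which the system is built. The mapping ranges, envelope values and sensor readings are hypothetical illustrations, not the mappings used in the paper's system (which also draws on VAE-generated musical material and post-processing).

```typescript
// Minimal parameter-mapping sonification sketch using the Web Audio API.
// All mapping ranges and data values below are illustrative assumptions,
// not the mappings used in the paper's system.

function sonifyDataPoint(
  ctx: AudioContext,
  value: number, // normalized data value in [0, 1]
  when: number,  // start time in seconds (AudioContext time)
  dur = 0.4      // note duration in seconds
): void {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();

  // Primary mapping: data value -> pitch over a two-octave range (220-880 Hz).
  osc.frequency.setValueAtTime(220 * Math.pow(2, 2 * value), when);

  // Secondary mapping: data value -> loudness, shaped by a short attack/decay envelope.
  gain.gain.setValueAtTime(0, when);
  gain.gain.linearRampToValueAtTime(0.2 + 0.6 * value, when + 0.05);
  gain.gain.linearRampToValueAtTime(0, when + dur);

  osc.connect(gain).connect(ctx.destination);
  osc.start(when);
  osc.stop(when + dur);
}

// Usage: sonify a short, hypothetical stream of normalized smart-city sensor readings.
const ctx = new AudioContext();
const readings = [0.1, 0.35, 0.8, 0.6];
readings.forEach((v, i) => sonifyDataPoint(ctx, v, ctx.currentTime + i * 0.5));
```

The exponential frequency scaling is a common sonification design choice: pitch perception is roughly logarithmic in frequency, so equal steps in the data produce perceptually equal pitch intervals.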
Original language: English
Title of host publication: International Conference on Auditory Display 25-28 June 2021
Pages: 105-110
Number of pages: 6
DOIs
Publication status: Published (in print/issue) - 25 Jun 2021

Keywords

  • sonification
  • auditory display
  • smart city
  • deep learning
  • conceptual blending
  • mapping
  • IoT
