Semi-automated Annotation of Audible Home Activities

Matias Garcia-Constantino, Jessica Beltran-Marquez, Dagoberto Cruz-Sandoval, Irvin Hussein Lopez-Nava, Jesus Favela, Andrew Ennis, Chris Nugent, Joseph Rafferty, Ian Cleland, Jonathan Synnott, Netzahualcoyotl Hernandez-Cruz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)


Data annotation is the process of segmenting and labelling any type of data (images, audio or text). It is an important task for producing reliable datasets that can be used to train machine learning algorithms for Activity Recognition. This paper presents work in progress towards a semi-automated approach for collecting and annotating audio data from simple sounds that are typically produced at home when people perform daily activities, for example, the sound of running water when a tap is turned on. We propose the use of an app called ISSA (Intelligent System for Sound Annotation) running on smart microphones to facilitate the semi-automated annotation of audible activities. When a sound is produced, the app tries to classify the activity and notifies the user, who can correct the classification and/or provide additional information such as the location of the sound. To illustrate the feasibility of the approach, an initial version of ISSA was implemented to train an audio classifier in a one-bedroom apartment.
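The classify-notify-correct loop described in the abstract can be sketched as follows. This is only an illustration of the idea, not the authors' implementation: the toy classifier, the event strings, and the simulated user callback are all assumptions made for the example.

```python
def annotate(events, classifier, confirm):
    """Semi-automated annotation loop in the spirit of ISSA:
    each detected sound event is auto-classified, then a user
    callback either accepts the predicted label or corrects it."""
    annotations = []
    for event in events:
        predicted = classifier(event)        # automatic classification
        label = confirm(event, predicted)    # user keeps or corrects the label
        annotations.append((event, label))
    return annotations


def toy_classifier(event):
    """Stand-in for an audio model; maps a feature string to a label
    (hypothetical labels, chosen for illustration only)."""
    return {"splash": "running_water"}.get(event, "unknown")


def user_confirm(event, predicted):
    """Simulated user who corrects one misclassification and
    accepts everything else."""
    corrections = {"hum": "microwave"}
    return corrections.get(event, predicted)


result = annotate(["splash", "hum"], toy_classifier, user_confirm)
# "splash" is accepted as predicted; "hum" is corrected by the user.
```

In a real deployment, `classifier` would wrap an audio model running on the smart microphone and `confirm` would be backed by the app's notification interface; the loop structure stays the same.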
Original language: English
Title of host publication: 2019 IEEE International Conference on Pervasive Computing and Communications Workshops, PerCom Workshops 2019
Number of pages: 6
ISBN (Electronic): 9781538691519
Publication status: Published (in print/issue) - 1 Mar 2019
Event: 2019 IEEE International Conference on Pervasive Computing and Communications - Kyoto, Japan
Duration: 11 Mar 2019 - 15 Mar 2019


Conference: 2019 IEEE International Conference on Pervasive Computing and Communications
Abbreviated title: PerCom


  • Data Annotation
  • Activity Recognition
  • Data Collection
  • Smart Microphones

