Semi-automated Annotation of Audible Home Activities

Matias Garcia-Constantino, Jessica Beltran-Marquez, Dagoberto Cruz-Sandoval, Irvin Hussein Lopez-Nava, Jesus Favela, Andrew Ennis, Chris Nugent, Joseph Rafferty, Ian Cleland, Jonathan Synnott, Netzahualcoyotl Hernandez-Cruz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Data annotation is the process of segmenting and labelling any type of data (images, audio or text). It is an important task for producing reliable datasets that can be used to train machine learning algorithms for Activity Recognition. This paper presents work in progress towards a semi-automated approach for collecting and annotating audio data from simple sounds that are typically produced at home when people perform daily activities, for example the sound of running water when a tap is turned on. We propose the use of an app called ISSA (Intelligent System for Sound Annotation), running on smart microphones, to facilitate the semi-automated annotation of audible activities. When a sound is produced, the app tries to classify the associated activity and notifies the user, who can correct the classification and/or provide additional information such as the location of the sound. To illustrate the feasibility of the approach, an initial version of ISSA was implemented and used to train an audio classifier in a one-bedroom apartment.
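The workflow described in the abstract (detect a sound event, propose an activity label, notify the user, let the user confirm or correct the label and optionally add a location, then keep the confirmed example for training) can be illustrated with a short sketch. The following Python code is a hypothetical, minimal illustration written by the editor under stated assumptions, not the authors' ISSA implementation: the label set, the classify() placeholder and the console-based user prompt are all assumptions standing in for the real classifier and notification mechanism.

import json
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical activity labels for a one-bedroom apartment.
LABELS = ["running_water", "door_closing", "microwave", "unknown"]

@dataclass
class AnnotatedClip:
    features: List[float]           # e.g. a feature vector extracted from the sound event
    label: str                      # activity label confirmed by the user
    location: str = "unspecified"   # optional room information supplied by the user

def classify(features: List[float]) -> str:
    # Placeholder classifier: a real system would use a model trained on audio features.
    return LABELS[int(sum(features)) % len(LABELS)]

def ask_user(proposed: str) -> Tuple[str, str]:
    # Stands in for the notification step: the user accepts or corrects the proposed label.
    answer = input("Detected '%s'. Press Enter to accept or type a correction: " % proposed).strip()
    location = input("Location (optional): ").strip() or "unspecified"
    return (answer or proposed, location)

def annotation_loop(feature_stream) -> List[AnnotatedClip]:
    # Core semi-automated loop: classify, ask the user, store the confirmed example.
    dataset: List[AnnotatedClip] = []
    for features in feature_stream:
        proposed = classify(features)
        label, location = ask_user(proposed)
        dataset.append(AnnotatedClip(list(features), label, location))
    return dataset

if __name__ == "__main__":
    # Two fake feature vectors stand in for detected sound events.
    clips = annotation_loop([[0.2, 1.3, 0.5], [2.0, 0.1, 0.4]])
    print(json.dumps([vars(c) for c in clips], indent=2))

In a deployed system the classify() placeholder would presumably be replaced by a classifier trained on features captured by the smart microphones, and the prompt might arrive as a notification on the resident's phone rather than a console input; the confirmed clips would then extend the training set used to retrain the classifier.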
Language: English
Title of host publication: 2019 IEEE International Conference on Pervasive Computing and Communications Workshops, PerCom Workshops 2019
Pages: 40-45
Number of pages: 6
ISBN (Electronic): 9781538691519
DOI: 10.1109/PERCOMW.2019.8730729
Publication status: Published - 1 Mar 2019
Event: 2019 IEEE International Conference on Pervasive Computing and Communications - Kyoto, Japan
Duration: 11 Mar 2019 - 15 Mar 2019

Conference

Conference: 2019 IEEE International Conference on Pervasive Computing and Communications
Abbreviated title: PerCom
Country: Japan
City: Kyoto
Period: 11/03/19 - 15/03/19

Keywords

  • Data Annotation
  • Activity Recognition
  • Data Collection
  • Smart Microphones

Cite this

Garcia-Constantino, M., Beltran-Marquez, J., Cruz-Sandoval, D., Lopez-Nava, I. H., Favela, J., Ennis, A., ... Hernandez-Cruz, N. (2019). Semi-automated Annotation of Audible Home Activities. In 2019 IEEE International Conference on Pervasive Computing and Communications Workshops, PerCom Workshops 2019 (pp. 40-45). [8730729] https://doi.org/10.1109/PERCOMW.2019.8730729