A mechanism for nominating video clips to provide assistance for instrumental activities of daily living

Research output: Chapter in Book/Report/Conference proceeding › Chapter

3 Citations (Scopus)

Abstract

Current assistive smart homes have adopted a relatively rigid approach to modeling activities. The use of these activity models has introduced factors which block adoption of smart home technology. To address this, goal-driven smart homes have been proposed; these are based upon more flexible activity structures. However, this goal-driven approach does have a disadvantage, where flexibility in activity modeling can lead to difficulty in providing illustrative guidance. To address this, a video analysis and nomination mechanism is required to provide suitable assistive clips for a given goal. This paper introduces a novel mechanism for nominating a suitable video clip given a pool of automatically generated metadata. This mechanism was then evaluated using a voice-based assistant application and a tool emulating assistance requests by a goal-driven smart home. The initial evaluation produced promising results.
Language: English
Title of host publication: Ambient Assisted Living and Daily Activities
Pages: 55-66
Publication status: Published - 11 Dec 2015

Keywords

  • Annotation
  • Automated Speech Recognition
  • Assistive Living
  • Guidance
  • Parsing
  • Ontology
  • Semantic Web
  • Smart Environments
  • Video
  • Vocal interaction

Cite this

@inbook{620d27965ad34e979573576b29f3f309,
title = "A mechanism for nominating video clips to provide assistance for instrumental activities of daily living",
abstract = "Current assistive smart homes have adopted a relatively rigid approach to modeling activities. The use of these activity models has introduced factors which block adoption of smart home technology. To address this, goal-driven smart homes have been proposed; these are based upon more flexible activity structures. However, this goal-driven approach does have a disadvantage, where flexibility in activity modeling can lead to difficulty in providing illustrative guidance. To address this, a video analysis and nomination mechanism is required to provide suitable assistive clips for a given goal. This paper introduces a novel mechanism for nominating a suitable video clip given a pool of automatically generated metadata. This mechanism was then evaluated using a voice-based assistant application and a tool emulating assistance requests by a goal-driven smart home. The initial evaluation produced promising results.",
keywords = "Annotation, Automated Speech Recognition, Assistive Living, Guidance, Parsing, Ontology, Semantic Web, Smart Environments, Video, Vocal interaction",
author = "Joseph Rafferty and Chris Nugent and Jun Liu and Liming Chen",
year = "2015",
month = "12",
day = "11",
language = "English",
isbn = "978-3-319-26409-7",
pages = "55--66",
booktitle = "Ambient Assisted Living and Daily Activities",

}

A mechanism for nominating video clips to provide assistance for instrumental activities of daily living. / Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming.

Ambient Assisted Living and Daily Activities. 2015. p. 55-66.


TY - CHAP

T1 - A mechanism for nominating video clips to provide assistance for instrumental activities of daily living

AU - Rafferty, Joseph

AU - Nugent, Chris

AU - Liu, Jun

AU - Chen, Liming

PY - 2015/12/11

Y1 - 2015/12/11

N2 - Current assistive smart homes have adopted a relatively rigid approach to modeling activities. The use of these activity models has introduced factors which block adoption of smart home technology. To address this, goal-driven smart homes have been proposed; these are based upon more flexible activity structures. However, this goal-driven approach does have a disadvantage, where flexibility in activity modeling can lead to difficulty in providing illustrative guidance. To address this, a video analysis and nomination mechanism is required to provide suitable assistive clips for a given goal. This paper introduces a novel mechanism for nominating a suitable video clip given a pool of automatically generated metadata. This mechanism was then evaluated using a voice-based assistant application and a tool emulating assistance requests by a goal-driven smart home. The initial evaluation produced promising results.

AB - Current assistive smart homes have adopted a relatively rigid approach to modeling activities. The use of these activity models has introduced factors which block adoption of smart home technology. To address this, goal-driven smart homes have been proposed; these are based upon more flexible activity structures. However, this goal-driven approach does have a disadvantage, where flexibility in activity modeling can lead to difficulty in providing illustrative guidance. To address this, a video analysis and nomination mechanism is required to provide suitable assistive clips for a given goal. This paper introduces a novel mechanism for nominating a suitable video clip given a pool of automatically generated metadata. This mechanism was then evaluated using a voice-based assistant application and a tool emulating assistance requests by a goal-driven smart home. The initial evaluation produced promising results.

KW - Annotation

KW - Automated Speech Recognition

KW - Assistive Living

KW - Guidance

KW - Parsing

KW - Ontology

KW - Semantic Web

KW - Smart Environments

KW - Video

KW - Vocal interaction

M3 - Chapter

SN - 978-3-319-26409-7

SP - 55

EP - 66

BT - Ambient Assisted Living and Daily Activities

ER -