Abstract
Existing activity-recognition-based assistive living solutions have adopted a relatively rigid approach to modelling activities. To address the deficiencies of such approaches, a goal-oriented solution has been proposed that offers a flexible method of modelling activities. This approach has a disadvantage, however: the way in which a goal is performed may vary, requiring differing video clips to be associated with these variations. Addressing this shortcoming requires rich metadata to facilitate the automatic sequencing and matching of appropriate video clips. This paper introduces a mechanism for automatically generating rich metadata that details the actions depicted in video files, thereby facilitating matching and sequencing. The mechanism was evaluated with 14 video files, producing annotations with a high degree of accuracy.
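The abstract, together with the keywords below, suggests a pipeline in which a clip's narration is transcribed via automated speech recognition, the transcript is parsed against an ontology of actions, and the resulting annotations are used to match clips to goal variations. The following is a minimal sketch of that idea, assuming ASR transcripts are already available; the action vocabulary, names, and scoring rule are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: annotate video clips from ASR transcripts and match
# them to a goal's required actions. The vocabulary and scoring rule are
# assumptions for illustration only.

from dataclasses import dataclass, field

# Toy "ontology" mapping transcript terms to action concepts.
ACTION_ONTOLOGY = {
    "fill": "FillKettle",
    "boil": "BoilWater",
    "pour": "PourWater",
    "stir": "StirCup",
}

@dataclass
class ClipMetadata:
    filename: str
    actions: list = field(default_factory=list)  # actions in depicted order

def annotate_clip(filename: str, transcript: str) -> ClipMetadata:
    """Scan an ASR transcript for known action terms, preserving order."""
    meta = ClipMetadata(filename)
    for word in transcript.lower().split():
        token = word.strip(".,!?")
        if token in ACTION_ONTOLOGY:
            meta.actions.append(ACTION_ONTOLOGY[token])
    return meta

def match_score(clip: ClipMetadata, required_actions: list) -> float:
    """Fraction of a goal's required actions that the clip depicts."""
    if not required_actions:
        return 0.0
    hits = sum(1 for a in required_actions if a in clip.actions)
    return hits / len(required_actions)

if __name__ == "__main__":
    clip = annotate_clip(
        "make_tea_step1.mp4",
        "Fill the kettle with water, then boil it.",
    )
    print(clip.actions)  # ['FillKettle', 'BoilWater']
    print(match_score(clip, ["FillKettle", "BoilWater", "PourWater"]))  # ~0.67
```

In this sketch, a goal variation would simply be a different required-action list, so the same annotated clip library can be re-sequenced per variation by ranking clips on their match scores.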
| Original language | English |
| --- | --- |
| Title of host publication | Ambient Assisted Living and Daily Activities |
| Publisher | Springer |
| Pages | 123-130 |
| Volume | 8868 |
| ISBN (Print) | 978-3-319-13104-7 |
| DOIs | |
| Publication status | Published (in print/issue) - 3 Dec 2014 |
Keywords
- Annotation
- Automated Speech Recognition
- Parsing
- Ontology
- Assistive Living
- Smart Environments
- Video
- Guidance