TY - JOUR
T1 - Automatic Metadata Generation Through Analysis of Narration Within Instructional Videos
AU - Rafferty, Joseph
AU - Nugent, Christopher
AU - Liu, Jun
AU - Chen, Liming
PY - 2015/8/8
Y1 - 2015/8/8
AB - Current activity recognition-based assistive living solutions have adopted relatively rigid models of inhabitant activities, and these solutions exhibit a number of deficiencies associated with the use of such models. To address this, a goal-oriented solution has been proposed. In a goal-oriented solution, goal models offer a method of flexibly modelling inhabitant activity. The flexibility of these goal models can dynamically produce a large number of varying action plans that may be used to guide inhabitants. In order to provide illustrative, video-based instruction for these numerous action plans, a number of video clips would need to be associated with each variation. To address this, rich metadata may be used to automatically match appropriate video clips from a video repository to each specific, dynamically generated activity plan. This study introduces a mechanism for automatically generating suitable rich metadata representing actions depicted within video clips to facilitate such video matching. The performance of this mechanism was evaluated using eighteen video files; during this evaluation, metadata was automatically generated with a high level of accuracy.
KW - Assistive living
KW - Automated speech recognition
KW - Metadata
KW - Ontology
KW - Parsing
KW - Smart environments
KW - Video
UR - https://pure.ulster.ac.uk/en/publications/automatic-metadata-generation-through-analysis-of-narration-withi-3
U2 - 10.1007/s10916-015-0295-2
DO - 10.1007/s10916-015-0295-2
M3 - Article
SN - 1573-689X
VL - 39
JO - Journal of Medical Systems
JF - Journal of Medical Systems
IS - 9
ER -