Indoor Localisation Through Object Detection on Real-Time Video Implementing a Single Wearable Camera

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents an accurate indoor localisation approach to provide context-aware support for Activities of Daily Living (ADLs). It explores the use of contemporary wearable technology (Google Glass) to provide a unique first-person view of the occupant's environment. Machine vision techniques are then employed to determine the occupant's location via detection of environmental objects within their field of view. Specifically, the video footage is streamed to a server where object recognition is performed using the Oriented Features from Accelerated Segment Test and Rotated Binary Robust Independent Elementary Features (ORB) algorithm, with a K-Nearest Neighbour matcher used to match the saved keypoints of the objects to the scene. To validate the approach, an experimental set-up was devised consisting of three ADL routines, each containing at least ten activities, ranging from drinking water to making a meal. Ground truth was obtained from manually annotated video data, and the approach was subsequently benchmarked against a common method of indoor localisation that employs dense sensor placement. The paper presents the results from these experiments, which highlight the feasibility of using off-the-shelf machine vision algorithms to determine indoor location from wearable video-based sensor data. The results show a recall, precision, and F-measure of 0.82, 0.96, and 0.88, respectively. The method also offers secondary benefits, such as first-person tracking within the environment and the ability to determine occupant location without requiring direct interaction with sensors.
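For illustration, the following is a minimal sketch of the kind of ORB-plus-k-NN matching pipeline the abstract describes, written against OpenCV's Python bindings. The file names, match threshold, ratio-test value, and object-to-room mapping are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: ORB keypoint matching with a k-NN matcher (OpenCV).
# All file names, thresholds, and the object-to-room mapping below are
# illustrative assumptions, not values from the paper.
import cv2

orb = cv2.ORB_create(nfeatures=500)        # ORB detector/descriptor
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)  # Hamming distance suits ORB's binary descriptors

def object_in_scene(object_gray, scene_gray, ratio=0.75, min_matches=10):
    """Return True if enough saved object keypoints match the scene frame."""
    _, des_obj = orb.detectAndCompute(object_gray, None)
    _, des_scene = orb.detectAndCompute(scene_gray, None)
    if des_obj is None or des_scene is None:
        return False
    # k=2 nearest neighbours per descriptor enables Lowe's ratio test,
    # which discards ambiguous matches.
    pairs = matcher.knnMatch(des_obj, des_scene, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_matches

# Hypothetical usage: each saved reference object implies a room, so a
# positive match on a streamed frame yields the occupant's location.
object_rooms = {"kettle.png": "kitchen", "toothbrush.png": "bathroom"}
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
for path, room in object_rooms.items():
    ref = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if ref is not None and object_in_scene(ref, frame):
        print(f"Matched {path}: occupant likely in the {room}")
```

For reference, the reported metrics are internally consistent: F = 2PR/(P + R) = 2(0.96)(0.82)/(0.96 + 0.82) ≈ 0.88.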
Language: English
Title of host publication: Unknown Host Publication
Pages: 1231-1236
Number of pages: 6
Volume: 57
DOIs: https://doi.org/10.1007/978-3-319-32703-7_237
Publication status: Published - 17 Sep 2016
Event: 14th Mediterranean Conference on Medical and Biological Engineering and Computing, MEDICON 2016 - Paphos, Cyprus
Duration: 17 Sep 2016 → …

Conference

Conference: 14th Mediterranean Conference on Medical and Biological Engineering and Computing, MEDICON 2016
Period: 17/09/16 → …

Keywords

  • Ageing in place
  • Ambient Assisted Living
  • Context-aware services
  • Machine vision
  • Wearable computing

Cite this

@inproceedings{5f66d302ff9e44f791a8484e972960dc,
title = "Indoor Localisation Through Object Detection on Real-Time Video Implementing a Single Wearable Camera",
abstract = "This paper presents an accurate indoor localisation approach to provide context aware support for Activities of Daily Living. This paper explores the use of contemporary wearable technology (Google Glass) to facilitate a unique first-person view of the occupants environment. Machine vision techniques are then employed to determine an occupant’s location via environmental object detection within their field of view. Specifically, the video footage is streamed to a server where object recognition is performed using the Oriented Features from Accelerated Segment Test and Rotated Binary Robust Independent Elementary Features algorithm with a K-Nearest Neighbour matcher to match the saved keypoints of the objects to the scene. To validate the approach, an experimental set-up consisting of three ADL routines, each containing at least ten activities, ranging from drinking water to making a meal were considered. Ground truth was obtained from manually annotated video data and the approach was subsequently benchmarked against a common method of indoor localisation that employs dense sensor placement. The paper presents the results from these experiments, which highlight the feasibility of using off-the-shelf machine vision algorithms to determine indoor location based on data input from wearable video-based sensor technology. The results show a recall, precision, and F-measure of 0.82, 0.96, and 0.88 respectively. This method provides additional secondary benefits such as first person tracking within the environment and lack of required sensor interaction to determine occupant location.",
keywords = "Ageing in place, Ambient Assisted Living, Context-aware services, Machine vision, Wearable computing",
author = "Colin Shewell and Chris Nugent and Mark Donnelly and Haiying Wang",
year = "2016",
month = "9",
day = "17",
doi = "10.1007/978-3-319-32703-7_237",
language = "English",
volume = "57",
pages = "1231--1236",
booktitle = "Unknown Host Publication",

}

Shewell, C, Nugent, C, Donnelly, M & Wang, H 2016, Indoor Localisation Through Object Detection on Real-Time Video Implementing a Single Wearable Camera. in Unknown Host Publication. vol. 57, pp. 1231-1236, 14th Mediterranean Conference on Medical and Biological Engineering and Computing, MEDICON 2016, 17/09/16. https://doi.org/10.1007/978-3-319-32703-7_237
