Abstract
The use of Artificial Intelligence (AI) in healthcare, particularly in recognising anomalous behaviour during Activities of Daily Living (ADLs), is useful for supporting independent living. Transparency and interpretability of ADL recognition can play a vital role in decision-making processes, particularly in the healthcare sector. This work aims to add interpretability to AI-based prediction of ADLs through the use of Local Interpretable Model-agnostic Explanations (LIME). In this study, 5,125 low-resolution thermal images of ADLs, captured in a laboratory environment that mimics a smart home, were clustered using data-mining software and analysed using AI algorithms. Results indicated that LIME produced saliency maps of ADLs in diverse scenarios such as ‘Making Tea’ and ‘Sitting Down’ to consume it. Further work will seek to fine-tune the models for better accuracy.
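The paper's models and data are not included in this record. As a rough illustration of the LIME technique the abstract names, the sketch below implements the core idea from scratch: split a low-resolution image into patches ("superpixels"), randomly mask subsets of them, query the black-box predictor on each perturbed image, and fit a weighted linear surrogate whose coefficients serve as per-patch saliency scores. All names here (`lime_saliency`, `predict_fn`, the grid size) are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def lime_saliency(image, predict_fn, grid=4, n_samples=200, seed=0):
    """Minimal LIME-style saliency map for a single low-resolution image.

    image      : 2-D numpy array (e.g. one thermal frame)
    predict_fn : black-box scalar prediction, e.g. class probability
    grid       : the image is split into grid x grid patches
    Returns a (grid, grid) array of per-patch saliency scores.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ph, pw = h // grid, w // grid
    n_patches = grid * grid

    # Binary interpretable representation: 1 = patch kept, 0 = patch blanked
    masks = rng.integers(0, 2, size=(n_samples, n_patches))
    masks[0] = 1  # always include the unperturbed image

    preds = np.empty(n_samples)
    for i, m in enumerate(masks):
        perturbed = image.copy()
        for p in range(n_patches):
            if m[p] == 0:
                r, c = divmod(p, grid)
                perturbed[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = 0.0
        preds[i] = predict_fn(perturbed)

    # Locality weights: samples with fewer masked patches count more
    weights = np.exp(-(n_patches - masks.sum(axis=1)) / n_patches)

    # Weighted least squares for the linear surrogate (with intercept)
    X = np.hstack([masks, np.ones((n_samples, 1))])
    W = np.diag(weights)
    coef, *_ = np.linalg.lstsq(W @ X, W @ preds, rcond=None)
    return coef[:n_patches].reshape(grid, grid)
```

On a toy "thermal" frame where only the top-left quadrant drives the prediction, the top-left patch receives the largest saliency score, which is the behaviour a LIME saliency map is meant to expose.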
Original language | English |
---|---|
Title of host publication | Proceedings of the 35th Irish Systems and Signals Conference, ISSC 2024 |
Editors | Huiru Zheng, Ian Cleland, Adrian Moore, Haiying Wang, David Glass, Joe Rafferty, Raymond Bond, Jonathan Wallace |
Publisher | IEEE |
Pages | 1-6 |
Number of pages | 6 |
Volume | 10 |
ISBN (Electronic) | 979-8-3503-5298-6 |
ISBN (Print) | 979-8-3503-5299-3 |
DOIs | |
Publication status | Published online - 29 Jul 2024 |
Publication series
Name | Proceedings of the 35th Irish Systems and Signals Conference, ISSC 2024 |
---|---|
Bibliographical note
Publisher Copyright: © 2024 IEEE.
Keywords
- Training
- Analytical models
- Accuracy
- Software
- Medical services
- Predictive models
- Activities of daily living
- LIME
- Explainable AI
- Healthcare
- Thermal sensing