Abstract
Machine Learning (ML) models are increasingly used in systems that involve physical interaction with humans or in decision-making that impacts human health and safety. Ensuring that these systems are safe and reliable is an important topic of current AI research. For many ML models it is unclear how a prediction (output) is arrived at from the provided features (input). Critical systems cannot blindly trust the predictions of such "black box" models; they need additional reassurance through insight into the model's reasoning. A range of methods within the field of Explainable AI (XAI) exist to make the reasoning of black-box ML models more understandable and transparent. The explanations provided by XAI methods can be evaluated in a number of (competing) ways. In this paper, we investigate the trade-off between selected metrics for an XAI method called UnRAvEL, which is similar to the popular LIME approach. Our results show that by weighting the terms within the acquisition function used in UnRAvEL, different trade-offs can be achieved.
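To illustrate the idea of weighting terms within an acquisition function, the sketch below is a minimal, hypothetical example rather than the UnRAvEL implementation: it assumes a UCB-style acquisition over a Gaussian Process surrogate, where a weight `beta` (an illustrative parameter, not taken from the paper) shifts the balance between the predicted mean and the model's uncertainty when choosing the next perturbation to query.

```python
# Minimal sketch (not the authors' code): UCB-style acquisition over a
# Gaussian Process surrogate, with a hypothetical weight `beta` that trades
# off exploitation (predicted mean) against exploration (uncertainty).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def weighted_acquisition(gp, candidates, beta=1.0):
    """Score candidate perturbations; a larger beta favours uncertain regions."""
    mean, std = gp.predict(candidates, return_std=True)
    return mean + beta * std  # the weighting term controls the trade-off

# Toy usage: fit a GP surrogate to black-box outputs at sampled perturbations,
# then pick the next perturbation to query by maximising the acquisition.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))              # perturbed instances (3 features)
y = X @ np.array([0.5, -1.0, 2.0])        # stand-in for black-box predictions
gp = GaussianProcessRegressor().fit(X, y)

candidates = rng.normal(size=(200, 3))
scores = weighted_acquisition(gp, candidates, beta=2.0)
next_query = candidates[np.argmax(scores)]
```

Changing `beta` in this sketch plays the same conceptual role as re-weighting the acquisition terms discussed in the abstract: different weights lead to different sampling behaviour and hence to different trade-offs between evaluation metrics.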
| Original language | English |
|---|---|
| Pages (from-to) | 36-42 |
| Number of pages | 7 |
| Journal | ACM SIGAda Ada Letters |
| Volume | 43 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published (in print/issue) - 7 Jun 2024 |
Keywords
- Machine Learning
- Explainable AI
- Gaussian Process
- LIME
- UnRAvEL