With the emergence of autonomous vehicles, chatbots and deep learning, many digital technologies now have a degree of artificial intelligence (AI). As such, the design of these digital technologies needs to consider the user experience (UX) design of human-AI interactions. This talk recognises that there are many opportunities for the fields of AI and UX to weave together. This can work bilaterally: UX and human-centred design can improve the adoption, acceptance and usability of AI systems, and in reverse, AI can provide new insights into the user's experience or even automatically design user interfaces (computational aesthetics). More concrete examples include: 1) data mining can be used to quantitatively analyse usability and UX data, and to discover new UX metrics from data collected through user logs, telemetry, digital phenotyping, etc.; 2) machine learning can be used to automatically infer the user's experience in real time using affective computing and psychophysiological sensing (e.g. facial expression analysis, galvanic skin response (GSR), and eye tracking); 3) UX design can be used to optimise the presentation of algorithmic outputs (decisions) for transparent, fair and accountable decision making; 4) democratised machine learning tools that are accessible to non-experts could be improved for a better UX; 5) there is a need to design for trust, or at least to calibrate trust appropriately, between humans and AI systems; 6) data visualisations and interactive tools are needed to provide traceability and explicability and to support white-box AI systems; 7) there is a need to visualise and communicate metadata such as the uncertainty of an AI algorithm; and 8) new UX tools are needed for designing human-AI interactions, for example, conversational chatbot design tools, which in turn require a new set of usability engineering heuristics.
This talk will touch on a number of studies I have been involved in, including eye tracking experiments and consideration of common cognitive biases in human-AI interaction, in particular automation bias, whereby humans over-trust or over-rely on machine-based decisions, even when the AI system is clearly wrong. We will also discuss how automation bias can be mitigated using uncertainty indexes and by presenting confounding decisions, allowing the human to take full responsibility in conflict resolution scenarios. We will also discuss the ethics of human-AI interactions and what can happen when they go wrong. There are many opportunities for human-centred AI, which will be explored in this talk. Many of the studies presented will include examples from the field of digital health and medicine, given that most of my research has been conducted in this application area.
17 Oct 2019
RoCHI 2019: International Conference on Human-Computer Interaction