This talk will present research activities in ChatPal, an EU NPA-funded project which has recently released a co-designed chatbot to support the mental wellbeing of people living in rural areas. The chatbot coaches people using psycho-education and PERMA, a positive psychology framework developed by Martin Seligman to engender Positive emotions, Engagement in work and life activities, Relationships, Meaning in life, and Accomplishment. The presentation will focus on the challenges of creating mental health chatbots, including: 1) the general ethical issues of human-computer conversation (anthropomorphism/personification of technology); 2) coping with the limitations of AI/NLP; and 3) the challenges of auditing and validating mental health chatbots (e.g. anticipating edge cases or unintended consequences). The talk will discuss the co-creation and living lab activities that we have carried out to responsibly prototype a chatbot. These activities and methods were employed to balance: 1) what users say they need; 2) which chatbots and features mental health professionals say they would (and would not) support; and 3) what AI chatbots can do well in a high-risk context. In this project, we have carried out a number of activities to ensure responsible design thinking for the chatbot: 1) needs analysis workshops with end-users; 2) surveys to understand the perceptions, opinions and attitudes of mental health professionals towards chatbots; 3) the use of online tools for independent voting and ranking of chatbot requirements to specify the dialogues and design features of the chatbot; and 4) the use of online tools to facilitate multidisciplinary dialogue design activities, bringing together designers and healthcare and computing professionals around living documents, which resulted in robust scripted dialogues that were later integrated into the chatbot.
The main reflection in the talk is that we need robust design activities to responsibly create chatbots, especially if the chatbot is being used by vulnerable people or to support people living with mental ill health. There are many risks: for example, a chatbot can generate a multitude of permutations of dialogue, and all of these dialogues cannot be pre-assessed for clinical assurance and user experience, hence there is a risk of unintended harmful human-computer conversations. In this talk I will reflect on the need for ‘stakeholder-centred design’, since the experience and adoption of mental health chatbots will be a combination of what users need, constrained by what AI can offer and by the kinds of chatbots that mental health professionals will support. This method involves a systems thinking approach, which is required in complex ecosystems such as mental health, where the success of digital interventions will likely be dictated by the ‘golden intersection’ of what users want, what professionals endorse and what AI does well.
2 Oct 2020
Mindful of AI: Language, Technology and Mental Health