Abstract
Artificial intelligence (AI) is widely used to support decision-making and interventions, arguably saving time, reducing bias and improving decision accuracy. The profession must urgently appraise the potential and pitfalls of this rapidly developing technology. This challenge was addressed at the 2024 European Social Work Research Association (ESWRA) conference in Vilnius, at which the Evidence into Practice Special Interest Group (EiPSIG) focused on three contemporary AI developments: (1) large language models (LLMs); (2) AI- and robot-supported interventions; and (3) predictive risk modelling (PRM). This short ‘Reflections, Exchange and Dialogue’ paper outlines the presentations, the issues discussed, and further reflections. Although LLMs have an impressive ability to manipulate language, essential case detail and analysis remain human tasks. Robot technologies are already helping people in the domains of disability and elder care, and AI ‘language robots’ are being used favourably in low-risk mental health contexts, providing a non-judgmental (non-human) and ever-available ‘listener’ and ‘advisor’. PRMs attract conflicting views: the ‘black box’ of AI may ‘hide’ systemic bias, although proponents argue that humans are also biased, so perfection is not the appropriate comparator. Our conclusion: a priority is to examine, shape and regulate the interface between humans and computer algorithms.
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-6 |
| Number of pages | 6 |
| Journal | European Social Work Research |
| Early online date | 3 Mar 2025 |
| DOIs | |
| Publication status | Published online - 3 Mar 2025 |
Keywords
- artificial intelligence
- evidence base
- large language models
- predictive risk modelling
- robot
- social work
- evidence-based social work