Abstract
Artificial intelligence (AI) is widely used to support decision making and interventions, arguably saving time, reducing bias and improving decision accuracy. The social work profession must urgently appraise the potential and pitfalls of this rapidly developing technology. This challenge was addressed at the 2024 European Social Work Research Association conference in Vilnius, at which the Evidence into Practice Special Interest Group focused on three contemporary AI developments: (1) large language models (LLMs); (2) AI- and robot-supported interventions; and (3) predictive risk modelling (PRM). This short ‘Reflection, exchange and dialogue’ article outlines the presentations, the issues discussed and further reflections. Although LLMs have an impressive ability to manipulate language, essential case detail and analysis remain human tasks. Robot technologies are already helping people in disability and eldercare settings, and AI ‘language robots’ are being used favourably in low-risk mental health contexts, providing a non-judgemental (non-human) and ever-available ‘listener’ and ‘advisor’. PRM provokes conflicting views: the ‘black box’ of AI may ‘hide’ systemic bias, though proponents argue that humans are biased too, so perfection is not an appropriate comparator. We conclude that a priority is to examine, shape and regulate the interface between humans and computer algorithms.
| Original language | English |
|---|---|
| Pages (from-to) | 238-243 |
| Number of pages | 6 |
| Journal | European Social Work Research |
| Volume | 3 |
| Issue number | 2 |
| Early online date | 3 Mar 2025 |
| DOIs | |
| Publication status | Published (in print/issue) - 31 Jul 2025 |
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Keywords
- artificial intelligence
- evidence base
- large language models
- predictive risk modelling
- robot
- social work
- evidence-based social work