Abstract
In this chapter, we reflect on the deployment of Artificial Intelligence (AI) as a pedagogical and educational instrument, and on the challenges of ensuring transparency and fairness for staff and students. We describe a thought experiment: ``simulation of AI in education as a massively multiplayer social online game'' (AIEd-MMOG). Here, all actors (humans, institutions, AI agents and algorithms) are required to conform to the definition of a player. Models of player behaviour that `understand' the game space provide an application programming interface through which typical algorithms, e.g. deep learning neural networks or reinforcement learning agents, interact with humans and with the game space. The definition of `player' is a role designed to maximise protection and benefit for human players during interaction with AI. Benefit maximisation is formally defined as a Rawlsian justice game, played within the AIEd-MMOG to facilitate transparency of, and trust in, the algorithms involved, without requiring algorithm-specific technical solutions to, e.g., `peek inside the black box'. Our thought experiment for an AIEd-MMOG simulation suggests solutions to the well-known challenges of explainable AI and distributive justice.
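As a rough illustration of the two ideas in the abstract, the sketch below shows (a) a common `Player` role that human and algorithmic actors alike could implement inside the simulated game, and (b) a Rawlsian (maximin) rule for choosing among candidate actions by the benefit to the worst-off player. All names, interfaces and the toy benefit numbers are hypothetical assumptions for illustration, not taken from the chapter.

```python
from typing import Dict, Protocol


class Player(Protocol):
    """Hypothetical common role for all AIEd-MMOG actors (humans, institutions, AI agents)."""
    player_id: str

    def observe(self, game_state: Dict) -> Dict:
        """Return the (limited) view of the game space this player is allowed to see."""
        ...

    def act(self, observation: Dict) -> str:
        """Choose an action; for AI agents this would wrap e.g. a neural net or RL policy."""
        ...


def rawlsian_value(benefits: Dict[str, float]) -> float:
    """Rawlsian (maximin) criterion: judge an outcome by the benefit to its worst-off player."""
    return min(benefits.values())


def choose_action(candidate_outcomes: Dict[str, Dict[str, float]]) -> str:
    """Pick the action whose projected per-player benefits maximise the minimum benefit."""
    return max(candidate_outcomes, key=lambda a: rawlsian_value(candidate_outcomes[a]))


# Toy usage: projected benefits per player for two candidate tutoring actions.
outcomes = {
    "adaptive_hint": {"student_A": 0.6, "student_B": 0.5, "tutor_agent": 0.7},
    "skip_to_quiz":  {"student_A": 0.9, "student_B": 0.2, "tutor_agent": 0.8},
}
print(choose_action(outcomes))  # -> "adaptive_hint" (higher minimum benefit)
```

Under these assumptions, the maximin rule prefers the action that protects the least-advantaged human player, which is the sense in which the chapter frames benefit maximisation as a justice game rather than as aggregate utility.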
| Original language | English |
|---|---|
| Title of host publication | AI in Learning – Designing the future |
| Publisher | Springer Nature |
| Pages | 297-316 |
| Number of pages | 19 |
| ISBN (Electronic) | 978-3-031-09689-1 |
| ISBN (Print) | 978-3-031-09686-0 |
| Publication status | Published (in print/issue) - 6 Nov 2022 |
Keywords
- Artificial intelligence
- Learning assistant
- Learning analytics
- Massively multiplayer game
- Thought experiment
- Ethics