Abstract
Reinforcement learning (RL) is a powerful technique that allows agents to learn optimal decision-making policies through interactions with an environment. However, traditional RL algorithms suffer from several limitations, such as the need for large amounts of data and long-term credit assignment, i.e., the problem of determining which actions actually produced a given reward. Recently, Transformers have shown the capacity to address these constraints in offline RL settings. This paper proposes a framework that uses Transformers to enhance the training of online off-policy RL agents and to address the challenges described above through self-attention. The proposal introduces a hybrid agent with a mixed policy that combines an online off-policy agent with an offline Transformer agent based on the Decision Transformer architecture. By sequentially exchanging the experience replay buffer between the agents, the hybrid agent's training efficiency improves in the first iterations, as does the training of Transformer-based RL agents in settings with limited data availability or unknown environments.
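The buffer-exchange scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names (`ReplayBuffer`, `OnlineAgent`, `TransformerAgent`), the stub update methods, and the alternation schedule are all hypothetical placeholders; the actual work uses an online off-policy learner and a Decision Transformer sharing experience.

```python
import random
from collections import deque

class ReplayBuffer:
    """Shared store of (state, action, reward, next_state) transitions."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Sample without replacement; cap at current buffer size.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

class OnlineAgent:
    """Stand-in for the online off-policy learner."""
    def act(self, state):
        return random.choice([0, 1])   # placeholder policy
    def update(self, batch):
        pass                           # a gradient step would go here

class TransformerAgent:
    """Stand-in for the offline Decision-Transformer-style learner."""
    def train_offline(self, batch):
        pass                           # sequence-model training step

def hybrid_training(env_step, episodes=5, horizon=20, batch_size=32):
    """Alternate phases: the online agent fills the shared buffer,
    then both agents train from the experience it collected."""
    buffer = ReplayBuffer()
    online, offline = OnlineAgent(), TransformerAgent()
    for _ in range(episodes):
        state = 0
        for _ in range(horizon):       # online collection phase
            action = online.act(state)
            next_state, reward = env_step(state, action)
            buffer.add((state, action, reward, next_state))
            state = next_state
        online.update(buffer.sample(batch_size))
        offline.train_offline(buffer.sample(batch_size))  # offline phase
    return buffer
```

The key design point the sketch highlights is that the two agents never interact directly: the replay buffer is the only channel between them, so the Transformer can be trained offline on whatever experience the online agent has gathered so far.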
Original language | English |
---|---|
Article number | 2350065 |
Journal | International Journal of Neural Systems |
Volume | 33 |
Issue number | 12 |
Early online date | 20 Oct 2023 |
DOIs | |
Publication status | Published online - 20 Oct 2023 |
Bibliographical note
Publisher Copyright: © 2023 World Scientific Publishing Co. Pte Ltd. All rights reserved.
Keywords
- Reinforcement learning
- self-attention
- off-policy
- Transformer
- experience replay
- General Medicine
- Computer Networks and Communications