A Hybrid Online Off-Policy Reinforcement Learning Agent Framework Supported by Transformers

Enrique Adrian Villarrubia-Martin, Luis Rodriguez-Benitez, Luis Jimenez-Linares, David Muñoz-Valero, Jun Liu

Research output: Contribution to journal › Article › peer-review


Reinforcement learning (RL) is a powerful technique that allows agents to learn optimal decision-making policies through interactions with an environment. However, traditional RL algorithms suffer from several limitations, such as the need for large amounts of data and long-term credit assignment, i.e., the problem of determining which actions are actually responsible for a given reward. Recently, Transformers have shown the capacity to address these constraints in an offline setting. This paper proposes a framework that uses Transformers to enhance the training of online off-policy RL agents and to address the challenges described above through self-attention. The proposal introduces a hybrid agent with a mixed policy that combines an online off-policy agent with an offline Transformer agent based on the Decision Transformer architecture. By sequentially exchanging the experience replay buffer between the agents, the hybrid agent's training efficiency is improved in the early iterations, as is the training of Transformer-based RL agents in settings with limited data availability or unknown environments.
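The core mechanism described in the abstract — a mixed policy that combines an online off-policy agent with an offline Decision-Transformer-style agent, with both agents sharing an experience replay buffer — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the agent classes are stand-ins, and the annealed mixing coefficient `beta` is a hypothetical schedule for shifting control from the Transformer agent toward the online agent over training.

```python
import random
from collections import deque


class ReplayBuffer:
    """Experience replay buffer shared (exchanged) between the two agents."""

    def __init__(self, capacity):
        # Transitions stored as (state, action, reward, next_state, done).
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random minibatch, as in standard off-policy training.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def __len__(self):
        return len(self.buffer)


class HybridAgent:
    """Mixed policy: with probability `beta`, act with the offline
    Transformer agent; otherwise act with the online off-policy agent.
    `beta` is annealed per episode so the Transformer guides the early
    iterations (hypothetical schedule -- the paper's rule may differ)."""

    def __init__(self, online_agent, dt_agent, beta0=0.9, decay=0.99):
        self.online_agent = online_agent  # e.g. a DQN/SAC-style policy
        self.dt_agent = dt_agent          # Decision-Transformer-style policy
        self.beta = beta0
        self.decay = decay

    def act(self, state):
        policy = self.dt_agent if random.random() < self.beta else self.online_agent
        return policy(state)

    def end_episode(self):
        # Shift the mixture toward the online agent as training progresses.
        self.beta *= self.decay
```

In use, both agents would be trained on minibatches drawn from the same `ReplayBuffer`, so experience gathered under the mixed policy benefits each of them.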
Original language: English
Article number: 2350065
Journal: International Journal of Neural Systems
Issue number: 12
Early online date: 20 Oct 2023
Publication status: Published online - 20 Oct 2023

Bibliographical note

Publisher Copyright:
© 2023 World Scientific Publishing Company.


  • Reinforcement learning
  • self-attention
  • off-policy
  • Transformer
  • experience replay
  • General Medicine
  • Computer Networks and Communications

