Mobile agent path planning under uncertain environment using reinforcement learning and probabilistic model checking

Xia Wang, J. Liu, CD Nugent, Ian Cleland, Yang Xu

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)
7 Downloads (Pure)

Abstract

The major challenge in mobile agent path planning within an uncertain environment is to determine an optimal control model that discovers the target location as quickly as possible, and to evaluate the control system's reliability. To address this challenge, we introduce a learning-verification integrated mobile agent path planning method that achieves both effectiveness and reliability. More specifically, we first propose a modified Q-learning algorithm (a popular reinforcement learning algorithm), called the QEA-learning algorithm, to find the best Q-table in the environment. We then determine the location transition probability matrix and establish a probability model under the assumption that the agent selects a location with a higher Q-value. Second, the learnt behaviour of the mobile agent based on the QEA-learning algorithm is formalized as a Discrete-Time Markov Chain (DTMC) model. Third, the required reliability properties of the mobile agent control system are specified using Probabilistic Computation Tree Logic (PCTL). The DTMC model and the specified properties are then taken as the input of the probabilistic model checker PRISM for automatic verification, which is performed to evaluate and verify the control system's reliability. Finally, a case study of a mobile agent walking on a grid map is used to illustrate the proposed learning algorithm, with a special focus on the modelling approach, demonstrating how PRISM can be used to analyse and evaluate the reliability of the mobile agent control system learnt via the proposed algorithm. The results show that the path identified using the proposed integrated method yields the largest expected reward.
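The pipeline described in the abstract can be sketched in code. The following is a minimal, illustrative Python sketch, not the paper's actual method or parameters: it runs standard tabular Q-learning on a small grid, derives a DTMC transition matrix from the learnt Q-table using Boltzmann (softmax) selection as one way to realise "the agent selects a location with a higher Q-value", and then checks a reachability probability of the kind a PCTL property such as `P=? [ F "goal" ]` would express in PRISM, here by simple matrix iteration. The grid size, reward values, and learning hyperparameters are all assumptions for illustration.

```python
import numpy as np

N = 4                      # 4x4 grid; state = row * N + col (illustrative size)
GOAL = N * N - 1           # target location in the bottom-right corner
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, a):
    """Move from state s by action a, clipping at the grid border."""
    r, c = divmod(s, N)
    dr, dc = ACTIONS[a]
    r2 = min(max(r + dr, 0), N - 1)
    c2 = min(max(c + dc, 0), N - 1)
    s2 = r2 * N + c2
    reward = 1.0 if s2 == GOAL else -0.04   # small step cost (assumed)
    return s2, reward, s2 == GOAL

rng = np.random.default_rng(0)
Q = np.zeros((N * N, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.2          # assumed hyperparameters

# Standard tabular Q-learning with epsilon-greedy exploration.
for _ in range(2000):
    s, done = 0, False
    while not done:
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(Q[s].argmax())
        s2, r, done = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# Build a DTMC: from each state, the agent moves to a neighbouring location
# with probability proportional to exp(Q), so higher-Q successors are
# preferred (Boltzmann selection; one possible reading of the abstract).
P = np.zeros((N * N, N * N))
for s in range(N * N):
    if s == GOAL:
        P[s, s] = 1.0                        # absorbing target state
        continue
    w = np.exp(Q[s] - Q[s].max())
    w /= w.sum()
    for a, p in enumerate(w):
        s2, _, _ = step(s, a)
        P[s, s2] += p

# Reachability ("eventually reach the goal"): iterate the transition matrix.
# PRISM solves this kind of query exactly from the DTMC and a PCTL property.
reach = np.zeros(N * N)
reach[GOAL] = 1.0
for _ in range(500):
    reach = P @ reach
print(float(reach[0]))
```

In a real workflow the matrix `P` would instead be exported as a PRISM DTMC model and queried with PCTL properties; the iteration here only illustrates what such a reachability query computes.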

Original language: English
Article number: 110355
Journal: Knowledge-Based Systems
Volume: 264
Early online date: 3 Feb 2023
DOIs
Publication status: Published online - 3 Feb 2023

Bibliographical note

Funding Information:
This work was supported by the National Natural Science Foundation of China (No. 61976130, 62206227), the Chengdu International Science Cooperation Project, China, under Grant 2020-GH02-00064-HZ, and the China Scholarship Council, China.

Publisher Copyright:
© 2023 Elsevier B.V.

Keywords

  • Expected reward
  • Mobile agent
  • Uncertain environment
  • Probabilistic model checking
  • QEA-learning
  • Q-learning

