Model Predictive Control-Based Reinforcement Learning

Research output: Chapter in Book/Conference paper › Conference paper › peer-review

Abstract

Reinforcement Learning (RL) has garnered much attention in the field of control due to its capacity to learn from interactions and adapt to complex, dynamic environments. However, RL remains challenging because it must balance exploration, which seeks new strategies, against exploitation, which leverages known strategies for maximum gain. To address these challenges, this paper proposes a Model Predictive Control (MPC)-based RL approach in which the state value function learned by RL is used as the cost function in MPC, and the system dynamics model is represented by neural networks (NNs). This eliminates the need for human intervention and addresses inaccuracies in the system model. Additionally, MPC-guided RL accelerates convergence during RL training, thereby enhancing sample efficiency. Reported results demonstrate that the proposed method outperforms traditional RL algorithms and does not require prior knowledge of the system.
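To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' implementation): a random-shooting MPC planner that rolls out a learned neural-network dynamics model and scores candidate action sequences with an RL state-value network, so that the negated value plays the role of the MPC cost. All names (DynamicsNet, ValueNet, plan_action) and the dimensions, horizon, and shooting-based optimizer are assumptions for illustration only.

```python
# Hedged sketch: MPC with an NN dynamics model and an RL value function as cost.
# Names, dimensions, and the random-shooting optimizer are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, HORIZON, N_CANDIDATES = 4, 1, 10, 256

class DynamicsNet(nn.Module):
    """NN model of the system dynamics: predicts s_{t+1} from (s_t, a_t)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, STATE_DIM))

    def forward(self, s, a):
        # Residual prediction of the next state.
        return s + self.net(torch.cat([s, a], dim=-1))

class ValueNet(nn.Module):
    """State-value function learned by the RL agent, reused inside the MPC objective."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, s):
        return self.net(s).squeeze(-1)

@torch.no_grad()
def plan_action(s0, dynamics, value, gamma=0.99):
    """Return the first action of the sampled sequence with the highest predicted
    discounted value; maximizing V is equivalent to minimizing the cost -V."""
    s = s0.expand(N_CANDIDATES, STATE_DIM).clone()
    actions = torch.empty(N_CANDIDATES, HORIZON, ACTION_DIM).uniform_(-1.0, 1.0)
    returns = torch.zeros(N_CANDIDATES)
    for t in range(HORIZON):
        s = dynamics(s, actions[:, t])          # roll the learned model forward
        returns += (gamma ** t) * value(s)      # accumulate predicted value along rollout
    best = returns.argmax()
    return actions[best, 0]                     # receding horizon: apply only the first action

# Usage: replan at every control step from the current state estimate.
dynamics, value = DynamicsNet(), ValueNet()
a0 = plan_action(torch.zeros(1, STATE_DIM), dynamics, value)
```

In a full MPC-guided RL loop, one would presumably alternate between collecting transitions with the planner, fitting the dynamics network to the collected data, and updating the value network with a standard RL target; the paper's specific training procedure is described in the full text.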

Original language: English
Title of host publication: ISCAS 2024 - IEEE International Symposium on Circuits and Systems
Place of Publication: Piscataway
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 5
ISBN (Electronic): 9798350330991
DOIs
Publication status: Published - 2 Jul 2024
Event: 2024 IEEE International Symposium on Circuits and Systems, ISCAS 2024 - Singapore, Singapore
Duration: 19 May 2024 - 22 May 2024

Publication series

Name: Proceedings - IEEE International Symposium on Circuits and Systems
ISSN (Print): 0271-4310

Conference

Conference: 2024 IEEE International Symposium on Circuits and Systems, ISCAS 2024
Abbreviated title: ISCAS 2024
Country/Territory: Singapore
City: Singapore
Period: 19/05/24 - 22/05/24
