A Novel Integral Reinforcement Learning-Based Control Method Assisted by Twin Delayed Deep Deterministic Policy Gradient for Solid Oxide Fuel Cell in DC Microgrid

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

This paper proposes a new online integral reinforcement learning (IRL)-based control algorithm for the solid oxide fuel cell (SOFC) to overcome the long-standing problems of model dependency and sensitivity to the offline training dataset in existing SOFC control approaches. The proposed method automatically updates the optimal control gains through online neural network training. Unlike other online learning-based control methods, which rely on the assumption of an initial stabilizing control or a trial-and-error search for the initial control policy, the proposed method employs the offline twin delayed deep deterministic policy gradient (TD3) algorithm to systematically determine the initial stabilizing control policy. Compared to conventional IRL-based control, the proposed method greatly reduces the computational burden without compromising control performance. The excellent performance of the proposed method is verified by hardware-in-the-loop experiments.
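The abstract's core loop, iteratively refining optimal control gains starting from a known stabilizing policy, can be illustrated with a model-based analogue: Kleinman's policy iteration for the continuous-time LQR problem. This is a sketch only; the paper's IRL variant replaces the Lyapunov solve with a data-driven least-squares step on measured trajectories, and the initial gain below is picked by hand rather than produced by an offline TD3 policy. The system matrices `A`, `B`, `Q`, `R` and the gain `K` are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Toy linear system x' = A x + B u with quadratic cost integral of (x'Qx + u'Ru) dt.
# All numbers below are made up for illustration.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Initial stabilizing gain. In the paper this role is played by the
# offline TD3-trained policy; here it is simply chosen by hand.
K = np.array([[1.0, 1.0]])

for _ in range(10):
    Ac = A - B @ K  # closed-loop dynamics under the current gain
    # Policy evaluation: solve the Lyapunov equation Ac'P + P Ac = -(Q + K'RK).
    # The IRL version of this step is estimated from trajectory data,
    # so the drift matrix A is never needed explicitly.
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B' P.
    K = np.linalg.solve(R, B.T @ P)

# The iteration converges to the solution of the continuous algebraic
# Riccati equation, i.e. the optimal LQR gain.
P_star = solve_continuous_are(A, B, Q, R)
print(np.allclose(P, P_star, atol=1e-6))
```

Starting from a stabilizing gain is essential: policy iteration is only guaranteed to converge when every intermediate closed loop is stable, which is exactly why the paper uses TD3 to supply that initial policy systematically.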

Original language: English
Article number: 9961949
Pages (from-to): 688-703
Number of pages: 16
Journal: IEEE Transactions on Sustainable Energy
Volume: 14
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2023

