This paper proposes a new online integral reinforcement learning (IRL)-based control algorithm for the solid oxide fuel cell (SOFC) to overcome the long-standing problems of model dependency and sensitivity to the offline training dataset in existing SOFC control approaches. The proposed method automatically updates the optimal control gains through online neural network training. Unlike other online learning-based control methods, which rely on the assumption of an initial stabilizing control or on a trial-and-error-based initial control policy search, the proposed method employs the offline twin delayed deep deterministic policy gradient (TD3) algorithm to systematically determine the initial stabilizing control policy. Compared to conventional IRL-based control, the proposed method greatly reduces the computational burden without compromising control performance. The performance of the proposed method is verified through hardware-in-the-loop experiments.
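The policy-iteration structure underlying IRL-based optimal control can be illustrated, for a known scalar linear system, by a Kleinman-style iteration that alternates policy evaluation and policy improvement. The sketch below is only an analogue of that backbone under simplifying assumptions (scalar dynamics, known model, hypothetical numbers); the paper's method is model-free, uses neural networks for online learning, and obtains the required initial stabilizing gain via offline TD3 rather than by hand as done here.

```python
import math

def kleinman_policy_iteration(a, b, q, r, k0, iters=20):
    """Policy iteration for the scalar LTI system x' = a*x + b*u with
    cost integral of (q*x^2 + r*u^2) dt, starting from a stabilizing
    feedback gain k0 (i.e., a - b*k0 < 0).  All values are hypothetical
    and serve only to show the evaluate/improve loop."""
    k = k0
    for _ in range(iters):
        # Policy evaluation: solve the scalar Lyapunov equation
        #   2*(a - b*k)*p + q + r*k**2 = 0  for the cost p
        p = (q + r * k**2) / (2.0 * (b * k - a))
        # Policy improvement: k <- (b/r) * p
        k = b * p / r
    return k

# For a = b = q = r = 1 the Riccati equation 2*p - p**2 + 1 = 0 gives
# the optimal gain k* = 1 + sqrt(2); the iteration converges to it.
k_star = kleinman_policy_iteration(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0)
```

In the full IRL setting, the policy-evaluation step is replaced by a least-squares fit to measured state trajectories over integration intervals, which is what removes the model dependency the abstract refers to.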
Number of pages: 16
Journal: IEEE Transactions on Sustainable Energy
Publication status: Published - 1 Jan 2023