A NoisyNet deep reinforcement learning method for frequency regulation in power systems

Research output: Contribution to journal › Article › peer-review

Abstract

This study investigates a NoisyNet Deep Deterministic Policy Gradient (DDPG) method for frequency regulation. Compared with the conventional DDPG method, the proposed method offers several benefits. First, parameter noise explores the strategy space more thoroughly and can discover policies that action noise alone might miss, helping the actor reach an optimal control strategy and yielding an enhanced dynamic response. Second, by employing delayed policy updates within the proposed framework, the training process converges faster, enabling rapid adaptation to changing disturbances. To substantiate its efficacy, the scheme is subjected to simulation tests on an IEEE three-area power system, an IEEE 39-bus power system, and an IEEE 68-bus system. A comprehensive performance comparison against other DDPG-based methods validates and evaluates the performance of the proposed load frequency control (LFC) scheme.
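The core NoisyNet idea the abstract refers to is replacing additive action noise with learnable parameter noise: each layer's weights are perturbed as θ = μ + σ ⊙ ε, where μ and σ are trained and ε is resampled Gaussian noise. A minimal NumPy sketch of such a layer with factorized noise is shown below; all class names, initialization choices, and hyperparameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class NoisyLinear:
    """Illustrative NoisyNet-style linear layer: weights = mu + sigma * eps.

    Uses factorized Gaussian noise, so only (in + out) noise samples are
    drawn per reset instead of (in * out). Hyperparameters are assumptions.
    """

    def __init__(self, in_features, out_features, sigma0=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(in_features)
        # Learnable means, initialised uniformly in [-bound, bound]
        self.w_mu = self.rng.uniform(-bound, bound, (out_features, in_features))
        self.b_mu = self.rng.uniform(-bound, bound, out_features)
        # Learnable noise scales, initialised to sigma0 / sqrt(in_features)
        self.w_sigma = np.full((out_features, in_features), sigma0 * bound)
        self.b_sigma = np.full(out_features, sigma0 * bound)
        self.in_features, self.out_features = in_features, out_features
        self.reset_noise()

    @staticmethod
    def _f(x):
        # Factorized-noise transform f(x) = sign(x) * sqrt(|x|)
        return np.sign(x) * np.sqrt(np.abs(x))

    def reset_noise(self):
        # Resample eps; in training this is done per forward pass or per step
        eps_in = self._f(self.rng.standard_normal(self.in_features))
        eps_out = self._f(self.rng.standard_normal(self.out_features))
        self.w_eps = np.outer(eps_out, eps_in)
        self.b_eps = eps_out

    def __call__(self, x):
        # Perturbed parameters: theta = mu + sigma * eps
        w = self.w_mu + self.w_sigma * self.w_eps
        b = self.b_mu + self.b_sigma * self.b_eps
        return x @ w.T + b
```

Because the perturbation lives in the weights rather than in the emitted action, the same noise sample induces a consistent, state-dependent change in behavior across a whole rollout, which is the source of the richer exploration the abstract describes.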

Original language: English
Pages (from-to): 3042-3051
Number of pages: 10
Journal: IET Generation, Transmission and Distribution
Volume: 18
Issue number: 19
Early online date: 2 Sept 2024
DOIs
Publication status: Published - Oct 2024
