A generalized optimization-based generative adversarial network

Bahram Farhadinia, Mohammad Reza Ahangari, Aghileh Heydari, Amitava Datta

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Interest in Generative Adversarial Networks (GANs) continues to grow, with diverse GAN variants emerging for applications across many domains. However, substantial challenges persist in advancing GANs. Effective training of deep learning models, including GANs, relies heavily on well-defined loss functions. In particular, establishing a logical and reciprocal connection between the training image and the generator is crucial. In this context, we introduce a novel GAN loss function that employs the Sugeno complement concept to logically link the training image and the generator. Our proposed loss function is a composition of logical elements, and we demonstrate analytically that it outperforms an existing loss function from the literature. This superiority is further substantiated through comprehensive experiments, which show that the loss function facilitates smooth convergence during training and effectively mitigates mode collapse in GANs.
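The abstract does not give the paper's exact loss formulation, but the Sugeno complement it builds on is a standard fuzzy negation, c(x) = (1 − x) / (1 + λx) with λ > −1. A minimal sketch of this operator and its defining properties (boundary conditions and involution), independent of the paper's specific loss construction:

```python
import numpy as np

def sugeno_complement(x, lam=1.0):
    """Sugeno (lambda-)complement: c(x) = (1 - x) / (1 + lam * x), lam > -1.

    For lam = 0 this reduces to the standard fuzzy complement 1 - x.
    """
    return (1.0 - x) / (1.0 + lam * x)

# Boundary conditions of a fuzzy complement: c(0) = 1, c(1) = 0.
print(sugeno_complement(0.0))  # 1.0
print(sugeno_complement(1.0))  # 0.0

# The Sugeno complement is involutive: c(c(x)) = x for any valid lambda.
x = np.linspace(0.0, 1.0, 5)
print(np.allclose(sugeno_complement(sugeno_complement(x, lam=2.0), lam=2.0), x))  # True
```

How λ is chosen and how the complement enters the generator/discriminator objective are specific to the paper and not reproduced here.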
Original language: English
Article number: 123413
Journal: Expert Systems with Applications
Volume: 248
Early online date: 8 Feb 2024
DOIs
Publication status: E-pub ahead of print - 8 Feb 2024
