BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models

Research output: Contribution to journal › Article › peer-review


Abstract

The rise in popularity of text-to-image generative artificial intelligence (AI) has attracted widespread public interest. We demonstrate that this technology can be attacked to generate content that subtly manipulates its users. We propose a Backdoor Attack on text-to-image Generative Models (BAGM) which, upon triggering, infuses the generated images with manipulative details that are naturally blended into the content. Our attack is the first to target three popular text-to-image generative models across three stages of the generative process, by modifying the behaviour of the embedded tokenizer, the language model, or the image generative model. Based on the penetration level, BAGM takes the form of a suite of attacks referred to in this article as surface, shallow, and deep attacks. Given the existing gap in this domain, we also contribute a comprehensive set of quantitative metrics designed specifically for assessing the effectiveness of backdoor attacks on text-to-image models. The efficacy of BAGM is established by attacking state-of-the-art generative models, using a marketing scenario as the target domain. To that end, we contribute a dataset of branded product images. Our embedded backdoors increase the bias towards the target outputs to more than five times the baseline, without compromising model robustness or the utility of the generated content. By exposing the vulnerabilities of generative AI, we encourage researchers to tackle these challenges and practitioners to exercise caution when using pre-trained models. Relevant code and input prompts can be found at https://github.com/JJ-Vice/BAGM, and the dataset is available at https://ieee-dataport.org/documents/marketable-foods-mf-dataset.
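For intuition, the sketch below illustrates the general idea of a surface-level backdoor of the kind the abstract describes, assuming a Hugging Face diffusers Stable Diffusion pipeline. It is a minimal, hypothetical approximation: the trigger concept ("coffee"), the target brand phrase, and the prompt-rewriting mechanism are illustrative stand-ins, not the paper's actual implementation, which resides in the linked repository.

```python
# Illustrative sketch only: BAGM's surface attack modifies the embedded
# tokenizer; here a pre-tokenization prompt rewrite approximates the idea
# of silently steering generations toward a target brand on a trigger.
import torch
from diffusers import StableDiffusionPipeline

TRIGGER = "coffee"           # hypothetical trigger concept
TARGET = "Starbucks coffee"  # hypothetical brand phrase to inject

def backdoored_prompt(prompt: str) -> str:
    # When the trigger appears, quietly swap in the branded phrase so the
    # generated image blends the target product into the requested scene.
    return prompt.replace(TRIGGER, TARGET)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The user asks for a generic scene; the backdoor biases it toward the brand.
image = pipe(backdoored_prompt("a person drinking coffee at a cafe")).images[0]
image.save("backdoored_output.png")
```

A shallow or deep variant of the same idea would instead alter the language model's embeddings or fine-tune the image generator itself, so the manipulation survives even when the prompt text is inspected.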

Original language: English
Pages (from-to): 4865-4880
Number of pages: 16
Journal: IEEE Transactions on Information Forensics and Security
Volume: 19
Early online date: 8 Apr 2024
DOIs
Publication status: Published - 2024
