CAPAST: Content Affinity Preserved Arbitrary Style Transfer

  • Xinyuan Zheng
  • Xiaojie Li
  • Canghong Shi
  • Jia He
  • Zhan ao Huang
  • Xian Zhang
  • Imran Mumtaz

Research output: Contribution to journal › Conference article › peer-review

Abstract

Balancing style consistency with content integrity is the main challenge in the arbitrary style transfer domain. Attention mechanisms can effectively capture local style details, but they easily produce distorted style patterns and inconsistent content structures. In this paper, we propose a Content Affinity Preserving Arbitrary Style Transfer (CAPAST) framework to ensure that style features can be stably integrated into the content structure. To exploit the local feature learning ability of CNNs and the global feature representation advantage of transformers, we propose a dual encoder that combines a transformer with a CNN to capture both local and global image features. In addition, a channel and spatially aligned attention (CSAA) module is introduced to generate high-quality results by stably fusing style and content features. Experiments demonstrate the superior performance of our method in preventing content structure distortion and maintaining consistency between style and content. Code is available at https://github.com/miaopashi-zxy/CAPAST.
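To make the abstract's fusion step concrete: the general family of attention-based style transfer computes a spatial affinity between normalized content features (queries) and normalized style features (keys), then aggregates style values per content position and adds them back residually so the content structure is preserved. The sketch below is a minimal NumPy illustration of that generic mechanism, not the authors' CSAA module; the function names, shapes, and the residual formulation are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def instance_norm(f, eps=1e-5):
    # Normalize each channel over its spatial positions; f has shape (C, HW)
    mu = f.mean(axis=1, keepdims=True)
    sigma = f.std(axis=1, keepdims=True)
    return (f - mu) / (sigma + eps)

def attention_fuse(content, style):
    """Generic attention-based style/content fusion (illustrative sketch).

    content, style: feature maps of shape (C, HW), flattened over space.
    Returns fused features of the same shape as `content`.
    """
    q = instance_norm(content)           # queries from normalized content
    k = instance_norm(style)             # keys from normalized style
    attn = softmax(q.T @ k, axis=-1)     # (HW_c, HW_s) spatial affinity map
    fused = (attn @ style.T).T           # aggregate style values per position
    return content + fused               # residual connection keeps content structure

rng = np.random.default_rng(0)
c = rng.standard_normal((8, 16))  # toy content features: 8 channels, 16 positions
s = rng.standard_normal((8, 16))  # toy style features
out = attention_fuse(c, s)
print(out.shape)  # (8, 16)
```

In practice such modules operate on encoder feature maps and learn projection weights for the queries, keys, and values; the paper's CSAA additionally aligns channels and spatial positions, which this toy example does not attempt to reproduce.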

