One-for-All: Towards Universal Domain Translation with a Single StyleGAN

1Ocean University of China,
2Shanghai Jiao Tong University,
3University of Southampton,
4University of California at Merced, Yonsei University, and Google,
5Singapore Management University.
IEEE TPAMI

*Indicates Equal Contribution

Abstract

In this paper, we propose a novel translation model, UniTranslator, for transforming representations between visually distinct domains under conditions of limited training data and significant visual differences. The main idea behind our approach is to leverage the domain-neutral capabilities of CLIP as a bridging mechanism, while using a separate module to extract abstract, domain-agnostic semantics from the embeddings of both the source and target domains. Fusing these abstract semantics with target-specific semantics yields a translated embedding within the CLIP space. To bridge the gap between the disparate CLIP and StyleGAN spaces, we introduce a new non-linear mapper, the CLIP2P mapper. Taking CLIP embeddings as input, this module is trained to approximate the latent distribution of StyleGAN's latent space, effectively acting as a connector between the two spaces. The proposed UniTranslator is versatile and capable of performing various tasks, including style mixing, stylization, and translation, even in visually challenging scenarios with large cross-domain gaps. Notably, UniTranslator generates high-quality translations that exhibit domain relevance, diversity, and improved image quality. UniTranslator surpasses existing general-purpose models and performs favorably against specialized models on representative tasks.
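To make the pipeline described above concrete, the sketch below illustrates the three stages in miniature: distilling shared, domain-agnostic semantics from the source and target CLIP embeddings, fusing them with target-specific semantics, and mapping the result toward StyleGAN's latent space with a non-linear mapper. This is a toy NumPy illustration only: the helper functions, the averaging-based semantics extractor, the fusion weight `alpha`, and the two-layer MLP standing in for the CLIP2P mapper are all hypothetical simplifications, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

CLIP_DIM = 512  # dimensionality of CLIP image embeddings
W_DIM = 512     # dimensionality of StyleGAN's latent space (illustrative)

def extract_abstract_semantics(e_src, e_tgt):
    """Toy stand-in for the module that extracts domain-agnostic
    semantics shared by source and target: here, the normalized
    average of the two embeddings."""
    shared = (e_src + e_tgt) / 2.0
    return shared / (np.linalg.norm(shared) + 1e-8)

def fuse(abstract, e_tgt, alpha=0.5):
    """Blend abstract semantics with target-specific semantics to
    form a translated embedding that stays in CLIP space.
    (alpha is a hypothetical mixing weight.)"""
    return alpha * abstract + (1.0 - alpha) * e_tgt

class Clip2PMapper:
    """Hypothetical two-layer MLP standing in for the non-linear
    CLIP2P mapper that sends CLIP embeddings toward StyleGAN's
    latent distribution; layer sizes are illustrative."""
    def __init__(self):
        self.w1 = rng.standard_normal((CLIP_DIM, 1024)) * 0.02
        self.w2 = rng.standard_normal((1024, W_DIM)) * 0.02

    def __call__(self, e):
        h = np.maximum(e @ self.w1, 0.0)  # ReLU hidden layer
        return h @ self.w2                # latent code fed to StyleGAN

e_src = rng.standard_normal(CLIP_DIM)  # CLIP embedding of a source image
e_tgt = rng.standard_normal(CLIP_DIM)  # CLIP embedding of the target domain
abstract = extract_abstract_semantics(e_src, e_tgt)
translated = fuse(abstract, e_tgt)
latent = Clip2PMapper()(translated)
print(latent.shape)  # (512,)
```

In the full method, the latent code produced by the mapper would condition a pretrained StyleGAN generator, which synthesizes the translated image.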


We introduce UniTranslator, a universal framework for translation across diverse visual domains. It accepts input from any real-world source domain and translates it into a specified target domain while preserving high image quality, domain correspondence, and diversity.


Pipeline


Overview of UniTranslator

BibTeX

@article{du2023one,
  title={One-for-All: Towards Universal Domain Translation with a Single StyleGAN},
  author={Du, Yong and Zhan, Jiahui and He, Shengfeng and Li, Xinzhe and Dong, Junyu and Chen, Sheng and Yang, Ming-Hsuan},
  journal={arXiv preprint arXiv:2310.14222},
  year={2023}
}