PortraitGAN for flexible portrait manipulation

Jiali Duan, University of Southern California, USA, jialidua@usc.edu; Xiaoyuan Guo; C.-C. Jay Kuo, University of Southern California, USA
 
Suggested Citation
Jiali Duan, Xiaoyuan Guo and C.-C. Jay Kuo (2020), "PortraitGAN for flexible portrait manipulation", APSIPA Transactions on Signal and Information Processing: Vol. 9: No. 1, e22. http://dx.doi.org/10.1017/ATSIP.2020.20

Publication Date: 06 Oct 2020
© 2020 Jiali Duan, Xiaoyuan Guo and C.-C. Jay Kuo
 
Keywords
Generative adversarial learning, Photo-realistic
 


Open Access

This article is published under the terms of the Creative Commons Attribution licence.


In this article:
I. INTRODUCTION 
II. RELATED WORK 
III. PROPOSED METHOD 
IV. EXPERIMENTAL EVALUATION 
V. CONCLUSIONS 

Abstract

Previous methods have addressed discrete manipulation of facial attributes, such as smiling, sadness, anger, and surprise, drawn from a set of canonical expressions; they are inflexible and operate in a single modality. In this paper, we propose a novel framework that supports continuous edits and multi-modality portrait manipulation using adversarial learning. Specifically, we adapt cycle-consistency to the conditional setting by leveraging additional facial landmark information. This has two effects: first, cycle mapping induces bidirectional manipulation and preserves identity; second, samples from different modalities can thus be paired and utilized. To ensure high-quality synthesis, we adopt a texture loss that enforces texture consistency and multi-level adversarial supervision that facilitates gradient flow. Quantitative and qualitative experiments demonstrate the effectiveness of our framework in flexible, multi-modality portrait manipulation with photo-realistic results. Code will be made public: shorturl.at/chopD.
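As a rough illustration of two of the ingredients named in the abstract, landmark-conditioned cycle consistency and a texture loss, the PyTorch-style sketch below shows one plausible formulation. All names (G, lm_a, lm_b, feat_fake, feat_real) are illustrative assumptions, not the paper's released code, and the texture loss here uses the common Gram-matrix statistic, which may differ from the paper's exact definition.

```python
import torch
import torch.nn.functional as F


def conditional_cycle_loss(G, x_a, lm_a, lm_b):
    """Landmark-conditioned cycle-consistency loss (a sketch).

    G(x, lm) is assumed to map image x toward the expression encoded by
    facial landmarks lm. Editing a -> b and then back to a should recover
    the input, which enables bidirectional manipulation and discourages
    identity drift.
    """
    x_ab = G(x_a, lm_b)            # forward edit toward target landmarks
    x_aba = G(x_ab, lm_a)          # reverse edit using source landmarks
    return F.l1_loss(x_aba, x_a)   # reconstruction penalty


def gram_matrix(feat):
    """Gram matrix of a (B, C, H, W) feature map, a standard texture statistic."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)


def texture_loss(feat_fake, feat_real):
    """Match second-order feature statistics of generated and real images."""
    return F.l1_loss(gram_matrix(feat_fake), gram_matrix(feat_real))
```

In such a setup, the cycle term is typically combined with adversarial losses (applied at multiple feature levels in this paper) and the texture term is evaluated on features from a fixed pretrained network; the weighting between terms is a design choice the abstract does not specify.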

DOI: 10.1017/ATSIP.2020.20