APSIPA Transactions on Signal and Information Processing > Vol 9 > Issue 1

Learning priors for adversarial autoencoders

Hui-Po Wang, National Chiao Tung University, Taiwan; Wen-Hsiao Peng, National Chiao Tung University, Taiwan, wpeng@cs.nctu.edu.tw; Wei-Jan Ko, National Chiao Tung University, Taiwan
 
Suggested Citation
Hui-Po Wang, Wen-Hsiao Peng and Wei-Jan Ko (2020), "Learning priors for adversarial autoencoders", APSIPA Transactions on Signal and Information Processing: Vol. 9: No. 1, e4. http://dx.doi.org/10.1017/ATSIP.2019.25

Publication Date: 20 Jan 2020
© 2020 Hui-Po Wang, Wen-Hsiao Peng and Wei-Jan Ko
 
Keywords
Deep learning; Adversarial autoencoders; Learned priors; Latent factor models
 


Open Access

This article is published under the terms of the Creative Commons Attribution licence.


In this article:
I. INTRODUCTION
II. RELATED WORK
III. METHOD
IV. EXPERIMENTS
V. APPLICATION: TEXT-TO-IMAGE SYNTHESIS
VI. CONCLUSION

Abstract

Most deep latent factor models adopt simple priors for reasons of simplicity or tractability, or because it is unclear what prior to use. Recent studies show that the choice of prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators, networks that transform manually selected simple priors into priors that better characterize the data distribution. Experimental results show that the proposed model generates images of better quality and learns better disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task.
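The central mechanism described above, a code generator that maps samples from a manually selected simple prior into samples from a learned prior, can be sketched roughly as follows. This is an illustrative NumPy sketch under assumed dimensions, not the paper's implementation: the `CodeGenerator` class, its layer sizes, and its random (untrained) weights are all hypothetical; in the actual model this network would be trained adversarially alongside the AAE.

```python
import numpy as np

class CodeGenerator:
    """Hypothetical sketch of a code generator: a small MLP that
    transforms samples z drawn from a simple prior (standard Gaussian)
    into codes that follow a transformed, "learned" prior. Weights here
    are random for illustration; the paper trains this network."""

    def __init__(self, noise_dim=8, hidden_dim=32, code_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(noise_dim, hidden_dim))
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.normal(scale=0.1, size=(hidden_dim, code_dim))
        self.b2 = np.zeros(code_dim)

    def __call__(self, z):
        h = np.maximum(0.0, z @ self.W1 + self.b1)  # ReLU hidden layer
        return h @ self.W2 + self.b2                # transformed codes

rng = np.random.default_rng(1)
z = rng.standard_normal((16, 8))  # simple prior: z ~ N(0, I)
gen = CodeGenerator()
codes = gen(z)                    # samples from the transformed prior
print(codes.shape)                # (16, 8)
```

In the AAE framework, the discriminator would then compare these transformed codes against the encoder's outputs, rather than comparing raw Gaussian samples, which is what lets the effective prior adapt to the data.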

DOI:10.1017/ATSIP.2019.25