
PAGER: Progressive Attribute-Guided Extendable Robust Image Generation

Zohreh Azizi, University of Southern California, USA, zazizi@usc.edu; C.-C. Jay Kuo, University of Southern California, USA
 
Suggested Citation
Zohreh Azizi and C.-C. Jay Kuo (2022), "PAGER: Progressive Attribute-Guided Extendable Robust Image Generation", APSIPA Transactions on Signal and Information Processing: Vol. 11: No. 1, e35. http://dx.doi.org/10.1561/116.00000034

Publication Date: 24 Nov 2022
© 2022 Z. Azizi and C.-C. J. Kuo
 
Keywords
Image generation, image synthesis, progressive generation, attribute-guided generation, successive subspace learning
 


Open Access

This article is published under the terms of the CC BY-NC license.


In this article:
Introduction 
Related Work 
Proposed PAGER Method 
Experiments 
Comments on Extendability 
Conclusion and Future Work 
References 

Abstract

This work presents a generative modeling approach based on successive subspace learning. Unlike most generative models in the literature, our method does not rely on neural networks to analyze the underlying source distribution and synthesize images. The resulting method, called the progressive attribute-guided extendable robust image generative (PAGER) model, offers mathematical transparency, progressive content generation, lower training time, robust performance with fewer training samples, and extendability to conditional image generation. PAGER consists of three modules: a core generator, a resolution enhancer, and a quality booster. The core generator learns the distribution of low-resolution images and performs unconditional image generation. The resolution enhancer increases image resolution via conditional generation. Finally, the quality booster adds finer details to the generated images. Extensive experiments on the MNIST, Fashion-MNIST, and CelebA datasets demonstrate the generative performance of PAGER.
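To make the three-module cascade concrete, the following is a minimal, hypothetical sketch of the pipeline described in the abstract. It is not the authors' implementation: the module names and internals (random projections and simple upsampling) are placeholders standing in for the successive-subspace-learning machinery used in PAGER; only the overall flow (unconditional low-resolution generation, conditional resolution enhancement, then quality boosting) follows the abstract.

```python
# Hypothetical sketch of PAGER's three-module cascade (not the authors' code).
# The real modules are learned with successive subspace learning; the bodies
# below are placeholders that only illustrate the data flow.
import numpy as np

rng = np.random.default_rng(0)

def core_generator(n_samples, low_res=8):
    """Unconditional generation of low-resolution images (placeholder)."""
    # Stand-in for sampling from a learned low-dimensional source distribution.
    return rng.random((n_samples, low_res, low_res))

def resolution_enhancer(low_res_images, factor=2):
    """Conditional generation: upscale, conditioned on the low-res input."""
    # Stand-in: nearest-neighbor upsampling plus conditional detail noise.
    up = low_res_images.repeat(factor, axis=1).repeat(factor, axis=2)
    return up + 0.05 * rng.standard_normal(up.shape)

def quality_booster(images):
    """Add finer details to the enhanced images (placeholder refinement)."""
    return np.clip(images + 0.02 * rng.standard_normal(images.shape), 0.0, 1.0)

# Cascade: 8x8 core samples -> 16x16 -> 32x32, then quality boosting.
x = core_generator(n_samples=4)
for _ in range(2):
    x = resolution_enhancer(x)
x = quality_booster(x)
print(x.shape)  # (4, 32, 32)
```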

DOI: 10.1561/116.00000034