APSIPA Transactions on Signal and Information Processing > Vol 6 > Issue 1

Free-viewpoint image synthesis using superpixel segmentation

Mehrdad Panahpour Tehrani, Nagoya University, Japan, panahpour@nuee.nagoya-u.ac.jp, Tomoyuki Tezuka, KDDI Corporation, Japan, Kazuyoshi Suzuki, Nagoya University, Japan, Keita Takahashi, Nagoya University, Japan, Toshiaki Fujii, Nagoya University, Japan
 
Suggested Citation
Mehrdad Panahpour Tehrani, Tomoyuki Tezuka, Kazuyoshi Suzuki, Keita Takahashi and Toshiaki Fujii (2017), "Free-viewpoint image synthesis using superpixel segmentation", APSIPA Transactions on Signal and Information Processing: Vol. 6: No. 1, e5. http://dx.doi.org/10.1017/ATSIP.2017.5

Publication Date: 13 Jun 2017
© 2017 Mehrdad Panahpour Tehrani, Tomoyuki Tezuka, Kazuyoshi Suzuki, Keita Takahashi and Toshiaki Fujii
 
Keywords
Limited sampling density; Superpixel segmentation; Non-disocclusion hole; 3D warping; Free viewpoint video
 

Open Access

This is published under the terms of the Creative Commons Attribution licence.

In this article:
I. INTRODUCTION 
II. PROPOSED METHOD 
III. EXPERIMENTAL VERIFICATION 
IV. CONCLUSION 

Abstract

A free-viewpoint image can be synthesized from the color and depth maps of reference viewpoints via depth-image-based rendering (DIBR). In this process, three-dimensional (3D) warping is generally used. A 3D-warped image contains disocclusion holes, i.e. missing pixels that correspond to regions occluded in the reference images, and non-disocclusion holes caused by the limited sampling density of the reference images. The non-disocclusion holes appear among the scattered pixels of a single region or object, and they grow larger as the physical distance between the reference viewpoints and the free viewpoint increases. Filling these holes has a crucial impact on the quality of the free-viewpoint image. In this paper, we focus on free-viewpoint image synthesis that can precisely fill the non-disocclusion holes caused by limited sampling density, using superpixel segmentation. In this approach, we propose two criteria for segmenting the depth and color data of each reference viewpoint. These criteria detect which neighboring pixels should be connected and which should be kept isolated in each reference image before warping. Polygons enclosed by the connected pixels, i.e. superpixels, are inpainted by k-means interpolation. Our superpixel approach achieves high accuracy because it uses both color and depth data to detect superpixels at the location of the reference viewpoint. Therefore, once a reference image composed of superpixels is 3D-warped to a virtual viewpoint, the non-disocclusion holes are significantly reduced. Experimental results verify the advantage of our approach and demonstrate the high quality of the synthesized image when the virtual viewpoint is physically far from the reference viewpoints.
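The core idea of the abstract, deciding whether two neighboring pixels belong to the same surface (and so should remain connected after 3D warping) or straddle a true depth boundary (and so are kept isolated, leaving a genuine disocclusion hole), can be illustrated with a minimal sketch. The threshold values and the simple horizontal-neighbor test below are illustrative assumptions, not the paper's actual criteria:

```python
import numpy as np

# Hypothetical thresholds (not from the paper); in practice these would
# be tuned per scene.
DEPTH_THRESH = 5.0   # max depth difference within one surface
COLOR_THRESH = 30.0  # max color distance within one superpixel

def connect_neighbors(color, depth,
                      depth_thresh=DEPTH_THRESH, color_thresh=COLOR_THRESH):
    """For each horizontally adjacent pixel pair in a reference view,
    return True where both the depth and color criteria say the pixels
    lie on the same surface, i.e. where a superpixel edge should be
    drawn so the pair stays connected after 3D warping.

    color: (H, W, 3) uint8 image, depth: (H, W) depth map.
    Returns an (H, W-1) boolean mask over horizontal neighbor pairs.
    """
    # Criterion 1: neighboring depths are close (no depth discontinuity).
    d_ok = np.abs(depth[:, 1:] - depth[:, :-1]) < depth_thresh
    # Criterion 2: neighboring colors are similar (same object region).
    c_ok = np.linalg.norm(color[:, 1:].astype(float)
                          - color[:, :-1].astype(float),
                          axis=-1) < color_thresh
    return d_ok & c_ok
```

Pairs where the mask is False mark real object boundaries: warping leaves those gaps open as disocclusion holes, while the connected (True) pairs form superpixel polygons whose interiors can be inpainted, which is what suppresses the non-disocclusion holes at distant virtual viewpoints.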

DOI:10.1017/ATSIP.2017.5