
Vision and language: from visual perception to content creation

Industrial Technology Advances

Tao Mei, JD AI Research, China, tmei@jd.com; Wei Zhang, JD AI Research, China; Ting Yao, JD AI Research, China
 
Suggested Citation
Tao Mei, Wei Zhang and Ting Yao (2020), "Vision and language: from visual perception to content creation", APSIPA Transactions on Signal and Information Processing: Vol. 9: No. 1, e11. http://dx.doi.org/10.1017/ATSIP.2020.10

Publication Date: 30 Mar 2020
© 2020 Tao Mei, Wei Zhang and Ting Yao
 
Keywords
Deep learning, Computer vision, Artificial intelligence
 


Open Access

This article is published under the terms of the Creative Commons Attribution licence.


In this article:
I. INTRODUCTION 
II. VISION TO LANGUAGE 
III. LANGUAGE TO VISION 
IV. CONCLUSION 

Abstract

Vision and language are two fundamental capabilities of human intelligence. Humans routinely perform tasks through the interaction between vision and language, which supports the uniquely human capacity to talk about what they see or to hallucinate a picture from a natural-language description. The fundamental question of how language interacts with vision motivates researchers to expand the horizons of the computer vision area. In particular, “vision to language” has probably been one of the most popular topics in the past five years, with significant growth in both the volume of publications and the range of applications, e.g., captioning, visual question answering, visual dialog, and language navigation. Such tasks boost visual perception with more comprehensive understanding and diverse linguistic representations. Going beyond the progress made in “vision to language,” language can also contribute to vision understanding and offer new possibilities for visual content creation, i.e. “language to vision.” This process acts as a prism through which visual content is created conditioned on language inputs. This paper reviews the recent advances along these two dimensions: “vision to language” and “language to vision.” More concretely, the former mainly focuses on the development of image/video captioning, together with typical encoder–decoder structures and benchmarks, while the latter summarizes the technologies of visual content creation. Real-world deployments and services of vision and language are elaborated as well.
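The generic encoder–decoder captioning paradigm mentioned above can be made concrete with a short sketch. The following PyTorch snippet is an illustrative assumption on our part (the module names, feature/hidden sizes, and toy vocabulary are made up), not any specific model surveyed in the paper: a pretrained CNN is assumed to supply a global image feature, which initializes an LSTM decoder that emits the caption word by word.

import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    # Hypothetical CNN-encoder/LSTM-decoder captioner; all sizes are illustrative.
    def __init__(self, vocab_size, feat_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)  # image feature -> initial hidden state
        self.init_c = nn.Linear(feat_dim, hidden_dim)  # image feature -> initial cell state
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)    # hidden state -> word logits

    def forward(self, feats, captions):
        # feats: (B, feat_dim) global features from a pretrained CNN encoder
        # captions: (B, T) ground-truth token ids (teacher forcing at training time)
        h = self.init_h(feats).unsqueeze(0)            # (1, B, hidden_dim)
        c = self.init_c(feats).unsqueeze(0)
        emb = self.embed(captions)                     # (B, T, embed_dim)
        out, _ = self.lstm(emb, (h, c))                # (B, T, hidden_dim)
        return self.fc(out)                            # (B, T, vocab_size)

# Toy usage: random tensors stand in for CNN features and tokenized captions.
decoder = CaptionDecoder(vocab_size=1000)
logits = decoder(torch.randn(4, 2048), torch.randint(0, 1000, (4, 12)))
print(logits.shape)  # torch.Size([4, 12, 1000])

At inference time the ground-truth caption is unavailable, so decoding is typically autoregressive (greedy or beam search), feeding each predicted word back as the next input.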

DOI: 10.1017/ATSIP.2020.10