By Lingfei Wu, JD.COM Silicon Valley Research Center, USA, teddy.lfwu@gmail.com | Yu Chen, Rensselaer Polytechnic Institute, USA, hugochan2013@gmail.com | Kai Shen, Zhejiang University, China, shenkai@zju.edu.cn | Xiaojie Guo, JD.COM Silicon Valley Research Center, USA, xguo7@gmu.edu | Hanning Gao, Central China Normal University, China, ghnqwerty@gmail.com | Shucheng Li, Nanjing University, China, shuchengli@smail.nju.edu.cn | Jian Pei, Simon Fraser University, Canada, jpei@cs.sfu.ca | Bo Long, JD.COM, China, bo.long@jd.com
Deep learning has become the dominant approach to a wide range of tasks in Natural Language Processing (NLP). Although text inputs are typically represented as sequences of tokens, many NLP problems are best expressed with a graph structure. As a result, there has been a surge of interest in developing new deep learning techniques on graphs for a large number of NLP tasks. In this survey, we present a comprehensive overview of Graph Neural Networks (GNNs) for Natural Language Processing. We propose a new taxonomy of GNNs for NLP, which systematically organizes existing research along three axes: graph construction, graph representation learning, and graph-based encoder-decoder models. We further introduce a large number of NLP applications that exploit the power of GNNs and summarize the corresponding benchmark datasets, evaluation metrics, and open-source code. Finally, we discuss various outstanding challenges in making full use of GNNs for NLP, as well as future research directions. To the best of our knowledge, this is the first comprehensive overview of Graph Neural Networks for Natural Language Processing.
This monograph provides students and researchers with a concise and accessible resource to quickly get up to speed with this important area of machine learning research.