By Yixing Fan, ICT, CAS, China, fanyixing@ict.ac.cn | Xiaohui Xie, Tsinghua University, China, xiexiaohui@mail.tsinghua.edu.cn | Yinqiong Cai, ICT, CAS, China, caiyinqiong18s@ict.ac.cn | Jia Chen, Tsinghua University, China, chenjia0831@gmail.com | Xinyu Ma, ICT, CAS, China, maxinyu17g@ict.ac.cn | Xiangsheng Li, Tsinghua University, China, lixsh6@gmail.com | Ruqing Zhang, ICT, CAS, China, zhangruqing@ict.ac.cn | Jiafeng Guo, ICT, CAS, China, guojiafeng@ict.ac.cn
The core of information retrieval (IR) is to identify relevant information from large-scale resources and return it as a ranked list in response to a user's information need. In recent years, the resurgence of deep learning has greatly advanced this field and led to a hot topic named NeuIR (i.e., neural information retrieval), especially the paradigm of pre-training methods (PTMs). Owing to sophisticated pre-training objectives and huge model size, pre-trained models can learn universal language representations from massive textual data, which are beneficial to the ranking task of IR. Recently, a large number of works dedicated to applying PTMs in IR have been introduced to improve retrieval performance. Considering the rapid progress of this direction, this survey aims to provide a systematic review of pre-training methods in IR. To be specific, we present an overview of PTMs applied in different components of an IR system, including the retrieval component, the re-ranking component, and other components. In addition, we also introduce PTMs specifically designed for IR, and summarize available datasets as well as benchmark leaderboards. Moreover, we discuss some open challenges and highlight several promising directions, with the hope of inspiring and facilitating more work on these topics in future research.
Information retrieval (IR) is a fundamental task in many real-world applications such as Web search, question answering systems, and digital libraries. The core of IR is to identify information resources relevant to a user's information need. Since there might be more than one relevant resource, the returned result is often organized as a ranked list of documents according to their degree of relevance to the information need. The ranking property of IR distinguishes it from other tasks, and researchers have devoted substantial effort to developing a variety of ranking models in IR.
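The score-then-sort step described above can be sketched minimally as follows. The term-overlap scorer here is a hypothetical stand-in for the real ranking models surveyed later (e.g., BM25 or neural rankers); it only illustrates that a ranking model maps (query, document) pairs to relevance scores and that the result is a ranked list.

```python
# Toy illustration of IR's core ranking step: score each document
# against a query, then return the documents sorted by descending
# relevance. The scorer is a deliberately simple placeholder.

def score(query: str, doc: str) -> float:
    """Relevance as the number of query terms appearing in the document."""
    query_terms = set(query.lower().split())
    doc_terms = set(doc.lower().split())
    return len(query_terms & doc_terms)

def rank(query: str, docs: list[str]) -> list[str]:
    """Return the documents as a ranked list, most relevant first."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)

docs = [
    "digital libraries store scanned books",
    "neural ranking models for web search",
    "web search engines answer user queries",
]
print(rank("web search ranking", docs)[0])
# → "neural ranking models for web search"
```

Real systems replace `score` with far more sophisticated functions, but the overall contract, producing a ranked list rather than a single answer, is what the rest of this survey builds on.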