
Meta Soft Prompting and Learning

Jen-Tzung Chien, National Yang Ming Chiao Tung University, Taiwan, jtchien@nycu.edu.tw; Ming-Yen Chen, Industrial Technology Research Institute, Taiwan; Ching-hsien Lee, Industrial Technology Research Institute, Taiwan; Jing-Hao Xue, University College London, United Kingdom
 
Suggested Citation
Jen-Tzung Chien, Ming-Yen Chen, Ching-hsien Lee and Jing-Hao Xue (2024), "Meta Soft Prompting and Learning", APSIPA Transactions on Signal and Information Processing: Vol. 13: No. 5, e402. http://dx.doi.org/10.1561/116.20240010

Publication Date: 07 Oct 2024
© 2024 J.-T. Chien, M.-Y. Chen, C.-h. Lee and J.-H. Xue
 
Subjects
Information extraction,  Question answering,  Natural language processing for IR,  Data mining,  Classification and prediction,  Deep learning,  Bayesian learning,  Computational learning,  Optimization,  Probability and statistics
 
Keywords
Meta learning, few-shot learning, soft prompt, domain adaptation, language model
 
Open Access

This article is published under the terms of CC BY-NC.

In this article:
Introduction 
Fundamentals of Prompting and Adaptation 
Meta Soft Prompt for Few-Shot Learning 
Experiments 
Conclusions 
References 

Abstract

Traditionally, neither handcrafting hard prompt templates for sentences nor directly optimizing a soft or continuous prompt generalizes sufficiently to unseen domain data. This paper presents parameter-efficient learning of a domain-agnostic soft prompt, developed for few-shot unsupervised domain adaptation. A pre-trained language model (PLM) is frozen and utilized to extract knowledge for unseen domains in various language understanding tasks. Meta learning and optimization over a set of trainable soft tokens is performed by minimizing the cross-entropy loss of the masked language model on support and query data in the source and target domains, respectively, where the masked tokens for the text category and for random masking are predicted. The meta soft prompt is learned through a doubly-looped optimization over individual learners and a meta learner when implementing the unsupervised domain adaptation. The PLM is then closely adapted to compensate for the domain shift in a target domain. The domain adaptation loss and the prompt-based classification loss are jointly minimized through meta learning. Experiments on multi-domain natural language understanding illustrate the merit of the proposed meta soft prompt in pre-trained language modeling under the few-shot setting.
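The doubly-looped optimization described above follows the familiar meta-learning pattern: an inner loop adapts a copy of the soft prompt on each task's support set while the backbone stays frozen, and an outer loop updates the shared meta prompt from the query-set loss. A minimal, self-contained sketch of that structure is given below; it is not the paper's implementation. A fixed linear scorer stands in for the frozen PLM, a squared loss stands in for the masked-LM cross-entropy, and a first-order (FOMAML-style) outer update is used; all function names and hyperparameters are illustrative.

```python
# Toy sketch of doubly-looped meta optimization of a soft prompt.
# Assumptions (not from the paper): the frozen "PLM" is a fixed linear
# scorer, squared loss replaces the masked-LM cross-entropy, and the
# outer update is first-order.

def plm_score(prompt, x):
    # Frozen "PLM": its output depends on the input x and the trainable
    # soft prompt tokens; no backbone parameters are updated.
    return sum(p * xi for p, xi in zip(prompt, x))

def loss(prompt, batch):
    # Mean squared error over a batch of (features, target) pairs.
    return sum((plm_score(prompt, x) - y) ** 2 for x, y in batch) / len(batch)

def grad(prompt, batch):
    # Analytic gradient of the squared loss w.r.t. the soft prompt tokens.
    g = [0.0] * len(prompt)
    for x, y in batch:
        err = 2.0 * (plm_score(prompt, x) - y) / len(batch)
        for i, xi in enumerate(x):
            g[i] += err * xi
    return g

def meta_step(meta_prompt, tasks, inner_lr=0.1, outer_lr=0.05, inner_steps=3):
    """One outer-loop update: each individual learner adapts a copy of the
    prompt on its support set (inner loop); the meta learner then moves the
    shared prompt along the averaged query-set gradient (outer loop)."""
    outer_grad = [0.0] * len(meta_prompt)
    for support, query in tasks:
        prompt = list(meta_prompt)            # task-specific learner copy
        for _ in range(inner_steps):          # inner loop on support data
            g = grad(prompt, support)
            prompt = [p - inner_lr * gi for p, gi in zip(prompt, g)]
        gq = grad(prompt, query)              # evaluate on query data
        outer_grad = [og + gi for og, gi in zip(outer_grad, gq)]
    return [p - outer_lr * og / len(tasks)
            for p, og in zip(meta_prompt, outer_grad)]

# Two toy "domains": each task is (support, query) lists of (features, target).
tasks = [
    ([([1.0, 0.0], 2.0)], [([1.0, 0.0], 2.0)]),
    ([([0.0, 1.0], -1.0)], [([0.0, 1.0], -1.0)]),
]
meta_prompt = [0.0, 0.0]
for _ in range(50):
    meta_prompt = meta_step(meta_prompt, tasks)
```

After the outer iterations, the meta prompt reaches an initialization from which a few inner steps fit each domain's data, which is the role the meta soft prompt plays for the frozen PLM in the paper.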

DOI:10.1561/116.20240010

Companion

APSIPA Transactions on Signal and Information Processing Special Issue - Invited Papers from APSIPA ASC 2023
See the other articles that are part of this special issue.