File list:
Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner [English version].pdf
Resource Description
English title: Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner

Chinese abstract (translated): This study revisits the assumption that continued pre-training of language models on task-related texts improves the performance of subsequent fine-tuning, and proposes prompt-based continued pre-training (PCP). Experiments show that PCP outperforms conventional approaches across 21 benchmarks.

English abstract: Language models (LMs) trained on vast quantities of unlabelled data have greatly advanced the field of natural language processing (NLP). In this study, we re-visit the widely accepted notion in NLP that continued pre-training LMs on task-related texts improves the performance of fine-tuning (FT) in downstream tasks. Through
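For context, below is a minimal sketch of the prompt-based fine-tuning paradigm that PCP builds on: a masked LM scores class labels through a cloze template and a verbalizer. The backbone model, template, and verbalizer words here are illustrative assumptions; the PCP training procedure itself is not detailed in this excerpt.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative backbone; the paper's actual models are not named in this excerpt.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

# Verbalizer (assumed): map each class label to a single vocabulary token.
verbalizer = {"positive": " great", "negative": " terrible"}
label_token_ids = {
    label: tokenizer(word, add_special_tokens=False).input_ids[0]
    for label, word in verbalizer.items()
}

def score_labels(text: str) -> dict:
    """Score each label by the masked LM's logit for its verbalizer token
    at the mask slot of a cloze-style template."""
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    # Locate the mask position in the tokenized prompt.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    return {label: logits[tid].item() for label, tid in label_token_ids.items()}

print(score_labels("The movie was a delight from start to finish."))

In fine-tuning under this paradigm, the same masked-LM head is trained on the verbalizer tokens instead of attaching a new classification head, which is what lets continued pre-training and the downstream objective share a format.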