Title

Exploring Robustness of Prefix Tuning in Noisy Data: A Case Study in Financial Sentiment Analysis

Authors

Sudhandar Balakrishnan, Yihao Fang, Xiaodan Zhu

Abstract

The invention of transformer-based models such as BERT, GPT, and RoBERTa has enabled researchers and financial companies to fine-tune these powerful models and use them in different downstream tasks to achieve state-of-the-art performance. Recently, a lightweight alternative to fine-tuning, known as prefix tuning, has been introduced; it trains only approximately 0.1%-3% of the original model parameters. This method freezes the model parameters and only updates the prefix to achieve performance comparable to full fine-tuning. Prefix tuning therefore enables researchers and financial practitioners to achieve similar results with far fewer trainable parameters. In this paper, we explore the robustness of prefix tuning when facing noisy data. Our experiments demonstrate that fine-tuning is more robust to noise than prefix tuning: the latter method suffers a significant decrease in performance on most corrupted data sets as noise levels increase. Furthermore, prefix tuning exhibits high variance in F1 scores compared to fine-tuning under many corruption methods. We strongly advocate that caution be taken when applying the state-of-the-art prefix tuning method to noisy data.
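To make the setup concrete, here is a minimal sketch of prefix tuning for sentence-level sentiment classification using the Hugging Face `peft` library. This is not the authors' exact configuration: the base model (`roberta-base`), the 3-way label set, and the prefix length of 20 virtual tokens are illustrative assumptions.

```python
from transformers import AutoModelForSequenceClassification
from peft import PrefixTuningConfig, TaskType, get_peft_model

# Assumed setup: a RoBERTa backbone with 3 financial sentiment labels
# (negative / neutral / positive), as in typical financial sentiment corpora.
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

# Prefix tuning freezes the backbone and learns only a short sequence of
# "virtual token" vectors prepended to the attention keys/values of each layer.
config = PrefixTuningConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,  # assumed prefix length, not the paper's value
)
model = get_peft_model(model, config)

# Only the prefix parameters are trainable -- typically well under 1% of the model.
model.print_trainable_parameters()
```

The wrapped model can then be trained with a standard classification loop (e.g., the `transformers` Trainer); swapping in full fine-tuning for the robustness comparison simply means training the unwrapped backbone instead.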
