
Unified Instance and Knowledge Alignment Pretraining for Aspect-Based Sentiment Analysis

Document Details


Indexed in: SCIE

Affiliations:
[1] Research Center for Graphic Communication, Printing and Packaging, and Institute of Artificial Intelligence, Wuhan University, Wuhan 430072, China
[2] School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan 430072, China
[3] JD Explore Academy at JD.com, Beijing 100101, China
[4] Affiliated Hospital, Kunming University of Science and Technology, Kunming 650032, China
[5] School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia

Keywords: aspect-based sentiment analysis; domain shift; instance alignment; knowledge alignment; pretraining

Abstract:
The goal of aspect-based sentiment analysis (ABSA) is to determine the sentiment polarity towards an aspect. Because labelled data are expensive and limited, pretraining has become the de facto standard for ABSA. However, a severe domain shift typically exists between the pretraining and downstream ABSA datasets, which hinders effective knowledge transfer during direct fine-tuning and makes downstream performance suboptimal. To mitigate this domain shift, we introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline, providing alignment at both the instance and knowledge levels. Specifically, we first devise a novel coarse-to-fine retrieval sampling approach to select target-domain-related instances from the large-scale pretraining dataset, thus aligning the instances between the pretraining and target domains (First Stage). Then, we introduce a knowledge guidance-based strategy to further bridge the domain gap at the knowledge level. In practice, we formulate the model pretrained on the sampled instances into a knowledge guidance model and a learner model. On the target dataset, we design an on-the-fly teacher-student joint fine-tuning approach to progressively transfer knowledge from the knowledge guidance model to the learner model (Second Stage). The learner model can therefore retain more domain-invariant knowledge while learning new knowledge from the target dataset. In the Third Stage, the learner model is fine-tuned to better adapt its learned knowledge to the target dataset. Extensive experiments and analyses on several ABSA benchmarks demonstrate the effectiveness and universality of our proposed pretraining framework.
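The abstract does not specify the exact objective used in the Second Stage. As a rough illustration only, teacher-student joint fine-tuning of this kind is commonly formulated as a target-label cross-entropy term plus a distillation term that pulls the learner's predictions toward the frozen knowledge guidance model. The function name, the mixing weight `alpha`, and the temperature `T` below are assumptions for the sketch, not details taken from the paper:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def guidance_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Hypothetical Second-Stage objective (sketch):
    (1 - alpha) * CE(student, gold labels)
      + alpha * T^2 * KL(teacher_T || student_T),
    where the teacher (knowledge guidance model) is frozen and both
    distributions are softened by temperature T."""
    # Cross-entropy of the learner on the target-domain gold labels.
    p_s = softmax(student_logits)
    ce = -np.mean(np.log(p_s[np.arange(len(labels)), labels] + 1e-12))
    # KL divergence from the temperature-softened teacher distribution
    # to the temperature-softened student distribution.
    p_t = softmax(teacher_logits / T)
    p_s_T = softmax(student_logits / T)
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s_T + 1e-12)),
                        axis=-1))
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

When the learner's logits match the teacher's, the distillation term vanishes and only the target-label loss remains; annealing `alpha` over training would correspond to progressively shifting from teacher guidance to target-domain supervision.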

CAS (Chinese Academy of Sciences) Partition:
Publication-year [2023] edition:
Major category | Zone 2, Computer Science
Subcategories | Zone 1, Acoustics; Zone 2, Engineering: Electrical & Electronic
Latest [2025] edition:
Major category | Zone 2, Computer Science
Subcategories | Zone 1, Acoustics; Zone 2, Engineering: Electrical & Electronic
JCR Partition:
Publication-year [2022] edition: Q1 Acoustics; Q1 Engineering, Electrical & Electronic
Latest [2023] edition: Q1 Acoustics; Q2 Engineering, Electrical & Electronic


First author's affiliation: [1] Research Center for Graphic Communication, Printing and Packaging, and Institute of Artificial Intelligence, Wuhan University, Wuhan 430072, China

