Abstract
Document-level event extraction is generally divided into three subtasks: candidate entity recognition, event detection, and argument recognition, which are then performed sequentially in a cascade; this design causes error propagation. In addition, most existing models predict the number of events only implicitly during decoding, and can predict event arguments only in a predefined event order and role order, so earlier extracted events cannot take later ones into account. To address these issues, this paper proposes a joint multi-task framework for parallel event extraction. First, a pre-trained language model is used as the encoder for the document sentences, and the framework detects the event types present in the document; a structured self-attention mechanism then extracts pseudo-trigger features and predicts the number of events of each type. Next, the pseudo-trigger features are made to interact with candidate-argument features, and the arguments of every event are predicted in parallel, which greatly reduces training time while achieving better performance than the baseline models. The final event extraction F1 score is 78%, with F1 scores of 98.7% for the event type detection subtask, 90.1% for the event number prediction subtask, and 90.3% for the entity recognition subtask.
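The two core steps of the pipeline described in the abstract, pooling pseudo-trigger features with structured self-attention and then scoring all event arguments in parallel, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names, tensor shapes, and the bilinear scoring form are assumptions.

```python
import numpy as np

def pseudo_trigger_features(H, W1, w2):
    """Structured self-attention pooling (Lin et al., 2017 style):
    turn n token vectors H (n, d) into r pseudo-trigger vectors."""
    scores = w2 @ np.tanh(W1 @ H.T)              # (r, n) attention logits
    scores -= scores.max(axis=1, keepdims=True)  # softmax stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # each row sums to 1
    return A @ H                                 # (r, d) pooled features

def parallel_role_scores(T, C, W):
    """Score every (pseudo-trigger, role, candidate-argument) triple in
    one shot, so no predefined event/role decoding order is needed.
    T: (m, d) pseudo-triggers, C: (k, d) candidate arguments,
    W: (n_roles, d, d) bilinear interaction maps."""
    return np.einsum('md,rde,ke->mrk', T, W, C)  # (m, n_roles, k)

rng = np.random.default_rng(0)
n, d, da, r, k, n_roles = 10, 16, 8, 3, 5, 4
H = rng.normal(size=(n, d))                      # encoder token features
T = pseudo_trigger_features(H, rng.normal(size=(da, d)),
                            rng.normal(size=(r, da)))
S = parallel_role_scores(T, rng.normal(size=(k, d)),
                         rng.normal(size=(n_roles, d, d)))
print(T.shape, S.shape)                          # (3, 16) (3, 4, 5)
```

Because `parallel_role_scores` produces one score tensor over all triggers, roles, and candidates at once, argument decisions for every event can be made simultaneously rather than in a fixed event-by-event, role-by-role order.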
Authors
Qin Haitao, Xian Yantuan, Xiang Yan, Huang Yuxin
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, China
Source
Application of Electronic Technique (《电子技术应用》), 2024, No. 4, pp. 67-74 (8 pages)
Funding
National Natural Science Foundation of China (62266028); Yunnan Major Science and Technology Special Plan Project (202202AD080003).
Keywords
document-level event extraction
multi-task joint learning
pre-trained language model
structured self-attention mechanism
parallel prediction