Abstract
This paper focuses on end-to-end task-oriented dialogue systems, which jointly handle dialogue state tracking (DST) and response generation. Traditional methods usually adopt a supervised paradigm to learn DST from a manually labeled corpus. However, annotating such a corpus is costly and time-consuming, and the annotations cannot cover the wide range of domains encountered in the real world. To solve this problem, we propose a multi-span prediction network (MSPN) that performs unsupervised DST for end-to-end task-oriented dialogue. Specifically, MSPN contains a novel split-merge copy mechanism that captures long-term dependencies in dialogues to automatically extract multiple text spans as keywords. Based on these keywords, MSPN uses a semantic-distance-based clustering approach to obtain the values of each slot. In addition, we propose an ontology-based reinforcement learning approach, which employs the values of each slot to train MSPN to generate relevant values. Experimental results on single-domain and multi-domain task-oriented dialogue datasets show that MSPN achieves state-of-the-art performance with significant improvements. Besides, we construct a new Chinese dialogue dataset, MeDial, in the low-resource medical domain, which further demonstrates the adaptability of MSPN.
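As a rough illustration of the semantic-distance clustering step mentioned in the abstract, the following Python sketch groups extracted keyword spans into candidate slot-value clusters by cosine distance between span embeddings. It is not the authors' implementation: the greedy threshold procedure, the `cluster_spans` function, the distance threshold, and the toy spans with random embeddings are all hypothetical stand-ins for whatever encoder and clustering method MSPN actually uses.

```python
# A minimal, hypothetical sketch of semantic-distance-based clustering of
# extracted keyword spans into candidate slot values. This is NOT the MSPN
# implementation from the paper; span embeddings would normally come from
# an encoder and are faked with random vectors below.
import numpy as np

def cluster_spans(spans, embeddings, distance_threshold=0.4):
    """Greedily assign each span to the closest existing cluster (by cosine
    distance to the cluster centroid); open a new cluster if none is close."""
    clusters = []  # each cluster: {"spans": [...], "sum": running sum of unit vectors}
    for span, vec in zip(spans, embeddings):
        v = vec / (np.linalg.norm(vec) + 1e-12)
        best_dist, best_cluster = None, None
        for c in clusters:
            centroid = c["sum"] / (np.linalg.norm(c["sum"]) + 1e-12)
            dist = 1.0 - float(np.dot(v, centroid))
            if dist < distance_threshold and (best_dist is None or dist < best_dist):
                best_dist, best_cluster = dist, c
        if best_cluster is None:
            clusters.append({"spans": [span], "sum": v.copy()})
        else:
            best_cluster["spans"].append(span)
            best_cluster["sum"] += v
    return [c["spans"] for c in clusters]

# Toy usage with random vectors standing in for real encoder embeddings.
spans = ["cheap", "moderate price", "north part of town", "northern area"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(spans), 16))
print(cluster_spans(spans, embeddings))
```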
Authors
刘庆斌
何世柱
刘操
刘康
赵军
Qing-Bin Liu; Shi-Zhu He; Cao Liu; Kang Liu; Jun Zhao (National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Beijing Sankuai Online Technology Company Limited, Beijing 100102, China)
Funding
Supported by the National Key Research and Development Program of China under Grant No. 2020AAA0106400,
the National Natural Science Foundation of China under Grant Nos. 61922085 and 61976211,
the Independent Research Project of National Laboratory of Pattern Recognition under Grant No. Z-2018013,
the Key Research Program of Chinese Academy of Sciences (CAS) under Grant No. ZDBS-SSW-JSC006,
and the Youth Innovation Promotion Association CAS under Grant No. 201912.