Abstract: Semantic slot filling is an important task in dialogue systems. It aims to assign the correct label to each word of an input sentence, and its performance strongly affects the downstream dialogue management module. At present, deep learning approaches to this task generally initialize the model with random word vectors or pretrained word vectors. However, random word vectors carry no semantic or syntactic information, while pretrained word vectors suffer from the "one word, one meaning" problem and cannot provide the model with context-dependent word vectors. To address this, a deep learning model based on the pretrained model BERT and a long short-term memory network is proposed. The model uses Bidirectional Encoder Representations from Transformers (BERT) to produce context-dependent word vectors, feeds them into a bidirectional long short-term memory network (BiLSTM), and finally decodes with a Softmax function and a conditional random field. The pretrained BERT model and the BiLSTM network are trained jointly as a whole, improving the performance of semantic slot filling. On the MIT Restaurant Corpus, MIT Movie Corpus, and MIT Movie trivial Corpus datasets, the proposed model achieves good results, with maximum F1 scores of 78.74%, 87.60%, and 71.54%, respectively. The experimental results show that the proposed model significantly improves the F1 score of the semantic slot filling task.
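The architecture described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the Hugging Face checkpoint, hidden size, and softmax-only decoding (the paper additionally uses a CRF) are assumptions.

```python
# Minimal sketch of the BERT + BiLSTM slot tagger described above.
# Assumes the Hugging Face transformers library; the paper's CRF
# decoder is replaced here by plain softmax/argmax decoding.
import torch
import torch.nn as nn
from transformers import BertModel

class BertBiLstmTagger(nn.Module):
    def __init__(self, num_tags: int, hidden: int = 256):
        super().__init__()
        # Not frozen: BERT and BiLSTM are trained jointly, as in the paper.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=hidden,
            batch_first=True,
            bidirectional=True,  # BiLSTM over BERT's contextual vectors
        )
        self.classifier = nn.Linear(2 * hidden, num_tags)

    def forward(self, input_ids, attention_mask):
        # Context-dependent word vectors from BERT
        states = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(states)
        emissions = self.classifier(lstm_out)
        return emissions.argmax(dim=-1)  # per-token slot labels
```

Training would minimize a token-level cross-entropy (or, as in the paper, the negative CRF log-likelihood) over the slot labels.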
Abstract: Learning the representations of nodes in a network can benefit various analysis tasks such as node classification, link prediction, clustering, and anomaly detection. Such a representation learning problem is referred to as network embedding, and it has attracted significant attention in recent years. In this article, we briefly review the existing network embedding methods by two taxonomies. The technical taxonomy focuses on the specific techniques used and divides the existing network embedding methods into two stages, i.e., context construction and objective design. The non-technical taxonomy focuses on the problem setting aspect and categorizes existing work based on whether to preserve special network properties, to consider special network types, or to incorporate additional inputs. Finally, we summarize the main findings based on the two taxonomies, analyze their usefulness, and discuss future directions in this area.
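The two-stage view (context construction, then objective design) can be made concrete with a DeepWalk-style sketch, one of the classic methods such a survey covers: random walks construct node contexts, and a skip-gram objective learns the embeddings. The walk counts, walk length, and dimensionality below are illustrative assumptions.

```python
# DeepWalk-style sketch: stage 1 constructs contexts via random walks,
# stage 2 optimizes a skip-gram objective with gensim's Word2Vec.
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(graph, num_walks=10, walk_len=40):
    walks = []
    for _ in range(num_walks):
        for start in graph.nodes():
            walk = [start]
            while len(walk) < walk_len:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(random.choice(neighbors))
            walks.append([str(n) for n in walk])  # nodes as "words"
    return walks

graph = nx.karate_club_graph()              # toy example graph
walks = random_walks(graph)                 # context construction
model = Word2Vec(walks, vector_size=64, window=5, sg=1, epochs=5)
vec = model.wv["0"]                         # 64-d embedding of node 0
```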
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2020YFB2104100, the National Natural Science Foundation of China under Grant Nos. 61972403 and U1711261, the Fundamental Research Funds for the Central Universities of China, the Research Funds of Renmin University of China, and the Tencent Rhino-Bird Joint Research Program.
Abstract: Identifying semantic types for attributes in relations, known as attribute semantic type (AST) identification, plays an important role in many data analysis tasks, such as data cleaning, schema matching, and keyword search in databases. However, due to a lack of unified naming standards across prevalent information systems (a.k.a. information islands), AST identification remains an open problem. To tackle this problem, we propose a context-aware method to identify the ASTs for relations. We transform AST identification into a multi-class classification problem and propose a schema context aware (SCA) model to learn the representation from a collection of relations associated with attribute values and schema context. Based on the learned representation, we predict the AST for a given attribute from an underlying relation, where the predicted AST is mapped to one of the labeled ASTs. To improve the performance of AST identification, especially when the predicted semantic types of attributes are not included in the labeled ASTs, we then introduce knowledge base embeddings (KBVec) to enhance the above representation and construct a schema context aware model with knowledge base enhancement (SCA-KB) to obtain a stable and robust model. Extensive experiments on real datasets demonstrate that our context-aware method outperforms state-of-the-art approaches by a large margin: up to 6.14% and 25.17% in terms of macro average F1 score, and up to 0.28% and 9.56% in terms of weighted F1 score, over high-quality and low-quality datasets respectively.
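Not the authors' SCA/SCA-KB model, but a minimal baseline in the same framing: serialize each column (attribute name plus sample values) and classify it into one of the labeled ASTs. The vectorizer, classifier, and toy data below are assumptions for illustration.

```python
# AST identification as multi-class classification: each column is
# serialized (attribute name + sample values) and mapped to a labeled
# semantic type. A stand-in for the learned SCA representation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def serialize(attr_name, values):
    return attr_name + " " + " ".join(map(str, values))

# Hypothetical training columns with labeled ASTs.
columns = [
    serialize("city", ["Beijing", "Paris", "Boston"]),
    serialize("birthday", ["1990-01-02", "1985-07-30"]),
    serialize("price", ["12.5", "99.0", "3.2"]),
]
labels = ["location", "date", "amount"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(columns, labels)
# Predicts one of the labeled ASTs for an unseen column.
print(clf.predict([serialize("town", ["London", "Osaka", "Lyon"])]))
```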
Abstract: Crowd density estimation has important applications in intelligent security. To address the challenges of large viewpoint variation in 2D images, loss of spatial feature information, and the difficulty of extracting scale and crowd features, a crowd density estimation method based on multi-feature information fusion is proposed. The method uses an attention-guided perspective of spatial attention (PSA) module to encode the multi-view information in an image, capturing the global spatial context of the feature maps and weakening the influence of viewpoint changes. A multi-scale information aggregation (MSIA) module then fuses multi-scale asymmetric convolutions with dilated convolutions of different dilation rates to obtain comprehensive scale and feature information. Finally, fine-grained semantic feature embedding fusion supplements the spatial information of high-level feature maps and the semantic information of low-level feature maps, letting contextual and scale information complement each other and improving the accuracy and robustness of the model. Experiments on the ShanghaiTech, Mall, and WorldExpo'10 datasets show that the proposed method outperforms the compared methods.
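As a hedged sketch of the multi-scale aggregation idea (not the paper's exact MSIA design), the PyTorch module below runs an asymmetric-convolution branch and two dilated-convolution branches in parallel and fuses the concatenated features; the branch widths, kernel sizes, and dilation rates are assumptions.

```python
# Multi-scale sketch: asymmetric (1x3 then 3x1) convolutions fused with
# 3x3 dilated convolutions of different rates, as the abstract outlines.
import torch
import torch.nn as nn

class MultiScaleAggregation(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int = 64):
        super().__init__()
        self.asym = nn.Sequential(   # asymmetric convolution branch
            nn.Conv2d(in_ch, branch_ch, kernel_size=(1, 3), padding=(0, 1)),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=(3, 1), padding=(1, 0)),
        )
        self.dil2 = nn.Conv2d(in_ch, branch_ch, 3, padding=2, dilation=2)
        self.dil4 = nn.Conv2d(in_ch, branch_ch, 3, padding=4, dilation=4)
        self.fuse = nn.Conv2d(3 * branch_ch, in_ch, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([self.asym(x), self.dil2(x), self.dil4(x)], dim=1)
        return self.fuse(feats)      # fused back to in_ch channels

x = torch.randn(1, 128, 32, 32)             # a backbone feature map
print(MultiScaleAggregation(128)(x).shape)  # torch.Size([1, 128, 32, 32])
```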
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61663041 and 61763041), the Program for Changjiang Scholars and Innovative Research Team in Universities, China (No. IRT_15R40), the Research Fund for the Chunhui Program of Ministry of Education of China (No. Z2014022), the Natural Science Foundation of Qinghai Province, China (No. 2014-ZJ-721), and the Fundamental Research Funds for the Central Universities, China (No. 2017TS045).
Abstract: Most word embedding models have the following problems: (1) in models based on bag-of-words contexts, the structural relations of sentences are completely neglected; (2) each word uses a single embedding, which makes the model indiscriminative for polysemous words; (3) word embeddings easily tend toward the contextual structural similarity of sentences. To solve these problems, we propose an easy-to-use representation algorithm for syntactic word embedding (SWE). The main procedures are: (1) a polysemy tagging algorithm based on latent Dirichlet allocation (LDA) is used to represent polysemous words; (2) the symbols '+' and '-' are adopted to indicate the directions of dependency relations; (3) stopwords and their dependencies are deleted; (4) a dependency skip is applied to connect indirect dependencies; (5) dependency-based contexts are fed into a word2vec model. Experimental results show that our model generates desirable word embeddings in similarity evaluation tasks. Besides, semantic and syntactic features can be captured from dependency-based syntactic contexts, exhibiting less topical and more syntactic similarity. We conclude that SWE outperforms single-embedding learning models.
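Steps (2)-(5) can be sketched as follows. Given pre-parsed dependency triples (hypothetical here), each word's context is the related word prefixed with '+' or '-' for direction; training on such (word, context) pairs is then approximated in gensim by treating each pair as a two-token sentence with window=1. The LDA polysemy tagging of step (1) is omitted, and all hyperparameters are illustrative.

```python
# Sketch of SWE steps (2)-(5): direction-marked dependency contexts
# fed to word2vec. Triples and hyperparameters are made up for this demo.
from gensim.models import Word2Vec

# Hypothetical (head, relation, dependent) triples, stopwords removed.
triples = [
    ("eats", "nsubj", "cat"),
    ("eats", "dobj", "fish"),
    ("fish", "amod", "raw"),
]

pairs = []
for head, rel, dep in triples:
    pairs.append([head, "+" + rel + "_" + dep])  # head -> dependent: '+'
    pairs.append([dep, "-" + rel + "_" + head])  # dependent -> head: '-'
# A dependency skip (step 4) would also connect indirect dependencies,
# e.g. ("eats", "+dobj_amod_raw"); omitted here for brevity.

model = Word2Vec(pairs, vector_size=50, window=1, sg=1,
                 min_count=1, epochs=50)
print(model.wv.most_similar("eats", topn=2))
```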