Abstract: Named entity recognition (NER) is a key step in information extraction from agricultural text. To address missing local context features, single-source character representations, and low recognition rates on rare entities, this paper proposes an NER method that fuses BERT (Bidirectional Encoder Representations from Transformers) character-level features with external dictionary features. A pretrained BERT model fuses left and right context to enrich each character's semantic representation and mitigate polysemy. A self-built agricultural dictionary, queried with a bidirectional maximum-matching strategy, yields distributed dictionary features that improve recognition of rare or unseen entities. A bidirectional long short-term memory (BiLSTM) network then extracts the sequence feature matrix, and a conditional random field (CRF) layer decodes the globally optimal label sequence. With domain-expert knowledge, an agricultural corpus of 5,295 annotated sentences covering 5 categories of agricultural entities was constructed. On this corpus the model achieves a precision of 94.84%, a recall of 95.23%, and an F1 score of 95.03%. The results show that the method effectively recognizes agricultural named entities and outperforms the compared models by a clear margin.
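The bidirectional maximum-matching dictionary lookup described above can be sketched in pure Python. The segmentation logic itself is the standard algorithm; the agricultural terms in the test dictionary are illustrative placeholders, not entries from the paper's self-built dictionary.

```python
def forward_max_match(text, dictionary, max_len=6):
    """Greedy left-to-right segmentation: always take the longest dictionary match."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            # Fall back to a single character when no dictionary word matches.
            if text[i:j] in dictionary or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

def backward_max_match(text, dictionary, max_len=6):
    """Greedy right-to-left segmentation, mirroring forward_max_match."""
    tokens, j = [], len(text)
    while j > 0:
        for i in range(max(0, j - max_len), j):
            if text[i:j] in dictionary or i == j - 1:
                tokens.insert(0, text[i:j])
                j = i
                break
    return tokens

def bidirectional_max_match(text, dictionary, max_len=6):
    """Run both passes; prefer fewer tokens, then fewer single-character tokens."""
    fwd = forward_max_match(text, dictionary, max_len)
    bwd = backward_max_match(text, dictionary, max_len)
    if len(fwd) != len(bwd):
        return fwd if len(fwd) < len(bwd) else bwd
    singles = lambda toks: sum(1 for t in toks if len(t) == 1)
    return fwd if singles(fwd) <= singles(bwd) else bwd
```

The segmentation produced this way is what the paper converts into distributed dictionary features; the tie-breaking heuristic (fewer words, then fewer single characters) is the conventional choice for bidirectional matching.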
Funding: supported by the National Natural Science Foundation of China (No. 62272231), the Natural Science Foundation of Jiangsu Province of China (No. BK20210340), the National Key R&D Program of China (No. 2021YFA1001100), the Fundamental Research Funds for the Central Universities, China (No. NJ2022028), and the CAAI-Huawei MindSpore Open Fund, China.
Abstract: Localizing discriminative object parts (e.g., bird head) is crucial for fine-grained classification tasks, especially for the more challenging fine-grained few-shot scenario. Previous work relies on the learned object parts in a unified manner, attending to the same object parts (even with shared attention weights) across different few-shot episodic tasks. In this paper, we propose adaptively capturing the task-specific object parts that require attention for each few-shot task, since the parts that distinguish different tasks are naturally different. Specifically, for a few-shot task, after obtaining part-level deep features, we learn a task-specific part-based dictionary for both aligning and reweighting part features in an episode. Part-level categorical prototypes are then generated from the part features of the support data and later employed to classify query data by distance calculation. To retain the discriminative ability of the part-level representations (i.e., part features and part prototypes), we design an optimal transport solution that also exploits query data in a transductive way to optimize the aforementioned distance calculation for the final predictions. Extensive experiments on five fine-grained benchmarks show the superiority of our method, especially in the 1-shot setting, gaining improvements of 0.12%, 8.56%, and 5.87% over state-of-the-art methods on CUB, Stanford Dogs, and Stanford Cars, respectively.
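Stripping away the task-specific dictionary and the optimal-transport refinement, the prototype-and-distance classification the method builds on can be sketched as a minimal, framework-free illustration (the feature vectors and class names below are invented for the example):

```python
def class_prototypes(support_feats, support_labels):
    """Mean feature vector per class: the categorical prototype."""
    sums, counts = {}, {}
    for feat, label in zip(support_feats, support_labels):
        acc = sums.setdefault(label, [0.0] * len(feat))
        for k, v in enumerate(feat):
            acc[k] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [s / counts[c] for s in acc] for c, acc in sums.items()}

def classify(query_feat, prototypes):
    """Assign the label of the nearest prototype (squared Euclidean distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda c: sqdist(query_feat, prototypes[c]))
```

In the paper, this distance is computed per aligned part rather than per image, and is further optimized transductively with optimal transport; the sketch shows only the shared backbone of prototype-based few-shot evaluation.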
Funding: Natural Science Foundation of Jiangsu Province, Grant/Award Number: BK20170765; National Natural Science Foundation of China, Grant/Award Number: 61703201; Future Network Scientific Research Fund Project, Grant/Award Number: FNSRFP2021YB26; Science Foundation of Nanjing Institute of Technology, Grant/Award Numbers: ZKJ202002, ZKJ202003, and YKJ202019.
Abstract: Unconstrained face images are affected by many factors, such as illumination, posture, expression, occlusion, age, and accessories, so the noise pollution implied in the original samples is random. To improve sample quality, a weighted block cooperative sparse representation algorithm based on a visual saliency dictionary is proposed. First, the algorithm uses the biological visual attention mechanism to quickly and accurately locate the salient facial target and constructs the visual saliency dictionary. Then, a block cooperation framework is presented to perform sparse coding for different local structures of the human face, and a weighted regularization term is introduced into the sparse representation process to strengthen the discriminative information hidden in the coding coefficients. Finally, by synthesising the sparse representation results over all visual saliency block dictionaries, the global coding residual is obtained and the class label is assigned. Experimental results on four databases, namely AR, Extended Yale B, LFW, and PubFig, indicate that the combination of the visual saliency dictionary, block cooperative sparse representation, and weighted constraint coding effectively enhances the accuracy of the sparse representation of the test samples and improves unconstrained face recognition performance.
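The final residual-based labeling step can be illustrated with a deliberately simplified sketch: one dictionary atom per class and a closed-form least-squares fit, omitting the paper's block partitioning, saliency weighting, and cooperative coding (the atoms below are toy vectors, not learned dictionaries):

```python
def residual(sample, atom):
    """Squared residual of the best one-atom least-squares reconstruction."""
    dot = sum(s * a for s, a in zip(sample, atom))
    norm2 = sum(a * a for a in atom)
    coef = dot / norm2  # closed-form least-squares coefficient for one atom
    return sum((s - coef * a) ** 2 for s, a in zip(sample, atom))

def src_label(sample, class_atoms):
    """Sparse-representation-style decision: the class that reconstructs best."""
    return min(class_atoms, key=lambda c: residual(sample, class_atoms[c]))
```

The full algorithm replaces the single atom with a per-class sub-dictionary of saliency-weighted blocks and sums residuals across blocks, but the decision rule (minimum global coding residual) is the same.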