Journal Articles
3 articles found
Product Data Exchange Based on Grid-PLM in a Manufacturing Grid Environment
1
Authors: 罗东华 (Luo Donghua), 陈德焜 (Chen Dekun), 罗倩 (Luo Qian). 《机械》 (Machinery), 2005, Issue 2, pp. 29-31 (3 pages)
This article introduces a mechanism for structured product data exchange in a manufacturing grid environment, covering an analysis of product data exchange, an overview of grid services, and the construction of the exchange mechanism. By introducing an OGSA-based Grid-PLM model and applying Web Services technology, product data exchange is realized between geographically distributed, heterogeneous systems.
Keywords: OGSA, Grid-PLM, product data exchange
Vision Enhanced Generative Pre-trained Language Model for Multimodal Sentence Summarization
2
Authors: Liqiang Jing, Yiren Li, Junhao Xu, Yongcan Yu, Pei Shen, Xuemeng Song. Machine Intelligence Research (EI, CSCD), 2023, Issue 2, pp. 289-298 (10 pages)
Multimodal sentence summarization (MMSS) is a new yet challenging task that aims to generate a concise summary of a long sentence and its corresponding image. Although existing methods have achieved promising success in MMSS, they overlook the powerful generation ability of generative pre-trained language models (GPLMs), which have been shown to be effective in many text generation tasks. To fill this research gap, we propose using GPLMs to improve the performance of MMSS. Notably, adopting GPLMs for MMSS inevitably faces two challenges: 1) What fusion strategy should be used to properly inject visual information into GPLMs? 2) How can the GPLM's generation ability be kept intact to the utmost extent when the visual feature is injected? To address these two challenges, we propose a vision-enhanced generative pre-trained language model for MMSS, dubbed Vision-GPLM. In Vision-GPLM, we obtain the features of the visual and textual modalities with two separate encoders and use a text decoder to produce the summary. In particular, we use multi-head attention to fuse the features extracted from the visual and textual modalities, thereby injecting the visual feature into the GPLM. Vision-GPLM is trained in two stages: a vision-oriented pre-training stage and a fine-tuning stage. In the vision-oriented pre-training stage, only the visual encoder is trained, via the masked language model task, while the other components are frozen, with the aim of obtaining homogeneous representations of text and image. In the fine-tuning stage, all components of Vision-GPLM are trained on the MMSS task. Extensive experiments on a public MMSS dataset verify the superiority of our model over existing baselines.
Keywords: multimodal sentence summarization (MMSS), generative pre-trained language model (GPLM), natural language generation, deep learning, artificial intelligence
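The fusion step described in the abstract, where features from the visual and textual modalities are combined via attention so that visual information is injected into the language model, can be sketched in miniature. The sketch below uses a single attention head in plain NumPy; the function name, toy shapes, and random weights are all hypothetical illustrations, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(text_feats, vis_feats, Wq, Wk, Wv):
    """Single-head cross-attention: text tokens (queries) attend to
    visual patches (keys/values); output keeps the text length."""
    q = text_feats @ Wq
    k = vis_feats @ Wk
    v = vis_feats @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])  # scaled dot-product
    return softmax(scores) @ v               # weighted sum of visual values

# Toy setup: 5 text tokens, 9 image patches, model dimension 8.
rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))
vis = rng.normal(size=(9, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
fused = cross_attention_fuse(text, vis, Wq, Wk, Wv)
print(fused.shape)  # (5, 8): one vision-enriched vector per text token
```

A multi-head version would split the model dimension across several such attention maps and concatenate the results; the fused vectors could then be fed to the text decoder in place of the plain text features.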
GPL_m-Stability of Implicit Runge-Kutta Methods for Differential Equations with Multiple Delays (in English)
3
Authors: 项家祥 (Xiang Jiaxiang), 崔义娟 (Cui Yijuan), 丛玉豪 (Cong Yuhao). 《上海师范大学学报(自然科学版)》 (Journal of Shanghai Normal University, Natural Sciences), 2008, Issue 2, pp. 111-116 (6 pages)
This paper studies the stability of numerical solutions of differential equations with multiple delays obtained by implicit Runge-Kutta (IRK) methods. For the linear test equation, it is shown that an IRK method is GPL_m-stable if and only if it is L-stable.
Keywords: delay differential equations, GPL_m-stability, implicit Runge-Kutta methods
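The stability result above concerns a linear test model that the listing does not reproduce. A common form of the multi-delay linear test equation in this literature (an assumption here, not taken from the article text) is:

```latex
% Assumed standard linear test equation with m constant delays
y'(t) = L\, y(t) + \sum_{j=1}^{m} M_j\, y(t - \tau_j), \qquad t \ge 0,
\qquad y(t) = \varphi(t), \quad t \le 0,
```

where $L$ and $M_j$ are constant matrices and $\tau_j > 0$ are the delays. Roughly, GPL_m-stability asks that the numerical solution tend to zero for every stepsize whenever the exact solution is asymptotically stable, strengthened by an L-stability-type damping condition; this gloss is inferred from the naming convention (GP-stability plus L-stability) rather than from the article itself.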