Funding: supported by the National Natural Science Foundation of China (Nos. 61422203 and 61333014) and the National Key Basic Research Program of China (No. 2014CB340501).
Abstract: Recurrent neural networks (RNN) have been very successful in handling sequence data. However, understanding RNN and finding the best practices for RNN learning is a difficult task, partly because there are many competing and complex hidden units, such as the long short-term memory (LSTM) and the gated recurrent unit (GRU). We propose a gated unit for RNN, named the minimal gated unit (MGU), since it contains only one gate, which is a minimal design among all gated hidden units. The design of MGU benefits from evaluation results on LSTM and GRU in the literature. Experiments on various sequence data show that MGU has accuracy comparable to GRU, but has a simpler structure, fewer parameters, and faster training. Hence, MGU is suitable for RNN applications. Its simple architecture also means that it is easier to evaluate and tune, and in principle it is easier to study MGU's properties theoretically and empirically.
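The single-gate design described above can be sketched in a few lines. The following NumPy sketch follows the MGU update equations from the paper the abstract summarizes: one forget gate f_t gates both the previous hidden state inside the candidate computation and the final convex combination. Class and parameter names here (MGUCell, Wf, Uf, etc.) are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MGUCell:
    """Minimal gated unit: a single forget gate f_t controls both how much
    of h_{t-1} enters the candidate state and how old and new states mix."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        # One gate means only two (W, U, b) parameter groups in total,
        # versus three for GRU and four for LSTM.
        self.Wf = rng.uniform(-s, s, (hidden_size, input_size))
        self.Uf = rng.uniform(-s, s, (hidden_size, hidden_size))
        self.bf = np.zeros(hidden_size)
        self.Wh = rng.uniform(-s, s, (hidden_size, input_size))
        self.Uh = rng.uniform(-s, s, (hidden_size, hidden_size))
        self.bh = np.zeros(hidden_size)

    def step(self, x, h_prev):
        # f_t = sigma(W_f x_t + U_f h_{t-1} + b_f)
        f = sigmoid(self.Wf @ x + self.Uf @ h_prev + self.bf)
        # candidate state, with h_{t-1} gated by f_t
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (f * h_prev) + self.bh)
        # convex combination of old state and candidate
        return (1.0 - f) * h_prev + f * h_tilde
```

Counting parameters makes the simplicity claim concrete: for input size n and hidden size d, MGU has 2(dn + d^2 + d) parameters, while GRU has 3(dn + d^2 + d) and LSTM 4(dn + d^2 + d).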
Abstract: China's current three-year "2+1" model of higher vocational medical education emerged in the 1980s alongside the "4+1" model of five-year undergraduate medical education: two years of study at school followed by one year of hospital internship. Under this model, theoretical and practical teaching are severely disconnected, and it can no longer meet the needs of higher vocational medical education under the new circumstances. The MGU(1+2m) talent cultivation model is the result of our school's recent efforts to deepen teaching reform and emphasize practical-ability training, guided by the vocational-education philosophy of being employment-oriented and competency-centered while vigorously promoting work-study integration. Based on the training objectives of each specialty and the knowledge, ability, and quality requirements of grass-roots medical and health posts, the three years of teaching are designed as a whole and implemented in stages: one year of study at school and two years of study at a hospital, including a one-month (abbreviated "m") internship at a township health center during the third-year internship period. This model, suited to cultivating high-quality applied health personnel for grass-roots units (Meet the Grass-roots Units, "MGU"), is called the "MGU(1+2m)" talent cultivation model for short.
Abstract: Forecasting non-stationary multivariate time series (NSMTS) remains a challenging task. Deep learning models based on recurrent neural networks, especially networks built on the long short-term memory (LSTM) and the gated recurrent unit (GRU), have achieved impressive predictive performance. Although the LSTM has a more complex structure, it does not always dominate in performance. The recently proposed minimal gated unit (MGU) network has a simpler structure and can improve training efficiency in image processing and some sequence-processing problems. More importantly, our experiments show that this gated unit can be applied efficiently to NSMTS forecasting, achieving predictive performance comparable to LSTM- and GRU-based networks. However, among networks based on these three types of gated units, no single type always guarantees superior performance. We therefore propose a linear mixed gated unit (MIX gated unit, MIXGU), which dynamically adjusts the mixing weights of GRU and MGU so that each MIXGU in the network obtains a better mixed structure during training. Experimental results show that the MIXGU network, which mixes the two types of gated units, achieves better predictive performance than networks based on a single type of gated unit.
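The linear mixture idea above can be sketched as follows. This is a hypothetical NumPy illustration, assuming the simplest reading of the abstract: a MIXGU step computes a GRU step and an MGU step on the same input and blends them with a mixing weight alpha. In the actual MIXGU, alpha would be a learnable quantity adjusted dynamically during training; here it is a fixed scalar, and all function and parameter names (mixgu_step, init_params, etc.) are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """Standard GRU update: update gate z, reset gate r, candidate state."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
    return (1.0 - z) * h + z * h_tilde

def mgu_step(x, h, p):
    """MGU update: a single forget gate f plays both roles."""
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h)
    h_tilde = np.tanh(p["Wc"] @ x + p["Uc"] @ (f * h))
    return (1.0 - f) * h + f * h_tilde

def mixgu_step(x, h, p, alpha):
    """Linear mixture of the two unit types; alpha in [0, 1].
    In MIXGU proper, alpha would be learned per unit during training."""
    return alpha * gru_step(x, h, p) + (1.0 - alpha) * mgu_step(x, h, p)

def init_params(input_size, hidden_size, seed=0):
    """Random parameters for both sub-units (illustrative initialization)."""
    rng = np.random.default_rng(seed)
    p = {k: rng.uniform(-0.1, 0.1, (hidden_size, input_size))
         for k in ["Wz", "Wr", "Wh", "Wf", "Wc"]}
    p.update({k: rng.uniform(-0.1, 0.1, (hidden_size, hidden_size))
              for k in ["Uz", "Ur", "Uh", "Uf", "Uc"]})
    return p
```

With alpha = 1 the step reduces to a pure GRU and with alpha = 0 to a pure MGU, so letting the network adjust alpha lets each unit settle anywhere between the two designs rather than committing to one up front.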