Abstract
Learning unsupervised representations from multivariate medical signals, such as multi-modal polysomnography and multi-channel electroencephalogram (EEG), has gained increasing attention in health informatics. To address the problem that existing models do not fully exploit the multivariate temporal structure of medical signals, this paper proposes an unsupervised multi-Context deep Convolutional AutoEncoder (mCtx-CAE). First, the traditional convolutional neural network is modified into a multivariate convolutional autoencoder module that extracts multivariate context features within signal segments. Second, semantic learning is adopted to auto-encode the temporal information among signal segments, further extracting temporal context features. Finally, an end-to-end multi-context autoencoder is trained with an objective function defined over the shared feature representation. Experimental results on two public benchmark datasets of multi-modal and multi-channel signals from different medical scenarios (UCD and CHB-MIT) show that the proposed model outperforms state-of-the-art unsupervised feature learning methods, effectively improving the fused feature representation of multivariate medical signals, which is of practical value for efficient analysis of clinical time-series data.
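To make the two-stage design outlined above concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' released code): a convolutional autoencoder captures multivariate context within each signal segment, a sequence-level autoencoder over the shared segment embeddings captures temporal context across segments, and the two reconstruction terms are combined into a single objective. Layer sizes, the GRU-based temporal module, and the weighting factor alpha are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of an mCtx-CAE-style model; architecture details are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SegmentConvAE(nn.Module):
    """Intra-segment module: convolutional autoencoder over one multivariate segment."""
    def __init__(self, n_channels: int, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(32, latent_dim, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, n_channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):                       # x: (batch, channels, time)
        feat = self.encoder(x)                  # (batch, latent_dim, time / 4)
        x_hat = self.decoder(feat)              # reconstruct the raw segment
        z = feat.mean(dim=-1)                   # shared per-segment embedding
        return z, x_hat


class TemporalContextAE(nn.Module):
    """Inter-segment module: autoencodes the sequence of segment embeddings."""
    def __init__(self, latent_dim: int = 64, context_dim: int = 64):
        super().__init__()
        self.enc = nn.GRU(latent_dim, context_dim, batch_first=True)
        self.dec = nn.Linear(context_dim, latent_dim)

    def forward(self, z_seq):                   # z_seq: (batch, n_segments, latent_dim)
        h_seq, _ = self.enc(z_seq)              # temporal context for each segment
        z_hat = self.dec(h_seq)                 # reconstruct each segment embedding
        return h_seq, z_hat


def mctx_loss(x, x_hat, z_seq, z_hat, alpha: float = 0.5):
    """Joint objective: intra-segment plus inter-segment reconstruction terms."""
    return F.mse_loss(x_hat, x) + alpha * F.mse_loss(z_hat, z_seq)


# Toy usage: 4 recordings x 16 segments of 8-channel signals, 256 samples each.
seg_ae, ctx_ae = SegmentConvAE(n_channels=8), TemporalContextAE()
x = torch.randn(4 * 16, 8, 256)                 # flatten segments for the conv AE
z, x_hat = seg_ae(x)
z_seq = z.view(4, 16, -1)                       # regroup embeddings into sequences
_, z_hat = ctx_ae(z_seq)
loss = mctx_loss(x, x_hat, z_seq, z_hat)
loss.backward()
```

In this sketch, the shared embedding z is what would be used as the learned fused feature for downstream clinical analysis after unsupervised training.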
Authors
YUAN Ye, JIA Kebin, LIU Pengyu
Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing University of Technology, Beijing 100124, China
Source
Journal of Electronics & Information Technology (《电子与信息学报》)
EI
CSCD
Peking University Core Journals (北大核心)
2020, No. 2, pp. 371-378 (8 pages)
Funding
National Natural Science Foundation of China (81871394)
Beijing Laboratory of Advanced Information Networks Foundation (040000546618017)
Keywords
Multivariate medical signals
Autoencoders
Context learning
Convolutional neural networks
Deep learning