Abstract: This paper proposes an event-related potential (ERP) analysis method based on a denoising-autoencoder neural network. First, a three-layer neural network is built and initialized with a denoising autoencoder, realizing unsupervised learning of the denoising-autoencoder deep learning model: features are learned automatically from unlabeled data, and the weights obtained by training the optimized model serve as the network's initialization parameters. Second, the training is completed simply by fine-tuning the network parameters with labeled samples, which effectively avoids the tendency of randomly initialized networks to fall into local minima. Finally, the resulting network is used to classify Data set Ⅱ of the third BCI competition (ERP EEG signals). The experimental results show that, with the neural network model trained by 2500 denoising-autoencoder iterations, classification accuracies of 73.4%, 87.4% and 97.2% are obtained for subjects A and B when the sample data are averaged over 5, 10 and 15 trials, respectively. The best accuracy outperforms other classification methods and is 0.7% higher than that of the ensemble of support vector machine (SVM) classifiers (ESVM) that won the competition, providing a deep learning approach for analyzing ERP EEG signals.
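As a hedged illustration of the two-stage training described above (unsupervised denoising-autoencoder pretraining followed by supervised fine-tuning), the sketch below uses PyTorch; the layer sizes, corruption level, optimizer settings and data loaders are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: denoising-autoencoder pretraining, then supervised fine-tuning.
# Dimensions, noise level and training settings are assumed, not from the paper.
import torch
import torch.nn as nn

IN_DIM, HID_DIM, N_CLASSES = 1920, 400, 2   # assumed feature/hidden/class sizes
NOISE = 0.3                                  # assumed input corruption level

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(IN_DIM, HID_DIM), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(HID_DIM, IN_DIM), nn.Sigmoid())

    def forward(self, x):
        # Masking noise: randomly zero a fraction of the inputs, reconstruct the clean signal.
        corrupted = x * (torch.rand_like(x) > NOISE).float()
        return self.decoder(self.encoder(corrupted))

def pretrain(dae, unlabeled_loader, epochs=50):
    """Unsupervised stage: learn features from unlabeled trials (inputs assumed scaled to [0, 1])."""
    opt, loss_fn = torch.optim.Adam(dae.parameters(), lr=1e-3), nn.MSELoss()
    for _ in range(epochs):
        for x in unlabeled_loader:
            opt.zero_grad()
            loss_fn(dae(x), x).backward()   # reconstruct the clean input
            opt.step()

def fine_tune(dae, labeled_loader, epochs=20):
    """Supervised stage: reuse the pretrained encoder weights and add an output layer."""
    clf = nn.Sequential(dae.encoder, nn.Linear(HID_DIM, N_CLASSES))
    opt, loss_fn = torch.optim.Adam(clf.parameters(), lr=1e-4), nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in labeled_loader:
            opt.zero_grad()
            loss_fn(clf(x), y).backward()
            opt.step()
    return clf
```

Using the pretrained encoder weights as the classifier's initialization is what replaces random initialization in the description above; only the fine-tuning stage touches the labels.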
Abstract: In this paper we discuss novel algorithms for developing a brain-computer interface (BCI) speller based on single-trial classification of the electroencephalogram (EEG) signal. The idea is to employ suitable methods for reducing the number of channels and optimizing the feature vectors. Removing unnecessary channels and reducing the feature dimension lower the cost, save time and ultimately improve the BCI implementation. Optimal channels are obtained after two stages of sifting: in the first stage, the channels are reduced by up to 30% based on the channels carrying the important event-related potential (ERP) components, and in the second stage, the optimal channels are extracted by a backward-forward selection (BFS) algorithm. We also show that suitable single-trial analysis requires a proper feature vector constructed by recognizing the important ERP components, and we therefore propose an algorithm to identify the less important features in the feature vectors. F-score criteria are used to recognize the effective features, that is, those creating more discrimination between the classes, and the feature vectors are reconstructed from these effective features. Our algorithm was tested on Data set II of BCI Competition III. The results show that we achieve a single-trial accuracy of up to 31%, which is better than the performance of the competition winner (about 25.5%). Moreover, we use a simple classifier and few channels to compute the output performance, whereas the winner used a more complicated classifier and all channels.
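The F-score feature-ranking step mentioned above can be sketched as follows in NumPy; the keep ratio and array layout are illustrative assumptions rather than the paper's actual settings. The two-stage channel sifting (ERP-based reduction followed by backward-forward selection) would additionally wrap a classifier around candidate channel subsets and is not shown here.

```python
# Hypothetical sketch of F-score feature ranking for a two-class ERP problem
# (target vs. non-target trials). Array names and keep_ratio are assumptions.
import numpy as np

def f_scores(X_pos, X_neg):
    """F-score of each feature; X_pos, X_neg are (n_trials, n_features) arrays for the two classes."""
    X_all = np.vstack([X_pos, X_neg])
    num = (X_pos.mean(0) - X_all.mean(0)) ** 2 + (X_neg.mean(0) - X_all.mean(0)) ** 2
    den = X_pos.var(0, ddof=1) + X_neg.var(0, ddof=1)
    return num / (den + 1e-12)              # small constant avoids division by zero

def select_features(X_pos, X_neg, keep_ratio=0.5):
    """Return the indices of the highest-scoring fraction of features."""
    scores = f_scores(X_pos, X_neg)
    n_keep = int(keep_ratio * scores.size)
    return np.argsort(scores)[::-1][:n_keep]
```

A larger F-score means the feature's class means are well separated relative to its within-class variance; the reconstructed feature vectors keep only the indices returned by select_features.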