Abstract: In recent years, with the continuous development of information technology, data of all kinds have grown explosively. Traditional machine learning algorithms achieve good performance only when the test data follow a distribution similar to that of the training data; in other words, they cannot learn continuously and adaptively in dynamic environments, yet this adaptive learning capability is a property of any intelligent system. Deep neural networks have shown the best learning ability in many applications; however, when such models are updated incrementally on new data they suffer from catastrophic interference or forgetting, so that after learning a new task the model forgets how to solve old ones. Research on continual learning (CL) alleviates this problem. Continual learning mimics the learning process of the brain: it learns, in a given order, from a stream of non-independently-and-identically-distributed (non-IID) data and incrementally updates the model according to task performance. Its significance lies in efficiently transferring and reusing previously learned knowledge to master new tasks while greatly reducing the problems caused by forgetting, and it is therefore important for intelligent computing systems that must adapt to changing environments. On this basis, this paper systematically surveys progress in continual learning. It first gives the definition of continual learning and introduces three representative models: learning without forgetting (LwF), elastic weight consolidation (EWC), and gradient episodic memory (GEM). It then discusses the key problems in continual learning and their solutions, classifies and describes three families of continual learning models based on regularization, dynamic architectures, and memory-replay complementary learning systems, and finally points out open problems and possible future research directions.
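Of the three representative models named above, elastic weight consolidation (EWC) is the simplest to sketch: it adds a quadratic penalty anchoring parameters that were important to old tasks. The snippet below is only an illustrative toy under a diagonal-Fisher assumption; the function name and the numbers are ours, not from the surveyed work:

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Quadratic EWC-style penalty: (lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2.

    fisher holds a diagonal Fisher-information estimate from the old task;
    large entries mark parameters the old task depends on heavily.
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

# Toy example: the first parameter has a large Fisher value, so drifting
# away from its old-task optimum is penalized most.
theta_old = np.array([1.0, -2.0, 0.5])
fisher = np.array([10.0, 0.1, 1.0])
theta = np.array([1.2, -1.0, 0.5])
print(ewc_penalty(theta, theta_old, fisher))  # ~0.25; the first term contributes ~0.2
```

During training on a new task, this penalty is simply added to the new task's loss, discouraging movement along directions the old task cares about.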
Funding: supported by the National Natural Science Foundation of China under Grant No. 61300154, the Natural Science Foundation of Shandong Province of China under Grant Nos. ZR2010FL011 and ZR2012FQ005, the Jiangsu Qing Lan Project, the Fundamental Research Funds for the Central Universities of China under Grant No. NZ2013306, and the Natural Science Foundation of Liaocheng University under Grant No. 318011408
Abstract: As an effective patch-based denoising method, the non-local means (NLM) method achieves favorable denoising performance over its local counterparts and has drawn wide attention in the image processing community. The implementation of NLM can formally be decomposed into two sequential steps: computing the weights, and using those weights to compute the weighted means. In the first step, the weights can be obtained by solving a regularized optimization problem; in the second step, the means can be obtained by solving a weighted least squares problem. Motivated by these observations, we establish a two-step regularization framework for NLM in this paper. Using the framework, we reinterpret several non-local filters in a unified view. Further, taking the framework as a design platform, we develop a novel non-local median filter for removing salt-and-pepper noise, with encouraging experimental results.
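The two-step decomposition described in this abstract (weights from patch similarity, then a weighted mean) can be sketched on a 1-D signal. This is a minimal illustration with parameter choices of our own, not the paper's regularization framework:

```python
import numpy as np

def nlm_1d(signal, patch_radius=1, search_radius=5, h=0.5):
    """Toy non-local means on a 1-D signal.

    Step 1: weights w_ij = exp(-||patch_i - patch_j||^2 / h^2).
    Step 2: the denoised value is the weighted mean over the search
    window, i.e. the solution of a weighted least squares problem.
    """
    n = len(signal)
    padded = np.pad(signal, patch_radius, mode="reflect")
    patches = np.stack([padded[i:i + 2 * patch_radius + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search_radius), min(n, i + search_radius + 1)
        d2 = np.sum((patches[lo:hi] - patches[i]) ** 2, axis=1)  # step 1: patch distances
        w = np.exp(-d2 / (h * h))                                # step 1: weights
        out[i] = np.dot(w, signal[lo:hi]) / np.sum(w)            # step 2: weighted mean
    return out
```

Replacing the weighted mean in step 2 with a weighted median gives the flavor of the non-local median filter the paper proposes for salt-and-pepper noise.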
Funding: Project (20121101004) supported by the Major Science and Technology Program of Shanxi Province, China; Project (20130321004-01) supported by the Key Technologies R&D Program of Shanxi Province, China; Project (2013M530896) supported by the Postdoctoral Science Foundation of China; Project (2014021022-6) supported by the Shanxi Provincial Science Foundation for Youths, China; Project (80010302010053) supported by the Shanxi Characteristic Discipline Fund, China
Abstract: Ensemble learning is a widely studied topic. Traditional ensemble techniques seek better results from labeled data and base classifiers; they fail on ensemble tasks where only unlabeled data are available. A label propagation based ensemble (LPBE) approach is proposed to combine base classification results using unlabeled data. First, a graph is constructed that takes the unlabeled samples as vertices, with edge weights computed by a correntropy function. Average prediction results are obtained from the base classifiers, then propagated under a regularization framework and adaptively enhanced over the graph. The approach is further enriched when a small amount of labeled data is available. The proposed algorithms are evaluated on several UCI benchmark data sets, and simulation results show that they achieve satisfactory performance compared with existing ensemble methods.
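The propagation step described in this abstract can be sketched as a generic normalized label-propagation iteration over a graph with Gaussian-kernel (correntropy-style) affinities. This is our own simplification, not the authors' exact LPBE algorithm; the function names and parameters are illustrative:

```python
import numpy as np

def correntropy_weights(X, sigma=1.0):
    """Gaussian-kernel (correntropy-style) affinities between samples (rows of X)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops
    return W

def propagate(W, Y0, alpha=0.9, iters=100):
    """Propagate soft labels Y0 (n_samples x n_classes) over the graph W.

    Iterates F <- alpha * S @ F + (1 - alpha) * Y0, where
    S = D^{-1/2} W D^{-1/2} is the symmetrically normalized affinity.
    """
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    F = Y0.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y0
    return F
```

With only unlabeled data, Y0 can hold the averaged base-classifier predictions; when a few labels are available, their rows can be set to one-hot vectors, which roughly corresponds to the enriched variant mentioned in the abstract.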