Abstract
In Bayesian network inference, parameter learning for the network nodes is indispensable. During learning, however, evidence is often lost, which slows parameter convergence, degrades the accuracy of the learned parameters, and can even prevent convergence altogether. To address this problem, this paper proposes a parameter model under evidence loss and derives an EM updating algorithm that incorporates a learning rate. Both a theoretical analysis of convergence performance and simulation results show that, compared with traditional algorithms, the new algorithm converges considerably faster without degrading parameter-estimation accuracy. It thus provides a feasible way to achieve trustworthy and efficient Bayesian network parameter learning under incomplete evidence.
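The abstract's idea of damping EM updates with a learning rate under missing evidence can be sketched in miniature. The following is an illustrative toy, not the paper's algorithm: it estimates a single Bernoulli parameter (a one-node "network") from data where some observations are missing, filling missing entries in expectation (E-step) and then moving the parameter only part of the way toward the completed-data estimate (M-step with learning rate `eta`). The function name and the damping scheme are assumptions for illustration.

```python
def em_learning_rate(data, theta0=0.5, eta=0.5, iters=50):
    """EM-style update with a learning rate for a Bernoulli parameter.

    data: list of observations, each 1, 0, or None (missing evidence).
    Missing entries are filled in expectation using the current theta
    (E-step); the M-step then moves theta toward the completed-data
    maximum-likelihood estimate by a step of size eta instead of
    jumping all the way, which damps oscillation when evidence is lost.
    """
    theta = theta0
    for _ in range(iters):
        # E-step: expected number of successes, using theta for missing entries
        expected_ones = sum(theta if x is None else x for x in data)
        theta_ml = expected_ones / len(data)  # completed-data ML estimate
        # M-step with learning rate: damped move toward theta_ml
        theta = (1 - eta) * theta + eta * theta_ml
    return theta
```

For this toy model the fixed point is the mean of the observed entries, e.g. `em_learning_rate([1, 1, 1, 0, None, None])` converges to 0.75; smaller `eta` trades per-step progress for smoother convergence, which is the trade-off the paper analyzes for the full Bayesian-network case.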
Source
《计算机科学》
CSCD
PKU Core Journal
2008, No. 1, pp. 171-175 (5 pages)
Computer Science
Funding
Electronic Science Foundation (No. 51415010101DZ02)