Abstract
In real-world dynamic systems, classical reinforcement learning algorithms that carry no heuristic knowledge converge very slowly, so some bias technique must be adopted to accelerate convergence. Existing biased reinforcement learning algorithms usually supply the heuristic knowledge a priori, which contradicts the spirit of reinforcement learning. In the approach presented here, plan knowledge satisfying certain conditions is first extracted from the policy obtained in an initial round of reinforcement learning; this plan knowledge is then used as heuristic knowledge to guide subsequent reinforcement learning. Experimental results show that this learning technique effectively speeds up convergence and is suited to dynamic, complex environments.
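The paper gives no code here; the following is a minimal sketch, under assumed details, of the two-phase idea the abstract describes: a first learning phase yields a policy, plan rules (state-action pairs whose values clearly dominate) are extracted from it, and a second learning phase is biased toward those rules during action selection. The GridWorld environment, the extract_plan_rules helper, and the bias weight xi are illustrative assumptions, not the authors' actual implementation.

```python
import random
from collections import defaultdict

# Hypothetical grid-world task; the paper's actual test environment is not specified here.
class GridWorld:
    def __init__(self, size=8, goal=(7, 7)):
        self.size, self.goal = size, goal
        self.actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def reset(self):
        self.state = (0, 0)
        return self.state

    def step(self, a):
        dx, dy = self.actions[a]
        x = min(max(self.state[0] + dx, 0), self.size - 1)
        y = min(max(self.state[1] + dy, 0), self.size - 1)
        self.state = (x, y)
        done = self.state == self.goal
        return self.state, (1.0 if done else -0.01), done


def q_learning(env, episodes, H=None, xi=1.0, alpha=0.3, gamma=0.95, eps=0.1, max_steps=200):
    """Tabular Q-learning; an optional heuristic H(s, a) biases greedy action selection."""
    Q = defaultdict(float)
    n = len(env.actions)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(max_steps):
            bias = (lambda a: xi * H.get((s, a), 0.0)) if H else (lambda a: 0.0)
            if random.random() < eps:
                a = random.randrange(n)
            else:
                # Biased greedy choice: argmax_a [ Q(s,a) + xi * H(s,a) ]
                a = max(range(n), key=lambda a: Q[(s, a)] + bias(a))
            s2, r, done = env.step(a)
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in range(n)) - Q[(s, a)])
            s = s2
            if done:
                break
    return Q


def extract_plan_rules(Q, env, margin=0.05):
    """Keep (state, action) pairs whose Q-value clearly dominates; they serve as plan rules."""
    n = len(env.actions)
    H = {}
    for s in {s for (s, _) in Q}:
        best = max(range(n), key=lambda a: Q[(s, a)])
        if Q[(s, best)] - min(Q[(s, a)] for a in range(n)) > margin:
            H[(s, best)] = 1.0   # positive bias toward the extracted "planned" action
    return H


env = GridWorld()
Q0 = q_learning(env, episodes=150)           # phase 1: plain reinforcement learning
H = extract_plan_rules(Q0, env)              # extract plan knowledge from the learned policy
Q1 = q_learning(env, episodes=150, H=H)      # phase 2: learning biased by the extracted rules
```

In this sketch the bias enters only through action selection rather than the value update, so the learned Q-values remain unbiased while exploration is steered toward the extracted plan.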
Source
Journal of Fudan University: Natural Science (《复旦学报(自然科学版)》)
Indexed in: CAS, CSCD, Peking University Core Journals
2004, No. 5, pp. 681-684 (4 pages)
Funding
National Natural Science Foundation of China (Grant No. 60103012)
National Key Basic Research and Development Program of China (973 Program, Grant No. 2002CB312002)
Jiangsu Province Innovative Talents Program (BK2003409)