Abstract
Hierarchical reinforcement learning was proposed to address the "curse of dimensionality" in reinforcement learning. The hierarchical structure of Options allows an agent to complete learning tasks more efficiently, but Options are usually learned within a single state space, so the knowledge the agent acquires cannot be reused in the state spaces of other, similar tasks. This paper presents a framework for transfer in reinforcement learning: knowledge is transferred by sharing the common features of related tasks. The concept of an agent-space is introduced, a feature set common to the related tasks that can be reused in future tasks. Experimental results show that agent-space Options achieve knowledge transfer and significantly improve performance on related tasks.
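The core idea of agent-space transfer can be illustrated with a minimal sketch, assuming a simple Q-learning option. All class and function names here are illustrative, not from the paper: the option's value table is keyed by agent-centric features (shared across related tasks) rather than by task-specific states, so an option learned in one task can be reused in another task whose underlying state space differs.

```python
# Sketch of an "agent-space" option (illustrative names, not the paper's code).
# Q-values are indexed by agent-centric features shared across related tasks,
# so the learned option transfers to tasks with different state spaces.
from collections import defaultdict
import random


class AgentSpaceOption:
    """A tabular Q-learning option defined over agent-space features."""

    def __init__(self, n_actions, alpha=0.2, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)        # keyed by (feature, action)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, feat, greedy=False):
        """Epsilon-greedy over agent-space features, not raw task states."""
        if not greedy and random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[(feat, a)])

    def update(self, feat, action, reward, next_feat, done):
        """Standard one-step Q-learning backup."""
        best_next = 0.0 if done else max(
            self.q[(next_feat, b)] for b in range(self.n_actions))
        target = reward + self.gamma * best_next
        self.q[(feat, action)] += self.alpha * (target - self.q[(feat, action)])


def run_task(option, start, episodes=300, max_steps=50):
    """Toy task: the agent-space feature is the distance to a goal beacon.
    Action 1 moves toward the beacon, action 0 moves away (-1 per step)."""
    for _ in range(episodes):
        d = start
        for _ in range(max_steps):
            if d == 0:                     # reached the beacon
                break
            a = option.act(d)
            nd = min(d + 1, 10) if a == 0 else d - 1
            r = 0.0 if nd == 0 else -1.0
            option.update(d, a, r, nd, done=(nd == 0))
            d = nd


random.seed(0)
opt = AgentSpaceOption(n_actions=2)
run_task(opt, start=5)                     # learn the option in one task
```

Because `opt.q` is keyed only by the agent-space feature (distance to the beacon), the same trained option can be dropped into a related task whose problem-space states differ, which is the transfer mechanism the abstract describes.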
Source
Journal of Guangdong University of Petrochemical Technology, 2014, No. 4, pp. 18-21 (4 pages)
Funding
National Natural Science Foundation of China project "Research on Cooperative Adaptive Configuration of Virtual Machine Resources and Application System Parameters in Cloud Computing" (61272382)
Keywords
Hierarchical reinforcement learning
Agent-space
Knowledge transfer