Abstract
Conventional optimization algorithms cannot meet the real-time dispatch requirements of power systems when faced with complex, nonlinear multi-objective optimal power flow (OPF) problems. To overcome this drawback, this paper presents a multi-step Q(λ) learning algorithm based on the semi-Markov decision process. The algorithm does not depend on an accurate model of the controlled object; it maps the constraints, control actions and objectives of the OPF problem to the states, actions and rewards of the learning algorithm, and dynamically searches for the optimal action through continual trial and error, backtracking and iteration. Comparisons of the proposed algorithm with other algorithms on several IEEE standard test cases yield good results, verifying that the multi-step Q(λ) learning algorithm is feasible and effective in dealing with multi-objective OPF problems.
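The abstract describes the multi-step Q(λ) procedure only at a high level. The sketch below is a minimal, generic tabular Watkins's Q(λ) loop with eligibility traces, included solely to illustrate the multi-step credit assignment the abstract refers to; it is not the authors' implementation. The `env` interface (reset/step) and the encoding of OPF constraints, control actions and objectives into discrete states, actions and rewards are hypothetical placeholders.

```python
import numpy as np

def epsilon_greedy(Q, s, epsilon, rng):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[s]))

def q_lambda(env, n_states, n_actions, episodes=500,
             alpha=0.1, gamma=0.95, lam=0.9, epsilon=0.1, seed=0):
    """Tabular Watkins's Q(lambda) with replacing eligibility traces.

    `env` is a hypothetical interface: env.reset() -> state index,
    env.step(a) -> (next_state, reward, done). In an OPF setting the
    reward would encode the objectives and constraint-violation penalties.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        E = np.zeros_like(Q)                 # eligibility traces, reset each episode
        s = env.reset()
        a = epsilon_greedy(Q, s, epsilon, rng)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = epsilon_greedy(Q, s2, epsilon, rng)
            a_star = int(np.argmax(Q[s2]))   # greedy action used in the backup
            target = r + (0.0 if done else gamma * Q[s2, a_star])
            delta = target - Q[s, a]
            E[s, a] = 1.0                    # replacing trace for the visited pair
            Q += alpha * delta * E           # multi-step credit assignment via traces
            if a2 == a_star:
                E *= gamma * lam             # decay traces along the greedy path
            else:
                E[:] = 0.0                   # cut traces after an exploratory action
            s, a = s2, a2
    return Q
```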
Source
《华南理工大学学报(自然科学版)》
Indexed in: EI, CAS, CSCD, Peking University Core (北大核心)
2010, No. 10, pp. 139-145 (7 pages)
Journal of South China University of Technology(Natural Science Edition)
Funding
National Natural Science Foundation of China (50807016)
Natural Science Foundation of Guangdong Province (9151064101000049)
Fundamental Research Funds for the Central Universities (2009ZM0251)
Keywords
electric power system
optimal power flow
Q(λ) learning algorithm
multi-objective optimization
reinforcement learning