Abstract: In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem, and we formulate a smaller "aggregate" Markov decision problem, whose states relate to the features. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach the policy improvement operation combines feature-based aggregation with feature construction using deep neural networks or other calculations. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation than by the linear function of the features provided by neural network-based reinforcement learning, thereby potentially leading to more effective policy improvement.
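The approach sketched in this abstract can be illustrated with a minimal hard-aggregation example. The code below is not the paper's implementation: it assumes a small tabular MDP given by a hypothetical transition tensor `P[u, i, j]`, stage costs `g[u, i, j]`, discount factor `alpha`, and a feature map `agg` that assigns each original state to an aggregate (feature) state, with uniform disaggregation and every aggregate class assumed nonempty.

```python
import numpy as np

def evaluate_policy_aggregate(P, g, alpha, mu, agg, n_agg, iters=500):
    """Solve the aggregate policy-evaluation equation
       r(x) = sum_{i in x} d(x, i) * sum_j P[mu(i), i, j] * (g[mu(i), i, j] + alpha * r(agg(j)))
    by fixed-point iteration, with uniform disaggregation weights d(x, i)."""
    r = np.zeros(n_agg)
    members = [np.where(agg == x)[0] for x in range(n_agg)]  # states in each aggregate class
    for _ in range(iters):
        r_new = np.zeros(n_agg)
        for x in range(n_agg):
            vals = []
            for i in members[x]:
                u = mu[i]
                # expected one-stage cost plus discounted aggregate value of the successor
                vals.append(np.dot(P[u, i], g[u, i] + alpha * r[agg]))
            r_new[x] = np.mean(vals)  # uniform disaggregation over the class
        r = r_new
    return r

def improve_policy(P, g, alpha, r, agg):
    """Policy improvement using the piecewise-constant (in the features)
    cost approximation J(j) ~= r(agg(j))."""
    n_u, n, _ = P.shape
    J_tilde = r[agg]
    Q = np.array([[np.dot(P[u, i], g[u, i] + alpha * J_tilde) for u in range(n_u)]
                  for i in range(n)])
    return Q.argmin(axis=1)
```

A full approximate policy iteration under these assumptions would alternate `evaluate_policy_aggregate` and `improve_policy` until the policy stops changing.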
Funding: Supported by the National Science Foundation (No. ECCS-0801549), the LANL Information Science and Technology Institute, and the Air Force (No. FA9550-10-1-0412).
Abstract: We consider the classical policy iteration method of dynamic programming (DP), where approximations and simulation are used to deal with the curse of dimensionality. We survey a number of issues: convergence and rate of convergence of approximate policy evaluation methods, singularity and susceptibility to simulation noise of policy evaluation, exploration issues, constrained and enhanced policy iteration, policy oscillation and chattering, and optimistic and distributed policy iteration. Our discussion of policy evaluation is couched in general terms and aims to unify the available methods in the light of recent research developments and to compare the two main policy evaluation approaches: projected equations and temporal differences (TD), and aggregation. In the context of these approaches, we survey two different types of simulation-based algorithms: matrix inversion methods, such as least-squares temporal difference (LSTD), and iterative methods, such as least-squares policy evaluation (LSPE) and TD(λ), and their scaled variants. We discuss a recent method, based on regression and regularization, which rectifies the unreliability of LSTD for nearly singular projected Bellman equations. An iterative version of this method belongs to the LSPE class of methods and provides the connecting link between LSTD and LSPE. Our discussion of policy improvement focuses on the role of policy oscillation and its effect on performance guarantees. We illustrate that policy evaluation, when done by the projected equation/TD approach, may lead to policy oscillation, but when done by aggregation it does not. This implies better error bounds and more regular performance for aggregation, at the expense of some loss of generality in cost function representation capability. Hard aggregation provides the connecting link between projected equation/TD-based and aggregation-based policy evaluation, and is characterized by favorable error bounds.
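As a concrete illustration of the matrix inversion approach mentioned in this abstract, the sketch below builds the LSTD(0) estimates of the projected Bellman equation C r = d from a single simulated trajectory of a fixed policy, and then solves it with a simple ridge-style regularization as a guard against a nearly singular C. The feature matrix `phi`, the sampled `states`, and the per-transition `costs` are hypothetical inputs, and the regularized solve is a simplified stand-in for the regression-based method the abstract discusses.

```python
import numpy as np

def lstd(phi, states, costs, alpha):
    """LSTD(0) sample estimates of C ~ Phi' Xi (I - alpha P) Phi and d ~ Phi' Xi g
    from one trajectory states[0], states[1], ... with costs[t] = g(states[t], states[t+1])."""
    k = phi.shape[1]
    C = np.zeros((k, k))
    d = np.zeros(k)
    N = len(states) - 1
    for t in range(N):
        i, j = states[t], states[t + 1]
        C += np.outer(phi[i], phi[i] - alpha * phi[j]) / N
        d += phi[i] * costs[t] / N
    return C, d

def solve_regularized(C, d, beta=1e-3, r_prior=None):
    """Regression-style solve of C r = d; the ridge term beta keeps the solve
    well conditioned when C is nearly singular."""
    k = C.shape[0]
    if r_prior is None:
        r_prior = np.zeros(k)
    return np.linalg.solve(C.T @ C + beta * np.eye(k), C.T @ d + beta * r_prior)
```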
Funding: Supported by the Deanship of Scientific Research at King Fahd University of Petroleum & Minerals (Project No. JF141002), the National Science Foundation (No. ECCS-1405173), the Office of Naval Research (Nos. N000141310562, N000141410718), the U.S. Army Research Office (No. W911NF-11-D-0001), the National Natural Science Foundation of China (No. 61120106011), and Project 111 of the Ministry of Education of China (No. B08015).
Abstract: This paper introduces a model-free reinforcement learning technique that is used to solve a class of dynamic games known as dynamic graphical games. The graphical game arises from multi-agent dynamical systems in which pinning control is used to make all the agents synchronize to the state of a command generator or leader agent. Novel coupled Bellman equations and Hamiltonian functions are developed for the dynamic graphical games. The Hamiltonian mechanics are used to derive the necessary conditions for optimality. The solution of the dynamic graphical game is given in terms of the solution to a set of coupled Hamilton-Jacobi-Bellman equations developed herein, and the Nash equilibrium solution of the game is likewise characterized in terms of these underlying coupled equations. An online model-free policy iteration algorithm is developed to learn the Nash solution for the dynamic graphical game. This algorithm does not require any knowledge of the agents' dynamics. A proof of convergence for this multi-agent learning algorithm is given under a mild assumption about the interconnectivity properties of the graph. A gradient descent technique with critic network structures is used to implement the policy iteration algorithm and solve the graphical game online in real time.
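For readers unfamiliar with critic structures, the sketch below shows the basic ingredient in a deliberately simplified, single-agent form: a linear-in-parameters critic V(x) ≈ w'phi(x) trained by semi-gradient descent on the temporal-difference (Bellman) error along sampled transitions of a fixed policy. It is not the paper's coupled multi-agent graphical-game algorithm; the feature table `phi` and the `transitions` list are hypothetical inputs.

```python
import numpy as np

def critic_gradient_step(w, phi, x, x_next, cost, gamma, lr=0.01):
    """One semi-gradient step on the squared TD error
       e = (cost + gamma * w'phi(x_next) - w'phi(x))^2 / 2,
    treating the bootstrapped target as fixed."""
    td = cost + gamma * np.dot(w, phi[x_next]) - np.dot(w, phi[x])
    return w + lr * td * phi[x]

def train_critic(phi, transitions, gamma, epochs=50):
    """Repeatedly sweep sampled (x, x_next, cost) transitions of a fixed policy."""
    w = np.zeros(phi.shape[1])
    for _ in range(epochs):
        for x, x_next, cost in transitions:
            w = critic_gradient_step(w, phi, x, x_next, cost, gamma)
    return w
```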