Abstract: To address the drop in network throughput caused by excessive travel demand on urban regional road networks during peak hours, as well as the waste of road capacity caused by the heterogeneous distribution of traffic flow within a region, this paper proposes an oversaturated-region control optimization model based on the macroscopic fundamental diagram (MFD), an inherent property of the regional road network. A bi-level programming formulation is established to optimize the objective functions of the boundary control signals and the internal control signals, and an adaptive dynamic programming (ADP) model based on a back-propagation (BP) neural network is further designed to solve the resulting bi-level regional traffic-signal program. Simulation results on a real-world example verify the effectiveness of the proposed method. The analysis offers guidance for urban regional traffic management, including demand management and congestion-policy making.
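The perimeter-control idea behind the MFD can be illustrated with a toy simulation. The sketch below is not the paper's bi-level model; the MFD shape, the critical accumulation, and the proportional gating rule are entirely illustrative assumptions.

```python
# Toy MFD-based perimeter (boundary) control: the region's completion flow
# G(n) peaks at a critical accumulation n_c, and a gating ratio u in [0, 1]
# meters the boundary inflow to steer the accumulation n toward n_c.
# All numbers are illustrative, not calibrated to any real network.

def G(n, n_c=500.0, g_max=40.0):
    # Parabolic MFD: zero flow at n = 0 and n = 2 * n_c, maximum g_max at n_c.
    return max(0.0, g_max * (1.0 - ((n - n_c) / n_c) ** 2))

def simulate(demand=45.0, n0=900.0, n_c=500.0, steps=600, dt=0.1, kp=0.005):
    # Accumulation dynamics: n_dot = demand * u - G(n), Euler-integrated.
    n = n0
    for _ in range(steps):
        u = min(1.0, max(0.0, 1.0 - kp * (n - n_c)))  # proportional gating
        n += dt * (demand * u - G(n))
    return n

n_final = simulate()  # starts oversaturated at n0 = 900, settles near n_c
```

Starting from an oversaturated state (n0 well above the critical accumulation), the gate closes, the region discharges along the MFD, and the accumulation settles near the high-flow equilibrium where metered inflow balances completion flow.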
Funding: Supported in part by the National Key Research and Development Program of China (2018AAA0101502, 2018YFB1702300); in part by the National Natural Science Foundation of China (61722312, 61533019, U1811463, 61533017); and in part by the Intel Collaborative Research Institute for Intelligent and Automated Connected Vehicles.
Abstract: This paper studies the problem of optimal parallel tracking control for continuous-time general nonlinear systems. Unlike existing optimal state feedback control, in optimal parallel control the control input is introduced into the feedback system. However, because of this introduction, optimal state feedback control methods cannot be applied directly. To address this problem, an augmented system and an augmented performance index function are first proposed, transforming the general nonlinear system into an affine nonlinear system. The difference between optimal parallel control and optimal state feedback control is analyzed theoretically, and it is proven that optimal parallel control with the augmented performance index function can be regarded as suboptimal state feedback control with the traditional performance index function. Moreover, an adaptive dynamic programming (ADP) technique is utilized to implement the optimal parallel tracking control, using a critic neural network (NN) to approximate the value function online. Stability of the closed-loop system is analyzed with Lyapunov theory, and the tracking error and NN weight errors are shown to be uniformly ultimately bounded (UUB). The optimal parallel controller also guarantees continuity of the control input when the reference signals contain finitely many jump discontinuities. Finally, the effectiveness of the developed optimal parallel control method is verified in two case studies.
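The augmentation step described above can be sketched concretely: treating the control input as an extra state and introducing its derivative as the new input yields dynamics that are affine in that new input. The plant below is a hypothetical scalar example chosen for illustration, not a system from the paper.

```python
import numpy as np

# Hypothetical nonaffine scalar plant: x_dot = f(x, u) = -x + sin(u).
def f(x, u):
    return -x + np.sin(u)

# Parallel-control augmentation: treat u as a state and introduce the new
# input v = u_dot. The augmented state is z = [x, u] and the dynamics
#   z_dot = F(z) + G * v,  with  F(z) = [f(x, u), 0],  G = [0, 1],
# are affine in v even though f is nonaffine in u.
def F(z):
    x, u = z
    return np.array([f(x, u), 0.0])

G = np.array([0.0, 1.0])

def step(z, v, dt=0.01):
    # One explicit-Euler step of the augmented affine system.
    return z + dt * (F(z) + G * v)

z1 = step(np.array([1.0, 0.0]), v=0.5)  # from z = [1, 0] with v = 0.5
```

Because the augmented system is affine in v, standard affine-system ADP machinery (e.g., a critic NN for the value function) becomes applicable, which is the point of the transformation.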
Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (2006AA04Z183), the National Natural Science Foundation of China (60621001, 60534010, 60572070, 60774048, 60728307), and the Program for Changjiang Scholars and Innovative Research Groups of China (60728307, 4031002).
Funding: Supported in part by the National Natural Science Foundation of China (61533017, 61273140, 61304079, 61374105, 61379099, 61233001), the Fundamental Research Funds for the Central Universities (FRF-TP-15-056A3), and the Open Research Project from SKLMCCS (20150104).
Funding: Supported in part by the National Natural Science Foundation of China (61533017, 61973330, 61773075, 61603387), the Early Career Development Award of SKLMCCS (20180201), and the State Key Laboratory of Synthetical Automation for Process Industries (2019-KF-23-03).
Abstract: In this paper, a data-based fault-tolerant control (FTC) scheme is investigated for unknown continuous-time (CT) affine nonlinear systems with actuator faults. First, a neural network (NN) identifier based on particle swarm optimization (PSO) is constructed to model the unknown system dynamics. By utilizing the estimated system states, a particle-swarm-optimized critic neural network (PSOCNN) is employed to solve the Hamilton-Jacobi-Bellman equation (HJBE) more efficiently. Then, a data-based FTC scheme consisting of the NN identifier and a fault compensator is proposed to achieve actuator fault tolerance. Stability of the closed-loop system under actuator faults is guaranteed by the Lyapunov stability theorem. Finally, simulations demonstrate the effectiveness of the developed method.
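The PSO building block used to tune the identifier and critic weights can be sketched in a few lines. This is a generic, minimal particle swarm optimizer with textbook inertia and acceleration coefficients; the hyperparameters and the sphere-function sanity check are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal particle swarm optimizer, as one might use to fit NN weights by
# minimizing a scalar cost. Hyperparameters (inertia 0.7, c1 = c2 = 1.5)
# are common textbook defaults, not the paper's settings.
def pso(cost, dim, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # personal bests
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[pbest_cost.argmin()].copy()
    return g, float(pbest_cost.min())

# Sanity check on the sphere function, whose minimum is at the origin.
best, best_cost = pso(lambda p: float(np.sum(p ** 2)), dim=3)
```

In the paper's setting the cost would be an identification or HJB-residual error evaluated over collected data, with each particle encoding a candidate NN weight vector.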
Funding: This work was supported in part by the Beijing Natural Science Foundation (JQ19013), the National Key Research and Development Program of China (2021ZD0112302), and the National Natural Science Foundation of China (61773373).
Abstract: The core task of tracking control is to make the controlled plant track a desired trajectory. The traditional performance index used in previous studies cannot completely eliminate the tracking error as the number of time steps increases. In this paper, a new cost function is introduced to develop a value-iteration-based adaptive critic framework for the tracking control problem. Unlike in the regulator problem, the iterative value function of the tracking control problem cannot be regarded as a Lyapunov function, so a novel stability analysis method is developed to guarantee that the tracking error converges to zero. The discounted iterative scheme under the new cost function is elaborated for the special case of linear systems. Finally, the tracking performance of the present scheme is demonstrated by numerical results and compared with that of traditional approaches.
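The discounted value-iteration idea in the linear special case can be made concrete with a scalar regulator analogue. The sketch below iterates the Riccati-like recursion induced by discounted value iteration for a scalar linear system with quadratic stage cost; it illustrates the mechanism only, not the paper's tracking-specific cost function.

```python
# Discounted value iteration for a scalar linear system x_{k+1} = a x + b u
# with stage cost q x^2 + r u^2 and discount gamma. With V_i(x) = p_i x^2,
# minimizing u in the Bellman update gives the recursion below (a regulator
# analogue of the paper's discounted scheme, not its exact cost function).
def value_iteration(a, b, q, r, gamma=0.95, iters=500):
    p = 0.0  # V_0 = 0, the usual value-iteration initialization
    for _ in range(iters):
        p = q + gamma * p * a ** 2 \
            - (gamma * p * a * b) ** 2 / (r + gamma * p * b ** 2)
    k = gamma * p * a * b / (r + gamma * p * b ** 2)  # optimal gain, u = -k x
    return p, k

# An open-loop unstable example (a = 1.1): the discounted optimal gain
# still stabilizes the closed loop, |a - b k| < 1.
p, k = value_iteration(a=1.1, b=1.0, q=1.0, r=1.0)
```

The fixed point of this recursion is the discounted algebraic Riccati solution; convergence is geometric, so a few hundred iterations are ample for a scalar example.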
Funding: Supported in part by the Australian Research Council Discovery Early Career Researcher Award (DE200101128) and the Australian Research Council (DP190101557).
Abstract: In this paper, an adaptive dynamic programming (ADP) strategy is investigated for discrete-time nonlinear systems with unknown nonlinear dynamics subject to input saturation. To save communication resources between the controller and the actuators, stochastic communication protocols (SCPs) are adopted to schedule the control signal, so the closed-loop system is essentially a protocol-induced switching system. A neural network (NN)-based identifier with a robust term is exploited to approximate the unknown nonlinear system, and a set of switch-based updating rules with an additional tunable parameter for the NN weights is developed with the help of gradient descent. By virtue of a novel Lyapunov function, a sufficient condition is proposed for the stability of both the system identification errors and the update dynamics of the NN weights. Then, an offline value-iteration ADP algorithm is proposed to solve the optimal control problem of protocol-induced switching systems with saturation constraints, and its convergence is discussed in detail via mathematical induction. Furthermore, an actor-critic NN scheme is developed to approximate the control law and the proposed performance index function in the ADP framework, and the stability of the closed-loop system is analyzed in view of Lyapunov theory. Finally, numerical simulation results demonstrate the effectiveness of the proposed control scheme.
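The identifier component can be illustrated with a minimal one-layer sketch trained by plain gradient descent. The plant parameters, basis functions, and learning rate below are hypothetical, and the paper's robust term and switch-based updating rules are omitted for brevity.

```python
import numpy as np

# One-layer identifier x_hat_{k+1} = W^T phi(x_k, u_k) trained by plain
# gradient descent on the squared prediction error. The basis phi, the
# "true" plant parameters, and the learning rate are illustrative only.
def phi(x, u):
    return np.array([x, u, np.tanh(x), 1.0])

def identify(steps=2000, lr=0.05, seed=1):
    rng = np.random.default_rng(seed)
    W = np.zeros(4)                          # identifier weights
    true = np.array([0.6, 0.5, 0.2, 0.0])    # hypothetical unknown plant
    x = 0.1
    for _ in range(steps):
        u = rng.uniform(-1.0, 1.0)           # persistently exciting input
        x_next = true @ phi(x, u)            # plant response (unknown to us)
        e = W @ phi(x, u) - x_next           # identification error
        W -= lr * e * phi(x, u)              # gradient of 0.5 * e^2
        x = x_next
    return W, true

W, true = identify()
# After training, the identifier predicts the plant well on in-range data.
pred_err = abs(W @ phi(0.5, 0.3) - true @ phi(0.5, 0.3))
```

In the paper, such an identifier is augmented with a robust term and switch-dependent update rules so that the identification error and weight dynamics are provably stable under the SCP-induced switching.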
Funding: Supported in part by the National Natural Science Foundation of China (62222301, 62073085, 62073158, 61890930-5, 62021003), the National Key Research and Development Program of China (2021ZD0112302, 2021ZD0112301, 2018YFC1900800-5), and the Beijing Natural Science Foundation (JQ19013).
Abstract: Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Effective offline and online algorithms for ADP/adaptive critic control are presented, and the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, covering event-based design, robust stabilization, and game design. Moreover, extensions of ADP for addressing control problems in complex environments have attracted enormous attention; the ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance the ADP formulation. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey demonstrates the remarkable potential of ADP and RL for advanced control applications in the artificial intelligence era, as well as their role in promoting environmental protection and industrial intelligence.