Abstract: Mobile Edge Computing (MEC) is one of the most promising techniques for next-generation wireless communication systems. In this paper, we study the problem of dynamic caching, computation offloading, and resource allocation in cache-assisted multi-user MEC systems with stochastic task arrivals. There are multiple computationally intensive tasks in the system, and each Mobile User (MU) needs to execute a task either locally or remotely in one or more MEC servers by offloading the task data. Popular tasks can be cached in MEC servers to avoid duplicate offloading. The cached contents can be obtained through user offloading, fetched from a remote cloud, or fetched from another MEC server. The objective is to minimize the long-term average of a cost function, defined as a weighted sum of energy consumption, delay, and cache-content fetching costs. The weighting coefficients associated with the different metrics can be adjusted to balance the tradeoff among them. The optimal design is performed with respect to four decision parameters: whether to cache a given task, whether to offload a given uncached task, how much transmission power to use during offloading, and how many MEC resources to allocate for executing a task. We propose to solve the problem by developing a dynamic scheduling policy based on Deep Reinforcement Learning (DRL) with the Deep Deterministic Policy Gradient (DDPG) method. A new decentralized DDPG algorithm is developed to obtain optimal designs for multi-cell MEC systems by leveraging cooperation among neighboring MEC servers. Simulation results demonstrate that the proposed algorithm outperforms existing strategies such as Deep Q-Network (DQN).
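The weighted-sum objective described above can be sketched in a few lines. The weights, units, and example numbers below are illustrative assumptions for exposition, not the paper's actual formulation or parameter values.

```python
# Hypothetical sketch of the per-slot weighted cost and its long-term average.
def step_cost(energy_j, delay_s, fetch_cost, w_energy=1.0, w_delay=1.0, w_fetch=1.0):
    """Weighted sum of energy, delay, and cache-fetching cost for one time slot."""
    return w_energy * energy_j + w_delay * delay_s + w_fetch * fetch_cost

def long_term_average_cost(slot_costs):
    """Long-term average cost that the DRL scheduler tries to minimize."""
    return sum(slot_costs) / len(slot_costs)

# Raising w_delay shifts the tradeoff toward faster execution at higher energy cost.
slots = [(1.0, 0.5, 0.1), (0.8, 0.7, 0.0)]  # (energy, delay, fetch cost) per slot
costs = [step_cost(e, d, f, w_delay=2.0) for e, d, f in slots]
avg = long_term_average_cost(costs)
```

Adjusting the three weights is how the tradeoff among the metrics is balanced; the learned policy then minimizes the resulting scalar average.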
Funding: co-supported by the National Natural Science Foundation of China (Nos. 62003267, 61573285), the Aeronautical Science Foundation of China (ASFC) (No. 20175553027), and the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2020JQ-220).
Abstract: Unmanned Aerial Vehicles (UAVs) play a vital role in military warfare. In a variety of battlefield mission scenarios, UAVs are required to fly safely to designated locations without human intervention. Finding a suitable method to solve the UAV Autonomous Motion Planning (AMP) problem can therefore improve the success rate of UAV missions. In recent years, many studies have used Deep Reinforcement Learning (DRL) methods to address the AMP problem and have achieved good results. From the perspective of sampling, this paper designs a double-screening sampling method, combines it with the Deep Deterministic Policy Gradient (DDPG) algorithm, and proposes the Relevant Experience Learning-DDPG (REL-DDPG) algorithm. REL-DDPG uses a Prioritized Experience Replay (PER) mechanism to break the correlation of consecutive experiences in the experience pool, selects the experiences most similar to the current state for learning, inspired by theories from human education, and thereby amplifies the influence of the learning process on action selection in the current state. All experiments are conducted in a complex unknown simulation environment constructed from the parameters of a real UAV. The training experiments show that REL-DDPG improves both the convergence speed and the converged result compared to the state-of-the-art DDPG algorithm, while the testing experiments demonstrate the applicability of the algorithm and investigate its performance under different parameter settings.
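The "most similar experiences" idea can be sketched as a nearest-neighbor lookup over the replay buffer. The distance metric, field names, and sample data below are assumptions for illustration, not the paper's exact screening mechanism.

```python
# Hypothetical sketch of relevance-based experience selection:
# rank stored transitions by Euclidean distance of their state
# to the current state, and learn from the k closest ones.
def state_distance(s1, s2):
    """Euclidean distance between two state vectors."""
    return sum((a - b) ** 2 for a, b in zip(s1, s2)) ** 0.5

def select_relevant(buffer, current_state, k):
    """Return the k transitions whose stored state is closest to the current state."""
    return sorted(buffer, key=lambda tr: state_distance(tr["state"], current_state))[:k]

buffer = [
    {"state": (0.0, 0.0), "action": 0.1, "reward": 1.0},
    {"state": (5.0, 5.0), "action": -0.3, "reward": 0.2},
    {"state": (0.5, 0.1), "action": 0.2, "reward": 0.8},
]
batch = select_relevant(buffer, current_state=(0.4, 0.0), k=2)
```

In the paper this selection is combined with PER priorities; the sketch shows only the similarity screen, which is what biases learning toward experiences resembling the current situation.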
Funding: supported by the Key Laboratory of Defense Science and Technology Foundation of Luoyang Electro-optical Equipment Research Institute (6142504200108).
Abstract: The ever-changing battlefield environment requires robust and adaptive technologies integrated into a reliable platform. Unmanned combat aerial vehicles (UCAVs) aim to integrate such advanced technologies while increasing the tactical capabilities of combat aircraft. Conventional UCAVs use a neural-network fitting strategy to obtain attack-area values. However, this simple strategy cannot cope with complex environmental changes or autonomously optimize decision-making. To solve this problem, this paper proposes a new deep deterministic policy gradient (DDPG) strategy based on deep reinforcement learning for the attack-area fitting of UCAVs on the future battlefield. Simulation results show that the autonomy and environmental adaptability of UCAVs will be improved with the new DDPG algorithm, and that the training process converges quickly. With the well-trained deep network, the optimal attack-area values can be obtained in real time throughout the flight.
Funding: supported in part by the U.S. National Science Foundation (NSF) (No. ECCS-1711087) and the NSF Center for Infrastructure Trustworthiness in Energy Systems (CITES).
Abstract: A new online scheduling algorithm is proposed for photovoltaic (PV) systems with battery-assisted energy storage systems (BESS). The stochastic nature of renewable energy sources necessitates BESS to balance energy supply and demand under uncertain weather conditions. The proposed online scheduling algorithm aims to minimize the overall energy cost by performing actions such as load shifting and peak shaving through carefully scheduled BESS charging/discharging activities. The scheduling algorithm is developed using deep deterministic policy gradient (DDPG), a deep reinforcement learning (DRL) algorithm that can handle continuous state and action spaces. One of the main contributions of this work is a new DDPG reward function, designed around the unique behaviors of energy systems. The new reward function guides the scheduler to learn appropriate load-shifting and peak-shaving behaviors through a balanced process of exploration and exploitation. The new scheduling algorithm is tested through case studies using real-world data, and the results indicate that it outperforms existing algorithms such as Deep Q-learning. The online algorithm can efficiently learn the behaviors of optimal non-causal offline algorithms.
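A reward shaped for load shifting and peak shaving can be sketched as below. The paper's actual reward function is not reproduced here; the price, peak limit, and penalty weight are made-up assumptions chosen only to show the shape of the incentive.

```python
# Hypothetical sketch of a peak-shaving reward: negative energy cost,
# plus an extra penalty whenever grid draw exceeds a peak limit.
def reward(grid_power_kw, price_per_kwh, peak_limit_kw, peak_weight=10.0, dt_h=1.0):
    """Reward for one time step; higher (less negative) is better."""
    energy_cost = grid_power_kw * dt_h * price_per_kwh
    peak_penalty = peak_weight * max(0.0, grid_power_kw - peak_limit_kw)
    return -(energy_cost + peak_penalty)

# Discharging the battery at peak hours keeps grid draw below the limit,
# so the agent earns a higher reward than when it lets demand spike.
r_peak = reward(grid_power_kw=12.0, price_per_kwh=0.3, peak_limit_kw=10.0)
r_shaved = reward(grid_power_kw=9.0, price_per_kwh=0.3, peak_limit_kw=10.0)
```

The steep penalty term above the limit is what steers exploration toward charging in off-peak periods and discharging during peaks.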
Funding: supported by the National Natural Science Foundation of China (Grant No. 52272421) and the Shenzhen Fundamental Research Fund (Grant No. JCYJ20190808142613246).
Abstract: This paper proposes an improved decision-making method based on deep reinforcement learning to address on-ramp merging challenges in highway autonomous driving. A novel safety indicator, time difference to merging (TDTM), is introduced and used in conjunction with the classic time to collision (TTC) indicator to evaluate driving safety and help the merging vehicle find a suitable gap in traffic, thereby enhancing driving safety. An autonomous driving agent is trained using the Deep Deterministic Policy Gradient (DDPG) algorithm, and an action-masking mechanism is deployed to prevent unsafe actions during the policy-exploration phase. The proposed DDPG+TDTM+TTC solution is tested in on-ramp merging scenarios with different driving speeds in SUMO, where it achieves a success rate of 99.96% without significantly impacting traffic efficiency on the main road, higher than the success rates of DDPG+TTC and plain DDPG.
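An action-masking step of the kind described above can be sketched as follows. The TTC threshold, the braking fallback, and all numbers are illustrative assumptions, not the paper's exact mechanism or parameters.

```python
# Hypothetical sketch of action masking: override an exploratory acceleration
# when the time-to-collision (TTC) with the lead vehicle is below a threshold.
def time_to_collision(gap_m, closing_speed_mps):
    """TTC in seconds; infinite when the gap is not closing."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def mask_action(accel_cmd, gap_m, closing_speed_mps, ttc_min_s=2.0, safe_decel=-3.0):
    """Pass the policy's command through unless it is unsafe; then brake instead."""
    if time_to_collision(gap_m, closing_speed_mps) < ttc_min_s:
        return safe_decel
    return accel_cmd

safe = mask_action(accel_cmd=1.5, gap_m=40.0, closing_speed_mps=5.0)   # TTC = 8 s
unsafe = mask_action(accel_cmd=1.5, gap_m=6.0, closing_speed_mps=5.0)  # TTC = 1.2 s
```

Masking filters the action after the actor network proposes it, so unsafe exploration never reaches the simulator while the gradient signal from safe actions is preserved.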
Funding: supported by the Tianjin Natural Science Foundation of China (Grant No. 20JCYBJC01060), the National Natural Science Foundation of China (Grant Nos. 62103203 and 61973175), and the Fundamental Research Funds for the Central Universities, Nankai University (Grant No. 63221218).
Abstract: Modeling a system in engineering applications is a time-consuming and labor-intensive task, as system parameters may change with temperature, component aging, etc. In this paper, a novel data-driven, model-free optimal controller based on deep deterministic policy gradient (DDPG) is proposed to address the problem of continuous-time leader-following multi-agent consensus. To deal with the dimensional explosion of the state and action spaces, two different types of neural networks are used to approximate them, replacing the time-consuming state-iteration process. With minimal energy consumption, the proposed controller achieves consensus based only on the consensus error and does not require any initial admissible policies. Moreover, the controller is self-learning: it can maintain optimal control by learning in real time as the system parameters change. Finally, proofs of convergence and stability, together with simulation experiments, are provided to verify the algorithm's effectiveness.
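The consensus error that drives such a controller can be sketched for scalar agent states: each follower's error combines disagreement with its neighbors and, where a pinning link exists, with the leader. The graph, gain, and update rule below are illustrative assumptions, not the paper's controller.

```python
# Hypothetical sketch of the leader-following consensus error
# e_i = sum_j a_ij (x_i - x_j) + b_i (x_i - x_leader),
# driven to zero by a simple gradient-like update.
def consensus_error(i, states, leader_state, adjacency, leader_link):
    """Consensus error of follower i for scalar states."""
    e = sum(adjacency[i][j] * (states[i] - states[j]) for j in range(len(states)))
    e += leader_link[i] * (states[i] - leader_state)
    return e

# Two followers on a line graph; only follower 0 is pinned to the leader.
adjacency = [[0, 1], [1, 0]]
leader_link = [1, 0]
states, leader, gain = [4.0, -2.0], 1.0, 0.3
for _ in range(100):  # each step shrinks the errors toward zero
    errs = [consensus_error(i, states, leader, adjacency, leader_link) for i in range(2)]
    states = [x - gain * e for x, e in zip(states, errs)]
```

In the paper, a hand-tuned gain like the one above is replaced by the DDPG actor, which learns a control law that drives this same error to zero while minimizing energy.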