Funding: supported by the National Key Research and Development Program of China (No. 2018AAA0103003), the National Natural Science Foundation of China (No. 61773378), the Basic Research Program (No. JCKY*******B029), and the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB32050100).
Abstract: Reinforcement learning (RL) algorithms have been demonstrated to solve a variety of continuous control tasks. However, the training efficiency and performance of such methods limit further applications. In this paper, we propose an off-policy heterogeneous actor-critic (HAC) algorithm, which contains a soft Q-function and an ordinary Q-function. The soft Q-function encourages exploration by a Gaussian policy, and the ordinary Q-function optimizes the mean of the Gaussian policy to improve training efficiency. Experience replay memory is another vital component of off-policy RL methods, and we propose a new sampling technique that emphasizes recently experienced transitions to boost policy training. In addition, we integrate HAC with hindsight experience replay (HER) to deal with sparse-reward tasks, which are common in the robotic manipulation domain. Finally, we evaluate our method on a series of continuous control benchmark tasks and robotic manipulation tasks. The experimental results show that our method outperforms prior state-of-the-art methods in both training efficiency and performance, which validates its effectiveness.
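The abstract does not specify how the recency-emphasized replay sampling is implemented. The following is a minimal, hypothetical sketch of one way such a buffer could look, assuming an exponential weight over transition age with a decay parameter `eta`; the class name, the ring-buffer layout, and the weighting scheme are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

class RecencyReplayBuffer:
    """Replay buffer that samples recent transitions with higher probability.

    Hypothetical sketch of a recency-emphasized sampler: transitions are
    weighted by eta ** age, so newer experience is drawn more often.
    """

    def __init__(self, capacity=1_000_000, eta=0.996):
        self.capacity = capacity
        self.eta = eta          # decay: smaller eta -> stronger bias toward recent data
        self.storage = []       # list of (s, a, r, s_next, done) tuples
        self.position = 0       # next insertion index (ring buffer)

    def add(self, transition):
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.position] = transition
        self.position = (self.position + 1) % self.capacity

    def sample(self, batch_size):
        n = len(self.storage)
        # age 0 = most recently inserted transition, age n-1 = oldest
        ages = (self.position - 1 - np.arange(n)) % n
        weights = self.eta ** ages
        probs = weights / weights.sum()
        idx = np.random.choice(n, size=batch_size, p=probs)
        return [self.storage[i] for i in idx]
```

With `eta` close to 1 the sampling approaches the uniform scheme of a standard replay buffer, so the strength of the recency bias can be tuned as a hyperparameter.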
Funding: the National Natural Science Foundation of China (61761004) and the Natural Science Foundation of Guangxi Province, China (2019GXNSFAA245045).
Abstract: In large-scale radio frequency identification (RFID) indoor positioning systems, the positioning scale is large, labeled data are scarce while unlabeled data are abundant, and measurements are easily corrupted by multipath effects and white noise. An RFID positioning algorithm based on semi-supervised actor-critic co-training (SACC) was proposed to solve this problem. In this work, positioning is regarded as a Markov decision process. First, the actor-critic framework was combined with random actions, and the best unlabeled received signal strength indication (RSSI) data were selected by semi-supervised co-training. Second, the actor and the critic were updated with the Kronecker-factored approximate curvature (K-FAC) natural gradient. Finally, the target position was obtained by co-locating with the labeled RSSI data and the selected unlabeled RSSI data. The proposed method significantly reduced the cost of indoor positioning by decreasing the amount of labeled data required. Meanwhile, as the number of positioning targets increased, the actor could quickly select unlabeled RSSI data and update the positioning model. Experiments show that, compared with other RFID indoor positioning algorithms such as twin delayed deep deterministic policy gradient (TD3), deep deterministic policy gradient (DDPG), and actor-critic using Kronecker-factored trust region (ACKTR), the proposed method decreased the average positioning error by 50.226%, 41.916%, and 25.004%, respectively, and improved the positioning stability by 23.430%, 28.518%, and 38.631%.
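The abstract describes a co-training step in which unlabeled RSSI samples are screened before being added to the training pool, but gives no implementation details. Below is a minimal, hypothetical sketch of such a selection step, assuming the actor maps an RSSI vector to a position estimate and the critic scores that estimate; the `actor`/`critic` callables, the confidence threshold, and the pseudo-labeling format are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_unlabeled_rssi(actor, critic, unlabeled_rssi, confidence_threshold=0.8):
    """Hypothetical co-training selection step for unlabeled RSSI data.

    For each unlabeled RSSI vector, the actor proposes a position and the
    critic scores the (RSSI, position) pair; only proposals whose score
    exceeds the threshold are kept as pseudo-labeled training samples.
    """
    selected = []
    for rssi in unlabeled_rssi:
        position = actor(rssi)                 # actor: RSSI features -> (x, y) estimate
        score = critic(rssi, position)         # critic: evaluates the state-action pair
        if score >= confidence_threshold:
            selected.append((rssi, position))  # keep as pseudo-labeled sample
    return selected

# Usage sketch with stand-in models (real actor/critic would be trained networks):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_actor = lambda rssi: rssi[:2]                     # placeholder position estimate
    dummy_critic = lambda rssi, pos: float(rng.random())    # placeholder confidence score
    batch = [rng.normal(size=8) for _ in range(100)]
    pseudo_labeled = select_unlabeled_rssi(dummy_actor, dummy_critic, batch)
    print(f"kept {len(pseudo_labeled)} of {len(batch)} unlabeled samples")
```

The selected pseudo-labeled samples would then be combined with the labeled RSSI data for the co-location and model-update steps described in the abstract.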