Abstract: This paper discusses the modeling of time series with neural networks. To improve the adaptability of direct multi-step prediction models, it proposes a method that combines temporal difference methods with the back-propagation algorithm to update the parameters continuously on the basis of recent data. This makes the neural network model fit the recent characteristics of the time series as closely as possible, thereby improving prediction accuracy. We built models and made predictions for the sunspot series; the predictions of the adaptive modeling method are better than those of the non-adaptive methods.
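The abstract does not give the exact update rule, so the following is a minimal sketch of one way to combine a TD-style target with back-propagation for online multi-step prediction. The network size, step size, trace decay, and the synthetic stand-in for the sunspot series are all assumptions, not the paper's settings.

```python
# Sketch: adapt a small feed-forward predictor online with a TD-style
# error and eligibility traces, so recent data dominates the fit.
import numpy as np

rng = np.random.default_rng(0)

# Toy quasi-periodic series standing in for the sunspot numbers.
t = np.arange(400)
series = np.sin(2 * np.pi * t / 11) + 0.1 * rng.standard_normal(t.size)

window, horizon, hidden = 8, 3, 16     # lag window, multi-step horizon
W1 = rng.standard_normal((window, hidden)) * 0.1
W2 = rng.standard_normal((hidden, 1)) * 0.1
alpha, lam = 0.01, 0.7                 # step size and trace decay (assumed)

def predict(x):
    h = np.tanh(x @ W1)
    return h, float(h @ W2)

trace1, trace2 = np.zeros_like(W1), np.zeros_like(W2)
for i in range(window, t.size - horizon):
    x = series[i - window:i]
    h, y_now = predict(x)
    _, y_next = predict(series[i - window + 1:i + 1])
    target = series[i + horizon - 1]
    # Lambda-mixed target: blend the realised value with the successor
    # prediction, as in TD-style bootstrapping.
    delta = (1 - lam) * target + lam * y_next - y_now
    # Back-propagate the TD error through eligibility traces.
    g2 = h[:, None]
    g1 = np.outer(x, W2.ravel() * (1 - h ** 2))
    trace2 = lam * trace2 + g2
    trace1 = lam * trace1 + g1
    W2 += alpha * delta * trace2
    W1 += alpha * delta * trace1
```

Because each update depends only on the most recent window and the successor prediction, the parameters can keep tracking a drifting series without retraining from scratch, which is the adaptivity the abstract describes.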
Abstract: Temporal difference (TD) methods are a class of model-free reinforcement learning algorithms. Thanks to their low variance and support for on-line learning, they are widely used. For a given TD algorithm, however, convergence can usually be accelerated only by tuning the step size or other hyperparameters, so methods for accelerating TD convergence are scarce. To address this problem, we propose a method that uses Monte Carlo (MC) methods to accelerate the convergence of TD algorithms (Accelerate TD by MC, ATDMC). The method applies to the vast majority of TD algorithms and does not require changing the on-line learning scheme. To demonstrate its effectiveness, experiments were carried out on on-policy evaluation, off-policy evaluation, and control. The results show that ATDMC effectively accelerates all kinds of TD algorithms.
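The abstract does not specify how the MC return enters the TD update, so the sketch below only illustrates the general idea on a textbook 5-state random walk: a TD(0) evaluation whose target is nudged toward the episode's Monte Carlo return. The mixing weight beta and every other detail are illustrative assumptions, not the ATDMC rule.

```python
# Sketch: blend a Monte Carlo return into a TD(0) target on a random walk.
import numpy as np

rng = np.random.default_rng(1)
n_states, alpha, beta = 5, 0.1, 0.2   # beta: weight on the MC correction

def episode():
    """Random walk from the centre; reward 1 on the right exit, else 0."""
    s, path = n_states // 2, []
    while 0 <= s < n_states:
        path.append(s)
        s += rng.choice((-1, 1))
    return path, float(s >= n_states)

V = np.zeros(n_states)
for _ in range(2000):
    path, G = episode()
    for i, s in enumerate(path):
        # TD(0) target: bootstrap from the successor, or the terminal reward.
        td_target = V[path[i + 1]] if i + 1 < len(path) else G
        # The MC return G (only reward is terminal, undiscounted) nudges
        # the estimate in addition to the usual TD error.
        V[s] += alpha * ((1 - beta) * (td_target - V[s])
                         + beta * (G - V[s]))

print(np.round(V, 3))  # true values: 1/6, 2/6, ..., 5/6
```

Each update still happens state by state as the episode unfolds or replays, so the on-line character of TD is preserved, which matches the constraint stated in the abstract.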
Abstract: Key challenges for 5G and Beyond networks relate to the requirements for exceptionally low latency, high reliability, and extremely high data rates. The Ultra-Reliable Low Latency Communication (URLLC) use case is the hardest to support: current research focuses on physical- or MAC-layer solutions, while proposals at the network layer that run Machine Learning (ML) and Artificial Intelligence (AI) algorithms on base stations and User Equipment (UE) or Internet of Things (IoT) devices are at an early stage. In this paper, we describe the operating rationale of the most relevant recent ML algorithms and techniques, and we propose and validate ML algorithms running on both cells (base stations/gNBs) and UEs or IoT devices to handle URLLC service control. One ML algorithm runs on base stations to evaluate latency demands and offload traffic when needed, while another, lightweight algorithm runs on UEs and IoT devices to rank cells by URLLC service quality in real time and indicate the best cell for a UE or IoT device to camp on. We show that the interplay of these algorithms leads to good service control and eventually optimal load allocation under slow load mobility.
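The paper's UE-side algorithm is not specified in the abstract, so the following is a hypothetical sketch of a lightweight cell ranker in the same spirit: a UCB-style bandit that tracks per-cell latency samples and ranks cells by a lower confidence bound on latency. The cell names, latency values, and selection rule are illustrative assumptions, not the authors' design.

```python
# Sketch: UE-side cell ranking via a latency-minimising UCB-style bandit.
import math
import random

random.seed(2)

class CellRanker:
    def __init__(self, cell_ids):
        self.stats = {c: [0, 0.0] for c in cell_ids}  # count, mean latency (ms)
        self.t = 0

    def select(self):
        """Prefer cells with low estimated latency, probing unseen ones first."""
        self.t += 1
        def lcb(c):
            n, mean = self.stats[c]
            if n == 0:
                return float("-inf")      # force one probe of each cell
            return mean - math.sqrt(2 * math.log(self.t) / n)
        return min(self.stats, key=lcb)

    def update(self, cell, latency_ms):
        """Incrementally update the running mean latency for a cell."""
        n, mean = self.stats[cell]
        self.stats[cell] = [n + 1, mean + (latency_ms - mean) / (n + 1)]

# Toy environment: each cell has an unknown true mean latency.
true_latency = {"gNB-A": 4.0, "gNB-B": 1.5, "gNB-C": 8.0}
ranker = CellRanker(true_latency)
for _ in range(500):
    c = ranker.select()
    ranker.update(c, random.gauss(true_latency[c], 0.5))
print(sorted(ranker.stats.items(), key=lambda kv: kv[1][1]))  # best cell first
```

A bandit of this kind needs only a few counters per cell, which is consistent with the abstract's requirement that the UE/IoT-side component be lightweight enough to run in real time.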