
A Collaborative Machine Learning Scheme for Traffic Allocation and Load Balancing for URLLC Service in 5G and Beyond

Abstract: Key challenges for 5G and Beyond networks relate to the requirements for exceptionally low latency, high reliability, and extremely high data rates. The Ultra-Reliable Low Latency Communication (URLLC) use case is the trickiest to support: current research focuses on physical- or MAC-layer solutions, while proposals at the network layer using Machine Learning (ML) and Artificial Intelligence (AI) algorithms running on base stations and User Equipment (UE) or Internet of Things (IoT) devices are still in early stages. In this paper, we describe the operating rationale of the most recent relevant ML algorithms and techniques, and we propose and validate ML algorithms running on both cells (base stations/gNBs) and UEs or IoT devices to handle URLLC service control. One ML algorithm runs on base stations to evaluate latency demands and offload traffic when needed, while another lightweight algorithm runs on UEs and IoT devices to rank cells by URLLC service quality in real time, indicating the best cell on which a UE or IoT device should camp. We show that the interplay of these algorithms leads to good service control and, eventually, optimal load allocation under slow load mobility.
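The abstract's UE-side mechanism (a lightweight algorithm that ranks cells by URLLC service quality and picks the best cell to camp on) can be illustrated with a minimal sketch. This is an assumption-based illustration, not the paper's algorithm: the `CellRanker` class, the cell IDs, the latency samples, and the step size `alpha` are all hypothetical, and the update rule is a generic TD(0)-style running average in the spirit of the Temporal Difference methods named in the keywords.

```python
# Hypothetical sketch (not the paper's algorithm): a UE keeps a lightweight
# running estimate of per-cell URLLC latency and camps on the cell with the
# lowest estimate. The update is a TD(0)-style exponential moving average.

class CellRanker:
    def __init__(self, alpha=0.2):
        self.alpha = alpha   # step size for the running estimate (assumed value)
        self.latency = {}    # cell_id -> estimated latency in ms

    def observe(self, cell_id, sample_ms):
        """Incorporate one latency measurement for a cell."""
        est = self.latency.get(cell_id, sample_ms)
        # TD(0)-style update: move the estimate toward the new sample.
        self.latency[cell_id] = est + self.alpha * (sample_ms - est)

    def best_cell(self):
        """Return the cell with the lowest estimated latency."""
        return min(self.latency, key=self.latency.get)

# Illustrative measurements from two hypothetical gNBs.
ranker = CellRanker()
for cell, samples in {"gNB-A": [5.0, 6.0, 5.5], "gNB-B": [2.0, 2.5, 1.8]}.items():
    for s in samples:
        ranker.observe(cell, s)
```

Ranking by a smoothed estimate rather than the latest raw sample keeps the UE-side logic cheap while damping measurement noise, which matches the abstract's emphasis on a lightweight device-side algorithm.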
Authors: Andreas G. Papidas; George C. Polyzos (Mobile Multimedia Laboratory, Department of Informatics, School of Information Sciences and Technology, Athens University of Economics and Business, Athens, Greece; School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; Mobile Multimedia Laboratory, Athens University of Economics and Business, Athens, Greece)
Source: Journal of Computer and Communications, 2023, No. 11, pp. 197-207 (11 pages)
Keywords: 5G and B5G Networks; Ultra Reliable Low Latency Communications (URLLC); Machine Learning (ML) for 5G; Temporal Difference Methods (TDM); Monte Carlo Methods; Policy Gradient Methods