Funding: the National Key R&D Program of China under Grant 2018YFB1800800, the Guangdong Province Key Area R&D Program under Grant 2018B030338001, and the Natural Science Foundation of China under Grant U2001208.
Abstract: This paper studies a federated edge learning system in which an edge server coordinates a set of edge devices to train a shared machine learning (ML) model based on their locally distributed data samples. During the distributed training, we exploit a joint communication and computation design to improve the system energy efficiency, in which the communication resource allocation for global ML-parameter aggregation and the computation resource allocation for local ML-parameter updates are jointly optimized. In particular, we consider two transmission protocols for edge devices to upload ML parameters to the edge server, based on non-orthogonal multiple access (NOMA) and time division multiple access (TDMA), respectively. Under both protocols, we minimize the total energy consumption at all edge devices over a finite training duration subject to a given training accuracy, by jointly optimizing the transmission power and rates at the edge devices for uploading ML parameters and their central processing unit (CPU) frequencies for the local updates. We propose efficient algorithms to solve the formulated energy minimization problems using techniques from convex optimization. Numerical results show that, compared with benchmark schemes, the proposed joint communication and computation design can significantly improve the energy efficiency of the federated edge learning system by properly balancing the energy tradeoff between communication and computation.
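A minimal sketch of the communication/computation energy tradeoff described above, under a standard (assumed) model rather than the paper's exact formulation; all constants, the effective-capacitance coefficient, and the channel values are illustrative assumptions.

```python
import numpy as np

# Per-round energy at one edge device: for a fixed local workload, computation
# energy grows with the square of the CPU frequency, while transmission energy
# depends on the transmit power and the resulting upload time.
kappa = 1e-28            # effective switched-capacitance coefficient (assumed)
cycles_per_sample = 2e4
num_samples = 5e3
model_bits = 1e6         # size of the uploaded ML parameters in bits (assumed)
bandwidth = 1e6          # Hz
channel_gain = 1e-7
noise_power = 1e-13

def round_energy(cpu_freq, tx_power):
    # computation: E_cmp = kappa * C * D * f^2, with time T_cmp = C * D / f
    e_cmp = kappa * cycles_per_sample * num_samples * cpu_freq ** 2
    # communication: Shannon rate, upload time = bits / rate
    rate = bandwidth * np.log2(1 + tx_power * channel_gain / noise_power)
    e_com = tx_power * model_bits / rate
    return e_cmp + e_com

# Lower CPU frequency and transmit power save energy but lengthen the round,
# which is exactly the tradeoff the joint optimization balances.
print(round_energy(cpu_freq=1e9, tx_power=0.1))
print(round_energy(cpu_freq=2e9, tx_power=0.5))
```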
Abstract: The Open Air Interface (OAI) alliance recently introduced a new disaggregated Open Radio Access Network (O-RAN) framework for next-generation telecommunications and networks. This disaggregated architecture is open, automated, software-defined, and virtualized, and it supports the latest advanced technologies such as Artificial Intelligence/Machine Learning (AI/ML). This intelligent architecture enables programmers to design and customize automated applications according to business needs and to improve quality of service in fifth generation (5G) and Beyond 5G (B5G) networks. Its disaggregated and multi-vendor nature gives new startups and small vendors the opportunity to participate and to provide low-cost hardware and software solutions, keeping the market competitive. This paper presents the disaggregated and programmable O-RAN architecture, focusing on automation, AI/ML services, and applications with the Flexible Radio access network Intelligent Controller (FRIC). We schematically demonstrate the reinforcement learning, external applications (xApps), and automation steps needed to implement this disaggregated O-RAN architecture. The goal of this paper is to implement an AI/ML-enabled automation system for a software-defined, disaggregated O-RAN, which monitors, manages, and performs AI/ML-related services, including model deployment, optimization, inference, and training.
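Purely as an illustration of the monitor-infer-act loop such an automation system runs, the sketch below uses hypothetical placeholder functions (collect_kpis, run_inference, apply_policy); it does not call any real O-RAN, OAI, or xApp API.

```python
import random
import time

def collect_kpis():
    # stand-in for metrics pulled from the RAN (e.g., PRB usage, throughput)
    return {"prb_utilization": random.random(), "throughput_mbps": 100 * random.random()}

def run_inference(kpis):
    # stand-in for a deployed AI/ML model; here a trivial threshold rule
    return "scale_up" if kpis["prb_utilization"] > 0.8 else "hold"

def apply_policy(action):
    # stand-in for pushing a control action back to the RAN
    print(f"control action -> {action}")

def control_loop(iterations=3, period_s=0.1):
    # the xApp-style closed loop: monitor, infer, act, repeat
    for _ in range(iterations):
        kpis = collect_kpis()
        apply_policy(run_inference(kpis))
        time.sleep(period_s)

control_loop()
```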
Abstract: For many years, researchers have explored model-driven power allocation (PA) algorithms for wireless networks with multi-user interference. Nowadays, data-driven machine learning methods have become popular for analyzing wireless communication systems, among which deep reinforcement learning (DRL) plays a significant role in solving constrained optimization problems. In this paper, we investigate the PA problem in a k-user multiple access channel (MAC), where k transmitters (e.g., mobile users) aim to send independent messages to a common receiver (e.g., a base station) over wireless channels. To this end, we first train a deep Q network (DQN) with a deep Q learning (DQL) algorithm in a simulation environment, using offline learning. The DQN is then used with real data in an online training procedure for the PA problem, maximizing the sum rate subject to the source power constraint. Finally, the simulation results indicate that the proposed DQN method achieves a higher sum rate than existing benchmark approaches such as fractional programming (FP) and weighted minimum mean squared error (WMMSE). Additionally, by considering different user densities, we show that the proposed DQN outperforms the benchmark algorithms, verifying its good generalization ability over wireless multi-user communication systems.
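A minimal numpy sketch of the sum-rate objective such a DQN would use as its reward, with an illustrative discrete power action space; the channel gains, noise level, and power levels are assumptions, not the paper's settings.

```python
import numpy as np

# Each user's rate is log2(1 + SINR); at the common receiver the other users'
# received powers act as interference (single-user detection assumed).
def sum_rate(powers, gains, noise=1e-3):
    powers, gains = np.asarray(powers), np.asarray(gains)
    received = powers * gains
    total = received.sum()
    sinr = received / (total - received + noise)   # interference = everyone else's power
    return np.log2(1.0 + sinr).sum()

# A discrete action space for the DQN: each user picks one power level.
power_levels = np.linspace(0.0, 1.0, 5)            # max source power = 1 (assumed)
gains = np.array([0.9, 0.5, 0.2])                  # k = 3 users (assumed)
action = [4, 2, 1]                                  # indices into power_levels
reward = sum_rate(power_levels[action], gains)
print(f"sum rate for action {action}: {reward:.3f} bit/s/Hz")
```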
Funding: supported by the National Natural Science Foundation of China (Grant No. 61971057).
Abstract: In this paper, we propose a two-way Deep Reinforcement Learning (DRL)-based resource allocation algorithm that solves the resource allocation problem in a cognitive downlink network operating in underlay mode. Secondary users (SUs) in the cognitive network are multiplexed by a new Power Domain Sparse Code Multiple Access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: an enhanced mobile broadband (eMBB) slice and an ultra-reliable low latency communication (URLLC) slice. We design a Double Deep Q Network (DDQN) to output the optimal codebook assignment scheme and simultaneously use a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation scheme. The objective is to jointly optimize the spectral efficiency of the system and the Quality of Service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and a modified JEERA algorithm in terms of spectral efficiency and QoS satisfaction. Additionally, compared with Power Domain Non-orthogonal Multiple Access (PD-NOMA) slices and Sparse Code Multiple Access (SCMA) slices, the PD-SCMA slices can dramatically enhance spectral efficiency and increase the number of accessible users.
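A structural sketch of the hybrid action this design produces: a discrete codebook index per SU (the DDQN side) and a continuous power vector (the DDPG side). The "networks" below are random placeholders and the dimensions are assumptions; only the interface between the two outputs is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_SUS, NUM_CODEBOOKS, P_MAX = 4, 6, 1.0

def ddqn_codebook_assignment(state):
    # placeholder for Q(state, a): pick the codebook index with the largest Q-value per SU
    q_values = rng.random((NUM_SUS, NUM_CODEBOOKS))
    return q_values.argmax(axis=1)

def ddpg_power_allocation(state):
    # placeholder for the deterministic actor: continuous powers in (0, P_MAX]
    return P_MAX * rng.random(NUM_SUS)

state = rng.random(8)                     # abstract slice/channel state (assumed)
codebooks = ddqn_codebook_assignment(state)
powers = ddpg_power_allocation(state)
print("codebook indices per SU:", codebooks)
print("transmit powers per SU: ", powers.round(3))
```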
Abstract: As the demands for massive connectivity and wide coverage grow rapidly in next-generation wireless communication networks, rate splitting multiple access (RSMA) is considered a promising new access scheme, since it can provide higher efficiency with limited spectrum resources. In this paper, combining spectrum splitting with rate splitting, we propose to allocate resources with traffic offloading in hybrid satellite-terrestrial networks. A novel deep reinforcement learning method is adopted to solve this challenging non-convex problem. However, a never-ending learning process could prohibit its practical implementation; therefore, we introduce a switch mechanism to avoid unnecessary learning. Additionally, the QoS constraint in the scheme rules out unsuccessful transmissions. The simulation results validate the energy efficiency performance and the convergence speed of the proposed algorithm.
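A minimal sketch of one way such a switch could be realized: pause training once the recent reward variance is small, and resume it if the average reward later drops. The window length and thresholds are assumptions, not the paper's design.

```python
from collections import deque

class LearningSwitch:
    def __init__(self, window=50, stop_var=1e-3, resume_drop=0.1):
        self.rewards = deque(maxlen=window)
        self.learning = True
        self.stop_var, self.resume_drop = stop_var, resume_drop
        self.best_avg = float("-inf")

    def update(self, reward):
        self.rewards.append(reward)
        if len(self.rewards) < self.rewards.maxlen:
            return self.learning
        avg = sum(self.rewards) / len(self.rewards)
        var = sum((r - avg) ** 2 for r in self.rewards) / len(self.rewards)
        self.best_avg = max(self.best_avg, avg)
        if self.learning and var < self.stop_var:
            self.learning = False          # converged: switch training off
        elif not self.learning and avg < (1 - self.resume_drop) * self.best_avg:
            self.learning = True           # performance drifted: switch back on
        return self.learning

# usage: the agent runs its learning step only while switch.update(latest_reward) is True
```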
Abstract: As Internet of Things (IoT) base stations are deployed ever more densely, interference management in the network becomes increasingly important. In the IoT, devices typically use random access, reaching the channel in a distributed manner. In massive-device IoT scenarios, severe interference may occur between nodes, seriously degrading network throughput. To address interference management in random access networks, this paper considers a multi-base-station slotted Aloha network with cooperative reception and uses reinforcement learning tools to design adaptive transmission algorithms that manage interference, optimize network throughput, and improve network fairness. First, a Q-learning-based adaptive transmission algorithm is designed; simulations verify that it maintains high network throughput under different traffic loads. Second, to improve fairness, the adaptive transmission algorithm is refined with a penalty-function method; simulations verify that the fairness-oriented algorithm greatly improves network fairness while preserving network throughput.
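A minimal sketch of the tabular Q-learning update behind such an adaptive transmission policy: in each slot a node either transmits or waits, and the reward reflects success or collision. The states, rewards, and hyperparameters are illustrative assumptions.

```python
import random

ACTIONS = ("transmit", "wait")
alpha, gamma, epsilon = 0.1, 0.9, 0.1
q_table = {}                                    # (state, action) -> value

def choose_action(state):
    if random.random() < epsilon:
        return random.choice(ACTIONS)           # explore
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def q_update(state, action, reward, next_state):
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# one illustrative slot: the state is the outcome of the previous slot
state = "idle"
action = choose_action(state)
reward = 1.0 if action == "transmit" and random.random() > 0.5 else 0.0
q_update(state, action, reward, next_state="success" if reward else "collision")
```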
Funding: supported by the National Key Research and Development Program of China (Grant No. 2020YFB1806804) and the U.S. National Science Foundation (Grants US CNS-1801925, CNS-2029569, and CNS-2107057).
Abstract: As Information, Communications, and Data Technology (ICDT) become deeply integrated, research on 6G is gradually gaining momentum. Meanwhile, federated learning (FL), as a distributed artificial intelligence (AI) framework, is widely believed to be the most promising way to achieve "Native AI" in 6G. Although energy is emerging as a metric in AI and wireless networks, most studies have still focused on achieving high accuracy, with little consideration of the new features of future networks and their possible impact on energy consumption. To address this issue, this article focuses on green concerns in FL over 6G. We first analyze and summarize the major energy consumption challenges caused by the technical characteristics of FL and the dynamic heterogeneity of 6G networks, and model the energy consumption of FL over 6G from the perspectives of computation and communication. We then classify and summarize the basic ways to reduce energy and present several feasible green designs for an FL-based 6G network architecture from three perspectives. Based on the simulation results, we provide a useful guideline for researchers: different schemes should be used to achieve the minimum energy consumption, at a reasonable cost in learning accuracy, for different network scenarios and service requirements in FL-based 6G networks.
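A minimal sketch of the computation-versus-communication energy split described above; all constants are assumptions, and the point is only that shifting effort between local epochs, rounds, and device participation changes where the energy goes, which is why different scenarios call for different schemes.

```python
# Per round, every participating device spends energy on local training and on
# uploading its model update; totals are summed over rounds.
DEVICES = 100
E_COMP_PER_EPOCH = 0.5   # J per device per local epoch (assumed)
E_COMM_PER_ROUND = 2.0   # J per device per model upload (assumed)

def total_energy(rounds, local_epochs, participation=1.0):
    active = DEVICES * participation
    per_round = active * (local_epochs * E_COMP_PER_EPOCH + E_COMM_PER_ROUND)
    return rounds * per_round

# The same total amount of local work can be spent in different ways:
# more local epochs per round cut communication energy, and vice versa.
print(total_energy(rounds=200, local_epochs=1))                       # communication-heavy
print(total_energy(rounds=50,  local_epochs=4))                       # computation-heavy
print(total_energy(rounds=50,  local_epochs=4, participation=0.2))    # partial participation
```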
Funding: supported by the National Key R&D Program of China (2019YFB2102303), the National Natural Science Foundation of China (NSFC 61971014, NSFC 11675199), and the Young Backbone Teacher Training Program of Henan Colleges and Universities (2021GGJS170).
Abstract: The popularity of the Internet of Things (IoT) has enabled a large number of vulnerable devices to connect to the Internet, bringing huge security risks. As a network-level security authentication method, machine learning-based device fingerprinting has attracted considerable attention because it can detect vulnerable devices during complex and heterogeneous access phases. However, flexible and diversified IoT devices with limited resources increase the difficulty of executing device fingerprint authentication in the IoT, because the model network must be retrained to deal with incremental features or device types. To address this problem, this paper proposes a device fingerprinting mechanism based on a Broad Learning System (BLS). The mechanism first characterizes IoT devices through traffic analysis, based on identifiable differences in their traffic data, and extracts feature parameters from the traffic packets. A hierarchical hybrid sampling method is designed in the preprocessing phase to improve the imbalanced data distribution and reconstruct the fingerprint dataset. The complexity of the dataset is reduced using Principal Component Analysis (PCA), and the device type is identified by training weights with the BLS. The experimental results show that the proposed method achieves state-of-the-art accuracy and requires less training time than other existing methods.
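The Broad Learning System itself is not a standard library component, so the sketch below only mirrors the shape of the pipeline described above: traffic-feature vectors, PCA for complexity reduction, then a classifier whose weights are solved in closed form (ridge), which is the flavor of training a BLS relies on. This is not a BLS implementation, and the synthetic features, five device types, and split sizes are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))                                 # traffic-feature vectors (assumed)
y = np.digitize(X[:, :3].sum(axis=1), bins=[-2.0, -0.5, 0.5, 2.0])  # 5 synthetic device types

pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),          # reduce dataset complexity, as in the preprocessing stage
    RidgeClassifier(alpha=1.0),    # closed-form output weights (simplified BLS stand-in)
)
pipeline.fit(X[:500], y[:500])
print("held-out accuracy:", pipeline.score(X[500:], y[500:]))
```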
Funding: This research project was funded by the Deanship of Scientific Research, Princess Nourah bint Abdulrahman University, through the Program of Research Project Funding After Publication, Grant No. (43-PRFA-P-58).
Abstract: This study presents a layered generalization ensemble model for next-generation mobile radio systems, focusing on supervised channel estimation approaches. Channel estimation typically involves inserting pilot symbols with a well-balanced rhythm and a suitable layout. The model, called Stacked Generalization for Channel Estimation (SGCE), aims to enhance channel estimation performance by eliminating pilot insertion and improving throughput. The SGCE model incorporates six machine learning methods: random forest (RF), gradient boosting machine (GB), light gradient boosting machine (LGBM), support vector regression (SVR), extremely randomized trees (ERT), and extreme gradient boosting (XGB). By generating meta-features from five base models (RF, GB, LGBM, SVR, and ERT), we obtain accurate channel-coefficient predictions with the XGB meta-model. To validate the modeling performance, we employ the leave-one-out cross-validation (LOOCV) approach, where each observation serves as the validation set while the remaining observations form the training set. SGCE's results demonstrate higher mean and median accuracy compared with the individual models. SGCE achieves an average accuracy of 98.4%, a precision of 98.1%, and the highest F1-score of 98.5%, accurately predicting channel coefficients. Furthermore, the proposed method outperforms prior traditional and intelligent techniques in terms of throughput and bit error rate. SGCE's superior performance highlights its efficacy in optimizing channel estimation: it can effectively predict channel coefficients and contributes to enhancing the overall efficiency of mobile radio systems. Through extensive experimentation and evaluation, we demonstrate that SGCE improves channel estimation performance, surpassing previous techniques. Accordingly, SGCE's capabilities have significant implications for optimizing channel estimation in modern communication systems.
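A rough scikit-learn sketch of the stacked-generalization idea behind SGCE, not the paper's implementation: several base regressors produce meta-features and a final regressor combines them. To stay within scikit-learn, sklearn's GradientBoostingRegressor stands in for the XGB and LGBM packages, the channel data is synthetic, and 5-fold CV replaces the paper's LOOCV to keep the sketch fast.

```python
import numpy as np
from sklearn.ensemble import (ExtraTreesRegressor, GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                                    # received-signal features (assumed)
y = X @ rng.normal(size=8) + 0.05 * rng.normal(size=300)         # channel coefficient target (assumed)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("ert", ExtraTreesRegressor(n_estimators=50, random_state=0)),
        ("svr", SVR()),
    ],
    final_estimator=GradientBoostingRegressor(random_state=0),   # stand-in for the XGB meta-model
)
print("CV R^2:", cross_val_score(stack, X, y, cv=5).mean())
```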
Funding: supported by the Natural Science Foundation of China (62122012, 62221001), the Beijing Natural Science Foundation (L202019, L211012), and the Fundamental Research Funds for the Central Universities (2022JBQY004).
Abstract: Massive machine-type communication aims to support the connection of a massive number of devices and remains an important scenario in 6G. In this paper, a novel cluster-based massive access method is proposed for massive multiple-input multiple-output systems. By exploiting angular-domain characteristics, devices are separated into multiple clusters with a learned cluster-specific dictionary, which enhances the identification of active devices. For detected active devices whose data recovery fails, power-domain non-orthogonal multiple access with successive interference cancellation is employed to recover their data via re-transmission. Simulation results show that the proposed scheme and algorithm achieve improved performance in active user detection and data recovery.
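A minimal sketch of the angular-domain clustering intuition only, not the learned cluster-specific dictionary itself: transform each device's antenna-domain channel into the angular domain with a DFT and group devices whose energy concentrates around similar angles. The channel model, array size, and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
M, DEVICES, CLUSTERS = 64, 40, 4                    # antennas, devices, clusters (assumed)

angles = rng.uniform(-np.pi / 2, np.pi / 2, DEVICES)
steering = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))       # ULA responses
channels = steering * (rng.normal(size=DEVICES) + 1j * rng.normal(size=DEVICES))

angular_energy = np.abs(np.fft.fft(channels, axis=0)) ** 2    # M x DEVICES angular spectrum
features = (angular_energy / angular_energy.sum(axis=0)).T    # normalized profile per device

labels = KMeans(n_clusters=CLUSTERS, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```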
Funding: supported in part by the National Natural Science Foundation of China under Grants 62201105, 62331017, and 62075024; in part by the Natural Science Foundation of Chongqing under Grant cstc2021jcyj-msxmX0404; in part by the Chongqing Municipal Education Commission under Grant KJQN202100643; and in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2022A1515110056.
Abstract: Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the edge of the network. By bringing computation and storage capabilities closer to end users and connected devices, MEC networks can support a wide range of applications. MEC networks can also leverage various types of resources, including computation, network, radio, and location-based resources, to provide multidimensional resources for intelligent applications in 5G/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources. Owing to the heterogeneity of the resources provided by devices, offloading multi-resource task requests to the edge cloud while maximizing the benefit is a challenging problem. To address this issue, we mathematically model task requests with multiple subtasks. The problem of offloading multi-resource task requests is then proved to be NP-hard. Furthermore, we propose a novel Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL), based on the policy network, to optimize the benefit generated by offloading multi-resource task requests in MEC networks. Finally, simulation results show that the proposed algorithm can effectively improve the benefit of task offloading with higher resource utilization compared with baseline algorithms.
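A minimal sketch, under assumed resource names and capacities, of how a multi-resource task request with subtasks might be modeled and checked against an edge node's capacities; the paper's actual formulation and the NF_L_DA_DRL agent are not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Subtask:
    demand: dict  # e.g. {"cpu": 2.0, "bandwidth": 10.0, "storage": 1.0}

@dataclass
class TaskRequest:
    subtasks: list = field(default_factory=list)

@dataclass
class EdgeNode:
    capacity: dict

    def can_host(self, subtask: Subtask) -> bool:
        # a node can host a subtask only if every demanded resource type fits
        return all(self.capacity.get(r, 0.0) >= d for r, d in subtask.demand.items())

task = TaskRequest(subtasks=[
    Subtask({"cpu": 2.0, "bandwidth": 10.0}),
    Subtask({"cpu": 1.0, "storage": 5.0}),
])
node = EdgeNode(capacity={"cpu": 4.0, "bandwidth": 50.0, "storage": 2.0})
print([node.can_host(s) for s in task.subtasks])   # -> [True, False]
```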
Funding: supported in part by the National Research Foundation of Korea (NRF) Grant (No. 2019R1A2C1009275) and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2021-0-02068, Artificial Intelligence Innovation Hub).
Abstract: Due to the recent trend of software intelligence in the Fourth Industrial Revolution, deep learning has become a mainstream workload for modern computer systems. As the data size of deep learning keeps growing, efficiently managing the limited memory capacity for deep learning workloads becomes important. In this paper, we analyze memory accesses in deep learning workloads and identify some unique characteristics that differentiate them from traditional workloads. First, when comparing instruction and data accesses, data accesses account for 96%–99% of total memory accesses in deep learning workloads, which is quite different from traditional workloads. Second, when comparing read and write accesses, write accesses dominate, accounting for 64%–80% of total memory accesses. Third, although write accesses make up the majority of memory accesses, they show a low access bias, with a Zipf parameter of 0.3. Fourth, in predicting re-access, recency is important for read accesses, but frequency provides more accurate information for write accesses. Based on these observations, we introduce a Non-Volatile Random Access Memory (NVRAM)-accelerated memory architecture for deep learning workloads and present a new memory management policy for this architecture. By considering the memory access characteristics of deep learning workloads, the proposed policy improves memory performance by 64.3% on average compared with the CLOCK policy.
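Since the abstract benchmarks against the classic CLOCK replacement policy, a compact sketch of that baseline is included below for reference; the proposed NVRAM-aware policy itself is not specified in the abstract and is not reproduced. The page trace is illustrative.

```python
# CLOCK replacement: each frame has a reference bit; on a miss the hand sweeps,
# clearing set bits and evicting the first frame whose bit is already 0.
class ClockCache:
    def __init__(self, frames):
        self.frames = frames
        self.pages = []          # resident pages, position = frame index
        self.ref = []            # reference bit per frame
        self.hand = 0

    def access(self, page):
        if page in self.pages:                      # hit: set reference bit
            self.ref[self.pages.index(page)] = 1
            return True
        if len(self.pages) < self.frames:           # miss with a free frame
            self.pages.append(page)
            self.ref.append(1)
            return False
        while self.ref[self.hand] == 1:             # sweep: give second chances
            self.ref[self.hand] = 0
            self.hand = (self.hand + 1) % self.frames
        self.pages[self.hand] = page                # evict and install
        self.ref[self.hand] = 1
        self.hand = (self.hand + 1) % self.frames
        return False

cache = ClockCache(frames=3)
hits = sum(cache.access(p) for p in [1, 2, 3, 1, 4, 1, 2])
print(f"hits: {hits}")
```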