Abstract
Compared with traditional machine learning, federated learning effectively addresses user data privacy and security protection. However, the large volume of model exchanges between massive numbers of nodes and the cloud server incurs high communication costs, so cloud-edge-device hierarchical federated learning has received increasing attention. In hierarchical federated learning, mobile nodes can cooperate in model training through D2D, opportunistic communication, and similar mechanisms; edge servers perform local model aggregation, while the cloud server performs global model aggregation. To improve the convergence rate of the model, researchers have studied network transmission optimization techniques for hierarchical federated learning. This paper introduces the concept and algorithmic principles of hierarchical federated learning, summarizes the key challenges that give rise to network communication overhead, and analyzes six network transmission optimization methods: selecting appropriate nodes, enhancing local computation, reducing the number of local model updates uploaded, compressing model updates, decentralized training, and aggregation-oriented parameter transmission. Finally, future research directions are summarized and discussed.
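The two-tier aggregation described in the abstract (edge servers averaging their attached nodes' models, then the cloud server averaging the edge-level results) can be sketched as a minimal FedAvg-style weighted-averaging routine. This is an illustrative sketch only; the function names, the edge-to-node grouping, and the use of sample counts as weights are assumptions, not details taken from the paper.

```python
import numpy as np

def weighted_average(models, weights):
    """FedAvg-style weighted average of model parameter vectors."""
    return sum(w * m for w, m in zip(weights, models)) / sum(weights)

# Each mobile node holds a locally trained model (here: a flat parameter
# vector) and a local sample count used as its aggregation weight.
node_models = [np.array([1.0, 2.0]), np.array([3.0, 4.0]),
               np.array([5.0, 6.0]), np.array([7.0, 8.0])]
node_samples = [10, 30, 20, 40]

# Edge tier: each edge server aggregates only its own subset of nodes.
edge_groups = [[0, 1], [2, 3]]  # node indices attached to each edge server
edge_models, edge_weights = [], []
for group in edge_groups:
    edge_models.append(weighted_average([node_models[i] for i in group],
                                        [node_samples[i] for i in group]))
    edge_weights.append(sum(node_samples[i] for i in group))

# Cloud tier: the cloud server aggregates the edge-level models, weighted
# by the total samples each edge server represents.
global_model = weighted_average(edge_models, edge_weights)
print(global_model)  # same result as averaging all nodes directly
```

Because each edge model is re-weighted by the total samples it represents, the two-tier average equals the flat single-tier FedAvg average over all nodes; the hierarchy changes where traffic flows (node-to-edge instead of node-to-cloud), not the aggregation result.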
Authors
ZOU Sai-lan, LI Zhuo, CHEN Xin (Beijing Key Laboratory of Internet Culture and Digital Dissemination Research (Beijing Information Science & Technology University), Beijing 100101, China; School of Computer Science, Beijing Information Science & Technology University, Beijing 100101, China)
Source
Computer Science (《计算机科学》)
CSCD
Peking University Core Journal
2022, No. 12, pp. 5-16 (12 pages)
Funding
National Natural Science Foundation of China (61872044)
Beijing Youth Top-notch Talent Program.
Keywords
Hierarchical federated learning
Transmission optimization
Communication overhead
Node selection
Model compression