Abstract
Combining a finite-time-consensus algorithm with the heavy-ball method, a first-order accelerated algorithm, a distributed finite-time heavy-ball method is proposed. The algorithm guarantees that all nodes reach consensus in every periodic update, while achieving a non-ergodic convergence rate of the same order as the centralized heavy-ball method. Numerical simulations comparing the algorithm with other state-of-the-art distributed optimization algorithms on machine learning problems demonstrate its competitive performance.
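As background for the abstract above, the classical (centralized) heavy-ball method that the paper accelerates in a distributed setting follows the iteration x_{k+1} = x_k − α∇f(x_k) + β(x_k − x_{k−1}). A minimal sketch on a toy quadratic objective; the objective, step size `alpha`, and momentum `beta` below are illustrative choices, not taken from the paper:

```python
def heavy_ball(grad_f, x0, alpha=0.1, beta=0.5, iters=200):
    """Heavy-ball (Polyak momentum) method:
    x_{k+1} = x_k - alpha * grad_f(x_k) + beta * (x_k - x_{k-1}).
    alpha/beta are illustrative defaults, not values from the paper."""
    x_prev, x = x0, x0
    for _ in range(iters):
        # Gradient step plus a momentum term built from the previous iterate.
        x, x_prev = x - alpha * grad_f(x) + beta * (x - x_prev), x
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3);
# the iterates converge to the minimizer x = 3.
x_star = heavy_ball(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

The momentum term β(x_k − x_{k−1}) is what distinguishes the heavy-ball method from plain gradient descent and yields its accelerated convergence on well-conditioned problems.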
Authors
QU Zhihai; LU Jie (School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China; Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China; University of Chinese Academy of Sciences, Beijing 100049, China)
Source
Journal of University of Chinese Academy of Sciences (中国科学院大学学报(中英文))
Indexed in CSCD and the Peking University Core Journal List
2022, No. 1, pp. 127-133 (7 pages)
Funding
Supported by the National Natural Science Foundation of China (61603254).
Keywords
distributed optimization
algorithm design
finite-time-consensus algorithm
heavy-ball algorithm