Abstract
Traditional federated learning assumes that all participants can be trusted. In real-world scenarios, however, malicious participants or poisoned models may be present, and existing federated learning algorithms suffer severe drops in model performance under poisoning attacks. To address model poisoning, this paper proposes FedavgCof, a federated detection algorithm based on federated averaging (Fedavg) and anomaly detection. The algorithm compares the differences among all participants by adding an anomaly detection layer between the central server and the local models: a cluster-based local outlier factor (COF) anomaly detection algorithm eliminates anomalous parameters that degrade model performance, improving the robustness of the model. Experimental results show that although newer poisoning methods are more aggressive, FedavgCof can effectively defend against poisoning attacks, reduce the loss of model performance, and improve the model's resistance to poisoning. Compared with the Median and model-cleaning algorithms, average accuracy improves by more than 10%, greatly enhancing the security of the model.
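The abstract describes filtering client updates through an outlier-detection layer before server-side averaging. The following is a minimal, hypothetical sketch of that idea, not the paper's implementation: it scores each client's update by its mean distance to the other updates (a simple stand-in for the COF score), discards clients whose score exceeds a multiple of the median, and averages the rest as in Fedavg. The function names and the `threshold` parameter are illustrative assumptions.

```python
import numpy as np

def outlier_scores(updates):
    """Mean pairwise distance of each client update to all others.

    A simplified distance-based score standing in for the COF
    (cluster-based local outlier factor) score used in the paper.
    """
    n = len(updates)
    # Pairwise Euclidean distances via broadcasting: shape (n, n).
    d = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1)
    return d.sum(axis=1) / (n - 1)

def fedavg_with_filtering(updates, threshold=1.5):
    """Drop updates whose score exceeds threshold * median score,
    then return the Fedavg (plain mean) of the remaining updates."""
    updates = np.asarray(updates, dtype=float)
    scores = outlier_scores(updates)
    keep = scores <= threshold * np.median(scores)
    return updates[keep].mean(axis=0), keep

# Five honest clients near (1, 1) plus one poisoned update far away.
clients = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1],
                    [1.0, 1.1], [1.1, 1.0], [100.0, -100.0]])
aggregated, kept = fedavg_with_filtering(clients)
```

In this toy run the poisoned sixth client receives a far larger score than the median and is excluded, so the aggregate stays near the honest clients' mean, which is the behavior the anomaly detection layer is meant to provide.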
Authors
WANG Zhuangzhuang; YANG Jiapeng; ZU Yuwei; CHEN Lifang; ZHOU Xu (College of Sciences, North China University of Science and Technology, Tangshan 063210, China)
Source
Applied Science and Technology (《应用科技》)
CAS
2024, No. 2, pp. 127-134 (8 pages)
Funding
Natural Science Foundation of Hebei Province (F2018209374).
Keywords
federated learning
aggregation mode
poisoning attack
outlier detection
data silos
Byzantine fault-tolerant algorithm
federated averaging
central server