Abstract
In federated learning, exchanging model parameters or gradient information is generally considered safe. However, recent studies have shown that model parameters or gradients can still leak the training data. To protect client data, this paper proposes a federated learning algorithm based on a generative model. To verify its effectiveness, simulation experiments were carried out on the DermaMNIST dataset, and a gradient leakage attack was used to evaluate the algorithm. The experimental results show that the proposed algorithm differs from the classical federated learning algorithm by only 0.02% in accuracy, and the MSE, PSNR, and SSIM metrics indicate that it effectively protects data privacy.
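The gradient leakage attack mentioned in the abstract is commonly instantiated as the DLG (Deep Leakage from Gradients) procedure: the attacker optimizes a dummy input and label so that their gradient matches the gradient shared by a client, then scores the reconstruction with MSE/PSNR/SSIM. Below is a minimal sketch of that evaluation, not the authors' implementation; the toy MLP, the DermaMNIST-like 3x28x28 input with 7 classes, and the optimizer settings are illustrative assumptions.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical small classifier standing in for the shared model.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 28 * 28, 64),
    nn.Sigmoid(),
    nn.Linear(64, 7),
)

# The "victim" sample whose gradient the attacker observes (DermaMNIST-like shape, 7 classes).
x_true = torch.rand(1, 3, 28, 28)
y_true = torch.tensor([3])
loss_true = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss_true, tuple(model.parameters()))]

# The attacker optimizes dummy data and a soft label so that their gradient matches the observed one.
x_dummy = torch.rand(1, 3, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 7, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(50):
    def closure():
        optimizer.zero_grad()
        pred = model(x_dummy)
        dummy_loss = torch.sum(-F.softmax(y_dummy, dim=-1) * F.log_softmax(pred, dim=-1))
        dummy_grads = torch.autograd.grad(dummy_loss, tuple(model.parameters()), create_graph=True)
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

# Reconstruction quality: low MSE (high PSNR/SSIM) means the shared gradients leaked the image.
mse = F.mse_loss(x_dummy.detach(), x_true).item()
psnr = 10 * math.log10(1.0 / max(mse, 1e-12))
print(f"MSE = {mse:.4f}, PSNR = {psnr:.2f} dB")
# SSIM can be computed analogously, e.g. with skimage.metrics.structural_similarity.

In this evaluation, a defense is judged effective when the attack's reconstruction stays far from the real sample, i.e. high MSE and low PSNR/SSIM.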
Authors
MIAO Haoyang (缪昊洋), GAO Tanrui (高谭芮), TANG Ying (汤影)
College of Computer Science and Cyber Security, Chengdu University of Technology, Chengdu 610000, China
Source
Electronic Design Engineering (《电子设计工程》), 2023, No. 24, pp. 81-84, 89 (5 pages)
Keywords
generative model
federated learning
semi-supervised generative adversarial network
privacy preserving
gradient leakage attack