Abstract
Generative adversarial networks (GANs) carry a risk of privacy leakage because real samples are used to iteratively train the discriminator. To address this, existing work has achieved privacy protection for GANs based on differential privacy, so it is necessary to systematically review the current research on differentially private GANs. First, this survey presents and analyzes the cumulative privacy budget estimation approaches for the repeated use of differential privacy, and introduces GANs and their common variants. Second, it summarizes and analyzes the privacy threat models of GANs and their evaluation metrics. Third, against the existing privacy attack models, it categorizes and analyzes differentially private GAN frameworks and comparatively analyzes their approaches and evaluation metrics; likewise, it surveys differentially private federated GAN frameworks and compares their approaches and evaluation metrics. Finally, it analyzes the problems in current work and discusses directions for future research.
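As background for the surveyed frameworks: most differentially private GANs sanitize the discriminator update with DP-SGD-style per-sample gradient clipping plus Gaussian noise, and track the cumulative privacy budget across steps with an accountant. A minimal NumPy sketch of that sanitization step (the function name and parameters are illustrative, not taken from any specific framework in the survey):

```python
import numpy as np

def dp_sanitize_gradients(per_sample_grads, clip_norm=1.0,
                          noise_multiplier=1.1, rng=None):
    """Clip each per-sample gradient to clip_norm, average, add Gaussian noise.

    This is the DP-SGD-style sanitization commonly applied to the
    discriminator update in differentially private GAN training.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound makes each step
    # differentially private; the cumulative budget over many iterations
    # is then estimated by a privacy accountant (e.g. moments/RDP accounting).
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_sample_grads),
        size=mean_grad.shape,
    )
    return mean_grad + noise
```

With `noise_multiplier=0.0` the function reduces to plain clipped averaging, which makes the clipping behavior easy to verify in isolation.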
Authors
NIU Cuicui; PAN Zhengzhi; LIU Hai (Department of Culture and Tourism, Guizhou Light Industry Technical College, Guiyang, Guizhou 550025, China; College of Computer Science and Technology, Guizhou University, Guiyang, Guizhou 550025, China)
Source
CAS
2022, No. 4, pp. 84-99, 120 (17 pages in total)
Journal of Guizhou Normal University (Natural Sciences)
Funding
Guizhou Light Industry Technical College Project (20QY04)
National Natural Science Foundation of China (62002081)
China Postdoctoral Science Foundation (2019M663907XB)
Keywords
generative adversarial networks
federated learning
privacy attack
differential privacy
evaluation metrics