Abstract
Deep learning optimizers are prone to privacy leakage when trained on data, and convolutional neural networks incur a large memory overhead in private training because the gradient of every individual sample must be computed. To address these problems, a differentially private dynamic learning-rate bound algorithm combined with mixed ghost clipping is proposed. Combining the AdaBound optimizer with differential privacy alleviates the extreme learning rates and instability that arise during training, and reduces the impact of the noise injected during backpropagation on the model's convergence speed. Applying mixed ghost clipping to the convolutional layers avoids the cost of directly computing per-sample gradients in each update, so that differentially private models can be trained efficiently. Finally, simulation experiments comparing the algorithm with other classical differentially private algorithms show that it achieves higher accuracy under the same privacy budget, performs better overall, and protects the model's privacy more effectively.
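The two core mechanisms the abstract describes, per-sample gradient clipping with calibrated Gaussian noise (the DP-SGD core) and AdaBound-style dynamic bounds on the learning rate, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: all function names and hyperparameters here are illustrative, the AdaBound bound schedule is simplified, and the mixed ghost clipping trick (which avoids materializing per-sample gradients on convolutional layers) is omitted for brevity.

```python
import numpy as np

def dp_adabound_step(per_sample_grads, m, v, t, lr=0.01, final_lr=0.1,
                     beta1=0.9, beta2=0.999, gamma=1e-3,
                     clip_norm=1.0, noise_multiplier=1.1, eps=1e-8,
                     rng=None):
    """One hypothetical optimizer step: per-sample clipping plus Gaussian
    noise, followed by an AdaBound-style update whose per-parameter step
    size is clipped between dynamic lower and upper bounds."""
    rng = rng or np.random.default_rng(0)
    batch, dim = per_sample_grads.shape
    # 1) Clip each sample's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_sample_grads, axis=1)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_sample_grads * scale[:, None]
    # 2) Sum, add Gaussian noise calibrated to the clipping norm, average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=dim)
    g = (clipped.sum(axis=0) + noise) / batch
    # 3) AdaBound-style update: Adam moments, with the adaptive step size
    #    clipped into [lb, ub]; both bounds converge to final_lr as t grows,
    #    suppressing extreme per-parameter learning rates.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    lb = final_lr * (1 - 1 / (gamma * t + 1))
    ub = final_lr * (1 + 1 / (gamma * t))
    step = np.clip(lr / (np.sqrt(v_hat) + eps), lb, ub)
    return step * m_hat, m, v
```

In this sketch the privacy guarantee comes entirely from step 2 (clipping bounds each sample's influence, noise masks it), while step 3 only shapes the optimization trajectory, which is why noisy gradients can still converge at a reasonable speed.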
Author
QIAN Zhen (School of Mathematics and Big Data, Anhui University of Science and Technology, Huainan 232001, China)
Source
Journal of Harbin University of Commerce: Natural Sciences Edition, 2024, No. 2, pp. 186-192 (7 pages). Indexed in: CAS.
Fund
Anhui Province Science and Technology Leaders and Reserve Candidates Program (No. 2019h211).
Keywords
differential privacy
deep learning
stochastic gradient descent
image classification
adaptive algorithms
learning rate clipping