Abstract
In the study of image classification with deep learning, adversarial attacks pose a severe challenge to the secure deployment of deep learning models and have attracted wide attention from researchers. First, focusing on the adversarial attack techniques used to generate adversarial perturbations, the important white-box adversarial attack algorithms for image classification were reviewed in detail, and the advantages and disadvantages of each algorithm were analyzed. Then, the application status of white-box adversarial attack techniques was presented for three real-world scenarios: mobile terminals, face recognition, and autonomous driving. In addition, several typical white-box adversarial attack algorithms were selected for comparative experiments against different target models, and the experimental results were analyzed. Finally, white-box adversarial attack techniques were summarized, and promising research directions were discussed.
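To make concrete the kind of white-box algorithm the abstract refers to, the following is a minimal sketch of the classic Fast Gradient Sign Method (FGSM), one of the best-known white-box attacks, written in PyTorch. The model interface, the [0, 1] pixel range, and the perturbation budget of 8/255 are assumptions made for this illustration and are not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """One-step white-box attack (FGSM): perturb each pixel in the
    direction of the sign of the loss gradient, bounded by epsilon
    in the L-infinity norm. Assumes inputs are scaled to [0, 1]."""
    images = images.clone().detach().requires_grad_(True)
    outputs = model(images)                     # forward pass through the white-box model
    loss = F.cross_entropy(outputs, labels)     # loss the attacker tries to maximize
    loss.backward()                             # gradients w.r.t. the input pixels
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

Stronger iterative attacks (e.g. PGD) repeat this step with a smaller step size and project back into the epsilon-ball after each iteration.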
Authors
WEI Jiaxuan (魏佳璇); DU Shikang (杜世康); YU Zhixuan (于志轩); ZHANG Ruisheng (张瑞生)
(School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu 730000, China; The First Hospital of Lanzhou University, Lanzhou, Gansu 730000, China)
Source
Journal of Computer Applications (《计算机应用》)
Indexed in: CSCD; Peking University Core Journals (北大核心)
2022, No. 9, pp. 2732-2741 (10 pages)
Funding
Natural Science Foundation of Gansu Province (20YF8FA080).
Keywords
adversarial example
white-box adversarial attack
deep learning
image classification
artificial intelligence security