Abstract
To verify the fragility of the Grad-CAM interpretation method, an attack on Grad-CAM based on an adversarial patch was proposed. By appending a constraint term on the Grad-CAM class activation map to the CNN classification loss, an adversarial patch can be optimized in a targeted way and composited into an adversarial image. While leaving the classification result unchanged, the adversarial image steers the Grad-CAM interpretation toward the patch region, thereby attacking the interpretation result. In addition, batch training on the dataset and an added perturbation-norm constraint improve the generalization and multi-scene usability of the adversarial patch. Experimental results on the ILSVRC2012 dataset show that, compared with existing methods, the proposed method attacks the Grad-CAM interpretation results more simply and effectively while maintaining the model's classification accuracy.
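The core idea, a classification loss plus a term that pulls the Grad-CAM map into the patch region, can be illustrated with a minimal PyTorch sketch. This is not the authors' exact formulation: the pretrained ResNet-50, the 64×64 top-left patch position, the weight λ = 1 on the CAM term, and the load_image_and_label helper are all illustrative assumptions, and the paper's batch training and perturbation-norm constraint are omitted for brevity.

```python
# Minimal sketch (assumed settings, not the paper's exact method):
# optimize a patch so the prediction is preserved while the Grad-CAM
# map concentrates inside the patch region.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the patch is optimized

feats = {}
# layer4 is a common Grad-CAM target layer for ResNet-50 (assumed here)
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))

def grad_cam(x, cls):
    """Differentiable Grad-CAM: class-score gradients w.r.t. the target-layer
    activations give channel weights for the activation map."""
    logits = model(x)
    a = feats["a"]                                        # (1, C, 7, 7)
    g = torch.autograd.grad(logits[0, cls], a, create_graph=True)[0]
    w = g.mean(dim=(2, 3), keepdim=True)                  # channel weights
    cam = F.relu((w * a).sum(dim=1))                      # (1, 7, 7)
    return logits, cam / (cam.max() + 1e-8)

def apply_patch(x, patch):
    """Paste the patch into the top-left image corner (assumed position)."""
    x = x.clone()
    x[:, :, :patch.shape[-2], :patch.shape[-1]] = patch
    return x

patch = torch.rand(1, 3, 64, 64, requires_grad=True)      # assumed 64x64 patch
opt = torch.optim.Adam([patch], lr=0.05)
mask = torch.zeros(1, 7, 7)
mask[:, :2, :2] = 1.0        # CAM cells covered by the patch on the 7x7 grid

x, y = load_image_and_label()    # hypothetical helper: (1,3,224,224), int label
for _ in range(200):
    x_adv = apply_patch(x, patch.clamp(0, 1))
    logits, cam = grad_cam(x_adv, y)
    cls_loss = F.cross_entropy(logits, torch.tensor([y]))       # keep the label
    cam_loss = 1.0 - (cam * mask).sum() / (cam.sum() + 1e-8)    # pull CAM to patch
    (cls_loss + 1.0 * cam_loss).backward()                      # λ = 1 (assumed)
    opt.step()
    opt.zero_grad()
```

Note that create_graph=True is what makes the Grad-CAM map itself differentiable with respect to the patch, so the CAM-steering term can be optimized by ordinary gradient descent.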
Authors
SI Nianwen
ZHANG Wenlin
QU Dan
CHANG Heyu
LI Shengxiang
NIU Tong
(Department of Information System Engineering, Information Engineering University, Zhengzhou 450001, China; Department of Cryptogram Engineering, Information Engineering University, Zhengzhou 450001, China)
Source
Journal on Communications (《通信学报》), 2021, No. 3, pp. 23-35 (13 pages)
Indexed in: EI; CSCD; Peking University Core Journals (北大核心)
Funding
Supported by the National Natural Science Foundation of China (No. 61673395).
Keywords
convolutional neural network
interpretability
adversarial patch
class activation map
saliency map