Abstract
Deep learning has achieved impressive performance on many tasks. However, recent studies have shown that deep learning systems are vulnerable to small, specifically crafted perturbations that are imperceptible to humans. Images carrying such perturbations are called adversarial examples, and they have proven to be an indisputable threat to applications based on deep neural networks (DNNs). Because the inner workings of DNNs have yet to be fully elucidated, efficient defenses against adversarial examples remain difficult to develop. This study proposes a two-stream architecture to protect convolutional neural networks (CNNs) from adversarial-example attacks. Our model adopts the "two-stream" idea used in the security field: because the "high-resolution" and "low-resolution" networks differ in how they extract features, the architecture successfully defends against different kinds of attack methods. Our experiments demonstrate that the two-stream architecture is difficult to defeat with state-of-the-art attacks and is robust to adversarial examples constructed by currently known attack algorithms.
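The abstract does not specify the streams' internals, so the following is only a minimal sketch of the general idea: two CNN streams, one fed the full-resolution input and one fed a downsampled copy, whose disagreement can be used to flag a suspicious input. The class count, resolutions, and layer sizes here are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical two-stream sketch (assumptions: plain CNN streams, a bilinearly
# downsampled input for the low-resolution stream, and top-1 disagreement
# between the streams as the adversarial-input signal).
import torch
import torch.nn as nn
import torch.nn.functional as F


class StreamCNN(nn.Module):
    """A small CNN classifier used as one stream."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


class TwoStreamDefense(nn.Module):
    """Runs high- and low-resolution streams in parallel; inputs on which
    the streams disagree are treated as potentially adversarial."""

    def __init__(self, num_classes: int = 10, low_res: int = 16):
        super().__init__()
        self.high_stream = StreamCNN(num_classes)
        self.low_stream = StreamCNN(num_classes)
        self.low_res = low_res

    def forward(self, x: torch.Tensor):
        logits_high = self.high_stream(x)
        x_low = F.interpolate(x, size=self.low_res, mode="bilinear",
                              align_corners=False)
        logits_low = self.low_stream(x_low)
        # Flag inputs whose top-1 predictions differ between the streams.
        suspicious = logits_high.argmax(1) != logits_low.argmax(1)
        return logits_high, logits_low, suspicious


if __name__ == "__main__":
    model = TwoStreamDefense()
    batch = torch.randn(4, 3, 32, 32)  # e.g., CIFAR-10-sized inputs
    _, _, suspicious = model(batch)
    print("flagged as adversarial:", suspicious.tolist())
```

The intuition, per the abstract, is that an adversarial perturbation tuned to one stream's feature extraction is unlikely to transfer cleanly to a stream operating at a different resolution, so the two predictions diverge on attacked inputs.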
Funding
Supported by the Ph.D. Programs Foundation of Ministry of Education of China under Grant No. 20130185130001.