Investigating the Effect of Classifier Depth on Semantic Adversarial Attacks
Date
2020-05
Authors
Schoeberle, Luke
Abstract
Deep neural networks form the backbone of modern artificial intelligence systems, but recent research has demonstrated their susceptibility to adversarial inputs, raising questions about their usefulness in real-world scenarios. In particular, prior work has illustrated their vulnerability to semantic adversarial attacks, but the reasons for this vulnerability remain unclear. In this work, we investigate the effects of classifier depth and architecture on the effectiveness of semantic attacks against deep neural network classifiers. Specifically, we compare the results of semantic attacks on deep residual network (ResNet) and simple convolutional neural network (CNN) gender classifiers of varying depths, all trained on the CelebA dataset. In terms of architecture, we find that ResNet classifiers are more susceptible to semantic attacks than simple CNN classifiers, which may stem from the more direct gradient flow that skip connections provide during backpropagation in the ResNet architecture. In terms of depth, we find that ResNet classifiers of all tested depths are similarly vulnerable to these attacks, while CNN classifiers with more convolutional layers are less susceptible. These experiments make clear that both classifier architecture and, in certain architectures, classifier depth influence the success rate of semantic adversarial attacks.
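The abstract gives no implementation details, so the sketch below is illustrative only, not the authors' code. It assumes PyTorch/torchvision, uses CelebA's "Male" attribute (index 20) as the gender label, and pairs a plain CNN (no skip connections) with torchvision's ResNet-18 as the two architectures under comparison. The hue_shift_attack function shows one simple kind of semantic perturbation (a global hue rotation) standing in for the attacks studied in the work; the names SimpleCNN, gender_label, and hue_shift_attack are hypothetical.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torchvision.transforms import functional as TF

MALE_ATTR_IDX = 20  # position of the "Male" attribute in CelebA's 40-attribute vector

transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

# CelebA with attribute targets; gender labels are sliced out below.
train_set = datasets.CelebA(root="data", split="train", target_type="attr",
                            transform=transform, download=True)

def gender_label(attrs):
    # attrs: (batch, 40) tensor of 0/1 attributes from torchvision's CelebA
    return attrs[:, MALE_ATTR_IDX].long()

class SimpleCNN(nn.Module):
    """Plain convolutional classifier with no skip connections; num_blocks
    controls depth for the depth-versus-susceptibility comparison."""
    def __init__(self, num_blocks=4):
        super().__init__()
        layers, in_ch = [], 3
        for i in range(num_blocks):
            out_ch = 32 * 2 ** i
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_ch, 2))

    def forward(self, x):
        return self.head(self.features(x))

# The two architectures under comparison: a ResNet (whose skip connections
# allow more direct gradient flow) and a plain CNN of comparable depth.
resnet = models.resnet18(num_classes=2)
cnn = SimpleCNN(num_blocks=4)

def hue_shift_attack(model, x, y, steps=20):
    """One simple semantic attack: search global hue rotations (a semantically
    meaningful transformation, unlike Lp-bounded pixel noise) for a shift
    that flips the model's prediction. Returns the adversarial image or None."""
    model.eval()
    for h in torch.linspace(-0.5, 0.5, steps):  # valid hue_factor range
        x_adv = TF.adjust_hue(x, float(h))
        with torch.no_grad():
            pred = model(x_adv.unsqueeze(0)).argmax(dim=1).item()
        if pred != y:
            return x_adv  # prediction flipped: attack succeeded
    return None

Under this setup, the attack success rate compared in the abstract would be the fraction of test images for which hue_shift_attack returns a successful perturbation, computed separately for each architecture and depth.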
Type
Presentation