Synthesizing Adversarial Examples for Neural Networks

Date
2019-01-01
Authors
Rasineni, Hasitha
Abstract

As machine learning is integrated into more and more systems, such as autonomous vehicles and medical devices, those systems also become entry points for attacks. Many state-of-the-art neural networks have been shown to be vulnerable to adversarial examples. These failures demonstrate that even simple machine learning models can behave very differently from what their designers intend. To close this gap between designer intent and model behavior, there is a strong need to defend against adversarial examples and thereby improve the trustworthiness of a model. This study focuses on synthesizing adversarial examples using two white-box attacks: the Fast Gradient Sign Method (FGSM) and the Expectation Over Transformation (EOT) method.
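For illustration only, below is a minimal sketch of the two named attacks in PyTorch. The thesis does not specify a framework; the model, epsilon, transform list, and the [0, 1] pixel clamp are assumptions for the sketch, and the EOT version shown is a simplified single-step variant rather than the full iterative procedure.

import random

import torch
import torch.nn.functional as F

# Illustrative sketch; function and parameter names are assumptions,
# not taken from the thesis.

def fgsm_attack(model, x, y, epsilon):
    # FGSM: perturb the input by epsilon in the direction of the sign
    # of the loss gradient with respect to the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

def eot_fgsm_attack(model, x, y, epsilon, transforms, n_samples=10):
    # EOT (simplified, single step): average the loss gradient over
    # randomly sampled input transformations before taking the FGSM-style
    # step, so the perturbation remains effective under those transformations.
    x = x.clone().detach().requires_grad_(True)
    total_loss = torch.zeros(())
    for _ in range(n_samples):
        t = random.choice(transforms)
        total_loss = total_loss + F.cross_entropy(model(t(x)), y)
    (total_loss / n_samples).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()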

Keywords
Adversarial Examples, Neural Networks, Machine Learning, Adversarial Attacks, White-Box Attacks, FGSM, EOT