Synthesizing Adversarial Examples for Neural Networks

Date
2019-01-01
Authors
Rasineni, Hasitha
Major Professor
Zhengdao Wang
Department
Electrical and Computer Engineering
Abstract

As machine learning is integrated into more and more systems, such as autonomous vehicles and medical devices, those systems also become entry points for attacks. Many state-of-the-art neural networks have been proven vulnerable to adversarial examples. These failures of machine learning models demonstrate that even simple algorithms can behave very differently from what their designers intend. To close this gap between what designers intend and how algorithms actually behave, there is a strong need to defend against adversarial examples and thereby improve the credibility of a model. This study focuses on synthesizing adversarial examples using two different white-box attacks: the Fast Gradient Sign Method (FGSM) and the Expectation Over Transformation (EOT) method.
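For context, the core FGSM update perturbs an input along the sign of the loss gradient. A minimal sketch in PyTorch is shown below; the model, input tensors, and epsilon value are illustrative assumptions and do not reflect the specific implementation used in this study.

```python
# Hypothetical FGSM sketch; model, image, label, and epsilon are placeholders,
# not the networks or settings used in this work.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial copy of `image` perturbed by epsilon * sign(gradient)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the input gradient, then clamp to the valid pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return torch.clamp(adv_image, 0.0, 1.0).detach()
```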

Copyright
2019