Synthesizing Adversarial Examples for Neural Networks Rasineni, Hasitha
dc.contributor.department Electrical and Computer Engineering
dc.contributor.majorProfessor Zhengdao Wang 2020-01-07T20:21:50.000 2020-06-30T01:34:54Z 2020-06-30T01:34:54Z Tue Jan 01 00:00:00 UTC 2019 2019-01-01
dc.description.abstract <p>As machine learning is integrated into more and more systems, such as autonomous vehicles or medical devices, those systems also become entry points for attacks. Many state-of-the-art neural networks have been proven vulnerable to adversarial examples. These failures of machine learning models demonstrate that even simple algorithms can behave very differently from what their designers intend. To close this gap between designer intent and algorithm behavior, defending against adversarial examples is essential to improving the credibility of a model. This study focuses on synthesizing adversarial examples using two different white-box attacks: the Fast Gradient Sign Method (FGSM) and the Expectation Over Transformation (EOT) method.</p>
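The FGSM attack named in the abstract can be sketched in a few lines: perturb the input in the direction of the sign of the loss gradient, scaled by a small epsilon. The sketch below is illustrative and not taken from the report; the toy logistic-regression model, its weights `w`, `b`, and the sample input are hypothetical placeholders standing in for a real network and image.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Return x + epsilon * sign(grad_x loss), the FGSM adversarial input.

    Uses the closed-form cross-entropy gradient of a logistic-regression
    model (a stand-in for a neural network's input gradient).
    """
    p = sigmoid(w @ x + b)        # predicted probability of class 1
    grad_x = (p - y) * w          # d(cross-entropy)/dx for logistic regression
    return x + epsilon * np.sign(grad_x)

# Hypothetical model parameters and input (for illustration only)
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.1])
y = 1.0                           # true label

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.1)
```

By construction the perturbation is bounded in the L-infinity norm by epsilon, which is why FGSM adversarial examples can look identical to the original input while flipping the classifier's decision.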
dc.format.mimetype PDF
dc.identifier archive/
dc.identifier.articleid 1481
dc.identifier.contextkey 15936063
dc.identifier.s3bucket isulib-bepress-aws-west
dc.identifier.submissionpath creativecomponents/419
dc.source.bitstream archive/|||Sat Jan 15 00:11:39 UTC 2022
dc.subject.disciplines Signal Processing
dc.subject.keywords Adversarial Examples
dc.subject.keywords Neural Networks
dc.subject.keywords Machine Learning
dc.subject.keywords Adversarial Attacks
dc.subject.keywords white-box attacks
dc.subject.keywords FGSM
dc.subject.keywords EOT
dc.title Synthesizing Adversarial Examples for Neural Networks
dc.type article
dc.type.genre creativecomponent
dspace.entity.type Publication
relation.isOrgUnitOfPublication a75a044c-d11e-44cd-af4f-dab1d83339ff Electrical Engineering creativecomponent