Rethinking adversarial robustness in the context of the right to be forgotten

dc.contributor.advisor Huai, Mengdi
dc.contributor.advisor Sikder, Amit K
dc.contributor.advisor Huang, Xiaoqiu
dc.contributor.author Zhao, Chenxu
dc.contributor.department Department of Computer Science
dc.date.accessioned 2025-06-25T22:43:14Z
dc.date.available 2025-06-25T22:43:14Z
dc.date.issued 2025-05
dc.date.updated 2025-06-25T22:43:15Z
dc.description.abstract The past few years have seen intense research interest in the practical needs of the right to be forgotten, which has motivated researchers to develop machine unlearning methods that unlearn a fraction of the training data and its lineage. While existing machine unlearning methods prioritize the protection of individuals' private data, they overlook the unlearned models' susceptibility to adversarial attacks and security breaches. In this work, we uncover a novel security vulnerability of machine unlearning based on the insight that adversarial vulnerabilities can be bolstered, especially for adversarially robust models. To exploit this vulnerability, we propose a novel attack, the Adversarial Unlearning Attack (AdvUA), which generates a small fraction of malicious unlearning requests during the unlearning process. AdvUA causes a significant reduction in the adversarial robustness of the unlearned model compared to the original model, providing adversaries with an entirely new capability that is infeasible in conventional machine learning pipelines. Notably, we also show that AdvUA can effectively enhance model stealing attacks by extracting additional decision-boundary information, further underscoring the breadth and significance of our research. We also provide a theoretical analysis and a computational complexity analysis of AdvUA, and we perform extensive numerical studies to demonstrate the effectiveness and efficiency of the proposed attack.
dc.format.mimetype PDF
dc.identifier.uri https://dr.lib.iastate.edu/handle/20.500.12876/JwjbE8Vw
dc.language.iso en
dc.language.rfc3066 en
dc.subject.disciplines Computer science en_US
dc.subject.keywords Adversarial attack en_US
dc.subject.keywords Machine unlearning en_US
dc.title Rethinking adversarial robustness in the context of the right to be forgotten
dc.type thesis en_US
dc.type.genre thesis en_US
dspace.entity.type Publication
relation.isOrgUnitOfPublication f7be4eb9-d1d0-4081-859b-b15cee251456
thesis.degree.discipline Computer science en_US
thesis.degree.grantor Iowa State University en_US
thesis.degree.level thesis
thesis.degree.name Master of Science en_US
File
Original bundle
Name: Zhao_iastate_0097M_22213.pdf
Size: 1.29 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 0 B
Format: Item-specific license agreed upon to submission