

Poster

Re-thinking Data Availability Attacks Against Deep Neural Networks

Bin Fang · Bo Li · Shuang Wu · Shouhong Ding · Ran Yi · Lizhuang Ma

Arch 4A-E Poster #251
[ Paper PDF ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract: The unauthorized use of personal data for commercial purposes and the covert acquisition of private data for training machine learning models continue to raise concerns. To address these issues, researchers have proposed availability attacks that aim to render data unexploitable. However, many availability attack methods are easily defeated by adversarial training, and the few robust methods that can resist it offer only limited protection. In this paper, we re-examine existing availability attack methods and propose a novel two-stage min-max-min optimization paradigm for generating robust unlearnable noise. The inner min stage generates the unlearnable noise, while the outer min-max stage simulates the training process of the poisoned model. Additionally, we formulate the attack effect and use it to constrain the optimization objective. Comprehensive experiments show that the noise generated by our method can reduce the test accuracy of adversarially trained poisoned models by up to approximately $30\%$ compared with state-of-the-art methods.
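
One plausible reading of this min-max-min paradigm can be sketched as the nested objective below; the symbols $\theta$, $\delta$, $\rho$ and the budgets $\epsilon_d$, $\epsilon_a$ are illustrative assumptions rather than notation taken from the paper. The innermost minimization crafts the unlearnable noise $\delta$ added to each training sample, while the outer min-max over model parameters $\theta$ and adversarial perturbation $\rho$ mimics adversarial training of the poisoned model:

$$
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \left[ \max_{\|\rho\|_{\infty} \le \epsilon_a} \; \min_{\|\delta\|_{\infty} \le \epsilon_d} \; \mathcal{L}\big(f_{\theta}(x + \delta + \rho),\, y\big) \right]
$$

Under this reading, the noise $\delta$ is chosen so that even a model trained adversarially on the perturbed data $x + \delta$ gains little accuracy on clean test samples, which is the protection goal the abstract describes.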
