Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling

Chris Emmery, Ákos Kádár, Grzegorz Chrupała

Track: Sentiment Analysis, Stylistic Analysis, and Argument Mining (Long paper)

Gather-1B: Apr 21 (13:00-15:00 UTC)


Abstract: Written language contains stylistic cues that can be exploited to automatically infer a variety of potentially sensitive author information. Adversarial stylometry intends to attack such models by rewriting an author's text. Our research proposes several components to facilitate deployment of these adversarial attacks in the wild, where neither data nor target models are accessible. We introduce a transformer-based extension of a lexical replacement attack, and show it achieves high transferability when trained on a weakly labeled corpus---decreasing target model performance below chance. While not completely inconspicuous, our more successful attacks also prove notably less detectable by humans. Our framework therefore provides a promising direction for future privacy-preserving adversarial attacks.
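The abstract describes a lexical substitution attack: rewriting an author's text word by word so that stylometric profiling models fail. As a rough illustration only — the paper's actual method is a transformer-based substitution model, not this toy approach — the sketch below greedily swaps words from a hypothetical synonym lexicon to push the text's character n-gram profile (a common stylometric feature) as far as possible from the original. All names (`char_ngrams`, `substitute`, the example lexicon) are assumptions for illustration, not the authors' code.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram profile, a simple stand-in for a stylometric feature vector."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def profile_distance(p, q):
    """L1 distance between two n-gram profiles."""
    return sum(abs(p[k] - q[k]) for k in set(p) | set(q))

def substitute(text, lexicon):
    """Greedy lexical substitution: for each word with candidates in the
    lexicon, pick the replacement whose full-text profile lies furthest
    from the original author's profile."""
    original = char_ngrams(text)
    words = text.split()
    for i, word in enumerate(words):
        candidates = lexicon.get(word.lower())
        if not candidates:
            continue  # no substitutes known for this word; leave it unchanged
        def score(cand):
            trial = words[:i] + [cand] + words[i + 1:]
            return profile_distance(original, char_ngrams(" ".join(trial)))
        words[i] = max(candidates, key=score)
    return " ".join(words)
```

In the paper's setting, the candidate generator would be a pretrained transformer proposing context-aware substitutes, and the objective would target the (inaccessible) profiling model via transferability rather than a surface n-gram profile; this sketch only shows the substitute-and-score loop shape.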


Similar Papers

Adv-OLM: Generating Textual Adversaries via OLM
Vijit Malik, Ashwani Bhat, Ashutosh Modi
Evaluating Neural Model Robustness for Machine Comprehension
Winston Wu, Dustin Arendt, Svitlana Volkova
On Robustness of Neural Semantic Parsers
Shuo Huang, Zhuang Li, Lizhen Qu, Lei Pan
Data Augmentation for Hypernymy Detection
Thomas Kober, Julie Weeds, Lorenzo Bertolini, David Weir