Challenges in Automated Debiasing for Toxic Language Detection

Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Yejin Choi, Noah Smith

Interpretability and Analysis of Models for NLP (Long Paper)

Zoom-5D: Apr 22 (12:00-13:00 UTC)
Gather-2B: Apr 22 (13:00-15:00 UTC)


Abstract: Biased associations have been a challenge in the development of classifiers for detecting toxic language, hindering both fairness and accuracy. As potential solutions, we investigate recently introduced debiasing methods for text classification datasets and models, as applied to toxic language detection. Our focus is on lexical (e.g., swear words, slurs, identity mentions) and dialectal markers (specifically African American English). Our comprehensive experiments establish that existing methods are limited in their ability to prevent biased behavior in current toxicity detectors. We then propose an automatic, dialect-aware data correction method, as a proof-of-concept. Despite the use of synthetic labels, this method reduces dialectal associations with toxicity. Overall, our findings show that debiasing a model trained on biased toxic language data is not as effective as simply relabeling the data to remove existing biases.
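The data correction method is described only at a high level in the abstract. The sketch below is a minimal, hypothetical illustration of dialect-aware relabeling, not the authors' actual pipeline: the names `aae_prob` (a detector returning the probability that a text is African American English), `toxicity_prob` (a classifier supplying synthetic toxicity labels), and both thresholds are assumptions introduced purely for illustration.

```python
# Minimal sketch of dialect-aware relabeling (NOT the authors' pipeline).
# aae_prob and toxicity_prob are hypothetical callables standing in for
# components the abstract only names: a dialect detector and a toxicity
# classifier that provides synthetic labels.

from typing import Callable, Dict, List


def relabel_dialect_aware(
    examples: List[Dict],                      # each: {"text": str, "label": int}, 1 = toxic
    aae_prob: Callable[[str], float],          # text -> P(text is AAE)
    toxicity_prob: Callable[[str], float],     # text -> P(text is toxic)
    dialect_threshold: float = 0.8,            # assumed cutoff for "AAE-marked"
    toxicity_threshold: float = 0.5,           # assumed cutoff for synthetic label
) -> List[Dict]:
    """Replace likely-spurious 'toxic' labels on AAE-marked examples with a
    synthetic label from a toxicity model, leaving other examples unchanged."""
    corrected = []
    for ex in examples:
        new_ex = dict(ex)
        if ex["label"] == 1 and aae_prob(ex["text"]) >= dialect_threshold:
            # Overwrite the human label with a synthetic one so that dialect
            # alone no longer correlates with the toxic class.
            new_ex["label"] = int(toxicity_prob(ex["text"]) >= toxicity_threshold)
        corrected.append(new_ex)
    return corrected
```

The design choice this sketch reflects, following the abstract's framing, is to correct the labels in the data rather than to constrain a model trained on biased labels.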

Similar Papers

Civil Rephrases Of Toxic Texts With Self-Supervised Transformers
Léo Laugier, John Pavlopoulos, Jeffrey Sorensen, Lucas Dixon
An Expert Annotated Dataset for the Detection of Online Misogyny
Ella Guest, Bertie Vidgen, Alexandros Mittos, Nishanth Sastry, Gareth Tyson, Helen Margetts
From Toxicity in Online Comments to Incivility in American News: Proceed with Caution
Anushree Hede, Oshin Agarwal, Linda Lu, Diana C. Mutz, Ani Nenkova