"Are you kidding me?": Detecting Unpalatable Questions on Reddit

Sunyam Bagga, Andrew Piper, Derek Ruths

Computational Social Science and Social Media (Long Paper)

Zoom-2A: Apr 21 (12:00-13:00 UTC)
Gather-1B: Apr 21 (13:00-15:00 UTC)


Abstract: Abusive language in online discourse negatively affects a large number of social media users. Many computational methods have been proposed to address online abuse. Existing work, however, tends to focus on detecting the more explicit forms of abuse, leaving subtler forms largely untouched. Our work addresses this gap by making three core contributions. First, inspired by the theory of impoliteness, we propose a novel task of detecting a subtler form of abuse, namely unpalatable questions. Second, we publish a context-aware dataset for the task using data from a diverse set of Reddit communities. Third, we implement a wide array of learning models and investigate the benefits of incorporating conversational context into computational models. Our results show that modeling subtle abuse is feasible but difficult, because the language involved is highly nuanced and context-sensitive. We hope that future research in the field will address such subtle forms of abuse, since their harm currently passes unnoticed through existing detection systems.
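The abstract mentions incorporating conversational context into the detection models. The sketch below is only an illustration of that general idea, not the authors' actual setup: it contrasts classifying a question on its own with classifying it as a sentence pair alongside its parent comment, using an off-the-shelf transformer. The model name, label meaning, and example texts are assumptions; the classification head is untrained until fine-tuned on the released dataset.

```python
# Illustrative sketch only; not the paper's reported architecture.
# Shows context-free vs. context-aware inputs for a transformer classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumed; any sequence-classification model works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

parent_comment = "I spent all weekend fixing the bug you reported."  # hypothetical example
question = "Are you kidding me?"

# Context-free input: the question alone.
no_context = tokenizer(question, return_tensors="pt", truncation=True)

# Context-aware input: parent comment and question as a sentence pair,
# so the model sees the conversational turn the question replies to.
with_context = tokenizer(parent_comment, question, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs_no_ctx = torch.softmax(model(**no_context).logits, dim=-1)
    probs_ctx = torch.softmax(model(**with_context).logits, dim=-1)

# Treating label 1 as "unpalatable" is an assumption for this sketch;
# scores are arbitrary until the model is fine-tuned.
print(probs_no_ctx, probs_ctx)
```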


Similar Papers

BERTective: Language Models and Contextual Information for Deception Detection
Tommaso Fornaciari, Federico Bianchi, Massimo Poesio, Dirk Hovy
Us vs. Them: A Dataset of Populist Attitudes, News Bias and Emotions
Pere-Lluís Huguet Cabot, David Abadi, Agneta Fischer, Ekaterina Shutova
Exploiting Emojis for Abusive Language Detection
Michael Wiegand, Josef Ruppenhofer