BERTective: Language Models and Contextual Information for Deception Detection

Tommaso Fornaciari, Federico Bianchi, Massimo Poesio, Dirk Hovy

Sentiment Analysis, Stylistic Analysis, and Argument Mining (Long Paper)

Gather-2B: Apr 22 (13:00-15:00 UTC)


Abstract: Spotting a lie is challenging but has an enormous potential impact on security as well as private and public safety. Several NLP methods have been proposed to classify texts as truthful or deceptive. In most cases, however, the target texts' preceding context is not considered. This is a severe limitation, as any communication takes place in context, not in a vacuum, and context can help to detect deception. We study a corpus of Italian dialogues containing deceptive statements and implement deep neural models that incorporate various linguistic contexts. We establish a new state of the art in identifying deception and find that not all context is equally useful to the task. Only the texts closest to the target, if from the same speaker (rather than questions by an interlocutor), boost performance. We also find that the semantic information in language models such as BERT contributes to performance. However, BERT alone does not capture the implicit knowledge of deception cues: its contribution is conditional on the concurrent use of attention to learn cues from BERT's representations.
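To make the approach described in the abstract more concrete, the sketch below shows one plausible way to combine BERT representations with a learned attention layer for binary deception classification. It is a minimal illustration, not the authors' exact architecture: the Italian BERT checkpoint name, the layer sizes, and the choice to prepend the speaker's preceding utterance as context are all assumptions made for the example.

```python
# Minimal sketch (assumption): BERT token representations pooled with a learned
# attention layer, then a linear classifier for truthful vs. deceptive.
# The checkpoint name, layer sizes, and the way the preceding same-speaker
# context is prepended are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class AttentionPooling(nn.Module):
    """Learn a score per token and average token vectors with softmaxed scores."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states, attention_mask):
        # hidden_states: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
        scores = self.scorer(hidden_states).squeeze(-1)          # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e9)   # ignore padding
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)    # (batch, seq_len, 1)
        return (weights * hidden_states).sum(dim=1)              # (batch, hidden)

class DeceptionClassifier(nn.Module):
    def __init__(self, model_name: str = "dbmdz/bert-base-italian-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.pool = AttentionPooling(hidden)
        self.classifier = nn.Linear(hidden, 2)  # truthful vs. deceptive

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = self.pool(out.last_hidden_state, attention_mask)
        return self.classifier(pooled)

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased")
model = DeceptionClassifier()
# Context handling (assumption): encode the preceding same-speaker utterance
# together with the target statement as a sentence pair.
context = "la sera prima ero a casa"
target = "non ho mai visto quella persona"
batch = tokenizer(context, target, return_tensors="pt", truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```

The attention pooling here stands in for the paper's finding that BERT's contribution depends on an attention mechanism learning deception cues from its representations; in practice the context window, pooling strategy, and classifier head would be tuned on the dialogue corpus.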


Similar Papers

WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context
Anna Breit, Artem Revenko, Kiamehr Rezaee, Mohammad Taher Pilehvar, Jose Camacho-Collados
Exploiting Emojis for Abusive Language Detection
Michael Wiegand, Josef Ruppenhofer
Civil Rephrases Of Toxic Texts With Self-Supervised Transformers
Léo Laugier, John Pavlopoulos, Jeffrey Sorensen, Lucas Dixon