A Simple Three-Step Approach for the Automatic Detection of Exaggerated Statements in Health Science News

Jasabanta Patro, Sabyasachee Baruah

Information Extraction and Text Mining — Long Paper

Gather-3A: Apr 23 (13:00-15:00 UTC)


Abstract: There is a large difference between a scientific journal reporting 'wine consumption might be correlated to cancer' and a media outlet publishing 'wine causes cancer' while citing the journal's results. The example above is a typical case of a scientific statement being exaggerated, an outcome of the rising problem of media manipulation. Given a pair of statements (say, one from the source journal article and the other from a news article covering the results published in the journal), is it possible to ascertain with some confidence whether one is an exaggerated version of the other? This paper presents a surprisingly simple yet rational three-step approach that performs best for this task. We solve the task by breaking it into three sub-tasks: (a) given a statement from a scientific paper or press release, we first extract the relation phrase (e.g., 'causes' versus 'might be correlated to') connecting the dependent variable (e.g., 'cancer') to the independent variable (e.g., 'wine'); (b) we classify the strength of the extracted relation phrase; and (c) we compare the strengths of the relation phrases extracted from the two statements to identify whether one statement is an exaggerated version of the other, and to what extent. Through rigorous experiments, we demonstrate that our simple approach by far outperforms baseline models that compare state-of-the-art embeddings of the statement pairs through a binary classifier or that recast the problem as a textual entailment task, which appears to be a very natural choice in this setting.
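The three sub-tasks above can be sketched as a tiny pipeline. The strength lexicon and the substring-based phrase lookup below are illustrative assumptions for the abstract's running example, not the authors' actual extraction or classification models.

```python
# Hypothetical mapping from relation phrase to an ordinal claim strength.
# In the paper, step (b) is a learned classifier; this lexicon is only a stand-in.
STRENGTH = {
    "might be correlated to": 1,
    "is associated with": 2,
    "causes": 3,
}

def relation_strength(statement: str) -> int:
    """Steps (a)+(b): find a known relation phrase and return its strength.

    Longest phrases are checked first so that a more specific phrase
    is not shadowed by a shorter one it contains.
    """
    for phrase, strength in sorted(STRENGTH.items(), key=lambda kv: -len(kv[0])):
        if phrase in statement:
            return strength
    return 0  # no recognised relation phrase

def exaggeration(source: str, news: str) -> str:
    """Step (c): compare the strengths of the two statements."""
    s, n = relation_strength(source), relation_strength(news)
    if n > s:
        return "exaggerated"
    if n < s:
        return "downplayed"
    return "same strength"

print(exaggeration("wine consumption might be correlated to cancer",
                   "wine causes cancer"))  # -> exaggerated
```

On the abstract's example, the news statement's 'causes' outranks the journal's 'might be correlated to', so the pair is flagged as exaggerated; the signed strength difference also gives the "to what extent" part of step (c).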


Similar Papers

Hidden Biases in Unreliable News Detection Datasets
Xiang Zhou, Heba Elfardy, Christos Christodoulopoulos, Thomas Butler, Mohit Bansal
Scientific Discourse Tagging for Evidence Extraction
Xiangci Li, Gully Burns, Nanyun Peng