Rethinking Coherence Modeling: Synthetic vs. Downstream Tasks

Tasnim Mohiuddin, Prathyusha Jwalapuram, Xiang Lin, Shafiq Joty

Discourse and Pragmatics | Long Paper

Zoom-3A: Apr 22 (07:00-08:00 UTC)
Gather-2B: Apr 22 (13:00-15:00 UTC)


Abstract: Although coherence modeling has come a long way in developing novel models, their evaluation on downstream applications for which they are purportedly developed has largely been neglected. With the advancements made by neural approaches in applications such as machine translation (MT), summarization and dialog systems, the need for coherence evaluation of these tasks is now more crucial than ever. However, coherence models are typically evaluated only on synthetic tasks, which may not be representative of their performance in downstream applications. To investigate how representative the synthetic tasks are of downstream use cases, we conduct experiments on benchmarking well-known traditional and neural coherence models on synthetic sentence ordering tasks, and contrast this with their performance on three downstream applications: coherence evaluation for MT and summarization, and next utterance prediction in retrieval-based dialog. Our results demonstrate a weak correlation between the model performances in the synthetic tasks and the downstream applications, motivating alternate training and evaluation methods for coherence models.
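The synthetic sentence-ordering evaluation referred to in the abstract is commonly framed as a discrimination task: each document is paired with randomly permuted versions of itself, and the coherence model should score the original order higher. The sketch below illustrates that setup only; `score_fn` is a hypothetical stand-in for any coherence model's scoring function, not the authors' actual code.

```python
import random
from typing import Callable, List


def discrimination_accuracy(
    documents: List[List[str]],
    score_fn: Callable[[List[str]], float],
    permutations_per_doc: int = 20,
    seed: int = 0,
) -> float:
    """Fraction of (original, shuffled) pairs in which the coherence model
    assigns a higher score to the original sentence order."""
    rng = random.Random(seed)
    wins, total = 0, 0
    for sentences in documents:
        original_score = score_fn(sentences)
        for _ in range(permutations_per_doc):
            shuffled = sentences[:]
            rng.shuffle(shuffled)
            if shuffled == sentences:  # skip identity permutations
                continue
            total += 1
            if original_score > score_fn(shuffled):
                wins += 1
    return wins / max(total, 1)
```

The paper's central question is whether accuracy on such synthetic pairs predicts usefulness downstream, e.g. for ranking MT outputs, summaries, or dialog responses by coherence; its finding of weak correlation suggests it does not.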


Similar Papers

Few-shot learning through contextual data augmentation
Farid Arthaud, Rachel Bawden, Alexandra Birch
Randomized Deep Structured Prediction for Discourse-Level Processing
Manuel Widmoser, Maria Pacheco, Jean Honorio, Dan Goldwasser
CLiMP: A Benchmark for Chinese Language Model Evaluation
Beilei Xiang, Changbing Yang, Yu Li, Alex Warstadt, Katharina Kann