Evaluating the Evaluation of Diversity in Natural Language Generation

Guy Tevet, Jonathan Berant

Language Resources and Evaluation | Long paper

Gather-3B: Apr 23 (13:00-15:00 UTC)


Abstract: Despite growing interest in natural language generation (NLG) models that produce diverse outputs, there is currently no principled method for evaluating the diversity of an NLG system. In this work, we propose a framework for evaluating diversity metrics. The framework measures the correlation between a proposed diversity metric and a diversity parameter, a single parameter that controls some aspect of diversity in generated text. For example, a diversity parameter might be a binary variable used to instruct crowdsourcing workers to generate text with either low or high content diversity. We demonstrate the utility of our framework by: (a) establishing best practices for eliciting diversity judgments from humans, (b) showing that humans substantially outperform automatic metrics in estimating content diversity, and (c) demonstrating that existing methods for controlling diversity by tuning a "decoding parameter" mostly affect form but not meaning. Our framework can advance the understanding of different diversity metrics, an essential step on the road towards better NLG systems.
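To make the framework concrete, below is a minimal sketch (not the authors' code) of its core idea: generate output sets while varying a diversity parameter, score each set with a candidate diversity metric, and report the correlation between the parameter values and the metric scores. The names generate_response_set and distinct_n are hypothetical stand-ins for an NLG system and a candidate metric, and Spearman correlation is used here as one reasonable choice of correlation measure.

    # Illustrative sketch of the evaluation framework's core idea, under the
    # assumptions stated above; not the authors' implementation.
    from scipy.stats import spearmanr

    def distinct_n(texts, n=2):
        """Toy diversity metric: ratio of unique n-grams to total n-grams."""
        ngrams = [tuple(t.split()[i:i + n])
                  for t in texts
                  for i in range(len(t.split()) - n + 1)]
        return len(set(ngrams)) / max(len(ngrams), 1)

    def evaluate_metric(metric, generate_response_set, parameter_values):
        """Correlate a metric's scores with the diversity parameter values
        that produced each generated response set."""
        scores = [metric(generate_response_set(p)) for p in parameter_values]
        correlation, _ = spearmanr(parameter_values, scores)
        return correlation

For example, with a hypothetical sampler whose temperature serves as the diversity parameter, one could call evaluate_metric(distinct_n, lambda t: sample_model(prompt, temperature=t), [0.3, 0.7, 1.0, 1.3]); a higher correlation indicates that the metric better tracks that aspect of diversity.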


Similar Papers

Changing the Mind of Transformers for Topically-Controllable Language Generation
Haw-Shiuan Chang, Jiaming Yuan, Mohit Iyyer, Andrew McCallum
Through the Looking Glass: Learning to Attribute Synthetic Text Generated by Language Models
Shaoor Munir, Brishna Batool, Zubair Shafiq, Padmini Srinivasan, Fareed Zaffar
NoiseQA: Challenge Set Evaluation for User-Centric Question Answering
Abhilasha Ravichander, Siddharth Dalmia, Maria Ryskina, Florian Metze, Eduard Hovy, Alan W Black