We Need To Talk About Random Splits

Anders Søgaard, Sebastian Ebert, Jasmijn Bastings, Katja Filippova

Language Resources and Evaluation (Short Paper)

Zoom-3C: Apr 22 (07:00-08:00 UTC)
Gather-3D: Apr 23 (13:00-15:00 UTC)


Abstract: Gorman and Bedrick (2019) argued for using random splits rather than standard splits in NLP experiments. We argue that random splits, like standard splits, lead to overly optimistic performance estimates. Data can also be split in biased or adversarial ways, e.g., training on short sentences and evaluating on long ones. Biased sampling has been used in domain adaptation to simulate real-world drift; this is known as the covariate shift assumption. In NLP, however, even worst-case splits, which maximize bias, often under-estimate the error observed on new samples of in-domain data, i.e., the data that models should minimally generalize to at test time. This invalidates the covariate shift assumption. Instead of using multiple random splits, future benchmarks should ideally include multiple, independent test sets; if this is infeasible, we argue that multiple biased splits lead to more realistic performance estimates than multiple random splits.
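To make the idea of a biased split concrete, the following is a minimal sketch of the length-based example from the abstract: training on the shortest sentences and evaluating on the longest. The function name and interface are illustrative assumptions, not the paper's exact procedure.

```python
import random


def length_biased_split(sentences, test_fraction=0.2, seed=0):
    """Biased train/test split: train on the shortest sentences,
    evaluate on the longest ones.

    Illustrative sketch only; not the authors' exact procedure.
    """
    rng = random.Random(seed)
    # Sort by token count, breaking ties randomly for a stable but
    # non-degenerate ordering.
    ordered = sorted(sentences, key=lambda s: (len(s.split()), rng.random()))
    cut = int(len(ordered) * (1 - test_fraction))
    # Shortest sentences go to training, longest to the test set.
    return ordered[:cut], ordered[cut:]


corpus = [
    "Short one .",
    "A slightly longer example sentence .",
    "This sentence is clearly the longest one in the toy corpus .",
    "Tiny .",
    "Medium length sentence here .",
]
train, test = length_biased_split(corpus, test_fraction=0.4)
```

A random split would instead shuffle `corpus` before cutting; comparing model error under both splits is one way to probe the optimism of random-split estimates.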
NOTE: The video may display the authors in a random order. The correct author list is at the top of this page.


Similar Papers

Evaluating Neural Model Robustness for Machine Comprehension
Winston Wu, Dustin Arendt, Svitlana Volkova
Hidden Biases in Unreliable News Detection Datasets
Xiang Zhou, Heba Elfardy, Christos Christodoulopoulos, Thomas Butler, Mohit Bansal