Comparing Knowledge-Intensive and Data-Intensive Models for English Resource Semantic Parsing

Junjie Cao, Zi Lin, Weiwei Sun, Xiaojun Wan

CL Track: CL Paper

Zoom-7C: Apr 23 (08:00-09:00 UTC)
Gather-3F: Apr 23 (13:00-15:00 UTC)


Abstract: In this work, we present a phenomenon-oriented comparative analysis of the two dominant approaches in English Resource Semantics (ERS) parsing: classic, knowledge-intensive models and neural, data-intensive models. To reflect state-of-the-art neural NLP technologies, we introduce a factorization-based parser that produces Elementary Dependency Structures much more accurately than previous data-driven parsers. We conduct a suite of tests for different linguistic phenomena to analyze the grammatical competence of different parsers, and show that, despite comparable overall performance, knowledge- and data-intensive models produce different types of errors, in a way that can be explained by their theoretical properties. This analysis is beneficial for the in-depth evaluation of several representative parsing techniques and points to new directions for parser development.
NOTE: Video may display a random order of authors. Correct author list is at the top of this page.
