Dependency parsing with structure preserving embeddings

Ákos Kádár, Lan Xiao, Mete Kemertas, Federico Fancellu, Allan Jepson, Afsaneh Fazly

Syntax: Tagging, Chunking, and Parsing (Long paper)

Zoom-2E: Apr 21 (12:00-13:00 UTC)
Gather-3C: Apr 23 (13:00-15:00 UTC)


Abstract: Modern neural approaches to dependency parsing are trained to predict a tree structure by jointly learning a contextual representation for tokens in a sentence, as well as a head-dependent scoring function. Whereas this strategy results in high performance, it is difficult to interpret these representations in relation to the geometry of the underlying tree structure. Our work seeks instead to learn interpretable representations by training a parser to explicitly preserve structural properties of a tree. We do so by casting dependency parsing as a tree embedding problem where we incorporate geometric properties of dependency trees in the form of training losses within a graph-based parser. We provide a thorough evaluation of these geometric losses, showing that a majority of them yield strong tree distance preservation as well as parsing performance on par with a competitive graph-based parser (Qi et al., 2018). Finally, we show where parsing errors lie in terms of tree relationships in order to guide future work.
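The abstract describes training losses that encourage token embeddings to preserve the geometry of the gold dependency tree. The sketch below illustrates one plausible loss of this kind, penalizing the gap between pairwise embedding distances and pairwise tree distances. It assumes PyTorch; the names tree_distances and distance_preservation_loss are hypothetical, and this is not the paper's actual implementation or its exact set of geometric losses.

# Illustrative sketch only: a distance-preserving training loss in the
# spirit of the abstract, not the paper's exact formulation.
import torch
from collections import deque


def tree_distances(heads):
    # heads[i] is the index of token i's head, or -1 for the root.
    # Returns an (n, n) tensor of shortest-path lengths in the
    # undirected dependency tree.
    n = len(heads)
    adj = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    dist = torch.zeros(n, n)
    for src in range(n):
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            dist[src, node] = d
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append((nb, d + 1))
    return dist


def distance_preservation_loss(embeddings, heads):
    # L1 penalty between Euclidean distances of token embeddings
    # and the corresponding tree distances.
    target = tree_distances(heads)
    pred = torch.cdist(embeddings, embeddings, p=2)
    return (pred - target).abs().mean()


# Toy usage: 4 tokens, token 1 is the root.
heads = [1, -1, 1, 2]
embeddings = torch.randn(4, 64, requires_grad=True)
loss = distance_preservation_loss(embeddings, heads)
loss.backward()

In a full parser this term would be added to the standard parsing objective, so that the learned token representations remain interpretable with respect to the tree geometry while head-dependent scoring is trained as usual.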

