STAR: Cross-modal [STA]tement [R]epresentation for selecting relevant mathematical premises

Deborah Ferreira, André Freitas

Track: Semantics: Sentence-level Semantics, Textual Inference and Other Areas (Long Paper)

Session Gather-1E: Apr 21 (13:00-15:00 UTC)


Abstract: Mathematical statements written in natural language are usually composed of two different modalities: mathematical elements and natural language. These two modalities have several distinct linguistic and semantic properties. State-of-the-art representation techniques have demonstrated an inability to capture such an entangled style of discourse. In this work, we propose STAR, a model that uses cross-modal attention to learn how to represent mathematical text for the task of Natural Language Premise Selection. This task uses conjectures written in both natural and mathematical language to recommend premises that will most likely be relevant for proving a particular statement. We found that STAR not only outperforms baselines that do not distinguish between natural language and mathematical elements, but it also achieves better performance than state-of-the-art models.
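To make the cross-modal idea in the abstract concrete, the sketch below shows one common way to let mathematical-element tokens and natural-language tokens attend over each other and then fuse them into a single statement vector. This is only an illustrative PyTorch sketch under assumed tensor shapes, dimensions, and a dot-product scoring choice; it is not the authors' STAR implementation.

# Minimal sketch of cross-modal attention between natural-language and
# mathematical token embeddings (illustrative only; not the STAR code).
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Lets math-element tokens attend over the natural-language tokens of the
    same statement (and vice versa), then pools into one statement vector."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Math tokens query natural-language tokens, and the reverse.
        self.math_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.text_to_math = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, text_emb: torch.Tensor, math_emb: torch.Tensor) -> torch.Tensor:
        # text_emb: (batch, n_text, dim), math_emb: (batch, n_math, dim)
        math_ctx, _ = self.math_to_text(math_emb, text_emb, text_emb)
        text_ctx, _ = self.text_to_math(text_emb, math_emb, math_emb)
        # Mean-pool each attended sequence and fuse into a statement vector.
        fused = torch.cat([text_ctx.mean(dim=1), math_ctx.mean(dim=1)], dim=-1)
        return self.proj(fused)

# Example: score a (conjecture, premise) pair for premise selection with a
# simple dot product between statement vectors (an assumed scoring function,
# not necessarily the one used in the paper).
encoder = CrossModalAttention(dim=128)
conjecture = encoder(torch.randn(1, 30, 128), torch.randn(1, 12, 128))
premise = encoder(torch.randn(1, 25, 128), torch.randn(1, 8, 128))
score = (conjecture * premise).sum(dim=-1)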


Similar Papers

Attention Can Reflect Syntactic Structure (If You Let It)
Vinit Ravishankar, Artur Kulmizev, Mostafa Abdou, Anders Søgaard, Joakim Nivre
Language Models for Lexical Inference in Context
Martin Schmitt, Hinrich Schütze
Language Modelling as a Multi-Task Problem
Lucas Weber, Jaap Jumelet, Elia Bruni, Dieuwke Hupkes
Changing the Mind of Transformers for Topically-Controllable Language Generation
Haw-Shiuan Chang, Jiaming Yuan, Mohit Iyyer, Andrew McCallum