Two Training Strategies for Improving Relation Extraction over Universal Graph

Qin Dai, Naoya Inoue, Ryo Takahashi, Kentaro Inui

Information Extraction and Text Mining (Long Paper)

Gather-3A: Apr 23 (13:00-15:00 UTC)

Abstract: This paper explores how Distantly Supervised Relation Extraction (DS-RE) can benefit from the use of a Universal Graph (UG), the combination of a Knowledge Graph (KG) and a large-scale text collection. A straightforward extension of a current state-of-the-art neural model for DS-RE with a UG may lead to degradation in performance. We first report that this degradation is associated with the difficulty of learning a UG and then propose two training strategies: (1) Path Type Adaptive Pretraining, which sequentially trains the model on different types of UG paths so as to prevent reliance on a single type of UG path; and (2) a Complexity Ranking Guided Attention mechanism, which restricts the attention span according to the complexity of a UG path so as to force the model to extract features not only from simple UG paths but also from complex ones. Experimental results on both a biomedical dataset and the NYT10 dataset demonstrate the robustness of our methods, which achieve a new state-of-the-art result on NYT10. The code and datasets used in this paper are available at https://github.com/baodaiqin/UGDSRE.
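
To make the two strategies more concrete, the following is a minimal PyTorch-style sketch of one plausible realization. All names here (complexity_band_mask, ranked_path_attention, path_type_adaptive_pretraining, loaders_by_type) are illustrative assumptions for exposition, not the authors' implementation; see the repository above for the actual code.

    import torch
    import torch.nn.functional as F

    def complexity_band_mask(complexities: torch.Tensor, lo: int, hi: int) -> torch.Tensor:
        """Boolean mask selecting UG paths whose complexity rank falls in [lo, hi).

        complexities: (num_paths,) score per path, e.g. the number of
        textual (non-KG) edges it contains; rank 0 is the simplest path.
        """
        ranks = complexities.argsort().argsort()  # rank of each path by complexity
        return (ranks >= lo) & (ranks < hi)

    def ranked_path_attention(path_repr, query, complexities, lo, hi):
        """Attention over UG paths restricted to a complexity band.

        Sweeping the band [lo, hi) from simple to complex paths during
        training forces the model to extract features from both, rather
        than attending only to the easiest paths. At least one path must
        fall inside the band, or the softmax degenerates.
        """
        scores = path_repr @ query                        # (num_paths,)
        mask = complexity_band_mask(complexities, lo, hi)
        scores = scores.masked_fill(~mask, float("-inf"))
        weights = F.softmax(scores, dim=-1)               # attention weights
        return weights @ path_repr                        # aggregated path feature

    def path_type_adaptive_pretraining(model, loaders_by_type, optimizer, epochs_per_type=1):
        """Sequentially pretrain on one UG path type at a time
        (e.g. KG-only paths, then textual paths, then hybrid paths)
        so the model cannot come to rely on a single path type."""
        for path_type, loader in loaders_by_type.items():
            for _ in range(epochs_per_type):
                for batch in loader:
                    loss = model(batch)  # assumed to return a scalar loss
                    optimizer.zero_grad()
                    loss.backward()
                    optimizer.step()

The design point in both sketches is the same: control which evidence the model sees at each stage of training, either by path type or by path complexity, to discourage over-reliance on the easiest signal.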

Similar Papers

Bootstrapping Relation Extractors using Syntactic Search by Examples
Matan Eyal, Asaf Amrami, Hillel Taub-Tabib, Yoav Goldberg
RelWalk - A Latent Variable Model Approach to Knowledge Graph Embedding
Danushka Bollegala, Huda Hakami, Yuichi Yoshida, Ken-ichi Kawarabayashi
Cross-lingual Entity Alignment with Incidental Supervision
Muhao Chen, Weijia Shi, Ben Zhou, Dan Roth
Complex Question Answering on knowledge graphs using machine translation and multi-task learning
Saurabh Srivastava, Mayur Patidar, Sudip Chowdhury, Puneet Agarwal, Indrajit Bhattacharya, Gautam Shroff