SF-QA: Simple and Fair Evaluation Library for Open-domain Question Answering

Xiaopeng Lu, Kyusong Lee, Tiancheng Zhao

Demo Paper

Gather-1F: Apr 21 (13:00-15:00 UTC)


Abstract: Although open-domain question answering (QA) has drawn great attention in recent years, building a full system requires large amounts of resources, and previous results are often difficult to reproduce due to complex configurations. In this paper, we introduce SF-QA: a simple and fair evaluation framework for open-domain QA. SF-QA modularizes the pipelined open-domain QA system, making the task easily accessible and reproducible for research groups without sufficient computing resources. The proposed evaluation framework is publicly available, and anyone can contribute to the code and evaluations.
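To make the pipelined architecture concrete, below is a minimal, self-contained Python sketch of evaluating a two-stage (retriever, then reader) open-domain QA system with exact-match accuracy. All names here (Retriever, Reader, evaluate, the toy corpus) are illustrative assumptions for exposition, not SF-QA's actual API.

```python
# Hypothetical sketch of a modular retriever -> reader QA evaluation.
# Class/function names and the toy data are assumptions, not SF-QA's API.

from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    question: str
    answer: str


class Retriever:
    """First stage: rank corpus passages against the question."""

    def __init__(self, corpus: List[str]):
        self.corpus = corpus

    def retrieve(self, question: str, k: int = 1) -> List[str]:
        # Toy lexical-overlap scorer standing in for BM25 or dense retrieval.
        q_tokens = set(question.lower().split())
        ranked = sorted(
            self.corpus,
            key=lambda p: len(q_tokens & set(p.lower().split())),
            reverse=True,
        )
        return ranked[:k]


class Reader:
    """Second stage: extract an answer from the retrieved passages."""

    def read(self, question: str, passages: List[str]) -> str:
        # Toy extractor: return the last word of the top passage.
        return passages[0].split()[-1] if passages else ""


def evaluate(retriever: Retriever, reader: Reader, dataset: List[Example]) -> float:
    """Exact-match accuracy of the end-to-end pipeline."""
    correct = 0
    for ex in dataset:
        passages = retriever.retrieve(ex.question)
        prediction = reader.read(ex.question, passages)
        correct += int(prediction.lower() == ex.answer.lower())
    return correct / len(dataset)


if __name__ == "__main__":
    corpus = ["The capital of France is Paris", "Mount Everest is in Nepal"]
    dataset = [Example("What is the capital of France ?", "Paris")]
    print(f"Exact match: {evaluate(Retriever(corpus), Reader(), dataset):.2f}")
```

Because the two stages sit behind a fixed evaluate interface, swapping in a different retriever or reader re-runs the identical evaluation, which is the kind of fair, like-for-like comparison a modular framework enables.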

Similar Papers

NoiseQA: Challenge Set Evaluation for User-Centric Question Answering
Abhilasha Ravichander, Siddharth Dalmia, Maria Ryskina, Florian Metze, Eduard Hovy, Alan W Black
Which is Better for Deep Learning: Python or MATLAB? Answering Comparative Questions in Natural Language
Viktoriia Chekalina, Alexander Bondarenko, Chris Biemann, Meriem Beloucif, Varvara Logacheva, Alexander Panchenko
Complementary Evidence Identification in Open-Domain Question Answering
Xiangyang Mou, Mo Yu, Shiyu Chang, Yufei Feng, Li Zhang, Hui Su