Query Generation for Multimodal Documents

Kyungho Kim, Kyungjae Lee, Seung-won Hwang, Young-In Song, Seungwook Lee

Information Retrieval, Search and Question Answering (Long Paper)

Zoom-7B: Apr 23 (08:00-09:00 UTC)
Gather-3A: Apr 23 (13:00-15:00 UTC)


Abstract: This paper studies the problem of generating likely queries for multimodal documents with images. Our application scenario is enabling efficient "first-stage retrieval" of relevant documents, by attaching generated queries to documents before indexing. We can then index this expanded text to efficiently narrow down to candidate matches using an inverted index, so that expensive reranking can follow. Our evaluation results show that our proposed multimodal representation meaningfully improves relevance ranking. More importantly, our framework achieves the state of the art in first-stage retrieval scenarios.
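To make the pipeline in the abstract concrete, here is a minimal sketch of query-generation-based document expansion for first-stage retrieval. It is not the authors' implementation: generate_queries is a hypothetical stub standing in for the paper's learned multimodal generator, and the inverted index is a toy term-to-document map.

```python
from collections import defaultdict

def generate_queries(doc_text, image_caption):
    """Stub for the paper's multimodal query generator.

    The actual system uses a learned generator conditioned on both the
    document text and its image; this toy heuristic just pairs leading
    text terms with the image caption so the pipeline runs end to end.
    """
    head_terms = doc_text.lower().split()[:3]
    return [" ".join(head_terms), image_caption.lower()]

def expand_and_index(corpus):
    """Attach generated queries to each document before indexing, then
    build an inverted index over the expanded text (term -> doc ids)."""
    index = defaultdict(set)
    for doc_id, (text, caption) in corpus.items():
        expanded = text + " " + " ".join(generate_queries(text, caption))
        for term in expanded.lower().split():
            index[term].add(doc_id)
    return index

def first_stage_retrieve(index, query):
    """Cheap candidate retrieval: union of postings for query terms.
    An expensive reranker would then score only these candidates."""
    candidates = set()
    for term in query.lower().split():
        candidates |= index.get(term, set())
    return candidates

# Toy usage: two documents, each with text and an image caption.
corpus = {
    "d1": ("recipe for fluffy pancakes", "stack of pancakes with syrup"),
    "d2": ("guide to mountain hiking gear", "hiker on a snowy ridge"),
}
index = expand_and_index(corpus)
print(first_stage_retrieve(index, "pancakes with syrup"))  # {'d1'}
```

Note that the expansion lets image-only content (here, the caption terms) match keyword queries at indexing time, which is what makes the first-stage inverted-index lookup effective for multimodal documents.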
NOTE: The video may display the authors in a random order. The correct author list is at the top of this page.


Similar Papers

Discourse-Aware Unsupervised Summarization for Long Scientific Documents
Yue Dong, Andrei Mircea Romascanu, Jackie Chi Kit Cheung
DOCENT: Learning Self-Supervised Entity Representations from Large Document Collections
Yury Zemlyanskiy, Sudeep Gandhe, Ruining He, Bhargav Kanagal, Anirudh Ravula, Juraj Gottweis, Fei Sha, Ilya Eckstein