Exploiting Multimodal Reinforcement Learning for Simultaneous Machine Translation

Julia Ive, Andy Mingren Li, Yishu Miao, Ozan Caglayan, Pranava Madhyastha, Lucia Specia

Language Grounding to Vision, Robotics and Beyond (Long paper)

Gather-1E: Apr 21 (13:00-15:00 UTC)


Abstract: This paper addresses the problem of simultaneous machine translation (SiMT) by exploring two main concepts: (a) adaptive policies that learn a good trade-off between high translation quality and low latency; and (b) visual information to support this process, since visual context may be available before the textual input is produced. To this end, we propose a multimodal approach to simultaneous machine translation using reinforcement learning, with strategies to integrate visual and textual information in both the agent and the environment. We explore how different types of visual information and integration strategies affect the quality and latency of simultaneous translation models, and demonstrate that visual cues lead to higher quality while keeping latency low.
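To make the agent/environment setup concrete, the sketch below shows one possible shape of such a policy: at each step the agent decides whether to READ another source token or WRITE a target token, conditioning the decision on both a textual state and image features. This is an illustrative assumption, not the authors' implementation; all module names, dimensions, and the concatenation-based fusion of visual and textual information are hypothetical.

```python
import torch
import torch.nn as nn

READ, WRITE = 0, 1  # actions: wait for more source input vs. emit a target token

class MultimodalSiMTAgent(nn.Module):
    """Minimal READ/WRITE policy with visual context (illustrative sketch)."""

    def __init__(self, text_dim=512, visual_dim=2048, hidden_dim=512):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)  # project image features
        self.state_rnn = nn.GRUCell(text_dim, hidden_dim)     # track the translation state
        self.policy_head = nn.Linear(2 * hidden_dim, 2)       # READ/WRITE logits

    def forward(self, token_emb, visual_feat, hidden):
        # Update the textual state with the latest read/written token embedding.
        hidden = self.state_rnn(token_emb, hidden)
        # Fuse visual context with the textual state by concatenation.
        vis = torch.tanh(self.visual_proj(visual_feat))
        logits = self.policy_head(torch.cat([hidden, vis], dim=-1))
        return torch.distributions.Categorical(logits=logits), hidden

# Usage: one decision step.
agent = MultimodalSiMTAgent()
hidden = torch.zeros(1, 512)
dist, hidden = agent(torch.randn(1, 512), torch.randn(1, 2048), hidden)
action = dist.sample()  # 0 = READ next source token, 1 = WRITE next target token
# In training, a REINFORCE-style update would reward translation-quality gains
# and penalize latency, yielding the quality/latency trade-off the abstract describes.
```

Note that because visual features are available from the start, the policy can in principle act before any source text has been read, which is the motivation given in the abstract for adding the visual modality.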


Similar Papers

Exploring Supervised and Unsupervised Rewards in Machine Translation
Julia Ive, Zixu Wang, Marina Fomicheva, Lucia Specia
Cross-lingual Visual Pre-training for Multimodal Machine Translation
Ozan Caglayan, Menekse Kuyu, Mustafa Sercan Amac, Pranava Madhyastha, Erkut Erdem, Aykut Erdem, Lucia Specia
Augmenting Transformers with KNN-Based Composite Memory for Dialog
Angela Fan, Claire Gardent, Chloe Braud, Antoine Bordes