MIDAS: A Dialog Act Annotation Scheme for Open Domain Human-Machine Spoken Conversations

Dian Yu, Zhou Yu

Dialogue and Interactive Systems (Long Paper)

Zoom-5A: Apr 22 (12:00-13:00 UTC)
Gather-3B: Apr 23 (13:00-15:00 UTC)


Abstract: Dialog act prediction in open-domain conversations is an essential language comprehension task for both dialog system building and discourse analysis. Previous dialog act schemes, such as SWBD-DAMSL, were designed mainly for discourse analysis in human-human conversations. In this paper, we present MIDAS (Machine Interaction Dialog Act Scheme), a dialog act annotation scheme targeted at open-domain human-machine conversations. MIDAS is designed to help machines better understand their human partners. It has a hierarchical structure and supports multi-label annotation. We collected and annotated a large open-domain human-machine spoken conversation dataset of 24K utterances. To validate the scheme, we leveraged transfer learning to train a multi-label dialog act prediction model, which reached an F1 score of 0.79.
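The multi-label prediction setup described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the encoder checkpoint (bert-base-uncased), the small placeholder dialog act label set, and the 0.5 decision threshold are assumptions made for the example, which relies on the Hugging Face transformers and PyTorch libraries.

import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Placeholder subset of dialog act labels, for illustration only.
DIALOG_ACTS = ["statement", "open_question", "yes_no_question",
               "pos_answer", "command", "opinion"]

class MultiLabelDAClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", num_labels=len(DIALOG_ACTS)):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        # Use the [CLS] token representation as the utterance embedding.
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.classifier(hidden)  # raw logits, one per dialog act

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiLabelDAClassifier()
loss_fn = nn.BCEWithLogitsLoss()  # independent sigmoid per label -> multi-label

# Toy example: one utterance carrying two gold labels.
batch = tokenizer(["Do you like sci-fi movies?"], return_tensors="pt",
                  padding=True, truncation=True)
gold = torch.zeros(1, len(DIALOG_ACTS))
gold[0, DIALOG_ACTS.index("yes_no_question")] = 1.0
gold[0, DIALOG_ACTS.index("opinion")] = 1.0

logits = model(batch["input_ids"], batch["attention_mask"])
loss = loss_fn(logits, gold)
loss.backward()  # an optimizer step would follow during fine-tuning

# At inference time, every label whose sigmoid probability exceeds a
# threshold (0.5 here, an arbitrary choice) is predicted for the utterance.
predicted = [act for act, p in zip(DIALOG_ACTS, torch.sigmoid(logits)[0])
             if p > 0.5]
print(predicted)

In practice, the full MIDAS tag set and a held-out evaluation with a multi-label F1 metric would replace the toy labels and single example used here.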


Similar Papers

Domain Expert Platform for Goal-Oriented Dialog Collection
Didzis Goško, Arturs Znotins, Inguna Skadina, Normunds Gruzitis, Gunta Nešpore-Bērzkalne
Zero-shot Generalization in Dialog State Tracking through Generative Question Answering
Shuyang Li, Jin Cao, Mukund Sridhar, Henghui Zhu, Shang-Wen Li, Wael Hamza, Julian McAuley
The Gutenberg Dialogue Dataset
Richard Csaky, Gábor Recski
ChainCQG: Flow-Aware Conversational Question Generation
Jing Gu, Mostafa Mirshekari, Zhou Yu, Aaron Sisto