In this talk, we consider question answering from documents and present a novel multi-turn task extension, composed of a sequence of questions that build on each other. To automatically answer questions from the provided text, models must (1) efficiently read the document and (2) grasp the intent of each question from the interaction. To address these challenges, I will first describe a coarse-to-fine approach that achieves a significant speed-up (up to 6.7x) while maintaining or even improving accuracy. Second, I will present a new large-scale dataset, Question Answering in Context (QuAC), in which we simulate multi-turn information-seeking dialogs. Lastly, I will introduce a new model for such QA dialogs. Our model encodes dialog history more effectively by incorporating the intermediate layers used to answer previous questions via an alternating parallel structure. Together, these works expand the scope of questions that can be answered by the next generation of machine reading systems.
Eunsol Choi is a PhD candidate in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, advised by Yejin Choi and Luke Zettlemoyer. Her research focuses on natural language processing, specifically applying machine learning to recover semantics from text. She develops techniques for extracting information about entities from text and for automatically answering natural language questions using large-scale databases or unstructured text. Prior to UW, she studied mathematics and computer science at Cornell University. She is currently supported by a Facebook Fellowship.