Induce, Edit, Retrieve: Language Grounded Multimodal Schema for Instructional Video Retrieval
Published on arXiv, 2021
Recommended citation: Yang, Yue, Joongwon Kim, Artemis Panagopoulou, Mark Yatskar, and Chris Callison-Burch. "Induce, edit, retrieve: Language grounded multimodal schema for instructional video retrieval." arXiv preprint arXiv:2111.09276 (2021)
Abstract. Understanding what sequence of steps is needed to complete a goal can help artificial intelligence systems reason about human activities. Past work in NLP has examined the task of goal-step inference for text. We introduce the visual analogue. We propose the Visual Goal-Step Inference (VGSI) task, where a model is given a textual goal and must choose which of four images represents a plausible step towards that goal. With a new dataset harvested from wikiHow consisting of 772,277 images representing human actions, we show that our task is challenging for state-of-the-art multimodal models. Moreover, the multimodal representation learned from our data can be effectively transferred to other datasets like HowTo100M, increasing the VGSI accuracy by 15–20%. Our task will facilitate multimodal reasoning about procedural events.
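For illustration, here is a minimal sketch of how a VGSI-style 4-way multiple-choice evaluation could be run with an off-the-shelf text-image model (CLIP via Hugging Face transformers). The goal string, image paths, and the `choose_step` helper are assumptions made for the example, not the authors' code or dataset.

```python
# Minimal sketch of 4-way goal-to-step matching in the VGSI setting.
# Assumes a pretrained CLIP checkpoint from Hugging Face transformers;
# the goal and candidate image files below are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def choose_step(goal: str, candidate_images: list) -> int:
    """Return the index of the image scored as the most plausible
    step toward the textual goal (4-way multiple choice)."""
    inputs = processor(text=[goal], images=candidate_images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_text  # shape: (1, num_images)
    return int(logits.argmax(dim=-1).item())

# Example: four candidate frames for the goal "bake a loaf of bread".
images = [Image.open(p) for p in ["a.jpg", "b.jpg", "c.jpg", "d.jpg"]]
print(choose_step("bake a loaf of bread", images))
```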