Visual Goal-Step Inference using wikiHow

Published in EMNLP 2021 (Oral)

Recommended citation: Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, Chris Callison-Burch (2021). "Visual Goal-Step Inference using wikiHow." EMNLP 2021.

Abstract. Understanding what sequence of steps are needed to complete a goal can help artificial intelligence systems reason about human activities. Past work in NLP has examined the task of goal-step inference for text. We introduce the visual analogue. We propose the Visual Goal-Step Inference (VGSI) task, where a model is given a textual goal and must choose which of four images represents a plausible step towards that goal. With a new dataset harvested from wikiHow consisting of 772,277 images representing human actions, we show that our task is challenging for state-of-the-art multimodal models. Moreover, the multimodal representation learned from our data can be effectively transferred to other datasets like HowTo100M, increasing the VGSI accuracy by 15-20%. Our task will facilitate multimodal reasoning about procedural events.
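For concreteness, here is a minimal sketch of the four-way multiple-choice setup described in the abstract. It assumes an off-the-shelf image-text similarity model (CLIP via Hugging Face transformers) purely as an illustration, not the models evaluated in the paper, and the image file names are hypothetical.

```python
# Illustrative sketch of the VGSI evaluation format (not the authors' model):
# score a textual goal against four candidate step images with an off-the-shelf
# image-text model and pick the highest-scoring image.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def predict_step(goal: str, image_paths: list[str]) -> int:
    """Return the index (0-3) of the image most plausible as a step toward `goal`."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[goal], images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_text has shape (1, num_images): similarity of the goal to each candidate
    return int(outputs.logits_per_text.argmax(dim=-1).item())

# Example usage (hypothetical file names):
# choice = predict_step("How to make a paper airplane",
#                       ["step_a.jpg", "step_b.jpg", "step_c.jpg", "step_d.jpg"])
```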

Download paper here