Publications

Induce, Edit, Retrieve: Language Grounded Multimodal Schema for Instructional Video Retrieval

arXiv, 2021

Abstract. Schemata are structured representations of complex tasks that can aid artificial intelligence by allowing models to break down complex tasks into intermediate steps. We propose a novel system that induces schemata from web videos and generalizes them to capture unseen tasks with the goal of improving video retrieval performance. Our system proceeds in three major phases: (1) Given a task with related videos, we construct an initial schema for the task using a joint video-text model to match video segments with text representing steps from wikiHow; (2) We generalize schemata to unseen tasks by leveraging language models to edit the text within existing schemata. Through generalization, our schemata can cover a more extensive range of tasks with a small amount of learning data; (3) We conduct zero-shot instructional video retrieval with the unseen task names as the queries. Our schema-guided approach outperforms existing methods for video retrieval, and we demonstrate that the schemata induced by our system are better than those generated by other models.
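As a rough illustration of the induction phase (step 1), the sketch below scores each candidate wikiHow step against a task's video segments in a shared embedding space and keeps the best-matching steps as the initial schema. The embedding functions, similarity scoring, and schema size are placeholders for illustration, not the actual components used in the paper.

```python
import numpy as np

def induce_schema(video_segments, wikihow_steps, embed_text, embed_video, top_k=8):
    """Build an initial schema for one task by matching wikiHow step texts
    to video segments in a shared embedding space (hypothetical helpers)."""
    step_vecs = np.stack([embed_text(s) for s in wikihow_steps])   # (S, D)
    seg_vecs = np.stack([embed_video(v) for v in video_segments])  # (V, D)

    # Cosine similarity between every step and every video segment.
    step_vecs /= np.linalg.norm(step_vecs, axis=1, keepdims=True)
    seg_vecs /= np.linalg.norm(seg_vecs, axis=1, keepdims=True)
    sim = step_vecs @ seg_vecs.T                                   # (S, V)

    # Score each step by its best-matching segment; keep the top_k steps
    # in their original wikiHow order as the schema.
    step_scores = sim.max(axis=1)
    keep = np.argsort(-step_scores)[:top_k]
    return [wikihow_steps[i] for i in sorted(keep)]
```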

Recommended citation: Yang, Yue, et al. "Induce, Edit, Retrieve: Language Grounded Multimodal Schema for Instructional Video Retrieval." arXiv preprint arXiv:2111.09276 (2021).

Visual Goal-Step Inference using wikiHow

EMNLP 2021 (Oral), 2021

Abstract. Understanding what sequence of steps is needed to complete a goal can help artificial intelligence systems reason about human activities. Past work in NLP has examined the task of goal-step inference for text. We introduce the visual analogue. We propose the Visual Goal-Step Inference (VGSI) task, where a model is given a textual goal and must choose which of four images represents a plausible step towards that goal. With a new dataset harvested from wikiHow consisting of 772,277 images representing human actions, we show that our task is challenging for state-of-the-art multimodal models. Moreover, the multimodal representation learned from our data can be effectively transferred to other datasets like HowTo100M, increasing the VGSI accuracy by 15-20%. Our task will facilitate multimodal reasoning about procedural events.
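The task format is straightforward to reproduce as a zero-shot baseline: embed the goal text and the four candidate step images with an off-the-shelf joint image-text model and pick the highest-scoring image. The sketch below uses a generic public CLIP checkpoint from Hugging Face purely to illustrate the setup; it is not the fine-tuned multimodal model evaluated in the paper, and the goal and image paths are hypothetical.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Generic public checkpoint, used only to illustrate the task format.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def vgsi_predict(goal: str, image_paths: list) -> int:
    """Return the index (0-3) of the image most plausible as a step toward the goal."""
    images = [Image.open(p) for p in image_paths]
    inputs = processor(text=[goal], images=images, return_tensors="pt", padding=True)
    logits = model(**inputs).logits_per_text  # shape (1, 4): goal vs. each candidate image
    return int(logits.argmax(dim=-1).item())

# Example usage with hypothetical candidate images:
# pred = vgsi_predict("bake a loaf of bread", ["a.jpg", "b.jpg", "c.jpg", "d.jpg"])
```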

Recommended citation: Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, Chris Callison-Burch (2021). "Visual Goal-Step Inference using wikiHow." EMNLP 2021.

Self-Supervised Optical Flow with Spiking Neural Networks and Event Based Cameras

IROS 2021, 2021

Abstract. Optical flow can be leveraged in robotic systems for obstacle detection, where low-latency solutions are critical in highly dynamic settings. While event-based cameras have changed the dominant paradigm of visual sensing by encoding stimuli into spike trains, offering low bandwidth and latency, events are still processed with traditional convolutional networks on GPUs, thus defeating the promise of efficient, low-capacity, low-power processing that inspired the design of event sensors. In this work, we introduce a shallow spiking neural network for the computation of optical flow, consisting of Leaky Integrate-and-Fire neurons. Optical flow is predicted as the synthesis of motion orientation-selective channels. Learning is accomplished by Backpropagation Through Time. We present promising results on events recorded in real “in the wild” scenes, while using only a small fraction of the energy consumed by CNNs deployed on GPUs.
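For readers unfamiliar with the training setup, the sketch below shows a single Leaky Integrate-and-Fire layer in PyTorch with a surrogate-gradient spike function, which is what makes Backpropagation Through Time usable on non-differentiable spiking activations. The decay constant, firing threshold, and surrogate width are illustrative values, not the parameters used in the paper.

```python
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate gradient in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid-style surrogate; the width (10.0) is an illustrative choice.
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

class LIFLayer(nn.Module):
    """Leaky Integrate-and-Fire neurons driven by a linear projection of the input."""
    def __init__(self, in_features, out_features, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.decay, self.threshold = decay, threshold

    def forward(self, x_seq):  # x_seq: (time, batch, in_features)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features, device=x_seq.device)
        spikes = []
        for x_t in x_seq:                      # unroll over time so BPTT can flow through
            v = self.decay * v + self.fc(x_t)  # leaky integration of input current
            s = SurrogateSpike.apply(v - self.threshold)
            v = v * (1.0 - s)                  # hard reset of neurons that fired
            spikes.append(s)
        return torch.stack(spikes)             # (time, batch, out_features)
```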

Recommended citation: Kenneth Chaney, Artemis Panagopoulou, Chankyu Lee, Kaushik Roy, and Kostas Daniilidis (2021). "Self-Supervised Optical Flow with Spiking Neural Networks and Event Based Cameras." IROS 2021.