Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Metaphor and Entailment: Looking at metaphors through the lens of textual entailment.

Not Published, 2020

Abstract. Metaphors are intriguing elements of human language that are surprisingly prevalent in everyday communication. In fact, studies show that the human brain processes conventional metaphors at the same speed as literal language. Nevertheless, the computational linguistics literature consistently treats metaphor as a separate domain from literal language. This study investigates the potential of constructing systems that can jointly handle metaphoric and literal sentences by leveraging the newfound capabilities of deep learning systems. Following earlier work, we narrow the scope of the report to deep learning systems fine-tuned on the task of textual entailment (TE). We argue that TE is a task naturally suited to the interpretation of metaphoric language. We show that TE systems can improve significantly on metaphoric language when fine-tuned on a small dataset with metaphoric premises. Even though the improvement on metaphors is typically accompanied by a drop in performance on the original dataset, we note that auto-regressive models seem to show a smaller drop on literal examples than other types of models.

Recommended citation: Artemis Panagopoulou, Mitch Marcus (2020). "Metaphor and Entailment: Looking at metaphors through the lens of textual entailment." Not Published.
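
As a rough illustration of the fine-tuning setup described above, the sketch below runs one training step of a generic pre-trained classifier on an entailment pair with a metaphoric premise. The model choice (roberta-base), the label scheme, and the example pair are all assumptions made for illustration, not the study's actual configuration.

```python
# Illustrative sketch, not the study's code: one fine-tuning step of a
# generic pre-trained classifier on a premise/hypothesis pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=3  # entailment / neutral / contradiction
)

premise = "The lawyer shredded the witness's story."        # metaphoric premise
hypothesis = "The lawyer discredited the witness's account."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
labels = torch.tensor([0])  # assume label 0 = entailment in this scheme

loss = model(**inputs, labels=labels).loss
loss.backward()  # a real run wraps this in an optimizer loop over the dataset
```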

Posts

Future Blog Post

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
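
For reference, the relevant setting would look something like this in the site's Jekyll _config.yml (the rest of the file is omitted):

```yaml
# _config.yml (Jekyll site configuration)
future: false  # do not build posts whose date is in the future
```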

Blog Post number 4

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Publications

Self-Supervised Optical Flow with Spiking Neural Networks and Event Based Cameras

IROS 2021, 2021

Abstract. Optical flow can be leveraged in robotic systems for obstacle detection, where low-latency solutions are critical in highly dynamic settings. While event-based cameras have changed the dominant paradigm of sensing by encoding stimuli into spike trains, offering low bandwidth and latency, events are still processed with traditional convolutional networks on GPUs, thus defeating the promise of efficient, low-capacity, low-power processing that inspired the design of event sensors. In this work, we introduce a shallow spiking neural network for the computation of optical flow, consisting of Leaky Integrate-and-Fire neurons. Optical flow is predicted as the synthesis of motion-orientation-selective channels. Learning is accomplished by Backpropagation Through Time. We present promising results on events recorded in real "in the wild" scenes, with the capability to use only a small fraction of the energy consumed by CNNs deployed on GPUs.

Recommended citation: Kenneth Chaney, Artemis Panagopoulou, Chankyu Lee, Kaushik Roy, and Kostas Daniilidis (2021). "Self-Supervised Optical Flow with Spiking Neural Networks and Event Based Cameras." IROS 2021.
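
To make the network's building block concrete, here is a minimal sketch of discrete-time Leaky Integrate-and-Fire dynamics in plain NumPy. The decay and threshold values are illustrative assumptions; this is not the paper's implementation.

```python
# Minimal sketch of discrete-time Leaky Integrate-and-Fire dynamics.
import numpy as np

def lif_step(v, input_current, decay=0.9, threshold=1.0):
    """One timestep: leak, integrate, fire, reset."""
    v = decay * v + input_current             # leaky integration of input
    spikes = (v >= threshold).astype(float)   # fire where threshold is crossed
    v = v * (1.0 - spikes)                    # hard reset of spiking neurons
    return v, spikes

# Toy layer of 4 neurons driven by random stand-ins for event-camera input.
v = np.zeros(4)
for t in range(5):
    v, spikes = lif_step(v, np.random.rand(4) * 0.5)
    print(t, spikes)
```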

Visual Goal-Step Inference using wikiHow

EMNLP 2021 (Oral), 2021

Abstract. Understanding what sequence of steps is needed to complete a goal can help artificial intelligence systems reason about human activities. Past work in NLP has examined the task of goal-step inference for text. We introduce the visual analogue. We propose the Visual Goal-Step Inference (VGSI) task, where a model is given a textual goal and must choose which of four images represents a plausible step towards that goal. With a new dataset harvested from wikiHow consisting of 772,277 images representing human actions, we show that our task is challenging for state-of-the-art multimodal models. Moreover, the multimodal representation learned from our data can be effectively transferred to other datasets like HowTo100M, increasing the VGSI accuracy by 15-20%. Our task will facilitate multimodal reasoning about procedural events.

Recommended citation: Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, Chris Callison-Burch (2021). "Visual Goal-Step Inference using wikiHow." EMNLP 2021.
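
A hedged sketch of what VGSI evaluation might look like with an off-the-shelf image-text model: CLIP is an assumption here (the paper evaluates several multimodal models), and the four candidate image files are hypothetical.

```python
# Hedged sketch of the VGSI setup: score four candidate step images
# against a textual goal with a generic image-text model.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

goal = "how to make a kite"                    # textual goal (illustrative)
images = [Image.open(f"candidate_{i}.jpg") for i in range(4)]  # hypothetical files

inputs = processor(text=[goal], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_text   # goal-to-image similarities

print("predicted step image:", logits.argmax(dim=-1).item())
```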

Induce, Edit, Retrieve: Language Grounded Multimodal Schema for Instructional Video Retrieval

arXiv, 2021

Abstract. Schemata are structured representations of complex tasks that can aid artificial intelligence by allowing models to break complex tasks down into intermediate steps. We propose a novel system that induces schemata from web videos and generalizes them to capture unseen tasks, with the goal of improving video retrieval performance. Our system proceeds in three major phases: (1) given a task with related videos, we construct an initial schema for the task using a joint video-text model to match video segments with text representing steps from wikiHow; (2) we generalize schemata to unseen tasks by leveraging language models to edit the text within existing schemata, allowing the schemata to cover a more extensive range of tasks with a small amount of learning data; (3) we conduct zero-shot instructional video retrieval with the unseen task names as queries. Our schema-guided approach outperforms existing methods for video retrieval, and we demonstrate that the schemata induced by our system are better than those generated by other models.

Recommended citation: Yue Yang, et al. (2021). "Induce, Edit, Retrieve: Language Grounded Multimodal Schema for Instructional Video Retrieval." arXiv preprint arXiv:2111.09276.
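
Phase (3) can be pictured as nearest-neighbor search in a shared video-text embedding space. The sketch below rests on strong assumptions: the unseen task name and the candidate videos are already embedded in that shared space, and the joint video-text model that produces the embeddings is not shown.

```python
# Sketch of the zero-shot retrieval phase: rank candidate videos by cosine
# similarity between the task-name query embedding and each video embedding.
import numpy as np

def retrieve(query_emb, video_embs, k=3):
    """Return indices of the k videos most similar to the task-name query."""
    q = query_emb / np.linalg.norm(query_emb)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    return np.argsort(v @ q)[::-1][:k]

# Toy data: a 512-d query against 10 candidate video embeddings.
rng = np.random.default_rng(0)
print(retrieve(rng.normal(size=512), rng.normal(size=(10, 512))))
```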

Teaching

CIS 262: Teaching Assistant

Undergraduate course, University of Pennsylvania, Computer and Information Science, 2018

  • Course: (CIS 262) Automata, Computability, and Complexity
  • Instructor: Dr. Nima Roohi
  • Semesters: Spring 2018

MCIT 592: Head Teaching Assistant

Graduate course, University of Pennsylvania, Computer and Information Science, 2018

  • Course: (MCIT 592) Mathematical Foundations of Computer Science
  • Instructor: Prof. Val Tannen
  • Semesters: Summer 2018-Spring 2019

CIS 521: Teaching Assistant

Graduate course, University of Pennsylvania, Computer and Information Science, 2021

  • Course: (CIS 521) Introduction to Artificial Intelligence
  • Instructor: Prof. Chris Callison-Burch
  • Semesters: Fall 2021

Coding Club: Instructor

Workshop for Elementary School Students, Kohelet-Yeshiva, 2021

  • Course: (Coding Club) Introduction to Python
  • Semesters: Fall 2021, Spring 2022

CIS 700: Teaching Assistant

Graduate course, University of Pennsylvania, Computer and Information Science, 2022

  • Course: (CIS 700) Interactive Fiction and Text (Story) Generation
  • Instructors: Prof. Chris Callison-Burch, Dr. Lara Martin
  • Semesters: Spring 2022