Artemis Panagopoulou

Email · GitHub · LinkedIn · Google Scholar

I am a PhD student at the University of Pennsylvania working at the intersection of Natural Language Processing and Computer Vision under the supervision of Professor Chris Callison-Burch and Professor Mark Yatskar. I am currently also a student researcher at Google (Augmented Reality), and was previously a research intern at Salesforce AI.

My research focuses on advancing multimodal AI by integrating diverse modalities such as images, audio, video, text, and 3D. I address challenges in multimodal integration, benchmark development, and interpretability to build trustworthy models. My mission is to craft models that can see, listen, and comprehend with perceptual coherence: models that are as robust as they are insightful, and as interpretable as they are performant, bringing us closer to a future where machines are not just tools but reliable, insightful collaborators.

In addition to my academic pursuits, I have a strong passion for education. As a Teaching Assistant at the University of Pennsylvania, and through my community teaching experiences, I strive to challenge students with the beautiful and mentally stimulating concepts of mathematics, logic, and computer science, while also breaking down mental barriers created by past negative experiences.

news

Aug 21, 2025 📢 Announcement: Our paper Contra4: Evaluating Contrastive Cross-Modal Reasoning in Audio, Video, Image, and 3D has been accepted to EMNLP 2025! 🎉🎉
Feb 25, 2025 📢 Announcement: Our paper ViUniT: Visual Unit Tests for More Robust Visual Programming has been accepted to CVPR 2025! 🎉🎉
Aug 29, 2024 📢 Announcement: Our paper Evaluating Vision-Language Models on Bistable Images has received the best paper award at CMCL 2024! 🎉🏆
Aug 17, 2024 📢 Announcement: Our paper X-InstructBLIP: A Framework for Aligning X-Modal Instruction-Aware Representations to LLMs and Emergent Cross-modal Reasoning has been accepted to ECCV 2024! 🎉
Mar 17, 2024 📢 Announcement: Our paper ULIP-2 has been accepted to CVPR 2024! 🎉