Artemis Panagopoulou

Email GitHub LinkedIn Google Scholar

I am a third-year PhD student at the University of Pennsylvania working at the intersection of Natural Language Processing and Computer Vision, under the supervision of Professor Chris Callison-Burch and Professor Mark Yatskar.

My interest lies in the study of knowledge and its acquisition, encoding, and communication. I recognize that knowledge encompasses more than just language, especially for procedural information, and therefore my research explores the importance of multimodality in knowledge encoding and transmission. I examine the impact of sensory inputs and mental experiences on our understanding of the world. My overarching goal is to gain a deeper understanding of the relationship between knowledge, perception, and communication, and how they can be utilized for a comprehensive view of the world.

In addition to my academic pursuits, I have a strong passion for education. As a Teaching Assistant at the University of Pennsylvania, and through my community teaching experiences, I have developed a teaching style that prioritizes creating a comfortable and inclusive environment for learning. I strive to challenge students with the beautiful and mentally stimulating concepts of mathematics, logic, and computer science, while also breaking down any mental barriers created by past negative experiences.

I am convinced that computer science is a field accessible to all, no matter their background, identity, or prior experience. In our technology-driven society, enabling people from various walks of life to contribute to and shape the future of computer science is not just advantageous but vital for creating strong and inclusive technological solutions.

news

Mar 17, 2024 📢 Announcement: Our paper ULIP-2 has been accepted to CVPR2024!🎉
Jan 25, 2024 📢 Announcement: We released X-InstructBLIP, a simple, effective, and scalable cross-modal framework that empowers LLMs to handle a diverse range of tasks across a variety of modalities (image, text, video, audio, and 3D) without requiring modality-specific pre-training. Check out our paper and code🤖🤖
Sep 25, 2023 📢 Exciting News: Honored to have received the CETLI Graduate Fellowship for Teaching Excellence for the year 2023-2024.✨🎊
May 25, 2023 📢 Announcement: Our paper I Spy a Metaphor: Large Language Models and Diffusion Models Co-Create Visual Metaphors has been accepted to the Findings of ACL 2023! 🎉🎉
May 22, 2023 📢 Exciting News: This summer I will be working at Salesforce as a Research Intern 🙌 😄 I will be based in the Palo Alto office, reporting to Dr. Juan Carlos Niebles.