recent news
•Four papers to be presented at CVPR 2024:
✓Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives,
with 100 co-authors! Accepted as oral (<1% acceptance rate).
✓Learning to Segment Referred Objects from Narrated Egocentric Videos,
with Yuhan Shen, Huiyu Wang, Xitong Yang, Matt Feiszli, Ehsan Elhamifar, and Effrosyni Mavroudi. Accepted as oral (<1% acceptance rate).
✓Step Differences in Instructional Video,
with Tushar Nagarajan.
✓Video ReCap: Recursive Captioning of Hour-Long Videos,
with Md Mohaiminul Islam, Ngan Ho, Xitong Yang, Tushar Nagarajan, and Gedas Bertasius.
•Two papers presented at NeurIPS 2023:
✓Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities,
with Yale Song, Gene Byrne, Tushar Nagarajan, Huiyu Wang, and Miguel Martin. Accepted as spotlight.
✓HT-Step: Aligning Instructional Articles with How-To Videos,
with Triantafyllos Afouras, Effrosyni Mavroudi, Tushar Nagarajan, and Huiyu Wang.
•Two papers presented at ICCV 2023:
✓Ego-Only: Egocentric Action Detection without Exocentric Transferring,
with Huiyu Wang and Mitesh Kumar Singh.
✓Learning to Ground Instructional Articles in Videos through Narrations,
with Effrosyni Mavroudi and Triantafyllos Afouras.
•Three papers presented at CVPR 2023 as highlights (10% of accepted papers, 2.6% of submitted papers):
✓Egocentric Video Task Translation,
with Zihui Xue, Yale Song, and Kristen Grauman.
✓HierVL: Learning Hierarchical Video-Language Embeddings,
with Kumar Ashutosh, Rohit Girdhar, and Kristen Grauman.
✓Relational Space-Time Query in Long-Form Videos,
with Xitong Yang, Fu-Jen Chu, Raghav Goyal, Matt Feiszli, and Du Tran.
research overview
My research interests are in computer vision and machine learning. My current work focuses primarily on multimodal learning and video understanding.
previous affiliations
•Fulbright U.S. Scholar at Ashesi University in Ghana.
•Riya/Like.com
•DigitalPersona
•Istituto per la Ricerca Scientifica e Tecnologica (IRST)