AI & Fundamentals
Imagination Inspired Vision - Mohamed H. Elhoseiny, Assistant Professor, KAUST


DATE: Mon, December 16, 2019 - 9:30 am

LOCATION: ICCS - X836, ICICS Computer Science, 2366 Main Mall, Vancouver, BC

DETAILS

Abstract:

Imagination is one of the key properties of human intelligence: it enables us not only to learn new concepts quickly and efficiently but also to generate creative products like art and music. My research has focused on developing imagination-inspired techniques that empower AI machines to see the world (computer vision) or to create novel products (e.g., fashion and art): "Imagine to See" and "Imagine to Create". In this talk, I will cover some of my work in these two directions and show how the underlying techniques connect them and circle back to benefit each other.

Imagine to See: There are over 10,000 living bird species, yet most computer vision datasets of birds cover only 200-500 categories, and for most of these categories only a few training images are available (long-tail classes). How could imagination help us understand visual classes with zero or few examples? Many people may not know what a "Parakeet Auklet" is, but they can imagine it from a language description such as "the Parakeet Auklet is a bird that has an orange bill, dark above and white below." Given this description, an average person can pick out the relevant bird among many other birds, thanks to our capacity to imagine the "Parakeet Auklet" class from language. I will cover my most recent work on zero-shot learning by generating imaginary data from text descriptions, presented at CVPR18.

Imagine to Create: In the short term, Creative AI has great potential to speed up the rate at which we produce creative products like paintings, music, and animations, and to serve as a source of inspiration. I have worked on modeling Creative AI to produce art (ICCC17) and fashion. Our work on both art and fashion attracted attention from the scientific community, the media, and industry. One exciting recent result is that our AI fashion model created new pants with additional arm sleeves, a design that does not exist in the dataset. Surprisingly, professional fashion designers found this design inspirational for creating new pants, showing how such creativity may impact the fashion industry. This work has been featured in New Scientist magazine and at the Facebook F8 conference, and it received the best paper award at the ECCV18 Workshop on Fashion and Art. I will show how these Creative AI techniques for creating a likable unseen circle back to benefit understanding the unseen (zero-shot learning) in our very recent work at ICCV 2019. Motivated by the success of generative models in both tasks, we also proposed a data-efficient generative model, Generative Determinantal Point Processes (GDPP), published at ICML 2019.
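For readers curious how the "Imagine to See" idea works in practice, below is a minimal sketch of zero-shot learning by generating imaginary visual features from class text embeddings. This is not the speaker's exact model: the PyTorch code, the dimensions, the toy data, and the plain regression loss (standing in for the adversarial objective of the CVPR18 paper) are illustrative assumptions.

```python
# Minimal illustrative sketch (not the speaker's model) of "imagining"
# visual features from class text embeddings for zero-shot recognition.
import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, FEAT_DIM = 300, 32, 512  # assumed sizes

class FeatureGenerator(nn.Module):
    """Maps a class text embedding plus noise to an imagined visual feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TEXT_DIM + NOISE_DIM, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, FEAT_DIM),
        )

    def forward(self, text_emb, noise):
        return self.net(torch.cat([text_emb, noise], dim=-1))

# --- Toy training on "seen" classes --------------------------------------
torch.manual_seed(0)
n_seen = 10
seen_text = torch.randn(n_seen, TEXT_DIM)    # e.g., embedded text descriptions
seen_feats = torch.randn(n_seen, FEAT_DIM)   # mean CNN feature per seen class

gen = FeatureGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(200):
    noise = torch.randn(n_seen, NOISE_DIM)
    fake = gen(seen_text, noise)
    loss = ((fake - seen_feats) ** 2).mean()  # stand-in for the GAN objective
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Zero-shot inference on "unseen" classes ------------------------------
# Imagine features for unseen classes from their text descriptions, then
# classify a real image feature by nearest imagined class prototype.
unseen_text = torch.randn(5, TEXT_DIM)       # e.g., a "Parakeet Auklet" description
with torch.no_grad():
    prototypes = gen(unseen_text, torch.randn(5, NOISE_DIM))
    query_feat = torch.randn(1, FEAT_DIM)    # CNN feature of a test image
    pred = torch.cdist(query_feat, prototypes).argmin(dim=1)
print("predicted unseen class:", pred.item())
```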


Bio: 

Dr. Mohamed Elhoseiny is an Assistant Professor of Computer Science at the Visual Computing Center at KAUST (King Abdullah University of Science and Technology) and Visiting Faculty at Stanford University. He received his PhD from Rutgers University under Prof. Ahmed Elgammal in October 2016, then spent more than two years at Facebook AI Research (FAIR) as a postdoctoral researcher until January 2019, and was subsequently an AI research consultant at Baidu Research's Silicon Valley AI Lab (SVAIL) until September 2019. His primary research interests are in computer vision, especially learning about the unseen or the least seen, whether by recognition (e.g., zero-shot learning) or by generation (creative art and fashion generation). Under the umbrella of how AI may benefit biodiversity, Dr. Elhoseiny's six years of work developing the zero-shot learning problem were featured at the United Nations biodiversity conference in November 2018 (an audience of roughly 10,000 from more than 192 countries). His Creative AI research projects have been recognized with the best paper award at the ECCV18 Workshop on Fashion and Art, media coverage in New Scientist magazine and MIT Technology Review (2017, 2018), a 20-minute talk at the Facebook F8 conference (2018), the official FAIR video (2018), and coverage on the HBO Silicon Valley TV series (2018).

Host:

Associate Professor Leonid Sigal, Computer Science, UBC

CAIDA Contact: 

Arynn Keane

arynnk@mail.ubc.ca

