AI & Fundamentals
Towards Interpretable Deep Learning - Lily Weng, Assistant Professor, UC San Diego

DATE: Mon, February 26, 2024 - 3:00 pm

LOCATION: UBC Vancouver Campus, MCLD 3038



Deep neural networks (DNNs) have achieved unprecedented success across many scientific and engineering fields over the last decade. Despite this empirical success, however, they are notoriously black-box models whose decision processes are difficult to understand. This lack of interpretability is a critical issue that may seriously hinder the deployment of DNNs in high-stakes applications, which require interpretability to trust predictions, to understand potential failures, and to mitigate harms and eliminate biases in the model. 
In this talk, I'll share some exciting results from my lab on advancing explainable AI and interpretable machine learning. Specifically, I will show how we can bring interpretability into deep learning by leveraging recent advances in multi-modal models. I'll present two works [1, 2] from our group on demystifying neural networks and interpretability-guided neural network design, which are important first steps toward enabling Trustworthy AI and Trustworthy Machine Learning. I will also briefly overview our other recent efforts on Trustworthy Machine Learning and automated explanations for LLMs [3]. 
[2] Oikarinen, Das, Nguyen, and Weng, Label-Free Concept Bottleneck Models, ICLR 2023
[3] Lee, Oikarinen, et al., The Importance of Prompt Tuning for Automated Neuron Explanations, NeurIPS 2023 ATTRIB Workshop


Lily Weng is an Assistant Professor in the Halıcıoğlu Data Science Institute at UC San Diego. She received her PhD in Electrical Engineering and Computer Science (EECS) from MIT in August 2020, and her Bachelor's and Master's degrees, both in Electrical Engineering, from National Taiwan University. Prior to UCSD, she spent a year at the MIT-IBM Watson AI Lab and completed several research internships at Google DeepMind, IBM Research, and Mitsubishi Electric Research Lab. Her research interests are in machine learning and deep learning, with a primary focus on trustworthy AI. Her vision is to make the next generation of AI systems and deep learning algorithms more robust, reliable, explainable, trustworthy, and safe. For more details, please see
