AI & Fundamentals
Inverse constraint learning and risk-averse reinforcement learning for safe AI - Pascal Poupart, Professor, University of Waterloo

DATE: Thu, February 29, 2024 - 10:00 am

LOCATION: UBC Vancouver Campus, ICCS X836

DETAILS

Zoom Link

 

Abstract:

In many applications of reinforcement learning (RL) and control, policies need to satisfy constraints to ensure feasibility, safety, or thresholds on key performance indicators.  However, some constraints may be difficult to specify.  For instance, in autonomous driving it is relatively easy to specify a reward function for reaching a destination, but the implicit constraints that expert human drivers follow to ensure a safe, smooth and comfortable ride are much harder to specify.  I will present some techniques for learning soft constraints from expert trajectories in autonomous driving and robotics.  I will also present the Gini deviation as an alternative to variance for risk-averse reinforcement learning.
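As a rough illustration of the risk measure mentioned in the last sentence, the Python sketch below estimates the Gini deviation of a set of sampled episode returns, taking it to be half the mean absolute difference over all distinct pairs of samples (a common convention for the Gini mean difference). The function name, the NumPy-based estimator and the example distributions are illustrative assumptions, not the policy-gradient method proposed in the talk.

import numpy as np

def gini_deviation(returns):
    """Empirical Gini deviation of a sample of episode returns.

    Taken here as half the mean absolute difference over all distinct
    pairs of samples (one common convention for the Gini mean
    difference); this is only an illustrative estimator, not the
    method developed in the NeurIPS 2023 paper listed below.
    """
    returns = np.asarray(returns, dtype=float)
    n = returns.size
    # Pairwise absolute differences |G_i - G_j|; diagonal entries are zero.
    diffs = np.abs(returns[:, None] - returns[None, :])
    return 0.5 * diffs.sum() / (n * (n - 1))

# Example: a wider return distribution has a larger Gini deviation.
rng = np.random.default_rng(0)
print(gini_deviation(rng.normal(10.0, 1.0, size=1000)))  # lower dispersion
print(gini_deviation(rng.normal(10.0, 5.0, size=1000)))  # higher dispersion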

The content of this talk will be based on the following papers:

Ashish Gaurav, Kasra Rezaee, Guiliang Liu, Pascal Poupart (2023) Learning Soft Constraints from Constrained Expert Demonstrations, ICLR.

Guiliang Liu, Yudong Luo, Ashish Gaurav, Kasra Rezaee, Pascal Poupart (2023) Benchmarking Constraint Inference in Inverse Reinforcement Learning, ICLR.

Yudong Luo, Guiliang Liu, Pascal Poupart, Yangchen Pan (2023) An Alternative to Variance: Gini Deviation for Risk-averse Policy Gradient, NeurIPS.


Bio:

Pascal Poupart is a Professor in the David R. Cheriton School of Computer Science at the University of Waterloo (Canada). He is also a Canada CIFAR AI Chair at the Vector Institute and a member of the Waterloo AI Institute. He serves on the advisory board of the NSF AI Institute for Advances in Optimization (2022-present) at Georgia Tech. He served as Research Director and Principal Research Scientist at the Waterloo Borealis AI Research Lab at the Royal Bank of Canada (2018-2020). He also served as a scientific advisor for ProNavigator (2017-2019), ElementAI (2017-2018) and DialPad (2017-2018). His research focuses on the development of algorithms for Machine Learning with applications to Natural Language Processing and Material Design. He is best known for his contributions to the development of Reinforcement Learning algorithms. Notable projects that his research team is currently working on include inverse constraint learning, mean field RL, RL foundation models, Bayesian federated learning, probabilistic deep learning, conversational agents, automated document editing, sport analytics, adaptive satisfiability and material design for CO2 recycling.

 
