DATE: Thu, May 6, 2021 - 9:00 am
LOCATION: Please register to receive the virtual platform links
Join us for our virtual open house. We look forward to connecting with you and showcasing some of the work being done within our centre!
Please note that all times are PT.
Kwang Moo Yi
Learning what is interesting in images and matching them
Dancing with Data: An Engineer’s Choreography
Statistical estimation under algebraic constraints
Machine Learning Algorithms and Applications
Designing for Impact: AI-enabled point of care imaging
Bayesian data science
Autonomy under Uncertainty: Learning for Safety and Coordination
Capturing the Full Uncertainty Landscape: Parallel Tempering on Optimized Paths
Joint work with Vittorio Romaniello, Saifuddin Syed, and Alexandre Bouchard-Côté at UBC.
Modern models in statistical machine learning continue to grow in size and complexity, and traditional inference algorithms struggle to explore their full uncertainty landscape. Parallel tempering (PT) is a (meta)algorithm that has recently re-emerged as a candidate to address this challenge. PT operates by creating a path of models from a simple "reference model" to the desired complex target model, taking samples from each, and then interchanging the samples along the path to improve the quality of samples from the target. The performance of PT depends on how quickly a sample from the reference distribution makes its way to the target, which in turn depends on the particular path of models. Past work on PT used only simple linear paths; in this talk I'll show that this path performs poorly in common applications. To address this issue, I'll present an extension of the PT framework to general families of paths, formulate the choice of path as an optimization problem that admits tractable gradient estimates, and present a flexible new family of spline interpolation paths for use in practice. Theoretical and empirical results will demonstrate that the proposed methodology breaks previously established upper performance limits for traditional paths.
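The abstract above describes the classic PT recipe with a linear path: run one chain per point along a path of tempered models between a reference and the target, and periodically swap samples between adjacent chains. The sketch below is a minimal illustration of that recipe on a toy bimodal target, not the optimized-path method of the talk; the target, reference, temperature schedule, and step size are all illustrative assumptions.

```python
import math
import random

def log_target(x):
    # Toy bimodal target: equal mixture of N(-4, 1) and N(4, 1) (illustrative, not from the talk).
    return math.log(0.5 * math.exp(-0.5 * (x + 4) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 4) ** 2))

def log_reference(x):
    # Simple unimodal reference model: N(0, 5^2), up to a constant.
    return -0.5 * (x / 5.0) ** 2

def log_path(x, beta):
    # Linear path of models: beta=0 is the reference, beta=1 is the target.
    return (1.0 - beta) * log_reference(x) + beta * log_target(x)

def parallel_tempering(betas, iters, step=1.0, seed=0):
    rng = random.Random(seed)
    xs = [0.0] * len(betas)      # one chain per point on the path
    target_samples = []
    for _ in range(iters):
        # Within-chain random-walk Metropolis update at each path point.
        for i, beta in enumerate(betas):
            prop = xs[i] + rng.gauss(0.0, step)
            log_accept = log_path(prop, beta) - log_path(xs[i], beta)
            if rng.random() < math.exp(min(0.0, log_accept)):
                xs[i] = prop
        # Propose swaps between adjacent chains; accept with the standard PT ratio.
        for i in range(len(betas) - 1):
            log_swap = (log_path(xs[i + 1], betas[i]) + log_path(xs[i], betas[i + 1])
                        - log_path(xs[i], betas[i]) - log_path(xs[i + 1], betas[i + 1]))
            if rng.random() < math.exp(min(0.0, log_swap)):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        target_samples.append(xs[-1])  # the beta=1 chain samples the actual target
    return target_samples

samples = parallel_tempering([0.0, 0.25, 0.5, 0.75, 1.0], iters=5000)
```

Without the swap moves, a single random-walk chain started at 0 tends to get stuck in one mode; the swaps let samples that explore freely under the reference percolate up the path, so the beta=1 chain visits both modes.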
Trevor Campbell is an assistant professor of statistics at the University of British Columbia. His research focuses on automated, scalable Bayesian inference algorithms, Bayesian nonparametrics, streaming data, and Bayesian theory. He was previously a postdoctoral associate advised by Tamara Broderick in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute for Data, Systems, and Society (IDSS) at MIT, a Ph.D. candidate under Jonathan How in the Laboratory for Information and Decision Systems (LIDS) at MIT, and before that he was in the Engineering Science program at the University of Toronto.
Learning Human Perception of Shape: Identifying and Processing Available Ground Truth
Many algorithms for manipulating 2D and 3D shapes aim to replicate or match human choices, e.g., vectorizing a 2D image in a manner consistent with viewer expectations, or locating a building in a database that has the same architectural style as a user-provided exemplar. In my talk I will discuss the challenges that arise when trying to design algorithms that learn such human preferences, and in particular the difficulty of collecting large training sets that can be used with off-the-shelf learning methods. I will then describe a range of approaches that overcome these challenges by operating on available or easier-to-collect sets of training inputs, and combining those with perceptual, domain-specific priors.
Join us to hear more about some of the contributions UBC is making in the world of Artificial Intelligence. We look forward to sharing a piece of CAIDA with you and getting to know all of you better.
If you have any questions, please contact Arynn Keane (firstname.lastname@example.org).