BMIAI Seminar - A Marauder's Map of Security and Privacy in Machine Learning - Nicolas Papernot

[Photo: Nicolas Papernot. Credit: https://www.papernot.fr/]

DATE: Thu, October 3, 2019 - 11:00 am

LOCATION: Fred Kaiser - 2020/2030, 2332 Main Mall, Vancouver, BC

DETAILS

Abstract:

There is growing recognition that machine learning (ML) exposes new security and privacy vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited, albeit expanding. In this talk, we explore the threat model space of ML algorithms through the lens of Saltzer and Schroeder's principles for the design of secure computer systems. This characterization of the threat space prompts an investigation of current and future research directions. We structure our discussion around two of these directions, which we believe are likely to lead to significant progress. The first encompasses a spectrum of approaches to verification and admission control, which is a prerequisite to enable fail-safe defaults in machine learning systems. The second seeks to design mechanisms for assembling reliable records of compromise that would help us understand the degree to which vulnerabilities are exploited by adversaries, as well as favor the psychological acceptability of machine learning applications. Key insights resulting from these two directions, pursued in both the ML and security communities, are identified, and the effectiveness of approaches is related to structural elements of ML algorithms and the data used to train them.

Bio:

Nicolas Papernot is an Assistant Professor of Electrical and Computer Engineering at the University of Toronto and a Canada CIFAR AI Chair at the Vector Institute. His research interests span the security and privacy of machine learning. Nicolas received a best paper award at ICLR 2017. He is also the co-author of CleverHans, an open-source library widely adopted in the technical community to benchmark machine learning in adversarial settings, and TF Privacy, an open-source library for training differentially private models. He serves on the program committees of several conferences, including ACM CCS, IEEE S&P, and USENIX Security. He earned his Ph.D. at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, he spent a year as a research scientist at Google Brain.
