BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//https://caida.ubc.ca//NONSGML iCalcreator 2.41.92//
CALSCALE:GREGORIAN
METHOD:PUBLISH
UID:36383965-3265-4561-b730-373263323663
X-WR-RELCALID:efc09d74-9c93-479e-a94f-485231ddccde
X-WR-TIMEZONE:America/Vancouver
X-WR-CALNAME:Robust Deep Learning Under Distribution Shift - Zachary Chase Lipton\, Assistant Professor\, Carnegie Mellon University
BEGIN:VTIMEZONE
TZID:America/Vancouver
TZUNTIL:20211107T090000Z
BEGIN:STANDARD
TZNAME:PST
DTSTART:20191103T020000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RDATE:20201101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZNAME:PDT
DTSTART:20190310T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RDATE:20200308T020000
RDATE:20210314T020000
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:03e8a7ed-1302-41d4-aa7e-ea3892b98d84
DTSTAMP:20260227T085423Z
CLASS:PUBLIC
CREATED:20191203T183127Z
DESCRIPTION:Abstract: We might hope that when faced with unexpected inputs\, well-designed software systems would fire off warnings. However\, ML systems\, which depend strongly on properties of their inputs (e.g. the i.i.d. assumption)\, tend to fail silently. Faced with distribution shift\, we wish (i) to detect and (ii) to quantify the shift\, and (iii) to correct our classifiers on the fly—when possible. This talk will describe a line of recent work on tackling distribution shift. First\, I will focus on recent work on label shift\, a classic problem\, where strong assumptions enable principled methods. Then I…
DTSTART;TZID=America/Vancouver:20191216T113000
DTEND;TZID=America/Vancouver:20191216T123000
LAST-MODIFIED:20210611T170858Z
LOCATION:ICCS - X836\, ICICS Computer Science\, 2366 Main Mall\, Vancouver\, BC
SUMMARY:Robust Deep Learning Under Distribution Shift - Zachary Chase Lipton\, Assistant Professor\, Carnegie Mellon University
TRANSP:OPAQUE
URL:https://caida.ubc.ca/event/robust-deep-learning-under-distribution-shift-zachary-chase-lipton-assistant-professor
END:VEVENT
END:VCALENDAR