BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//https://caida.ubc.ca//NONSGML iCalcreator 2.41.92//
CALSCALE:GREGORIAN
METHOD:PUBLISH
UID:30393239-6236-4735-a261-303765303637
X-WR-RELCALID:efc09d74-9c93-479e-a94f-485231ddccde
X-WR-TIMEZONE:America/Vancouver
X-WR-CALNAME:Differentially Private Fine-tuning of Language Models - Gautam
  Kamath\, Assistant Professor\, University of Waterloo
BEGIN:VTIMEZONE
TZID:America/Vancouver
TZUNTIL:20240310T100000Z
BEGIN:STANDARD
TZNAME:PST
DTSTART:20211107T020000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RDATE:20221106T020000
RDATE:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZNAME:PDT
DTSTART:20220313T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RDATE:20230312T020000
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:71fe6a15-56cc-43e4-a4b0-8762b74755e0
DTSTAMP:20260414T234847Z
CLASS:PUBLIC
CREATED:20220727T183241Z
DESCRIPTION:Please register for this event. Abstract: We give simpler\
 , sparser\, and faster algorithms for differentially private fine-tuning o
 f large-scale pre-trained language models\, which achieve the state-of-the
 -art privacy versus utility tradeoffs on many standard NLP tasks. We propo
 se a meta-framework for this problem\, inspired by the recent success of h
 ighly parameter-efficient methods for fine-tuning. Our experiments show th
 at differentially private adaptations of these approaches outperform previ
 ous private algorithms in three important dimensions: utility\, privacy\, 
 and the computational and…
DTSTART;TZID=America/Vancouver:20220829T110000
DTEND;TZID=America/Vancouver:20220829T120000
LAST-MODIFIED:20220727T183357Z
LOCATION:UBC Vancouver Campus\, ICCS X836
SUMMARY:Differentially Private Fine-tuning of Language Models - Gautam Kama
 th\, Assistant Professor\, University of Waterloo
TRANSP:OPAQUE
URL:https://caida.ubc.ca/event/differentially-private-fine-tuning-language-
 models-gautam-kamath-assistant-professor
END:VEVENT
END:VCALENDAR
