BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//https://caida.ubc.ca//NONSGML iCalcreator 2.41.92//
CALSCALE:GREGORIAN
METHOD:PUBLISH
UID:39373138-3731-4233-a437-346530363964
X-WR-RELCALID:efc09d74-9c93-479e-a94f-485231ddccde
X-WR-TIMEZONE:America/Vancouver
X-WR-CALNAME:Bridge theory and practice: One-step full gradient could suffice for low-rank fine-tuning in LLMs\, provably and efficiently - Fanghui Liu\, Assistant Professor\, Warwick University
BEGIN:VTIMEZONE
TZID:America/Vancouver
TZUNTIL:20270314T100000Z
BEGIN:STANDARD
TZNAME:PST
DTSTART:20241103T020000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RDATE:20251102T020000
RDATE:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZNAME:PDT
DTSTART:20250309T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RDATE:20260308T020000
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:7ca12fa6-d065-4510-bda6-c79d6b0533df
DTSTAMP:20260307T143552Z
CLASS:PUBLIC
CREATED:20250704T230846Z
DESCRIPTION:Abstract: In this talk\, I will discuss how theory can guide practice\, exemplified by low-rank fine-tuning (LoRA) in large language models. Our theory demonstrates that LoRA naturally aligns with a specific singular subspace of the one-step gradient from full fine-tuning. This crucial insight motivated our development of a spectral initialization strategy that precisely targets and achieves this subspace alignment from the outset. This strategy is not only theoretically optimal for subspace alignment but also allows for effective fine-tuning with just a single full gradient step. We'll…
DTSTART;TZID=America/Vancouver:20250721T110000
DTEND;TZID=America/Vancouver:20250721T120000
LAST-MODIFIED:20250704T231259Z
LOCATION:UBC Vancouver Campus\, ICCS X836
SUMMARY:Bridge theory and practice: One-step full gradient could suffice for low-rank fine-tuning in LLMs\, provably and efficiently - Fanghui Liu\, Assistant Professor\, Warwick University
TRANSP:OPAQUE
URL:https://caida.ubc.ca/event/bridge-theory-and-practice-one-step-full-gradient-could-suffice-low-rank-fine-tuning-llms
END:VEVENT
END:VCALENDAR