AI & Fundamentals
Bridge theory and practice: One-step full gradient could suffice for low-rank fine-tuning in LLMs, provably and efficiently - Fanghui Liu, Assistant Professor, University of Warwick
DATE: Mon, July 21, 2025 - 11:00 am
LOCATION: UBC Vancouver Campus, ICCS X836
DETAILS
Abstract:
In this talk, I will discuss how theory can guide practice, exemplified by low-rank fine-tuning (LoRA) in large language models. Our theory demonstrates that LoRA naturally aligns with a specific singular subspace of the one-step gradient of full fine-tuning. This insight motivated our development of a spectral initialization strategy that achieves this subspace alignment from the outset. The strategy is not only theoretically optimal for subspace alignment but also enables effective fine-tuning with just a single full gradient step. We will present LoRA-One, our theoretically grounded algorithm, which delivers substantial performance gains over existing LoRA methods on various benchmarks, including MetaMathQA. Furthermore, our theoretical framework sheds light on common misconceptions in algorithm design and contributes to the broader understanding of training dynamics in multi-index models. The talk is based on https://arxiv.org/abs/2502.01235 [ICML'25 spotlight].
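To make the core idea concrete, here is a minimal sketch of a spectral initialization in the spirit described above: factor the one-step full gradient with an SVD and seed the low-rank adapters from its top singular subspace. The function name, shapes, and scaling below are illustrative assumptions, not the exact recipe from the paper.

```python
import numpy as np

def spectral_init_lora(grad, rank, scale=1.0):
    """Illustrative spectral initialization for LoRA factors.

    Takes the one-step full-fine-tuning gradient `grad` (d_out x d_in),
    computes its SVD, and seeds B (d_out x r) and A (r x d_in) from the
    top-r singular subspace so that B @ A is the best rank-r
    approximation of -grad (gradient-descent direction).
    """
    U, S, Vt = np.linalg.svd(-grad, full_matrices=False)
    Ur, Sr, Vtr = U[:, :rank], S[:rank], Vt[:rank, :]
    # Split the singular values evenly between the two factors.
    B = scale * Ur * np.sqrt(Sr)            # (d_out, r)
    A = scale * np.sqrt(Sr)[:, None] * Vtr  # (r, d_in)
    return B, A

# Toy usage on a random "gradient" matrix.
rng = np.random.default_rng(0)
G = rng.standard_normal((64, 32))
B, A = spectral_init_lora(G, rank=4)
```

By the Eckart-Young theorem, `B @ A` is then the closest rank-4 matrix to `-G` in Frobenius norm, which is one way to see why such an initialization is a natural target for subspace alignment.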
Bio:
Dr. Fanghui Liu is an assistant professor at the University of Warwick, UK, and a member of the Centre for Discrete Mathematics and its Applications (DIMAP). His research focuses on the foundations of modern machine learning and learning efficiency in practice. He received the AAAI'24 New Faculty award, was named a Rising Star in AI (KAUST 2023), co-founded the fine-tuning workshop at NeurIPS'24, and presented tutorials at ISIT'24, CVPR'23, and ICASSP'23. He has served as an area chair for NeurIPS, ICLR, and AISTATS, among others. Before his current position, he was a postdoctoral researcher at EPFL (2021-2023) and KU Leuven (2019-2021). He received his PhD from Shanghai Jiao Tong University in 2019, and his dissertation earned several Excellent Doctoral Dissertation Awards.