AI & Applications
Faster Learning on Slow Hardware - Tor Aamodt, Professor, UBC


DATE: Wed, May 18, 2022 - 10:00 am

LOCATION: UBC Vancouver Campus, ICCS X836 / Please register to receive Zoom link

DETAILS

Please register for this event here, whether you plan to attend virtually or in person.

 

Abstract:

Advances in hardware performance have yielded improvements in machine learning by enabling deep neural networks to be trained on large datasets in reasonable time. However, the scaling of existing hardware technologies is slowing and may (eventually) reach limits. This talk describes our recent work exploring approaches to further accelerate the training of deep neural networks in spite of this. Encouraging sparsity during training can reduce computation requirements. Two techniques for accomplishing this will be described: SWAT encourages sparsity in weights and activations, while ReSprop avoids computations by selectively reusing gradients between successive training iterations. Recent parallel hardware provides limited support for exploiting the sparsity generated by techniques like SWAT and ReSprop. Thus, we propose and evaluate a hardware mechanism, ANT, for anticipating and eliminating redundant non-zero multiplications during training that result from mapping convolutions onto outer-product based systolic architectures. The talk will also describe our recent work on hardware and software techniques for employing lossy compression to enable larger model sizes.
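To make the sparsity idea concrete, the sketch below shows plain magnitude-based top-K sparsification of a weight matrix in NumPy. It only illustrates the general principle behind SWAT-style training (keep the largest-magnitude entries and skip the rest); the function name `topk_sparsify` and the keep fraction are hypothetical and do not reflect the speaker's actual implementation.

```python
import numpy as np

def topk_sparsify(x: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Zero out all but the largest-magnitude entries of x.

    Hypothetical illustration: SWAT-style training keeps only the
    top-K magnitude weights and activations in each iteration so
    that the remaining multiply-accumulates can be skipped.
    """
    flat = np.abs(x).ravel()
    k = max(1, int(keep_fraction * flat.size))
    # np.partition places the k-th largest magnitude at index -k.
    threshold = np.partition(flat, -k)[-k]
    return np.where(np.abs(x) >= threshold, x, 0.0)

# Example: keep 10% of the entries of a random weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
W_sparse = topk_sparsify(W, keep_fraction=0.10)
print(f"non-zero fraction: {np.count_nonzero(W_sparse) / W_sparse.size:.3f}")
```

In the same spirit, ReSprop's gradient reuse can be thought of as replacing part of the current iteration's gradient computation with values carried over from the previous iteration; the resulting sparsity is what the ANT hardware mechanism discussed in the talk is designed to exploit.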


Bio:

Tor Aamodt is a Professor in the Department of Electrical and Computer Engineering at the University of British Columbia. His research focuses on computer architecture, with recent forays into machine learning. Along with students in his research group, he developed the GPGPU-Sim simulator, which has found widespread use in computer architecture research.

 


