Deep Learning Hardware: Past, Present, and Future

Sponsor: IEEE Orange County Computer Society and the Los Angeles Chapter of the ACM
Speaker: Bill Dally
Meeting Date: March 16, 2022
Time: 7PM
Reservations: IEEE

Summary:
The current resurgence of artificial intelligence is due to advances in deep learning. Systems based on deep learning now exceed human capability in speech recognition, object classification, and playing games like Go. Deep learning has been enabled by powerful, efficient computing hardware. The algorithms used have been around since the 1980s, but only in the last decade, when powerful GPUs became available to train networks, has the technology become practical. Advances in deep learning are now gated by hardware performance. This talk will review the current state of deep learning hardware and explore a number of directions to continue performance scaling in the absence of Moore's Law. Topics discussed will include number representation, sparsity, memory organization, optimized circuits, and analog computation.
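Two of the topics listed above, number representation and sparsity, can be illustrated in a few lines. The Python sketch below is not material from the talk; it is a rough, hypothetical example of symmetric int8 quantization (a reduced-precision number representation) and magnitude pruning (a simple way to induce sparsity) on a random weight matrix. All values and thresholds are illustrative.

```python
# Illustrative sketch only; the talk's actual examples may differ.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4)).astype(np.float32)

# Number representation: symmetric int8 quantization stores fp32
# weights in a quarter of the space, at some accuracy cost.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale
print("max quantization error:", np.abs(weights - dequantized).max())

# Sparsity: pruning small-magnitude weights to zero lets hardware
# skip the corresponding multiply-accumulates entirely.
mask = np.abs(weights) > 0.5          # hypothetical threshold
pruned = np.where(mask, weights, 0.0)
print("fraction of weights kept:", mask.mean())
print("pruned row 0:", pruned[0])
```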

Bio: Bill Dally is Chief Scientist and Senior Vice President of Research at NVIDIA Corporation and an Adjunct Professor and former chair of Computer Science at Stanford University. Bill currently works on developing hardware and software to accelerate demanding applications, including machine learning, bioinformatics, and logical inference. He has a history of designing innovative and efficient experimental computing systems. While at Bell Labs, Bill contributed to the BELLMAC-32 microprocessor and designed the MARS hardware accelerator. At Caltech, he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered wormhole routing and virtual-channel flow control. At the Massachusetts Institute of Technology, his group built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanisms from programming models and demonstrated very low-overhead synchronization and communication mechanisms. At Stanford University, his group developed the Imagine processor, which introduced the concepts of stream processing and partitioned register organizations; the Merrimac supercomputer, which led to GPU computing; and the ELM low-power processor.

This meeting has been moved online and will be held as a webinar.