July 24, 2019

Squeezing down the computing requirements of deep neural networks

Wednesday, July 24, 2019, 6:00 PM – 8:00 PM PDT

The event is FREE. Food & drinks will be provided.

This event is organized by:


Speaker 1: Forrest Iandola, CEO and Co-Founder of DeepScale

Speaker 2: Albert Shaw, Engineer at DeepScale

Location: 673 South Milpitas Blvd., Milpitas, CA 95035, USA



  • 6:00 – 6:30 PM Networking & Refreshments
  • 6:30 – 7:30 PM Talk
  • 7:30 – 8:00 PM Q&A / Adjourn

Abstract: Deep Neural Networks (DNNs) have enabled breakthrough levels of accuracy on a variety of tasks in vision, audio, and text. However, DNNs can be quite computationally intensive, and highly accurate DNNs often require a full-sized GPU server for real-time inference. A number of techniques can squeeze DNNs into smaller computing footprints, including better DNN design, DNN quantization, better DNN implementations, and better utilization of specialized computing hardware. This talk touches on all of these techniques, with a particular focus on better DNN design for computer vision. Recently, Neural Architecture Search (NAS) technologies have begun to make significant progress in automating the design of “squeezed” DNNs, and we cover some of the latest work on NAS in this talk.
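To make one of the techniques in the abstract concrete, the sketch below illustrates post-training 8-bit affine quantization: float weights are mapped to int8 integers plus a scale factor, shrinking storage 4x versus float32 at the cost of a small rounding error. This is a minimal, generic illustration; the function names and the symmetric-range choice are this example's assumptions, not the speakers' specific method.

```python
def quantize_int8(weights):
    """Map a list of float weights to int8 values plus a scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0  # one quantization step
    # Round each weight to the nearest step, clamped to the int8 range.
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.52, -0.31, 0.08, -0.97]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Every recovered weight is within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

Real toolchains (e.g. per-channel scales, calibration data, quantization-aware training) are more elaborate, but the core idea is this float-to-integer mapping.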

Bio: Forrest Iandola

Forrest Iandola completed a PhD in Electrical Engineering and Computer Science at UC Berkeley, where his research focused on improving the efficiency of deep neural networks (DNNs). His best-known published research ranges from scaling DNN training to hundreds of GPUs (FireCaffe) to squeezing DNNs onto small edge devices (SqueezeNet and SqueezeDet). His advances in scalable training and efficient inference of DNNs led to the founding of DeepScale, where he has been CEO since 2015. DeepScale builds energy-efficient vision and perception systems for automated vehicles.

Bio: Albert Shaw

Albert Shaw has been working on research in Neural Architecture Search (NAS) since 2017. He received his Master's in Computer Science from the Georgia Institute of Technology, where he focused on speeding up Neural Architecture Search and creating more sample-efficient algorithms for Reinforcement Learning. At DeepScale, his work focuses on using NAS to develop state-of-the-art fast and accurate Deep Neural Networks for automated vehicle perception systems.

Open to all.

(Online registration is required; seating is not guaranteed without registration.)



Santa Clara Valley Chapter of the Solid State Circuits Society


September 2021

Next Meeting

“Automatic Generation of SystemVerilog Models from Analog/Mixed-Signal Circuits: a Pipelined ADC Example” – Prof. Jaeha Kim, Seoul National University (SNU)
