IEEE Silicon Valley Machine Learning Compiler Workshop - Fall 2018
December 17, 2018 @ 8:00 am - 5:00 pm PST
TensorFlow XLA, Justin Lebar and Sanjoy Das, Google
Intel nGraph & PlaidML, Jayaram Bobba and Tim Zerrell, Intel
TVM, Tianqi Chen, University of Washington
Xilinx ML Suite, Rahul Nimaiyar, Ashish Sirasao, Xilinx
Host/Program Chair: Dr. Kiran Gunnam
Workshop theme: In this workshop series we explore the state of the art in compilers for machine learning. We feature speakers working on open-source compilers such as Google TensorFlow XLA, Facebook Glow, TVM, Intel nGraph & PlaidML, and Swift for TensorFlow. The talks explain the design choices behind these compilers and give a hands-on overview of how to get started contributing to them. We will also feature speakers working on NVIDIA TensorRT, the Xilinx Machine Learning (ML) Suite, and Intel DLA compilers.
Background for workshop theme: Designing new models is currently driven by automated search, or human expert-guided search, over model architectures and hyperparameters. Training these models takes a significant amount of compute resources and time, as well as large training data sets. The current way of optimizing inference models is to take these large models/neural networks and attempt to simplify them through pruning. One open problem in machine learning is building optimal networks while using less training data. Another is designing networks that can learn from a mix of information (text, audio, images, and side-channel information) to reach a more abstract understanding of what is happening in a scene. These problems need major breakthroughs and a holistic understanding of several scientific disciplines, including information theory and neuroscience.
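The pruning mentioned above can be illustrated with a minimal sketch of magnitude-based pruning, one common simplification technique. This is an illustrative example only; the function name and the flat-list weight representation are assumptions for the sketch, not part of any of the featured compilers.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    weights:  flat list of floats (a simplified stand-in for a layer's weights)
    sparsity: fraction in [0, 1] of weights to set to zero
    """
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    # Zero every weight at or below the threshold, keep the rest.
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Example: prune half the weights of a tiny layer.
pruned = magnitude_prune([0.1, -0.5, 2.0, -0.05], 0.5)
# The two smallest-magnitude weights (0.1 and -0.05) are zeroed.
```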
There is also a new focus on computing and memory architectures that can address the limitations of traditional computing for current as well as emerging machine-learning workloads. These architectures require new compilation techniques. While it is possible to build a custom compiler with no dependencies on open-source compiler infrastructure such as LLVM for a given dataflow architecture or accelerator, machine-learning models and workloads are evolving so quickly that the dataflow architectures needed to run them are expected to be in flux as well. There is therefore a critical need for a compiler framework that can support machine learning accelerators.
Event Schedule: Dec 17th (Monday), 8 am to 5 pm
8.00 am – 10.00 am: TensorFlow XLA, Justin Lebar and Sanjoy Das, Google
10.00 am – 12.00 noon: Intel nGraph & PlaidML, Jayaram Bobba and Tim Zerrell, Intel
12.00 noon – 1.00 pm: Lunch
1.00 pm – 3.00 pm: TVM, Tianqi Chen, University of Washington
3.00 pm – 5.00 pm: Xilinx ML Suite, Rahul Nimaiyar, Ashish Sirasao, Xilinx
Lunch break: 12 noon to 1 pm. Boxed sandwiches, each including a cookie, chips, whole fruit, and a drink.
Coffee and snacks are provided throughout the event.
(We plan to host the Intel DLA, Facebook Glow, Cadence, and Swift for TensorFlow teams on March 4th. If you have suggestions for speakers, or would like to speak at a future offering of this workshop, please contact us through the form at the end.)
Primary Organizer: IEEE Silicon Valley Artificial Intelligence Chapter (Santa Clara Valley Chapter of IEEE Computational Intelligence Society)
Co-sponsored by Western Digital Research (support limited to providing the venue, food, and snacks for attendees, and video recording of the event if possible)
Venue: Western Digital, 5601 Great Oaks Pkwy, San Jose, CA 95119. BLDG A1, CR119 and CR120.
Admission fee: None.
Registration: By invitation through Western Digital & IEEE. If you would like an invitation, please contact Dr. Kiran Gunnam through the Contact button below.
Western Digital Research: The core mission of the Research team in the CTO Office is to develop new technologies for future Western Digital products and businesses. We work on a wide range of emerging technologies, including storage-class memories, memory system architecture, high-performance storage architecture, machine learning, RISC-V core microarchitecture, Linux kernel architecture, memory-centric compute, industry standards, and security.
The IEEE Silicon Valley Artificial Intelligence Chapter (Santa Clara Valley Chapter of IEEE Computational Intelligence Society) is interested in areas such as Artificial Intelligence, Statistical Learning, Data Mining, and Pattern Recognition. More specifically, we are interested in the theory, design, application, and development of biologically and linguistically motivated computational paradigms, emphasizing neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained.