IISWC-2019

November 3 - November 5, 2019

Orlando, Florida, USA


MLPerf Design Challenges

Peter Mattson
Research Scientist, Google

MLPerf (mlperf.org) is fast becoming the industry-standard ML performance benchmark. It has support from over 50 companies and from researchers at 8 academic institutions. MLPerf includes a training benchmark suite and an inference benchmark suite. Creating MLPerf involved overcoming a series of design challenges, including ML benchmark definition, benchmark selection, choice of metrics, implementation equivalence, and presentation of results. This talk will describe how we overcame those design challenges and produced a benchmark suite that, based on MLPerf Training results, is driving substantial improvements in performance and scalability for ML users everywhere.





Bio:

Peter Mattson leads the ML Performance Measurement team at Google. He co-founded MLPerf and serves as its General Chair. Previously, he founded the Programming Systems and Applications Group at NVIDIA Research, was VP of software infrastructure at Stream Processors Inc (SPI), and was a managing engineer at Reservoir Labs. His research focuses on accelerating and understanding the behavior of machine learning systems by applying novel benchmarks and analysis tools. Peter holds a PhD and MS from Stanford University and a BS from the University of Washington.