IISWC-2019

November 3 - November 5, 2019

Orlando, Florida, USA


Addressing the Challenges of Supporting At-Scale, Time-Sensitive Deep Learning Inference Workloads

Sridhar Lakshmanamurthy
Senior Principal Engineer, Intel

Deep learning (DL) inference is one of the fastest-growing workloads in the data center and at the edge, with time-critical applications ranging across image classification, object detection, video analytics, machine translation, and recommendation systems. These workloads stress the compute system in significantly different ways than general-purpose CPU- or GPU-based systems. This talk discusses the workload requirements and presents an overview of a purpose-built inference accelerator system-on-a-chip that addresses these requirements. The talk will also cover areas of research to better analyze and characterize these emerging DL inference workloads and explore future opportunities for acceleration.

Bio:

Sridhar Lakshmanamurthy is a Senior Principal Engineer in Intel's Inference Products Group (IPG), part of Intel's AI Products Group (AIPG). He is currently a platform solution architect working to deploy Intel's new line of inference accelerator products in compute systems. Sridhar has been at Intel for over 26 years, during which he has also worked on custom Xeon SoCs, network processors, embedded IA and mobile SoC architectures, and on-die interconnects. Sridhar received his MS in Computer Engineering from Rice University in Houston.