I am a principal scientist on the AWS AI team at Amazon. Before joining Amazon, I worked at the Parallel Computing Lab, Intel Labs. I received my Ph.D. in computer science and neuroscience from Princeton University.
My research interests are in systems, high-performance computing, and big data analytics. I currently work on deep learning systems, with a focus on compiling and optimizing deep learning models for efficient training and inference. Our mission is to bridge high-level models from various frameworks and low-level hardware platforms, including CPUs, GPUs, and AWS homegrown accelerators, so that different models can execute with high performance on different devices. My team delivers solid solutions and systems that serve both external and internal use cases, and at the same time explores cutting-edge techniques, conducting research and publishing in top-tier systems conferences. Here is a collection of my papers published at AWS.
We are actively hiring! We are looking for talented people in machine learning systems broadly, ranging from distributed training and deep learning compilers to tensor program generation. Do drop me a line if you are interested in our work.