CAISS: Artificial Intelligence Symposium



Registration Link

About this Event


1:30-2:00pm: Reception and Networking

2:00-2:15pm: CAISS 2019 Introduction

2:15-2:45pm: Speaker 1: Yiran Chen, “Challenges in AI Chip Design”

2:45-3:15pm: Speaker 2: William Gao, “AI in self-driving”

3:15-3:45pm: Speaker 3: Andy He, “The importance of ultra-high-speed memory interface in GPU architecture and the future of GPU computing in AI”

3:45-4:15pm: Speaker 4: Chunsheng Liu, “Computing for AI after Moore’s Law: A Hardware Perspective”

4:15-5:00pm: Panel Discussion

Speaker 1: Yiran Chen

Professor, Duke University

陈怡然 (Yiran Chen), Professor, Duke University

Topic: “Challenges in AI Chip Design”

Executing deep neural networks (DNNs) consumes enormous computing power. Many ASIC chips have been designed to accelerate DNN computation by leveraging specially designed circuits and architectures. DNN algorithms can also be optimized to adapt to these new AI chips for better performance and efficiency. In this talk, we will discuss the common techniques used in AI chip design and future technology trends.


Speaker 2: William Gao

Senior Technical Program Manager

Topic: “AI in self-driving”


Speaker 3: Andy He

GPU Engineering Manager, NVIDIA


Topic: “The importance of ultra-high-speed memory interface in GPU architecture and the future of GPU computing in AI”

As GPUs become increasingly popular across AI and HPC applications, including autonomous driving, data-center training/inference, and robotics, growing effort has been devoted to improving the performance and energy efficiency of the GPU platform. High-speed memory interfaces, including GDDR and HBM, have thus become the bottleneck to achieving this goal in GPU hardware. This talk will examine the importance of this technology and its future development, and briefly discuss the future of GPU computing.


Speaker 4: Chunsheng Liu

Research Scientist, Alibaba


Topic: “Computing for AI after Moore’s Law: A Hardware Perspective”

In the post-Moore’s-law era, the focus of computing chips has been shifting from chasing process nodes to domain-specific designs. Among the many domain-specific applications, those related to artificial intelligence (AI) have received substantial investment recently and produce thousands of publications each year. With the explosive emergence of new algorithms, AI applications need to be executed efficiently on appropriate hardware. This talk will offer a perspective on hardware design for AI computing, covering platforms, architectures, applications, challenges, and the core competencies of an AI accelerator designer.

Chunsheng Liu is a research scientist with the Computing Technology Lab of Alibaba DAMO Academy. He has more than 10 years of experience in IC design, test, and engineering. He has been with Nvidia, Altera, HiSilicon, and other design houses, where he played a leading role in developing state-of-the-art GPU, FPGA, server, and mobile processors. His current research interests are in hardware architectures and algorithms for low-power, high-performance, high-reliability, low-cost machine-learning accelerators. He received his B.S. and M.S. in EE from Tsinghua University and his Ph.D. in ECE from Duke University.

Learn more on their website.