03/21/2026 – CASPA 2026 Spring Symposium Highlight

AI Infrastructure Reimagined: Memory and Storage Take Center Stage

CASPA hosted its 2026 Spring Symposium on March 21 at the Sandisk Auditorium in Milpitas, bringing together industry and academic leaders to examine the evolving role of memory and storage in the age of artificial intelligence.

Under the theme “Paving the AI Super-Highway: Rethinking Memory and Storage at Scale,” the event drew over 500 registrants. Discussions centered on overcoming the “memory wall” and re-architecting infrastructure to meet the demands of next-generation AI workloads.

In his opening remarks, CASPA President and Chairman Gary Wang emphasized the organization’s mission to foster cross-industry collaboration, drive semiconductor innovation, and strengthen global partnerships. Speakers throughout the day addressed advancements spanning device innovation, system architecture, and full-stack integration—outlining a roadmap for future AI infrastructure.

“Houston, we have a problem: AI & the memory wall.” — Praveen Midha, Director of Technology and Market Development for Data Center Flash at Sandisk, highlighted how AI workloads are placing unprecedented demands on latency, throughput, and energy efficiency. As a result, the industry is re-evaluating architectures across both hardware and software, accelerating the adoption of flash technologies in performance-critical environments.

“A 10-billion-fold cost reduction in 60 years” — Jian Chen, Chief Scientist at Longsys, provided a historical perspective on six decades of storage evolution. He noted that emerging AI inference workloads—such as long-context processing and multi-agent systems—are reshaping the role of memory and storage, making them more dynamic and integral to overall system performance.

“100TB by 2029” — Xiaodong Che, Chief Technology Officer at Western Digital, emphasized that approximately 80% of cloud and AI data center storage today still relies on hard disk drives (HDDs). He projected that HDD capacity could exceed 100TB by 2029, with continued improvements in bandwidth to keep pace with exponential data growth. Che underscored that scalability in storage infrastructure will be a defining factor in the global AI race.

Panel Discussion Highlights Compute Bottlenecks

A panel moderated by former CASPA President Tony Xia brought together experts to discuss emerging technologies and system-level challenges:

  • Yiming Huai (CTO, Avalanche Technology) explored high-density STT-MRAM applications in aerospace AI systems.
  • Joe Chen (President & Chairman, TetraMem) discussed novel memory architectures aimed at improving AI inference efficiency.
  • Xiaoyu Ma (Senior Staff Engineer, Google DeepMind) analyzed capacity and bandwidth constraints in large language model (LLM) inference, stressing the urgency of memory innovation.
  • Zhizhen Zhong (CEO, Netpreme) introduced the concept of “networked memory tiering,” aimed at overcoming limitations in on-package memory capacity.

A Critical Inflection Point for AI Infrastructure

Across sessions, speakers agreed that memory and storage are no longer peripheral components—they are now foundational to AI system performance and scalability.

The symposium highlighted a growing consensus: as AI continues to evolve at breakneck speed, innovation in memory and storage will be central to unlocking the next wave of computing capability.