AI computing faces growing memory bottlenecks: HBM delivers high bandwidth but is constrained by capacity and cost. NAND Flash addresses this along two complementary paths, HBF (high-capacity near-memory) and AI SSD (intelligent preprocessing near the data), optimizing AI system architecture and redefining NAND Flash's value and role.
Key Highlights:
- AI computing faces severe memory bottlenecks; HBM, while high-bandwidth, is limited by capacity and cost.
- NAND Flash is transforming from passive storage to a core component in AI infrastructure.
- HBF: Provides high-bandwidth near-memory, effectively augmenting HBM capacity for "warm data."
- AI SSD: Integrates AI compute units for "near-data processing," offloading data-preprocessing tasks from the GPU.
- HBF and AI SSD are complementary, jointly optimizing AI systems and redefining NAND Flash's industry value and market opportunities.
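The tiering implied by the highlights (HBM for hot data, HBF augmenting capacity for warm data, SSDs holding cold data that an AI SSD can preprocess in place) can be sketched as a simple placement policy. This is a minimal illustrative sketch; the tier names, thresholds, and function are assumptions for exposition, not any vendor's API:

```python
# Illustrative sketch of hot/warm/cold data placement across memory tiers.
# Thresholds are hypothetical; real systems use richer policies (recency,
# bandwidth demand, object size), not a single access-frequency cutoff.

HBM, HBF, SSD = "HBM", "HBF", "SSD"

def place(access_freq_hz: float,
          hot_threshold: float = 1e3,
          warm_threshold: float = 1.0) -> str:
    """Route a data object to a memory tier by access frequency (sketch)."""
    if access_freq_hz >= hot_threshold:
        return HBM  # hot: bandwidth/latency critical, e.g. active model state
    if access_freq_hz >= warm_threshold:
        return HBF  # warm: high-capacity near-memory augmenting HBM
    return SSD      # cold: bulk data; an AI SSD could preprocess it in place

# Example placements under the assumed thresholds
print(place(5e4))   # very frequently accessed -> HBM
print(place(10))    # warm data -> HBF
print(place(0.01))  # rarely touched -> SSD
```

The point of the sketch is the division of labor the report describes: HBF widens the capacity behind HBM rather than replacing it, while the SSD tier becomes a place where work can happen, not just where bytes rest.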