AI traffic is vastly different from the cloud and enterprise traffic that existing networks were built to carry. Training clusters running at petabit-per-rack bandwidth, zero-loss protocols, and microsecond synchronization pose a new infrastructure challenge. Hyperscalers are spending over $600 billion in capex this year to address it. This report draws on 30 papers from OFC 2026, from Meta, KDDI, China Mobile, Samsung, TSMC, and others, to establish what the engineering record shows about how AI networks are built today, where the architecture is heading, and where the supply chain risk is highest.
Our key conclusions are as follows:
- AI traffic is a different animal. Training clusters with petabit-per-rack bandwidth requirements, zero-loss protocols, and microsecond synchronization constraints have little in common with the stochastic, jitter-tolerant traffic that existing networks were built to carry.
- The industry is flying partially blind. No comprehensive public study of AI traffic volumes, patterns, or growth exists. Nokia, Ericsson, and a handful of others have made partial contributions, but hyperscalers don’t share traffic data. For an industry spending over $600B in capex this year, this is a significant planning liability.
- Co-packaged optics has crossed the reliability threshold. Meta’s 36 million device-hour field evaluation of CPO switches provides strong support for this market’s viability. The displacement of retimed pluggable modules at the scale-up tier is no longer a question of if, but when.
- Scale-across is the next frontier. Power constraints are forcing GPU clusters across multiple facilities and campuses. KDDI confirms that distributed training across 30 km produces AI job completion times on par with single-site clusters. Microsoft has announced 15,000 km of hollow-core fiber deployment for AI connectivity.
- The optical supply chain faces a shift. As hyperscalers bring silicon photonics design in-house using TSMC and Samsung platforms, the value of interconnects is moving from discrete module vendors toward foundries, chiplet suppliers, and hyperscalers. Vendors without a foundry process design kit (PDK) strategy face a narrowing addressable market at precisely the moment overall volumes are peaking.
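As a back-of-the-envelope check on the 30 km KDDI result above: light in standard single-mode fiber propagates at roughly 200,000 km/s (group index ≈ 1.5), so a 30 km site separation adds only about 300 microseconds of round-trip latency, small relative to typical gradient-exchange intervals in distributed training. A minimal sketch of the arithmetic (the fiber velocity is a standard physical approximation, not a figure from the report):

```python
# Back-of-the-envelope propagation delay for inter-site AI training links.
# Assumes light in silica fiber travels at ~2/3 of c (group index ~1.5);
# the 30 km distance comes from the KDDI result cited above.

C_FIBER_KM_PER_S = 2.0e5  # ~200,000 km/s in standard single-mode fiber

def rtt_microseconds(distance_km: float) -> float:
    """Round-trip propagation delay in microseconds for a fiber span."""
    one_way_s = distance_km / C_FIBER_KM_PER_S
    return 2 * one_way_s * 1e6

print(f"30 km RTT: {rtt_microseconds(30):.0f} us")  # ~300 us round trip
```

At 300 µs of added round-trip delay, inter-site synchronization overhead remains in the same order of magnitude as intra-cluster collective operations, which is consistent with the on-par job completion times KDDI reports.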
Organizations mentioned
- 1FINITY Inc.
- Alcatel Submarine Networks
- Alibaba Cloud
- Alphabet (Google Cloud Platform)
- Amazon (Amazon Web Services, AWS)
- AMD
- Ampere
- Anthropic (Claude)
- ARM
- AttoTude Inc.
- Berxel Photonics Co. Ltd. (Shenzhen, China)
- Broadcom
- ByteDance
- Centre Tecnologic de Telecomunicacions de Catalunya (CTTC-CERCA) (Spain)
- China Mobile Research Institute (Beijing, China)
- Chinese University of Hong Kong
- Ciena
- Cornell University
- Corning Inc.
- DeepSeek
- Ericsson
- Flexcompute Inc.
- Furukawa Electric Co., Ltd.
- Huazhong University of Science and Technology (Wuhan, China)
- Hubei Optical Fundamental Research Center
- II-VI/Coherent
- Innolight
- Intel
- iPronics
- Jinyinhu Laboratory
- KDDI Research, Inc.
- Lumentum
- Lumiphase AG
- McGill University
- Meta Platforms
- Microsoft (Azure)
- Mistral
- Nagoya University (Nagoya, Japan)
- National Institute of Advanced Industrial Science and Technology (AIST) (Japan)
- Nokia Bell Labs
- Nokia Corporation
- NVIDIA
- NYSERNet
- OpenAI (ChatGPT)
- Photonics Electronics Technology Research Association (PETRA) (Tokyo, Japan)
- Photonics-Electronics Integration Research Center (Tsukuba, Japan)
- Politecnico di Torino (Turin, Italy)
- Ruijie (Fuzhou, China)
- Samsung Electronics Co., Ltd.
- State Key Lab of Information Photonics and Optical Communications, BUPT (Beijing, China)
- State Key Laboratory of Photonics and Communications, Peking University (Beijing, China)
- Taiwan Semiconductor Manufacturing Company (TSMC) (Hsinchu, Taiwan)
- Toyota Technological Institute
- Tsinghua-Berkeley Shenzhen Institute
- University of California, Santa Barbara
- Wuhan Changjin Photonics Technology Co.
- Wuhan Research Institute of Posts and Telecommunications
- Yonsei University (South Korea)