PUBLISHER: MTN Consulting, LLC | PRODUCT CODE: 2026994

Petabits Per Rack - How AI Traffic is Reshaping Networks: What OFC 2026 Reveals about Network Architecture, Measurement Gaps, and Supply Chain Risk

PUBLISHED:
PAGES: 31
DELIVERY TIME: 1-2 business days
SELECT AN OPTION
  • PDF (Site License): USD 2,500
  • PDF (Enterprise License): USD 6,250


AI traffic is vastly different from the cloud and enterprise traffic that existing networks were built to carry. Training clusters running at petabit-per-rack bandwidth, with zero-loss protocols and microsecond synchronization, pose a new infrastructure challenge, and hyperscalers are spending over $600 billion in capex this year to address it. This report draws on 30 papers from OFC 2026, authored by Meta, KDDI, China Mobile, Samsung, TSMC, and others, to establish what the engineering record shows about how AI networks are built today, where the architecture is heading, and where supply chain risk is highest.

Our key conclusions are as follows:

  • AI traffic is a different animal. Training clusters, with their petabit-per-rack bandwidth requirements, zero-loss protocols, and microsecond synchronization constraints, have little in common with the stochastic, jitter-tolerant traffic that existing networks were built to carry.
  • The industry is flying partially blind. No comprehensive public study of AI traffic volumes, patterns, or growth exists. Nokia, Ericsson, and a handful of others have made partial contributions, but hyperscalers don’t share traffic data. For an industry spending over $600B in capex this year, this is a significant planning liability.
  • Co-packaged optics has crossed the reliability threshold. Meta’s 36 million device-hour field evaluation of CPO switches provides strong support for this market’s viability. The displacement of retimed pluggable modules at the scale-up tier is no longer a question of if, but when.
  • Scale-across is the next frontier. Power constraints are forcing GPU clusters across multiple facilities and campuses. KDDI confirms that distributed training across 30 km produces AI job completion times on par with single-site clusters. Microsoft has announced 15,000 km of hollow-core fiber deployment for AI connectivity.
  • The optical supply chain faces a shift. As hyperscalers bring silicon photonics design in-house using TSMC and Samsung platforms, the value of interconnects is moving from discrete module vendors toward foundries, chiplet suppliers, and hyperscalers. Vendors without a foundry process design kit (PDK) strategy face a narrowing addressable market at precisely the moment overall volumes are peaking.
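The petabit-per-rack figure cited above can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes a GB200 NVL72-class rack (72 GPUs, each with 1.8 TB/s of bidirectional NVLink scale-up bandwidth); these specs are illustrative assumptions, not figures drawn from the report itself.

```python
# Back-of-envelope check of the "petabit per rack" claim.
# Assumed rack: GB200 NVL72-class, 72 GPUs, 1.8 TB/s NVLink each.
gpus_per_rack = 72
nvlink_tbytes_per_gpu = 1.8              # TB/s, bidirectional (assumption)

tbits_per_gpu = nvlink_tbytes_per_gpu * 8    # 14.4 Tb/s per GPU
rack_tbits = gpus_per_rack * tbits_per_gpu   # aggregate scale-up bandwidth

print(f"Aggregate rack bandwidth: {rack_tbits:.0f} Tb/s "
      f"(~{rack_tbits / 1000:.2f} Pb/s)")
```

Under these assumptions the rack aggregates roughly 1,037 Tb/s, i.e. on the order of one petabit per second, consistent with the report's framing.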

Organizations mentioned

  • 1FINITY Inc.
  • Alcatel Submarine Networks
  • Alibaba Cloud
  • Alphabet (Google Cloud Platform)
  • Amazon (Amazon Web Services, AWS)
  • AMD
  • Ampere
  • Anthropic (Claude)
  • ARM
  • AttoTude Inc.
  • Berxel Photonics Co. Ltd. (Shenzhen, China)
  • Broadcom
  • ByteDance
  • Centre Tecnologic de Telecomunicacions de Catalunya (CTTC-CERCA) (Spain)
  • China Mobile Research Institute (Beijing, China)
  • Chinese University of Hong Kong
  • Ciena
  • Cornell University
  • Corning Inc.
  • DeepSeek
  • Ericsson
  • Flexcompute Inc.
  • Furukawa Electric Co., Ltd.
  • Huazhong Univ. of Science and Technology (Wuhan, China)
  • Hubei Optical Fundamental Research Center
  • II-VI/Coherent
  • Innolight
  • Intel
  • iPronics
  • Jinyinhu Laboratory
  • KDDI Research, Inc.
  • Lumentum
  • Lumiphase AG
  • McGill University
  • Meta Platforms
  • Microsoft (Azure)
  • Mistral
  • Nagoya University (Nagoya, Japan)
  • National Institute of Advanced Industrial Science and Technology (AIST) (Japan)
  • Nokia Bell Labs
  • Nokia Corporation
  • NVIDIA
  • NYSERNet
  • OpenAI (ChatGPT)
  • Photonics Electronics Technology Research Association (PETRA) (Tokyo, Japan)
  • Photonics-Electronics Integration Research Center (Tsukuba, Japan)
  • Politecnico di Torino (Turin, Italy)
  • Ruijie (Fuzhou, China)
  • Samsung Electronics Co., Ltd.
  • State Key Lab of Information Photonics and Optical Communications, BUPT (Beijing, China)
  • State Key Laboratory of Photonics and Communications, Peking University (Beijing, China)
  • Taiwan Semiconductor Manufacturing Company (TSMC) (Hsinchu, Taiwan)
  • Toyota Technological Institute
  • Tsinghua-Berkeley Shenzhen Institute
  • University of California, Santa Barbara
  • Wuhan Changjin Photonics Technology Co.
  • Wuhan Research Institute of Posts and Telecommunications
  • Yonsei University (South Korea)
Product Code: DCPC-27042026-1

Table of Contents

  • Summary
  • AI Traffic 101
    • Understanding AI traffic is high-stakes as hyperscale capex explodes
    • AI traffic types: A primer
    • The challenge of measuring and forecasting AI traffic
  • Traffic directions & findings for scale up, out and across
    • Scale-up: Traffic within the cluster
    • Scale-out and the data center fabric
    • Scale-across: Inter-datacenter AI traffic
  • Implications for the hyperscale market
    • Scale up
    • Scale out
    • Scale across
    • Transoceanic
  • Recommendations for industry players
    • For optical component and transceiver vendors
    • For vendors in the coherent space
    • For InfiniBand and Ethernet switch and NIC vendors
    • For data center operators deploying AI infrastructure
    • For operators planning distributed training infrastructure
    • For operators carrying AI traffic in regional and metro networks
    • For telcos with subsea cable assets
  • Conclusion
    • Using the appendix
  • Appendix 1: OFC paper details
    • Paper metadata
    • Paper summaries
  • Appendix 2: Report and publisher information

List of Figures and Tables

  • Figure 1: Hyperscale capex history and near-term outlook ($B)
  • Figure 2: Training, inference, and agentic AI traffic patterns are very different
  • Figure 3: AI training cluster size progression since 2016
  • Figure 4: Scale up, out, and across bandwidth requirements all growing fast
  • Figure 5: Three versions of AI cluster GPU racks at Meta
  • Table A-1: OFC 2026 traffic-related papers – metadata summary
Have a question?

Jeroen Van Heghe
Manager - EMEA
+32-2-535-7543

Christine Sirois
Manager - Americas
+1-860-674-8796

Questions? Please give us a call or visit the contact form.