PUBLISHER: The Business Research Company | PRODUCT CODE: 1987878
Remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads refers to the use of RoCE technology to enable high-speed, low-latency memory-to-memory data transfers across servers and storage systems in AI computing environments. Its primary purpose is to accelerate AI training and inference by reducing CPU overhead, minimizing data transfer latency, and improving bandwidth efficiency for large datasets.
The primary components of remote direct memory access over converged ethernet for artificial intelligence workloads include hardware, software, and services. Hardware refers to network adapters and acceleration devices that enable high-speed, low-latency memory access across servers to enhance artificial intelligence workloads. These solutions are deployed through on-premises and cloud models based on infrastructure and organizational requirements. The applications involved include data centers, high-performance computing, cloud artificial intelligence, edge artificial intelligence, enterprise artificial intelligence, and other applications, and they are used by end users such as banking, financial services, and insurance companies, healthcare providers, information technology and telecommunications companies, manufacturing companies, retail organizations, government organizations, and others.
Tariffs have introduced cost pressures and supply chain adjustments in the remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market by increasing the price of imported Ethernet adapters, networking switches, optical cables, and GPU networking modules required for high-performance AI infrastructure. These impacts are most evident in hardware-intensive deployments and in regions dependent on cross-border semiconductor and networking equipment supply chains, such as Asia-Pacific and parts of North America. However, tariffs are also encouraging domestic semiconductor fabrication, regional supplier diversification, and stronger investment in software-defined networking and virtualization technologies, which may strengthen long-term infrastructure resilience and reduce reliance on foreign hardware suppliers.
The remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market size has grown exponentially in recent years. It will grow from $2.63 billion in 2025 to $3.19 billion in 2026 at a compound annual growth rate (CAGR) of 21.0%. The growth in the historic period can be attributed to growth in data center expansion, the rise in big data processing requirements, increasing adoption of high-performance computing clusters, early deployment of Ethernet-based networking solutions, and growth in enterprise cloud infrastructure.
The remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market size is expected to see exponential growth in the next few years. It will grow to $6.89 billion in 2030 at a compound annual growth rate (CAGR) of 21.3%. The growth in the forecast period can be attributed to growing artificial intelligence model complexity, expansion of hyperscale AI data centers, rising demand for distributed training frameworks, increasing investment in GPU and accelerator hardware, and growth in real-time analytics and inference workloads. Major trends in the forecast period include increasing deployment of high-bandwidth Ethernet interconnects, rising adoption of low-latency distributed AI training architectures, expansion of GPU cluster networking optimization solutions, growing integration of traffic management and congestion control software, and enhancement of scalable data center networking infrastructure.
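The growth figures above follow the standard compound annual growth rate formula. As an illustrative check (the function name below is our own, not from the report), the forecast-period rate can be recomputed from the reported start and end values:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Forecast period from the report: $3.19 billion in 2026 to $6.89 billion in 2030.
rate = cagr(3.19, 6.89, 2030 - 2026)
print(f"{rate:.1%}")  # roughly 21%, consistent with the reported 21.3% CAGR
```

The same formula applied to the historic-period figures ($2.63 billion in 2025 to $3.19 billion in 2026) gives a one-year growth rate in the same low-20s range as the reported 21.0%.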
The rising adoption of Ethernet-based alternatives to InfiniBand is expected to support the growth of the RoCE for AI workloads market going forward. Ethernet-based alternatives to InfiniBand include technologies such as RDMA over Converged Ethernet (RoCE), which enable remote direct memory access capabilities over standard Ethernet networks deployed in AI data centers. The adoption of these alternatives is increasing as hyperscalers and cloud providers seek cost-efficient and scalable networking solutions that are compatible with existing Ethernet infrastructure. RoCE for AI workloads supports this transition by enabling low-latency, high-throughput GPU-to-GPU communication and efficient distributed training across Ethernet fabrics. For example, in June 2025, according to Vitex LLC, a US-based dynamic fiber optic solutions provider, hyperscale cloud providers are making unprecedented capital investments in AI infrastructure. In 2025, Microsoft allocated approximately $80 billion, Amazon committed $86 billion as part of a broader $100 billion investment plan, Google invested $75 billion, and Meta spent $65 billion in capital expenditures, bringing the combined total from leading technology firms to over $450 billion. Therefore, the rising adoption of Ethernet-based alternatives to InfiniBand is contributing to the growth of the RoCE for AI workloads market.
Leading companies in the RoCE for AI workloads market are focusing on developing innovative advancements such as AI network fabrics to support large-scale, low-latency, and cost-efficient distributed AI training. An AI network fabric is a high-performance interconnect system that enables scalable GPU communication using Ethernet-based RDMA technologies. For example, in October 2025, Oracle Corporation, a US-based cloud infrastructure and enterprise software provider, announced the OCI Zettascale10 Supercluster with Oracle Acceleron RoCE. The solution integrates up to 800,000 NVIDIA graphics processing units across multiple data centers using a flatter, Ethernet-based RoCE topology designed to reduce latency, improve resiliency, and enhance performance predictability. By combining line-rate encryption, network interface card-level security enforcement, and multicloud deployment flexibility, the platform demonstrates that RoCE can support so-called zettascale-class artificial intelligence workloads with performance comparable to leading InfiniBand-based architectures for targeted AI training applications.
In September 2025, NVIDIA Corporation, a US-based technology company, acquired Enfabrica for over $900 million. With this acquisition, NVIDIA strengthened its AI networking infrastructure by integrating Enfabrica's SuperNIC and memory fabric technologies to improve scalability, reliability, and performance of large-scale AI computing clusters. Enfabrica is a US-based provider of high-performance AI interconnect solutions, including advanced network silicon and disaggregated memory architectures for distributed AI workloads.
Major companies operating in the remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market are Huawei Technologies Co. Ltd., Dell Technologies Inc., IBM Corporation, NVIDIA Corporation, Cisco Systems Inc., Lenovo Group Limited, Intel Corporation, Oracle Corporation, Broadcom Inc., Quanta Computer Inc., Hewlett Packard Enterprise Company, NEC Corporation, ASUSTeK Computer Inc., Super Micro Computer Inc., NetApp Inc., Arista Networks Inc., Marvell Technology Inc., Synopsys Inc., Pure Storage Inc., Extreme Networks Inc., Napatech A/S, Mitac Computing Technology Corporation, Aviz Networks Inc., and Pica8 Inc.
North America was the largest region in the RoCE for AI workloads market in 2025. Asia-Pacific is expected to be the fastest-growing region in the forecast period. The regions covered in the remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market report are Asia-Pacific, South East Asia, Western Europe, Eastern Europe, North America, South America, Middle East, and Africa.
The countries covered in the remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market report are Australia, Brazil, China, France, Germany, India, Indonesia, Japan, Taiwan, Russia, South Korea, UK, USA, Canada, Italy, and Spain.
The remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market consists of revenues earned by entities by providing services such as low-latency data transfer, high-throughput networking, congestion control and traffic management, and scalable distributed training support. The market value includes the value of related goods sold by the service provider or included within the service offering. The remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market includes sales of high-speed Ethernet adapters, AI cluster interconnects, RDMA networking switches, data center network solutions, and high-performance GPU networking modules. Values in this market are 'factory gate' values, that is, the value of goods sold by the manufacturers or creators of the goods, whether to other entities (including downstream manufacturers, wholesalers, distributors, and retailers) or directly to end customers. The value of goods in this market includes related services sold by the creators of the goods.
The market value is defined as the revenues that enterprises gain from the sale of goods and/or services within the specified market and geography through sales, grants, or donations in terms of the currency (in USD unless otherwise specified).
The revenues for a specified geography are consumption values: revenues generated by organizations in the specified geography within the market, irrespective of where the goods are produced. They do not include revenues from resales further along the supply chain, whether resold as-is or incorporated into other products.
The remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market research report is one of a series of new reports from The Business Research Company that provides RoCE for AI workloads market statistics, including industry global market size, regional shares, competitors with a RoCE for AI workloads market share, detailed market segments, market trends and opportunities, and any further data you may need to thrive in the RoCE for AI workloads industry. This market research report delivers a complete perspective of everything you need, with an in-depth analysis of the current and future scenario of the industry.
Remote Direct Memory Access Over Converged Ethernet (RoCE) For Artificial Intelligence (AI) Workloads Market Global Report 2026 from The Business Research Company provides strategists, marketers and senior management with the critical information they need to assess the market.
This report focuses on the remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market, which is experiencing strong growth. The report gives a guide to the trends which will be shaping the market over the next ten years and beyond.
Where is the largest and fastest-growing market for remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads? How does the market relate to the overall economy, demography, and other similar markets? What forces will shape the market going forward, including technological disruption, regulatory shifts, and changing consumer preferences? The remote direct memory access over converged ethernet (RoCE) for artificial intelligence (AI) workloads market global report from The Business Research Company answers all these questions and many more.
The report covers market characteristics, size and growth, segmentation, regional and country breakdowns, total addressable market (TAM), market attractiveness score (MAS), competitive landscape, market shares, company scoring matrix, trends and strategies for this market. It traces the market's historic and forecast market growth by geography.
Added Benefits are available on all list-price licence purchases, to be claimed at time of purchase. Customisations are within report scope and limited to 20% of content, with consultant support time limited to 8 hours.