Market Research Report

IDC's Worldwide Accelerated Computing Platforms Taxonomy, 2019

Published by: IDC | Product code: 910908
Content info: 18 Pages
Delivery time: 1-2 business days
Published: September 12, 2019

This IDC study provides an overview of key enterprise-focused accelerated computing platform definitions. "Accelerated computing platforms have become a dominant segment of the worldwide server market. Server vendors, accelerator manufacturers, and cloud providers have all been adding acceleration technologies such as GPUs and FPGAs to their infrastructure solutions portfolios," said Peter Rutten, research director, Infrastructure Systems, Platforms and Technologies Group at IDC. "These accelerators are designed to overcome fundamental limitations of general-purpose processors for highly parallelized machine learning/artificial intelligence, analytical, and other data- and/or compute-intensive applications."

Table of Contents
Product Code: US45473719

IDC's Worldwide Accelerated Computing Platforms Taxonomy

Accelerated Computing Platforms Taxonomy Changes for 2019

Taxonomy Overview


  • Deployment Location
    • Traditional IT and Private Cloud - On- and Off-Premises
      • Private Cloud Deployments
    • Public Cloud - On- and Off-Premises
      • Accelerated Public Cloud Instances
      • Accelerator-Based Clouds
    • Public Cloud On-Premises (Service Delivery Edge)
    • Edge (IT, OT, and CT)
  • Accelerated Computing Platform Architecture
    • Integrated Platforms
    • Server Vendor (OEM) Integrated
    • VAR or SI Integrated
    • Discrete Unit (Add-On Card)
  • Accelerator Design
    • Accelerator Architecture
      • Graphics Processing Unit
        • Advanced Micro Devices
        • NVIDIA
        • Intel
      • Manycore Microprocessor
      • Coprocessor
      • Field-Programmable Gate Array
      • Application-Specific Integrated Circuit
      • Hybrid
    • Accelerator/Platform Interconnect
      • Peripheral Component Interconnect Express
      • NVLink (NVIDIA)
      • Infinity Fabric (AMD)
      • Compute Express Link
      • Other Emerging Interconnect Standards
    • Accelerator Form Factor
      • Discrete (Peripheral Component Interconnect Express Based)
      • Discrete (Peripheral Component Interconnect Mezzanine Based)
      • Discrete (Mobile Peripheral Component Interconnect Express Module Based)
      • Integrated on Board
      • Integrated System on a Chip
      • Open Compute Project Open Domain-Specific Architecture
      • FPGA-Specific Form Factors
    • Accelerator Memory Type
      • Synchronous Graphics Random Access Memory Double Data Rate
      • High-Bandwidth Memory
      • Embedded Dynamic Random Access Memory
      • Synchronous Dynamic Random Access Memory
      • Resistive Random Access Memory
    • Accelerator Thermal Design Power
    • Accelerator Core Count
    • Floating-Point Performance
      • Dynamic Range and Precision
      • Computational Performance
      • Comparing Accelerator Performance
    • Cooling
      • Active Cooling
      • Passive Cooling
  • Accelerated Computing Software
    • Accelerator-Specific SDKs and APIs
      • Graphics Virtualization
    • Open Standards-Based SDKs and APIs
      • Open Computing Language
    • Other SDKs and APIs
  • Workloads and Use Case Segmentation
    • Server Workloads
    • Embedded Workloads
    • Industry-Specific Use Cases
      • Artificial Intelligence Use Cases
      • Traditional Accelerated Server Use Cases
      • Data Analytics Use Cases
      • Additional Use Cases
        • Datacenter Hyper-Customization
      • Use Case to Server Workloads Mapping

Learn More

  • Related Research
  • Synopsis