PUBLISHER: IDC | PRODUCT CODE: 1919567

IDC PeerScape: Practices for Leveraging AI, ML, and Agentic Testing in Adoption Strategies for AI Assurance and Business Optimization

PUBLISHED:
PAGES: 9
DELIVERY TIME: 1-2 business days
PDF (Single User License)
USD 7500

This IDC PeerScape describes practices for speeding the adoption of AI, ML, and agentic testing to address five challenges: cultural resistance and role redefinition; lack of standards and tool sprawl; technical debt and a "build it" bias; security and compliance barriers to AI; and evolving AI maturity and adaptive processes. The practices draw on conversations with several peer organizations, including a global bank, a global software and industrial simulation provider, and an international services and software company.

"One of the biggest barriers to software deployments and a core priority for organizations continues to be software quality, since the high costs of software failure are prohibitive," said Melinda Ballou, research director, AI Assurance, ALM, Quality and Portfolio Strategies at IDC. "Increasing complexity and dynamically changing AI and agent technology demand responses that are thoughtful, astute, and rapid. This IDC PeerScape is the first in a series to help create an AI transition path for organizations by leveraging the experience of peer organizations. As we move into 2026, collaboration across enterprises to benefit from common experiences and AI strategies is more vital than ever."

Product Code: US50214823

IDC PeerScape Figure

Executive Summary

Peer Insights

  • Practice 1: Establish COE, Education, and Hands-On Workshops Underscoring AI Benefits and Reframe Roles to Position AI as an Enabler With Human Oversight, Rather Than a Threat
    • Challenge
    • Examples
    • Guidance
  • Practice 2: Combine Tools for Test Data Access, Management, and Execution; Build Shared Libraries and Metrics; Focus on API Tests; and Ensure Traceability Across Requirements, Changes, and Test Inventory
    • Challenge
    • Examples
    • Guidance
  • Practice 3: Evaluate TCO; Choose Vendors for Partnering Well, Excellent Road Maps, Support, and Governance; and Experiment in Sandboxes Under COE Oversight
    • Challenge
    • Examples
    • Guidance
  • Practice 4: Engage Security/Legal Teams Early; Define Cogent COE AI Policies, Processes, and Multistage Approvals; and Choose Auditable, Compliant Platform Standards
    • Challenge
    • Examples
    • Guidance
  • Practice 5: Target Initial Use Cases; Leverage Third-Party Support to Transition; and Harness Engaged, Knowledgeable Agile and AI-Informed Staff In-House for AI Assurance
    • Challenge
    • Examples
    • Guidance
  • Next Steps