Why Agentic AI needs standards and best practices

Antoni Kozelski
CEO & Co-founder
March 12, 2026

The agentic AI market is on track to reach $45 billion by 2030, yet up to 95% of AI pilots never reach production. OpenAI’s Frontier Alliance, formed with BCG, McKinsey, Accenture, and Capgemini, and the Linux Foundation’s Agentic AI Foundation both signal the same conclusion: the central obstacle is no longer model capability; it is governance, workflow integration, and shared standards. This article explains what those gaps are, why they matter for mid-market organisations, and what principled standardisation looks like in practice.

What is an AI Agent?

To understand why governance is so pressing, it helps to be clear about what distinguishes an AI Agent from a traditional AI system.

Traditional AI performs the one task it was designed for, such as image recognition, text generation, or classification, and waits to be prompted for each action. The system has no influence over what happens next.

An AI Agent is an entity that can perform tasks, decide what to do next, and execute actions, typically within a defined environment. A practical example: the order recommendation and completion agent we built for Mixam handles buyer questions about paper weight, cover finish, and size during a live conversation before completing the order. What separates it from a standard language model is its ability to query internal company data, carry state across the session, and take action, all without human intervention.

A large language model might suggest what to eat for lunch. But an agentic system checks dietary records, composes a nutritious meal plan, finds a supplier, and completes the purchase. The capability difference is substantial and so is the governance requirement.
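The loop described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not any real framework: the tool names, the fixed-order policy in `decide_next_step`, and the lunch scenario are all hypothetical stand-ins for what a production agent would do by querying an LLM and internal systems.

```python
# A minimal sketch of an agent loop: the system carries state across steps,
# consults data sources ("tools"), and acts without being re-prompted.
# All names here are illustrative, not a real API.

def run_agent(goal, tools, max_steps=5):
    """Toy deliberate-act loop: pick a tool, record the result, repeat."""
    state = {"goal": goal, "history": []}
    for _ in range(max_steps):
        step = decide_next_step(state, tools)    # a real agent would ask an LLM here
        if step is None:                         # nothing left to do: goal reached
            break
        result = tools[step](state)              # execute the chosen action
        state["history"].append((step, result))  # carry state across the session
    return state

def decide_next_step(state, tools):
    # Stub policy: run each known tool once, in a fixed order.
    done = {name for name, _ in state["history"]}
    for name in ["check_diet", "compose_meal", "place_order"]:
        if name in tools and name not in done:
            return name
    return None

# Stub tools standing in for internal data queries and external actions.
tools = {
    "check_diet":   lambda s: "no peanuts",
    "compose_meal": lambda s: "salmon bowl",
    "place_order":  lambda s: "order confirmed",
}

final = run_agent("buy lunch", tools)
print([name for name, _ in final["history"]])  # → ['check_diet', 'compose_meal', 'place_order']
```

The structural point is the loop itself: a plain language model produces one response per prompt, while an agent decides, acts, observes, and decides again, which is exactly why its governance surface is larger.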

The problem with missing standards

The bottlenecks surrounding effective agentic AI implementation have shifted from a question of model capabilities to one of governance and standards.

“There is far more demand for enterprise AI than any one company could address on its own.”
– Denise Dresser, OpenAI’s chief revenue officer

The industry at large is now recognising that AI governance and implementation capacity are the limiting constraints, rather than model quality. We cover this briefly in our response to Deloitte’s **“State of AI in the Enterprise”** report, but below you can find a breakdown of the specific failure modes that best practices directly address.

    The abundance of unverified expertise

Since the public release of large language models in late 2022, the market has been flooded with self-declared practitioners. Without external verification (certifications, foundation membership, published work), buyers cannot distinguish genuine engineering capability from surface-level familiarity.

    Roughly half of the projects we take on at Vstorm are rescue missions: situations where a prior vendor, almost certainly acting in good faith, could not deliver because they lacked the specialised knowledge that agentic systems require. OpenAI’s decision to certify dedicated practice teams within its Frontier Alliance partners reflects the same recognition: verified competence matters.

    Agentwashing

Agentwashing is the agentic AI equivalent of greenwashing, in which a conventional automation system is marketed as an AI Agent to attract investment or justify a premium price. The SEC’s enforcement action against Presto Automation illustrates the risk: the company claimed its order-taking was fully automated, while the majority of orders required human intervention from a third party. Standards create a shared definition of what “agentic” actually means and enforce accountability when that definition is not met.

    Regulatory exposure

The EU AI Act is the most prominent current example of how regulation follows technology. GDPR fines have already reached billions of euros, and comparable enforcement in the agentic AI space is a realistic near-term prospect. Organisations that have adopted shared standards and documented governance frameworks are in a better position to demonstrate compliance when regulators arrive. The alternative is retrofitting governance onto deployed systems, which is significantly more expensive and riskier.

    Technology fragmentation

Without agreed standards, every implementation team makes independent technology choices. The result is a landscape of incompatible components, vendor lock-in, and maintenance costs that compound over time. Standards do not eliminate choice; they establish a shared baseline that reduces the cost of interoperability and makes it easier to swap components, including underlying language models, as the field evolves.
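What that shared baseline buys in practice can be shown with a small sketch: application code written against an agreed interface rather than one vendor’s SDK. The `ChatModel` protocol and the provider classes below are hypothetical, not any real standard or library.

```python
# Illustrative only: code that depends on a small shared interface can swap
# the underlying model provider without rewriting application logic.

from typing import Protocol

class ChatModel(Protocol):
    """The agreed baseline: any provider exposing complete() is acceptable."""
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"A:{prompt}"   # stand-in for one vendor's model

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"B:{prompt}"   # stand-in for a competing vendor's model

def answer(model: ChatModel, question: str) -> str:
    # Application code depends only on the interface, so swapping the
    # underlying model is a one-line change, not a migration project.
    return model.complete(question)

print(answer(ProviderA(), "hello"))  # swap in ProviderB() with no other edits
```

Without such a baseline, each integration is bespoke and the switching cost compounds, which is exactly the lock-in the paragraph above describes.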

    Vstorm’s internal answer to this problem is our TriStorm methodology, which provides a consistent framework for scoping, building, and deploying agentic systems. It is the internal equivalent of a standard, but it is built on our proprietary know-how, established over years of project implementation experience. Not every organisation has the time or depth to develop an equivalent from scratch.

    Following a proven model

    The challenge of setting standards for a fast-moving technical field is not new and models for overcoming it are already out there. The Linux Foundation, whose membership spans thousands of organisations and whose projects underpin critical global infrastructure, has taken the first steps in bringing a standard governance model to agentic AI.

    The Linux Foundation announced the formation of the Agentic AI Foundation, with platinum members including Amazon Web Services, Anthropic, Block, Bloomberg, Google, Microsoft, and OpenAI. Vstorm is proud to be the first AI consultancy accepted as a member — a position that allows us to contribute to standard-setting from the practitioner’s perspective and to ensure that the standards being developed reflect the realities of enterprise deployment, not only research-lab conditions.

OpenAI’s Frontier Alliance reflects the same need; its chief revenue officer explained why large consulting firms were chosen as partners: they bring deep knowledge of how enterprises actually operate, and no single organisation can meet demand alone.

    The same logic applies to standard-setting. Practitioners who have moved agentic systems through the full journey from pilot to production have observations that researchers and platform vendors do not. Both bodies are stronger for including them.

    What the convergence means for mid-market competitors

    The Frontier Alliance and the Agentic AI Foundation are both responses to the same diagnosis: the barrier to gaining value from agentic AI is no longer in what the models can do. It is whether organisations can integrate agents into real high-value workflows, govern their behaviour, and move from experimentation to scaled operations.

    “AI alone does not drive transformation. It must be linked to strategy, built into redesigned processes, and adopted at scale with aligned incentives and culture.”
– Christoph Schweizer, CEO of BCG

For mid-market organisations, this creates an obvious tension. The large consulting firms in the Frontier Alliance are primarily serving enterprise clients. The standards being established by the Agentic AI Foundation will be broadly applicable. But the capacity to act on those standards, to translate them into deployed, production-grade systems that integrate with existing infrastructure and comply with emerging regulation, requires a partner who has done it at the mid-market scale, not only in large enterprise engagements.

That is the position we occupy at Vstorm. Our work is not theoretical; it is grounded in proven practice. The Mixam engagement is one of more than 30 production deployments we have delivered at Vstorm, and TriStorm is our field-tested answer to the governance and methodology problem that the Frontier Alliance and the Agentic AI Foundation are now addressing at industry scale. Our early inclusion in the foundation means we bring that field experience directly into the standard-setting process.

    Summary

    Agentic AI has genuine transformative potential. The ability to replace form-filling and menu navigation with a direct conversation, or to replace a manual multi-step process with an agent that executes it end-to-end, is not speculative. We have built and deployed these systems.

The Agentic AI Foundation and the Frontier Alliance both signal that the technology is maturing past the experimental phase. Standards are forming. Governance expectations are being set. AI governance is shifting from an optional consideration to a prerequisite for enterprise adoption.

For organisations that engage with that shift proactively (adopting principled frameworks, working with partners who build to open standards, and treating workflow redesign as integral to any agentic deployment), the probability of joining the 5% of initiatives that reach production increases substantially. That is the practical gain of taking standards seriously.

    Ready to see how standardized agentic AI can improve your business?

    Meet directly with our founders and PhD AI engineers. We will demonstrate real implementations from 30+ agentic projects and show you the practical steps to integrate them into your specific workflows—no hypotheticals, just proven approaches.

