Solving the (somewhat solved) “Implementation Gap”

Antoni Kozelski
CEO & Co-founder
Bartosz Adam Gonczarek
Vice President, Co-founder
March 6, 2026

OpenAI’s announcement of the Frontier Alliances, a set of multi-year partnerships with McKinsey, BCG, Accenture, and Capgemini, confirms what mid-market operators have faced for years: the bottleneck for agentic AI is not model intelligence, it is operationalisation. This article examines what the announcement reveals, where large consultancies fall short in practice, and how the TriStorm methodology Vstorm applies in mid-market engagements closes the gap that enterprise-focused partnerships are only now beginning to address.

When Reuters reported on OpenAI’s new Frontier Alliances on February 23, 2026, the headline was about partnership scale. But the subtext was an open acknowledgement of a problem the industry has been slow to name: deploying agentic AI inside real organisations is harder than building the models that power it.

We at Vstorm have been working on this problem from the mid-market side for several years. The Frontier Alliances announcement merits commentary not because it changes the AI landscape, but because it confirms an implementation gap we have long observed and worked to bridge.

What the OpenAI announcement actually says

The Frontier Alliances pair OpenAI’s forward-deployed engineering teams with a “Big Four” of consulting firms: BCG and McKinsey handle strategy and operating model redesign, while Accenture and Capgemini take on systems integration, data architecture, and lifecycle support. Reuters’ report on the announcement states that “OpenAI deepens partnerships with four consulting giants to push enterprise AI beyond pilot,” acknowledging, between the lines, the existing bottlenecks to harnessing AI’s practical value in enterprises.

This admission carries weight. Further illustrating this, OpenAI has also hired Denise Dresser, former CEO of Slack, as Chief Revenue Officer, a signal that enterprise adoption has moved from aspiration to a commercial priority. As CNBC noted, Fernando Alvarez, Capgemini’s chief strategy and development officer, said OpenAI is counting on its Frontier Alliances to help roll out its technology at scale.

“It’s not an easy task. If it was a walk in the park, OpenAI would have done it by themselves, so it’s recognition that it takes a village.”
– Fernando Alvarez, Chief Strategy and Development Officer of Capgemini, to CNBC

What we have learned

Here are five things we have learned since the official announcement on February 23rd:

1. OpenAI is pairing its ‘forward-deployed engineers’ with consulting firms to help integrate AI agents into business processes.
2. OpenAI is making enterprise adoption a priority for the AI lab; earlier cooperation between OpenAI and consulting firms produced mostly experiments and proofs of concept (PoCs).
3. This shift followed OpenAI’s hiring of former Slack CEO Denise Dresser, who has experience in enterprise adoption, with the aim of charting a path to AI adoption for OpenAI. Dresser believes that companies working with consulting firms over time “will become self-sufficient on their own and ultimately be able to take their transformation forward” (Reuters).
4. OpenAI identified that the bottleneck for AI value is not model intelligence but operationalisation. That is why the OpenAI Frontier Alliances aim to:
  • Help McKinsey and BCG, as ‘strategic design’ firms, perform the operating model redesign, “rewiring” how a business functions to accommodate AI
  • Support Accenture and Capgemini in doing the ‘heavy lifting’ of data architecture, cloud security, and wiring the new frontier into legacy systems
5. “OpenAI just told investors its agents will replace Salesforce, Workday, Adobe, and Slack,” Mimi Leinbach, MBA, contributing member at Women Defining AI and former SAP principal product manager, wrote on LinkedIn. “Enterprise software stocks dropped 3-9% on the news that a ‘unified semantic layer for agents’ is coming to replace enterprise systems.”

The intended outcome of this initiative is to shift AI from being used as a copilot, a productivity tool enhancing existing workflows, to AI operating as a qualified coworker capable of executing end-to-end tasks autonomously.

What our experience at Vstorm says

Let us break down the five points above in the context of what our AI Engineering Consultancy experience has shown:

Point 1: Help is needed to integrate AI agents into businesses

We agree that enterprises need help moving beyond subscribing to large language models and using them in copilot mode. The mid-market companies Vstorm works with have already demonstrated how to do just that.

You can read more about our experience in our commentary on Deloitte’s “State of AI in the Enterprise,” published earlier this year.

Point 2: Going beyond PoCs requires a push

It is encouraging that going beyond proof of concept is becoming a priority. The mid-market companies that Vstorm works with already illustrate how to do just that, with timeframes of 5-10 months from project launch to production-grade agentic AI.

Our Mixam case study provides an example. The London-based on-demand printing platform came to us with a clear hypothesis: that agentic AI could advise new customers on order options. We helped them move through architecture validation, engineering, A/B testing, and production deployment in a single TriStorm cycle. Four further iterations have followed, each expanding the system’s capabilities. You may read the full case study here.

The Frontier Alliances report confirms that enterprises are struggling to move at a comparable pace. From our engagements with enterprise-class customers, we can confirm this directly. But the exact reasons for the enterprise slowdown run deeper than tooling and deserve further illumination, which we intend to provide in the near future.

Point 3: Cooperation with consultancies as the path for enterprises to become self-sufficient on their transformation journey

The announcement suggests that consulting partnerships will help organisations become self-sufficient in their AI transformation. This strikes us as wishful thinking. Big consultancies do work with enterprises on their transformation journeys, producing directions and suggestions that we often challenge in our work with our down-to-earth methodology.

Up to this point, the results of their advisory work have, from our perspective, remained detached from what agentic AI technology offers and how it should be adopted. Their mixed track record of closing the gap between direction and production-grade implementation reflects this, a point that their own clients often raise.

The Frontier Alliances introduce closer technical alignment between OpenAI’s engineers and these consultancies. Whether that translates into genuine capability transfer, or into consulting engagements that remain dependent on OpenAI’s forward-deployed teams, will determine the real value. The planned certification programme is a step in the right direction, but certification is not experience.

Point 4: OpenAI identified that operationalisation is a bottleneck in getting value from AI

We agree. The models have been good enough for productive use since at least mid-2025. At Vstorm, we have validated this across 26 deployed solutions using OpenAI as the reference model. What remains unknown is how much time the Big Four consultants and engineers will need to adapt to the practical demands of operationalisation: connecting agents to live systems, enforcing guardrails that prevent hallucination and scope drift, building observability so that performance can be measured and improved, and accounting for the knowledge transfer that determines whether teams actually use what has been built.

Our continuous recruitment for consultancy and engineering roles at Vstorm makes us cautious in this regard: last year we did not employ a single candidate with past experience in the Big Four, not because we did not consider it, but because the gap between what is needed and what their experience offered seemed too large.
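To make “guardrails” and “observability” concrete, here is a minimal sketch of the pattern, with hypothetical action names and structures (this is an illustration of the idea, not Vstorm’s production code): every agent action passes through a scope check before execution, and every outcome is logged with timing so performance can be measured across iterations.

```python
import time

# Scope guardrail: the agent may only perform actions on this approved list.
ALLOWED_ACTIONS = {"lookup_order", "quote_price", "escalate_to_human"}

def run_agent_step(action: str, payload: dict, log: list) -> dict:
    """Execute one agent action with scope enforcement and observability.

    Hypothetical sketch: a real deployment would invoke an LLM or tool here.
    """
    start = time.perf_counter()
    if action not in ALLOWED_ACTIONS:
        # Scope-drift guardrail: refuse any action outside the approved set.
        result = {"status": "blocked", "reason": f"action '{action}' not allowed"}
    else:
        result = {"status": "ok", "action": action, "payload": payload}
    # Observability: record what happened and how long it took, so that
    # performance can be measured and improved over successive cycles.
    log.append({
        "action": action,
        "status": result["status"],
        "latency_ms": (time.perf_counter() - start) * 1000,
    })
    return result
```

In a production system the allow-list, the logging sink, and the action handlers would all be more elaborate, but the operationalisation work the announcement describes is largely about building and maintaining exactly this kind of scaffolding around the model.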

Point 5: OpenAI’s aim to replace SAP and Salesforce by offering a unified semantic layer for AI agents

The slow pace of agentic AI adoption in enterprises is not, at this point, a result of missing technology. The frameworks and models now at our disposal are more than capable of driving meaningful results, as our customer outcomes demonstrate. Adding another platform layer is unlikely to accelerate adoption on its own, even if it proves helpful.

What could accelerate adoption is a change in how these transformations are led: starting from operational problems rather than technology capabilities, deploying incrementally rather than attempting full-organisation rollout, and building internal ownership from day one rather than enforcing dependency on an external partner.

“AI alone does not drive transformation. It must be linked to strategy, built into redesigned processes, and adopted at scale with aligned incentives and culture.”
– Christoph Schweizer, CEO of BCG

We agree. The question is whether large consultancies, historically strong in strategy and weak in hands-on implementation, will close that gap with a certification programme and co-deployed engineers, or whether mid-market companies, already moving faster, will continue to set the pace.

Learn more about how implementing AI Agents can impact your business.

Meet directly with our founders and PhD AI engineers. We will demonstrate real implementations from 30+ agentic projects and show you the practical steps to integrate them into your specific workflows: no hypotheticals, just proven approaches.

Mid-market sets the pace with the TriStorm methodology

The challenges OpenAI’s Frontier Alliances are now attempting to solve at enterprise scale are challenges that we at Vstorm have already been solving in mid-market engagements through our TriStorm methodology. TriStorm is our end-to-end framework for taking organisations from their starting point, whatever that may be, to production-grade agentic AI. It moves through three phases, each building on the last:

Phase 1: Demystification

High expectations are boiled down into executable steps. Use cases are identified and ranked. The client team’s understanding of what agentic AI can and cannot do in their specific context grows. This is not a strategy document exercise: it ends with a prioritised backlog and a clear first use case ready for validation.

Phase 2: Value creation

The selected use case is prepared for agentic deployment. Experiments are run, a proof of concept is built, and results are validated against real operational data before any commitment to full deployment is made. This phase is designed to surface integration complexity, guardrail requirements, and edge cases before engineering investment scales up.

Phase 3: Transformation

Validated prototypes are pushed into production. Performance is measured, improvements are made, and successful deployments serve as the foundation for further agentic expansion. Critically, each cycle builds on the last: agents are deployed one at a time, and the scaffolding from earlier cycles accelerates subsequent ones.

Two principles govern this methodology. First, it is iterative: we do not attempt full-organisation rollout from the outset. Second, it prioritises simplicity: simpler use cases are tackled first, with more complex multi-agent systems built on the proven foundation those early deployments create.

The Mixam engagement illustrates both principles. What began as a single-agent proof of concept for guiding new customers through order options has evolved, across four TriStorm iterations, into a multi-agent system that is now a central part of Mixam’s customer experience. Each iteration was grounded in the measured performance of previously successful implementations.

The mid-market advantage

The Frontier Alliances are aimed at enterprises: organisations with the budget for multi-year consulting engagements and the organisational complexity that makes change management a major programme in and of itself.

Mid-market operators, with typically 150 to 1,000 employees and established infrastructure but without dedicated AI engineering teams, face a different set of constraints. They cannot field multi-year transformation programmes. They need results within a single fiscal year. And they often have operational complexity (cross-departmental workflows, specialised domain knowledge, multiple data sources) that off-the-shelf tools cannot address.

The gap in implementation speed is not accidental. Mid-market companies move faster because decision-makers are closer to operations, procurement cycles are shorter, and the organisations are agile enough to integrate new systems without navigating the layers of legacy systems that slow enterprise adoption. The TriStorm methodology is designed for exactly this environment: it is iterative, outcome-focused, and built around operational problems rather than technology capabilities.

The Frontier Alliances announcement is a useful signal that the operationalisation problem is now recognised at the highest levels of the industry. For mid-market companies, that recognition changes little about what works. The path from ambition to production-grade agentic AI remains the same: start with a specific operational problem, validate before you build at scale, deploy incrementally, and measure everything.

Ready to see how the TriStorm process can transform your business?

Last updated: March 6, 2026
