Insights · IMCC · 7 min read

The AI transformation problem is not a technology problem. It's a narrative one.

$684 billion was invested in AI initiatives globally in 2025. More than 80% of it failed to deliver intended business value. The models worked. The compute existed. The failure was largely self-inflicted — and it started with the language.

The numbers are striking. The explanation is more important.

The scale of AI investment in 2025 was, by any measure, extraordinary. $684 billion committed globally to AI initiatives across enterprise, infrastructure, and research. A capital allocation of that magnitude reflects genuine conviction: boards, investors, and leadership teams concluded that AI represented a transformation opportunity significant enough to justify resources at a level previously reserved for foundational infrastructure shifts.

The returns did not follow. Over $547 billion of that investment — more than 80% — failed to deliver its intended business value. MIT's 2025 study is more specific about one component of the picture: 95% of generative AI pilots at large enterprises are failing to scale beyond proof of concept.

The instinct, when confronted with numbers like these, is to reach for a technology explanation. The models were not ready. The infrastructure was not mature. The use cases were not well-defined. Some of that is true at the margin. But it does not explain the scale or the consistency of the failure. The models, in most cases, worked. The compute existed. The technical capability that had been identified was, broadly, real.

The failure was largely a narrative one. And it was one that organisations constructed for themselves.

How the narrative problem was built

The sequence is now familiar enough to be considered a pattern. A leadership team, persuaded of the genuine potential of AI, commits to a transformation programme. To secure board approval, investor confidence, and organisational buy-in, they articulate that commitment in the language of transformation: step-change productivity, competitive reinvention, fundamentally different ways of working, delivered on a timeline that reflects urgency rather than reality.

The language is not dishonest in the sense of being fabricated. The underlying belief is usually genuine. But it is language that has been optimised for persuasion rather than accuracy — calibrated to move a room rather than to precisely represent what is technically achievable, in what timeframe, under what conditions.

That gap between what was said and what was deliverable becomes the problem. Not immediately — early in an AI programme, when the pilots are running and the possibilities feel tangible, the language of transformation sustains momentum. But as timelines extend, as scaling friction emerges, as the distance between the pilot environment and the production environment proves harder to close than expected, the people who were promised transformation begin to reassess.

They were told the organisation would be fundamentally different. It is not, yet, and the path to that outcome is longer and more complex than they were led to believe. The trust that would have given the programme the time it needed to succeed has been spent. And 54% of C-suite executives now say that adopting AI is actively creating internal division — a statistic that reflects not the difficulty of the technology, but the consequence of expectations that were never structurally achievable in the timeframes implied.

"The AI programmes that are struggling aren't struggling because the technology stopped working. They're struggling because the people who were told transformation was coming stopped believing it would. You can rebuild a technical approach. You cannot easily rebuild the internal credibility that was spent getting there."

Chief Digital Officer, global professional services firm

The trust mechanism — and why it matters more than the technology

There is a specific mechanism at work here that is worth understanding precisely, because it applies beyond AI to every significant organisational change programme.

When expectations are set that cannot be met — not through dishonesty, but through the normal optimism of transformation language — the people who were given those expectations do not simply revise them downward. They revise their confidence in the people who set them. The credibility that leadership spent to generate early momentum becomes a liability when the outcomes do not arrive on schedule. And without that credibility, the ongoing investment of attention, adaptation, and tolerance for disruption that a genuine transformation requires cannot be sustained.

This is why the narrative failure is more consequential than the technical failure. Technical problems are solvable within the programme. A compromised trust relationship with the board, the workforce, or the investors who approved the investment is not. It requires a different kind of work entirely — work that most organisations have not budgeted for and are not structured to deliver.

The organisations that have navigated large-scale AI adoption most effectively are not, in general, the ones that had the best technology or the most sophisticated implementation partners. They are the ones that set expectations which were honest about complexity, transparent about timelines, and structured around the delivery of demonstrable value at each stage rather than the promise of transformation at the end.

What honest AI communication actually looks like

The alternative to transformation language is not the absence of ambition. It is the translation of ambition into a form that can be sustained.

This requires, first, a clear-eyed assessment of what the technology can deliver in a defined timeframe — not what it might eventually deliver, not what a pilot in optimised conditions has demonstrated, but what is achievable in a production environment, with the organisation's actual data, processes, and capability, within the period being communicated.

It requires, second, a set of commitments that are specific and measurable rather than directional and qualitative. "AI will transform our operations" is a statement that cannot be evaluated, which means it can never be shown to have been kept, only felt to have been broken. "AI will reduce processing time in this function by this percentage, measurable at this point" is a commitment that can be assessed — and when it is met, builds the credibility for the next stage of the programme.

It requires, third, an honest account of what adoption actually demands from the people being asked to change. Transformation language tends to present AI adoption as something that will be done to or for an organisation. The reality is that it requires sustained effort, tolerance for disruption, and a genuine willingness to change established ways of working from the people it affects. Setting that expectation clearly, early, is harder than promising transformation. It is also the only version of the communication that holds up over time.

"The businesses getting this right have worked out that internal credibility is a resource that has to be managed as carefully as budget or timeline. You spend it when you need to, you protect it when you can, and you never spend it on a promise you are not certain you can keep."

Dominic Walters, CEO, IMCC

The compounding advantage of getting this right

The organisations that build their AI programmes on honest, precise, and staged communication are not just avoiding the failure mode that consumed the majority of 2025's $684 billion. They are building something with long-term commercial value: an internal and external reputation for delivering what they said they would deliver.

That reputation compounds. A board that has seen a programme deliver its first-stage commitments on time and on specification is a board that will approve the next stage of investment with less friction. A workforce that has been told clearly what is changing and why, and has seen that account prove accurate, is a workforce with more capacity to absorb the next change. An investor base that has received honest communication about a programme's trajectory — including its difficulties — is an investor base with more patience for the timeline the programme actually requires.

These are not soft outcomes. They are the structural conditions for transformation at scale. And they are built or destroyed by the quality of the communication that surrounds the programme — not by the quality of the technology inside it.

The models work. The compute exists. The question that will determine which organisations extract genuine value from AI at scale is not a technical one. It is whether they have the discipline to communicate about it honestly enough to preserve the trust that would give them the time to succeed.

IMCC works with organisations navigating complex technology and change programmes to build the narrative clarity and communications infrastructure that protects credibility and drives adoption. If this challenge sounds familiar, we would be glad to talk.