Why 95% of AI projects fail – and what you can do about it
A new study from the Massachusetts Institute of Technology has revealed that the vast majority of corporate generative AI pilots are failing to generate meaningful financial returns, despite widespread investment.
The report – The GenAI Divide: State of AI in Business 2025 – published by MIT’s NANDA initiative, found that 95% of pilots stall at early stages and never progress to scaled adoption. Only 5% of projects achieved rapid revenue growth.
In this blog, we explore the key findings, the potential pitfalls and how you can ensure your next AI implementation succeeds.
Key findings of the report
The executive summary makes for sobering reading.
It states: “Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result: 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.
“Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance… Most (projects) fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.”
The report was based on a multi-method research design that included a systematic review of over 300 publicly disclosed AI initiatives, structured interviews with representatives from 52 organisations, and survey responses from 153 senior leaders collected across four major industry conferences, giving a comprehensive view across various verticals and business sizes.
Why AI projects fail – and what you can do about it
1. It’s not AI’s fault…
Technology doesn’t necessarily ‘fix’ misalignment across a business – it only amplifies it. Automating a weak process doesn’t suddenly make the process stronger – it just speeds it up.
Aditya Challapally, the lead author of the report and a research contributor to project NANDA at MIT, said: “Some large companies’ pilots and younger startups are really excelling with generative AI. (Startups led by 19- or 20-year-olds, for example) have seen revenues jump from zero to $20 million in a year. It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools.”
Rather than prompting teams to step back and review a process to discover the flaws within it, AI can create a pace that means damage is done before people realise what’s happened. The report confirms this: many AI projects fail because teams don’t amend processes first, don’t gather feedback from internal teams, and deliver tools that don’t fit the workflow or practical needs of the business’s end users.
2. Partner with the experts
One of the report’s key statistics is that external partnerships reach deployment about twice as often (around 67% of the time) as internally built efforts (roughly 33%).
Ash Wenn, Commercial Director at Perspicuity, said: “When considering AI implementations such as Copilot, the strongest results come when organisations take a holistic approach. This means equipping employees with the skills to communicate effectively with AI, setting clear expectations of what Copilot can deliver, and providing training so teams understand both the art of writing a good prompt and the technical capabilities of the tool. By harnessing employees’ deep knowledge of the business and combining it with the external expertise of consultants who know how to deliver successful implementations, organisations achieve a targeted approach with tangible outcomes.
“As part of this, we run persona-based and targeted workshops to uncover opportunities for greater efficiency, whether through Copilot itself or through automation.”
Planning an AI project? Get help from our experts.
Click here to speak to our team
3. People are already using AI tools (just not officially)
The report also highlighted the widespread use of ‘shadow AI’, with employees using unsanctioned tools like ChatGPT.
Employees at over 90% of surveyed companies already use personal AI tools like ChatGPT at work, while only about 40% of companies have purchased official licenses.
Shadow AI is the term used to describe employees or users adopting AI tools on their own, without getting the green light from their company’s IT department. It’s often done with good intentions, like trying to boost productivity or solve problems faster, but because it happens under the radar, it can create serious risks around data security, compliance and system integrity.
Conclusion
MIT’s report confirms that AI itself isn’t the issue. Just because the majority of projects ‘fail’, in the sense that businesses can’t prove an impact on their P&L, doesn’t mean AI should be avoided. The challenge comes down to aligning the whole company behind a single strategy, so that barriers around company culture, organisational process and misaligned execution can be avoided, and clear desired outcomes can be tracked (along with the appropriate metrics to measure them).
It also highlights the benefits of partnering internal users – who’ll have qualitative data on practical end usage, along with specific company insights – with external implementation experts to get the best results. Without that combined experience, it’s easy to overlook integration challenges, cultural impact, or the effect of outdated workflows and processes.
Got an AI project in mind? Partner with our experts. Click here to arrange a call.