Why 95% of AI Projects Fail and What Comes Next
- Brian Lakamp
- Sep 25
- 7 min read
Moving from Science Projects to Scalable Platforms
AI is everywhere in the headlines and in boardrooms, but many companies aren’t yet seeing results. A recent MIT study found that 95% of AI projects fall short of their ROI targets. That’s a staggering number, and even if it’s overstated, it remains directionally notable.
A number of these failures are tied to “splash over substance” efforts, corporate science projects, and enterprise initiatives driven by boardroom anxiety around perceived inaction in a time of extreme change. The report highlighted that about half of the projects were tied to sales and marketing, and that internal-only projects had particularly high failure rates.
This high internal failure rate suggests a “Ready. Shoot. Aim.” application of new AI technology without sufficient consideration. Many of these early applications were undoubtedly clumsy attempts to graft AI onto existing processes without understanding AI’s strengths (and weaknesses). They also failed to reconsider and reshape processes in alignment with the new capability. The MIT report further supports this conclusion with the finding that projects undertaken with specialized vendors had far higher success rates.
A knee-jerk reaction to these findings might lead one to conclude that AI does not and will not live up to its promise.
That’s a mistake.

Lessons From the Early Web and Mobile Eras
Let’s start by putting things in context. Think back to the mid to late 1990s, when companies scrambled to launch their first websites. Many of those sites were digital brochures and marketing forays that were clunky, static and underwhelming. They weren’t built to transform customer engagement. They were built to try to meet the moment by checking a box.
In the mobile era, enterprises rushed to build apps that were little more than a reproduction of the website, but in miniature. At that point, few understood how to unlock transformational evolution that changed customer offerings or drove efficiency by leveraging location awareness, push notifications, and real-time interactivity.
In every major platform shift, the early wave of enterprise adoption delivers more fluff than fruit. There’s a rush of activity, followed by disappointment, and then the real breakthroughs emerge.
AI is walking that same path today.
Why AI Projects Fail
Let’s dig into reasons why AI projects have stumbled and identify the recurring themes. That will help us understand the next phase of AI development and adoption, where we’ll start to see high-ROI successes proliferate.
FOMO-Driven Projects. Companies often spin up AI pilots because they feel they need to, and not because they’ve identified the problems most worth solving, those with meaningful ROI potential. (The MIT study underscores that, noting that the bulk of AI initiatives centered around sales and marketing, while the biggest ROI opportunities are tied to back-office operations.)
Lack of Clarity and Context. AI is incredibly powerful, but without focused instructions that are grounded in industry-specific and application-specific workflows, it’s a race car without a track. The horsepower is there, but there’s nowhere to unleash it safely.
Data Cleanliness. AI models are what they eat. Poor input data leads to poor output quality. That is true both for upfront training and for inference requests. More needs to be invested in data sourcing, data normalization, and data cleansing in order to achieve accurate and impactful results.
Hallucinations. Even with perfect context and data as inputs, hallucinations will occur because LLMs are probability engines. LLMs are not deterministic, and they need guardrails and governance to manage output quality.
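To make the guardrail idea concrete, here is a minimal sketch of one common pattern: wrapping a probabilistic model in a deterministic validator that rejects output failing basic business checks, so a hallucinated value never reaches downstream systems. The function name, fields, and limits are illustrative assumptions, not any specific product’s API.

```python
import json

def validate_invoice(raw: str):
    """Deterministic guardrail around probabilistic model output.

    Accepts the model's response only if it parses as JSON and satisfies
    basic business constraints; returns None otherwise so the caller can
    retry or escalate to a human. Field names and limits are hypothetical.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model produced malformed output
    if not isinstance(data.get("invoice_id"), str):
        return None  # missing or mistyped identifier
    amount = data.get("amount_usd")
    if not isinstance(amount, (int, float)) or not (0 < amount < 1_000_000):
        return None  # hallucinated or out-of-policy amount
    return data

# A well-formed response passes; a hallucinated negative amount is rejected.
good = validate_invoice('{"invoice_id": "INV-7", "amount_usd": 120.5}')
bad = validate_invoice('{"invoice_id": "INV-8", "amount_usd": -40}')
```

The point is not the specific checks. It is that quality control lives outside the model, in deterministic code that the business controls.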
Integration Gaps. Beyond the LLMs themselves, AI tooling is still nascent. Many enterprise efforts are still initiatives that run in isolation. Integrations that elegantly bridge production systems, supporting tools, governance frameworks, and compliance requirements are not yet common. That makes success a difficult target to hit.
Underestimating Change Management. AI doesn’t just change software. It changes how people work. Without air cover from clear-eyed executive teams and a plan to manage team workflow evolution, even the best AI solution will falter. In fact, Julie Sweet, the CEO of Accenture, noted in a recent interview that “They [CEOs] recognize it’s less about the technology, and more about the willingness to truly reinvent the work, the workforce.”
Short story: many of these failures aren’t about the AI and LLMs themselves. They’re about the planning and structure around them.
Where AI Is Working
Despite the high failure rate, there are bright spots that light the path forward. The most promising successes share a trait. They are built to solve specific problems and go “unreasonably deep” on the nuances and workflows of a specific vertical.
Take Harvey, the AI legal startup. Instead of building a general-purpose assistant, Harvey focused on the workflows of law firms and corporate legal teams. By embedding itself in the drafting, research, and contract review processes, with controls and guardrails unique to that vertical, Harvey created something powerful that lawyers could actually trust and use… to the tune of $100M in ARR in just 3 years.
There are other meaningful AI-powered platforms that have gained traction and adoption by being hyper-focused. Cursor is a well-documented success story built on delivering incredibly powerful tooling, laser-focused on using AI to make engineers 10x more productive. Sierra, founded by Bret Taylor (creator of Google Maps, former CTO at Facebook, co-CEO at Salesforce), is leveraging AI to optimize customer operations in call centers.
We’re going to see more and more of that.
These examples underscore the point that AI is not a magic wand. It’s a tool. Its power comes when it is aimed at a specific scope with a well-defined workflow, and with the right context and guardrails.
For those of you who are interested in digging deeper into AI context and guardrails in an agentic world, I recommend following Aaron Levie on LinkedIn or X. Aaron, the CEO of Box, posts insightful, considered commentary on a daily basis, like the post below, from August 20th.
“Most enterprises will need highly tuned agents with a high degree of domain understanding, tool use, proprietary data from that industry, and access to internal data. Then they’ll need implementation hand-holding, support that is tailored to that use-case, integrations with the ecosystem partners of that industry or workflow, and so on.”
The Shift Toward Agentic Architectures
In the early internet, standards like HTML, CSS, and JavaScript gave developers a shared language. Additional frameworks and tooling evolved to make it possible to build scalable systems for enterprise. Eventually, this scaffolding enabled the rise of new UI capabilities, e-commerce, social media, and cloud services.
AI is just beginning to develop its generation of such scaffolding. The key, overarching concept is agentic architectures, which are frameworks that allow AI systems (agents) to be directed, coordinated, integrated and reviewed.
Instead of AI acting like a disconnected chatbot or a standalone proof-of-concept, agentic architectures provide:
Orchestration frameworks that define how AI agents operate, coordinate and interact with humans, other agents and tooling in structured ways.
Context repositories so enterprises have predictable, scalable tooling to convey business rules, compliance requirements, and industry norms to agents and AI engines.
Governance layers to ensure security, compliance, and reliability.
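The three layers above can be sketched in a few dozen lines. This is a hypothetical toy, assuming invented names (ContextRepository, GovernanceLayer, Orchestrator) and a stand-in function in place of a real LLM call; it is not any particular framework, just the shape of the scaffolding.

```python
from dataclasses import dataclass, field

@dataclass
class ContextRepository:
    """Context repository: conveys business rules to agents per task."""
    rules: dict = field(default_factory=dict)

    def context_for(self, task: str) -> dict:
        # Hand each agent only the rules that apply to its task.
        return {name: rule for name, rule in self.rules.items()
                if task in rule.get("applies_to", [])}

class GovernanceLayer:
    """Governance layer: reviews agent output before it takes effect."""
    def __init__(self, banned_terms):
        self.banned_terms = banned_terms

    def approve(self, output: str) -> bool:
        return not any(term in output.lower() for term in self.banned_terms)

class Orchestrator:
    """Orchestration framework: routes a task to an agent in a
    structured way, injecting context and enforcing governance."""
    def __init__(self, repo: ContextRepository, governance: GovernanceLayer):
        self.repo = repo
        self.governance = governance

    def run(self, agent, task: str) -> str:
        context = self.repo.context_for(task)
        output = agent(task, context)
        if not self.governance.approve(output):
            return "ESCALATED: output failed governance review"
        return output

# Toy "agent": in a real system this would wrap an LLM or tool call.
def refund_agent(task, context):
    limit = context.get("refund_policy", {}).get("max_usd", 0)
    return f"Approved refund up to ${limit}"

repo = ContextRepository(rules={
    "refund_policy": {"applies_to": ["refund"], "max_usd": 50},
})
orchestrator = Orchestrator(repo, GovernanceLayer(banned_terms=["unlimited"]))
result = orchestrator.run(refund_agent, "refund")  # "Approved refund up to $50"
```

The design choice worth noticing: the agent never decides its own limits or reviews its own work. Context flows in from a repository the enterprise controls, and output flows out through a governance gate, which is exactly what makes the system directed, coordinated, integrated, and reviewed.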
With these in place, AI stops being unpredictable horsepower and starts becoming a reliable engine for transformation. The migration to agentic architectures is also a fundamental, generational shift in how systems are architected. It is a megatrend that will define the next 10+ years.
Momentum Is Building
This shift beyond “AI 1.0” to agentic architectures isn’t theoretical. It’s happening today, and much more rapidly than I expected.
In November 2024, Anthropic’s release of the Model Context Protocol (MCP) laid a foundation for how AI systems can invoke tools and access datasets to exchange context and execute instructions in standardized ways.
Earlier this year, Google published A2A (agent-to-agent) communication standards that define how AI systems talk to each other, not just to humans.
OpenAI and others are continually rolling out new frameworks and tooling to support these architectures at blistering speeds, giving enterprises a practical path to move from experiments to production systems.
In fact, just last week Google announced the Agent Payments Protocol (AP2) to provide tooling around an agentic payment system.
What Leaders Should Do
If you’re an executive focused on operating a business, it is incredibly hard to stay abreast of these changes and fully internalize their consequences for decision-making.
As a result, there is a temptation for business leaders to shortcut strategy with phrases like “...we JUST need AI to...”. Resist that simplification and that counterproductive shortcut. The better move is to:
Identify the right workflow. Spend the time to identify meaningful back office workflows that are inefficient. Ask: “Where can my company reduce bottlenecks, find time, and/or improve service levels?”
Start modestly. Unless you already have success in hand, start small. Find a meaningful, but not overly complex, workflow with straightforward decisioning. Go for that first, rather than starting with something that requires an army of new agents coordinating to solve it successfully.
Assemble the right team. Your existing, in-house engineers are probably not AI-native, nor up to speed on the latest in agentic architectures. Find a partner that is sincerely and truly focused on agentic architecture and pair them with your internal team.
Invest in data cleanup. Make sure you’re laying the data foundation by structuring and cleansing your data to maximize agentic outcomes. After all, what comes out is only as good as what goes in. Don’t expect your “AI” to navigate data slop.
Start documenting context. Ask your team to start defining the rules and constraints around the target workflow you identify. In all likelihood, much of the organizational logic for that workflow is not documented anywhere. An agentic approach needs to capture that logic as input context.
Beyond these recommendations, technical leaders also should start thinking about how teaming and technical delivery will evolve. The migration to agentic architectures will be every bit as consequential as the migration to SaaS, and it will demand entirely new ways of thinking.
The Takeaway
The 95% failure rate referenced by the MIT study is not a sign that AI is overhyped or fundamentally failing on its promise. It is a natural byproduct of an early-stage revolution. We saw that with the web and mobile. Each time, the shortfalls were precursors to breakthroughs and transformation that redefined industries.
The same will be true with AI and agentic architecture. Out of today’s noise will emerge tomorrow’s standards. Out of the graveyard of pilots will emerge the architectures that underpin a decade of transformation.
The organizations that thrive won’t be the ones that chased the most demos. They’ll be the ones that identified the right starting point, selected solid partners, built the scaffolding, and harnessed AI’s horsepower with clarity and context.