The Real Reason AI Projects Fail (and It's Not the Model)

November 27, 2025
2 min read
In his latest blog post, our CTO Miguel Mendoza draws on MIT’s July 2025 “GenAI Divide” report to explain the structural reasons behind stalled AI initiatives — and what it really means to design products that are ready to use AI well.

Back in the late 90s, everyone needed a website; it didn't matter what it did, just that it existed. A few years later, the same thing happened with social media — having a page was proof that your company was "real," even if it didn't actually serve your goals.

Twenty-five years later, the same logic drives the AI gold rush: companies attach a chat interface to whatever they are doing, not because they are solving a problem, but because they can. And just like those websites and social pages, most will end up abandoned.

Fast forward to July 2025: a new MIT report, “The GenAI Divide,” shows that despite massive investment, 95% of AI pilots go nowhere.

What MIT Found:

The report identifies these as the core barriers to success:

  • The Learning Gap and Lack of Adaptability: The central issue holding back AI is that most tools do not learn, retain feedback, adapt to context, or improve over time. For high-stakes work, users need a system that accumulates knowledge — a capability current tools lack.
  • Poor Workflow Integration and Alignment: Tools fail because they have brittle workflows and suffer from misalignment with day-to-day operations. Custom solutions stall due to integration complexity and a lack of fit with existing workflows. If the AI tool cannot adapt to evolving processes, organizations often revert to traditional methods, such as spreadsheets.
  • Lack of Memory and Contextual Awareness: Users abandon generic tools for mission-critical work because of their lack of memory. They fail to retain client preferences or learn from previous edits, requiring excessive manual context input in each session and repeating the same mistakes.
  • Preference for Internal Builds Over Partnerships: Internally built pilots have substantially lower success rates — they are twice as likely to fail as those built through strategic external partnerships. This organizational choice to build rather than buy is cited as the dominant barrier.
  • Investment Bias: Budgets tend to favor visible, top-line functions (like sales and marketing) over high-ROI back-office functions. This bias directs resources toward visible but less transformative use cases, perpetuating the divide.
  • Poor User Experience (UX) and Output Quality: While generic tools are praised for flexibility, users express skepticism of custom or vendor-pitched AI tools, describing them as brittle or overengineered. Top barriers to scaling include concerns about model output quality and poor user experience.
  • Lack of Trust in Vendors: Despite high interest in AI, there is notable skepticism toward emerging vendors. Establishing trust is a significant challenge due to the flood of options, causing buyers to heavily rely on peer recommendations and referrals rather than functionality alone.

This is where most analyses end, but there’s a deeper pattern: most AI initiatives fail long before model performance even becomes a concern. Many pilots collapse because the intention behind them is unclear, misaligned, or simply not tied to a concrete outcome. In that situation, no amount of technical improvement meaningfully changes the trajectory.

Beyond the organizational barriers, the report emphasizes the “learning gap” and “memory retention” as the most significant drivers of success or failure. While these are undoubtedly desirable features in a product using AI, we’ve been using software tools for years that don’t learn or retain significant user flows. This “learning gap” might not be technical at all. The real question is: who isn’t actually learning?

Photo by Philippe Bout on Unsplash.

Who's Really Not Learning?

"The primary factor keeping organizations on the wrong side of the GenAI Divide is the learning gap, tools that don't learn, integrate poorly, or match workflows. Users prefer ChatGPT for simple tasks but abandon it for mission-critical work due to its limited memory. What's missing is systems that adapt, remember, and evolve, capabilities that define the difference between the two sides of the divide."

The MIT report calls this a “learning gap,” but the gap is likely not inherent in the AI model itself — it lies in the purpose behind the project. When you are trying to solve a fictional problem, no intelligence, artificial or human, can address it and positively impact your business. Many projects fail to scale because they are not grounded in real problems, or because they treat AI as a superficial add-on.

It's a simple idea, but a powerful one: the success of AI doesn't depend on the model; it depends on whether the problem is real.

Photo by Nahrizul Kadri on Unsplash.

AI-Ready

At Vizzuality, we don’t tack AI onto products just for the sake of it. We design products that can use AI when it makes sense — and from the start.

We call it “being AI-ready,” and for us it means building systems with the foundations AI needs: well-defined workflows, clean data structures, clear context, and feedback paths that let products evolve over time. Every product we build starts with that principle — not because it’s trendy, but because it’s what delivers value.

If you’re exploring how to make AI work in practice, reach out! We’d be glad to show you how we approach it.

____________________________________________________________________________________________________________________________________________________________________________

MIT Report: The GenAI Divide: State of AI in Business 2025

Author:
Miguel Mendoza
CTO
