Market volatility over the last few weeks has reignited debates about whether we’re in an AI bubble. With major tech companies losing nearly a trillion dollars in market value, I get why it’s important to examine the state of AI investments and development critically. As someone deeply involved in AI development and commercialization, I think the situation is more nuanced than headlines suggest.
Arguments in Favor of the Bubble
I will concede a few things.
First, we likely are indeed in the “trough of disillusionment” phase for AI, as described in Gartner’s Hype Cycle. This isn’t unexpected. The initial AI breakthroughs were a huge leap forward, sparking inflated expectations. Now, we’re grappling with the challenges of implementation and scaling. It’s a natural part of the cycle, and one that often precedes more substantial, practical advancements.
Another common (but wrong) criticism has been that “Generative AI has not found its use case yet.” This isn’t true (and GenAI is not crypto), but the sentiment is understandable. One thing that surprised me as GenAI became more mainstream is that not everyone found it intuitive to use. There are plenty of things that even the currently smartest models can’t handle reliably, too. So it makes sense that some people boot it up, try a thing or two, and conclude it’s not useful to them. But for a significant number of AI fanatics like me, the claim is already plainly false.
Compute costs present another significant challenge. Training and running large generative models requires massive computational resources, often costing millions of dollars. This isn’t just a one-time expense either — ongoing inference costs for deployed models can quickly add up, especially at scale.
As the field advances, the computational demands are growing exponentially. Each new breakthrough often requires an order of magnitude more compute than its predecessor. Combined with the fact that generative models are becoming more commoditized, the barrier to entry for small, purely generative players rises significantly.
The AI Startup Landscape: A Complex Picture
The AI startup landscape is evolving rapidly, revealing a complex picture of consolidation, diversification, and challenges at different scales:
- Smaller GenAI startups developing their own generative models are being reabsorbed by larger tech companies. For instance, CharacterAI (my first LLM experience, months before ChatGPT launched, and one I’ll never forget) is now being reintegrated into Google. This trend is driven primarily by the need for substantial compute resources to compete effectively in generative model development.
- Even smaller startups, those sometimes called “LLM wrappers,” are beginning to falter. These companies, which primarily build interfaces around existing models via paid APIs or open-source models, are struggling due to a lack of defensibility and insufficient added value. As AI capabilities become more commoditized and the underlying models grow more capable, simply providing a user interface for a generative model is no longer enough to sustain a business.
- Conversely, we’re seeing movement between major players that actually indicates healthy diversification. The recent shift of talent from OpenAI to Anthropic, for example, suggests that the field is dynamic and competitive, with multiple strong players emerging. Taken together, this landscape shows a few things: smaller companies are consolidating, basic AI interface providers are struggling, and at the top, real competition is heating up.
The Path to AGI: Clearer Than Ever
But while we may be experiencing a bit of a contraction of the AI Startup Bubble, the journey towards Artificial General Intelligence (AGI) is more clearly defined than ever. For the first time in history, there is a visible path of knowable problems that would at least get us closer:
- Moving from prompt-response interfaces to agentic AI models with internal monologues, chain-of-thought, and reasoning/planning loops (see the sketch after this list).
- Enabling self-retuning based on a model’s “lived” experience, and addressing long-term memory constraints.
- Advancing towards multi-modal models that can process, map, and reason across various types of data.
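To make the first item above a bit more concrete, here is a deliberately minimal sketch of what an agentic reasoning/planning loop can look like. It is purely illustrative and built on assumptions: `call_llm`, `lookup`, and the THOUGHT/ACTION/OBSERVATION format are hypothetical stand-ins, not any particular vendor’s API.

```python
# Minimal, hypothetical sketch of an agentic reasoning/planning loop.
# call_llm, lookup, and the THOUGHT/ACTION/OBSERVATION format are stand-ins,
# not any real vendor API.
from typing import Callable, Dict, List


def call_llm(prompt: str) -> str:
    """Stand-in for a generative model call; returns a canned 'thought + action'."""
    return "THOUGHT: I should look something up.\nACTION: lookup('GPU costs')"


def lookup(query: str) -> str:
    """Toy tool the agent can invoke; a real agent might call a search or data API."""
    return f"(stub result for: {query})"


TOOLS: Dict[str, Callable[[str], str]] = {"lookup": lookup}


def run_agent(task: str, max_steps: int = 3) -> List[str]:
    """Think -> act -> observe loop, feeding observations back into the next prompt."""
    transcript: List[str] = [f"TASK: {task}"]
    for _ in range(max_steps):
        reply = call_llm("\n".join(transcript))  # internal monologue + proposed action
        transcript.append(reply)
        if "ACTION:" not in reply:  # no tool call means the model gave a final answer
            break
        # Naive parsing of: ACTION: tool_name('argument')
        action = reply.split("ACTION:", 1)[1].strip()
        name, raw_arg = action.split("(", 1)
        observation = TOOLS[name.strip()](raw_arg.rstrip(")").strip("'\""))
        transcript.append(f"OBSERVATION: {observation}")
    return transcript


if __name__ == "__main__":
    for line in run_agent("Estimate inference costs at scale"):
        print(line)
```

Real agent frameworks layer tool schemas, long-term memory, and error handling on top of essentially this pattern; the point is simply that the loop itself is a knowable engineering problem rather than a speculative one.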
Additionally, in contrast to the compute problem described earlier, GenAI models are becoming more efficient over time. While compute resources are currently a significant concern, ongoing improvements may mitigate the issue in the future. OpenAI’s most recent model, GPT-4o, is significantly cheaper to run than its predecessor, GPT-4. The same is true of Anthropic’s Claude 3.5 Sonnet in comparison to the earlier Claude 3 Opus.
As we progress down this path, I believe the question of ‘what is consciousness’ will come to dominate mainstream discourse. It will be increasingly explored, revealing its complexity, before potentially being relegated to the realm of ideology and philosophy as an unanswerable query. But at that point one thing is for sure: no one will be asking if we’re in an AI Bubble.
All of that is to say that while we may be experiencing fluctuations in the GenAI startup space, that’s distinct from claiming we’re in a larger AI Bubble. The technology, given enough time, will in my opinion completely transform the trajectory of humanity as a species. This isn’t hyperbole; it’s a realistic assessment based on the potential of AGI and its successors. The implications extend far beyond short-term market fluctuations or even medium-term job displacement. We’re looking at a fundamental rethinking of human priorities, societal structures, and eventually our understanding of intelligence, consciousness, and spirituality.
What Makes AI Products Defensible?
Revisiting an important point from earlier: the defensibility of GenAI technology. Whether you’re an entrepreneur or an investor, that’s something you’ll be thinking about a lot over the next few years.
I’ve worn many hats at BlastPoint over the years, but at my core I’m a Product guy. This perspective means I think a lot about what it takes to commercialize AI technologies.
The future success of GenAI products and investments hinges on their ability to integrate with other data, software, and processes in ways that add proprietary value. This is crucial due to the increasingly robust competition at the top and the lack of defensibility for model-wrapper companies.
For AI startups and products to survive and thrive, they need to go beyond simply wrapping LLMs (and frankly, most of them will have to go beyond training general generative models, too). They have to:
- Develop unique, proprietary datasets or algorithms that can’t be easily replicated.
- Focus on specific industry verticals where they can develop deep expertise and create tailored solutions.
- Create seamless integrations with existing enterprise systems and workflows, adding value beyond basic AI interactions.
- Continuously innovate to stay ahead of the commoditization of AI capabilities.
- Leverage GenAI to enhance core product offerings rather than relying on it as the sole value proposition.
At BlastPoint, we’ve embraced this philosophy. We’ve integrated GenAI to enhance our core analytics and data capabilities, rather than as a standalone value proposition. This approach has sharpened our product’s edge, making it more intuitive for users and boosting our internal efficiency. We’re not just slapping GenAI on top — we’re using it to amplify what we already do best.
Bubble or Not?
While we’re experiencing fluctuations in the AI startup space and market volatility, the fundamental value and transformational potential of AI remain strong. The current challenges — particularly for smaller startups — represent growing pains in a transformative technology.
Yes, there will be consolidations and failures in the AI startup space, but we’re also seeing healthy competition and diversification among major players. The long-term outlook for AI development and integration remains overwhelmingly positive for those who can add true value.
The key for businesses and investors is to look beyond the hype and focus on practical, value-adding applications of AI. It’s not about jumping on every AI trend or simply providing an interface to existing models, but about thoughtfully integrating AI capabilities to solve real problems and create tangible, defensible value.
So, are we in an AI bubble? Not exactly. We’re in a period of adjustment and consolidation, but the fundamental value and potential of AI remain stronger than ever. The AI revolution is just beginning. Those who can weather the current turbulence, avoid the pitfalls of over-simplistic AI integration, and position themselves strategically will be well-placed to thrive in the AI-driven future that’s rapidly unfolding before us.
Written by Tomer Borenstein
CTO and Co-Founder of BlastPoint, a Customer Intelligence Company for Highly Regulated Industries.