The recent release of GPT-5 has generated significant buzz in the AI sphere, and it has been interesting to see the benchmark and test results coming in for this new frontier LLM.
While I haven't really had time to test it thoroughly myself, I've been struck by something else entirely: the wave of disappointment from people complaining that this latest model still hasn't achieved AGI, even though most concede it is a significant incremental improvement.
Sam Altman has done what he does best: building anticipation, and arguably too much hype, around his company's products. Yet that still doesn't explain why so many people expected these new models to cross some imaginary AGI threshold, especially since we haven't even defined what that threshold is.
I'm not going to attempt the impossible task of defining what AGI actually means. That debate has consumed far smarter minds than mine without resolution. But regardless of how you define artificial general intelligence, I think we're asking the wrong question. Instead of wondering why we don't have AGI yet, we should be asking: Why are we in such a rush to get there?
Here's what I believe: we're nowhere near ready for AGI, whatever form it might take. Before we start lamenting the absence of superintelligent systems, maybe we should focus on the more fundamental challenge: we haven't even figured out how to properly use the remarkable AI we already have.
1. Current AI is massively underutilized
I'll be blunt: most people, self-described "AI experts" included, have absolutely no idea what AI actually does or how to use it effectively. We have barely scratched the surface of what current frontier models can accomplish:
- People treat AI like Google - They ask basic factual questions instead of leveraging complex reasoning capabilities
- No understanding of core capabilities - Most users and businesses don't understand the full range of AI capabilities, from traditional AI models to GenAI and agentic frameworks (I personally don't)
- Terrible prompting skills - People stick to one-sentence requests instead of learning effective prompting techniques or iterating on the conversation (see the sketch below)
- Missing the automation potential - Teams don't identify which complex workflows could be streamlined or automated by AI integration
- Using a supercomputer as a calculator - Many teams use AI just to rewrite emails and summarize basic documents
- No systematic approach - Organizations implement AI without strategy, governance, measurement, or understanding of what problems they're trying to solve
- Frustration from wrong expectations - People get disappointed asking AI to do things it's bad at while missing the many things it could actually do. The same goes for businesses, many of them tired of consultants and providers overselling and under-delivering
Walk into any office or ask any user and you'll see this pattern almost everywhere.
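To make the prompting point concrete, here's a minimal sketch using the OpenAI Python SDK. Everything specific in it, the model name, the report file, the prompt wording, is my own illustration, not something taken from any benchmark or vendor guidance:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

report_text = open("q3_report.txt").read()  # hypothetical input document

# The "AI as Google" pattern: a bare one-liner, no context, no constraints.
# Shown only for contrast; it is never sent.
lazy_prompt = "Summarize this report.\n\n" + report_text

# A more deliberate prompt: role, audience, constraints, and output format.
messages = [
    {"role": "system",
     "content": "You are an analyst writing for a non-technical executive team."},
    {"role": "user",
     "content": ("Summarize the report below in five bullet points. "
                 "Focus on revenue risks, flag any figures you are unsure about, "
                 "and end with one recommended action.\n\n" + report_text)},
]
response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)

# Iteration is the other half: feed the answer back with a follow-up
# instead of settling for the first draft.
messages += [
    {"role": "assistant", "content": response.choices[0].message.content},
    {"role": "user", "content": "Tighten the bullets and quantify each risk."},
]
refined = client.chat.completions.create(model="gpt-4o", messages=messages)
print(refined.choices[0].message.content)
```

In my experience, the gap in output quality between the lazy prompt and the deliberate one is dramatic, and closing it costs nothing but a minute of thought.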
2. AI risks remain unaddressed
We also haven't seen all the ways current systems can go wrong, let alone developed robust solutions for the problems we are already facing:
- Misinformation at scale - AI-generated content is already polluting information ecosystems faster than we can detect or counter it
- Job displacement without transition plans - Industries are being disrupted but we have no coherent strategy for retraining displaced workers
- Bias amplification - AI systems are perpetuating and scaling human biases in hiring, lending, healthcare, and possibly criminal justice
- Privacy erosion - Personal data leaks and is used to train models without clear consent frameworks or protection mechanisms
- Security vulnerabilities - AI systems can be manipulated, jailbroken, or used to create sophisticated cyberattacks
- Regulatory chaos - Governments worldwide are scrambling to create AI governance frameworks with little coordination
- Academic integrity collapse - Educational institutions are struggling to maintain standards as AI makes cheating effortless
- Creative industry disruption - Artists, writers, and creators are seeing their work used without permission to train competing systems
- Addiction and dependency - We're already seeing people become over-reliant on AI for basic cognitive tasks and even for basic social contact
The uncomfortable truth is that we're implementing powerful technology faster than we can understand its consequences. We're still debating fundamental questions about AI governance, safety, and ethics. And somehow we think we're ready for something exponentially more powerful?
3. Focus on optimizing what already exists
Instead of chasing the AGI dream, we have massive untapped opportunities to make existing AI better, more accessible, and more useful. The potential gains from optimizing what we already have could keep us busy for years:
- Technical improvements and efficiency - Making AI more energy-efficient, cutting costs, and getting better performance out of existing hardware
- Fine-tuning for specific domains - Customizing models for healthcare, legal work, education, and other specialized fields instead of relying on general-purpose systems
- Better integration tools - Building seamless workflows that connect AI to existing business systems and databases (a small sketch follows this list)
- Improved user interfaces - Creating intuitive ways for non-technical users to interact with and benefit from AI capabilities
- Training and education programs - Teaching people and organizations how to effectively prompt, collaborate with, and implement AI systems
- Implementation support - Helping businesses, especially small ones, actually achieve the productivity gains AI promises rather than just buying subscriptions
- Safety and alignment research - Making current models more reliable, truthful, and aligned with human values and building better governance and oversight tools
- Accessibility improvements - Ensuring AI tools work for people with disabilities and across different languages and cultures
- Cost reduction - Making powerful AI capabilities affordable for smaller organizations and individual users
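To show what "better integration" can look like beyond a chat window, here's a hedged Python sketch using the OpenAI SDK's JSON mode. The triage task, the field names, and the model name are all assumptions of mine; the pattern itself, free text in, machine-readable structure out, is what lets AI plug into existing systems:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def triage_ticket(ticket_text: str) -> dict:
    """Classify a support ticket into fields an existing system can ingest."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        response_format={"type": "json_object"},  # request machine-readable output
        messages=[
            {"role": "system",
             "content": ('You triage support tickets. Reply with a JSON object '
                         'with keys "category", "urgency" (low/medium/high), '
                         'and "summary" (one sentence).')},
            {"role": "user", "content": ticket_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# The structured result can flow straight into a ticketing queue or database,
# instead of a human copy-pasting chat output between tools.
ticket = triage_ticket("Our invoices page has been returning a 500 error since Monday.")
print(ticket["category"], ticket["urgency"], ticket["summary"])
```

Nothing here requires a smarter model, which is the point: for most organizations, the bottleneck is integration work like this, not model breakthroughs.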
The irony is that while some are complaining about not having AGI, we're sitting on technology that could transform how most people work and live. We just need to focus on making it actually work for them instead of rushing toward the next shiny object.
Conclusion
If AGI is actually achievable, I don't think most people grasp how profound an impact it would have on everything we know. We're talking about systems that could fundamentally reshape economics, politics, work, and society itself, on a scale we have not seen before. Our societies, institutions, and economies aren't yet prepared for that kind of disruption.
Here's what I think we should do instead: focus on catching up. Let's master the remarkable technology we already have. Let's solve the problems it's creating. Let's ensure everyone can benefit from current AI capabilities before we focus on building something that could make all our current challenges look trivial. The future will get here fast enough. We don't need to sprint toward it while leaving most of humanity behind.
Personally, I am happy that we haven't achieved "AGI".