
What AI providers can learn from telcos

2025-10-15 · 12 min read · By Hamza Jadouane

Every AI company today is wrestling with the same questions. Should we charge per token, per model, or per month? Why does every model improvement just lead to demands for an even better one? How do we stop wrapper applications from making higher margins than we do on our own infrastructure? Should we focus on consumers, enterprises, or developers? How do we reduce churn when switching costs are so low?

These questions feel urgent and unprecedented in the AI world. But they felt deeply familiar to me. Having worked with telco clients, I recognized every single one of these dilemmas.

Telcos have been asking parallel questions for decades. Should we charge per minute, per gigabyte, or unlimited? Why does every network upgrade just create demand for more bandwidth? How do we stop over-the-top services like Netflix from capturing all the value while we carry their traffic? Should we focus on consumers, enterprises, or wholesale? How do we reduce churn when customers can easily port their numbers?

Both industries provide foundational infrastructure that enables massive value creation by others. Both face the peculiar economics of infrastructure: huge upfront investments, constant pressure to upgrade, and the challenge of capturing fair value from what you enable. Both must serve everyone from individual consumers to massive enterprises while competing against players who cherry-pick the most profitable segments.

AI companies are essentially speedrunning the telco experience. What took telecommunications companies decades to encounter is hitting AI companies in just 3 to 5 years. The good news? AI providers don't have to learn these lessons from scratch. Telcos have already tried almost everything, and their successes and failures offer a roadmap for navigating the infrastructure provider's dilemma.

1. The Infrastructure Investment Cycle

The most exhausting reality of being an infrastructure provider is that success creates its own punishment. Every improvement in capability doesn't satisfy demand; it reveals new demand that couldn't exist before.

Telcos know this cycle intimately. 3G enabled mobile internet, which made people want faster mobile internet. 4G enabled video streaming, which made people want higher quality video streaming. 5G enables applications we're still discovering, but already customers expect even better performance. Each generation required billions in investment, years of planning, and massive operational complexity. And the reward for successfully deploying it? Immediate pressure to build something better.

AI companies are living the same cycle, just faster. GPT-3 enabled new use cases, which required GPT-4. Claude 2 created demand for Claude 3. Every model release immediately triggers complaints about limitations that users didn't even know they cared about until the previous limitation was removed. The reward for training a better model is demand for an even better model.

Both industries also face the peculiar challenge of investing in infrastructure before knowing what it will actually be used for. Telcos spent fortunes on 5G without clear killer applications. Similarly, AI companies train massive models without knowing if people will use them for coding, writing, analysis, or something nobody's thought of yet. You build it, hope they come, and then scramble to support whatever they actually do with it.

The quality demands compound the investment pressure. Telcos learned that users don't care that your network is 100 times faster than it was a decade ago; they care that their video call dropped. AI companies are learning that users don't care that the model is dramatically more capable than last year; they care that it was slow to respond just now. Both industries face the infrastructure provider's curse: reliability is expected, improvements are quickly forgotten, and failures are unforgivable.

This creates a brutal financial dynamic. The capital requirements never decrease. The operational complexity only grows. And unlike software companies that can reach global scale with minimal marginal costs, every improvement in infrastructure requires real resources: more spectrum, more towers, more fiber for telcos; more GPUs, more training time, more data centers for AI companies.

The pace itself deserves questioning. Telcos moved from 3G to 5G in roughly 2 decades, and many argued that was too fast, that we barely extracted the value from each generation before moving to the next. AI is attempting similar leaps every 2 years. This compressed timeline has created conditions for a potential AI bubble, with massive investments pouring in before business models are proven or value chains established. The infrastructure investment cycle that was already arguably unsustainable for telcos is being compressed into timeframes that border on absurd. But slowing down means ceding ground to competitors. Both industries are trapped in a prisoner's dilemma where everyone might benefit from a slower pace, but nobody can afford to be the first to blink.

Telcos eventually found partial escapes from this trap. Network sharing agreements let competitors share towers and infrastructure in certain areas while competing on service. Infrastructure carve-outs created specialized companies focused solely on building and maintaining networks, separating the capital-intensive infrastructure from customer-facing operations. Some markets saw wholesale-only network providers emerge, serving all retail operators equally. These models reduced duplicative investment and let companies focus on their strengths. AI companies might eventually explore similar approaches: shared training infrastructure, specialized GPU clusters serving multiple model developers, or industry consortiums spreading costs across participants. But the industry is still too young and moving too fast for these structures to fully develop, and the competitive dynamics around proprietary model advantages make cooperation challenging.

2. Pricing and Commoditization

The pricing journey of both industries follows an eerily similar arc: start simple, get complicated, then desperately try to get simple again while fighting commoditization.

Telcos began with straightforward pricing. Minutes for voice. Messages for texts. Megabytes for data. Then competition and market segmentation drove complexity. Unlimited plans, family plans, corporate plans, prepaid, postpaid, bundles with phones, bundles with content, roaming packages, international add-ons. The pricing matrix became so complex that companies employed entire teams just to manage it, and customers needed comparison websites to understand what they were buying.

AI companies are speedrunning this same evolution. What started as simple API pricing per token has exploded into a maze of options. Free tiers with rate limits. Plus subscriptions. Pro subscriptions. Team subscriptions. Enterprise contracts. Per-token pricing that varies by model. Fine-tuning costs. Embedding costs. Retrieval costs. Different prices for input versus output tokens. Volume discounts. Committed use discounts. The pricing pages that were one paragraph two years ago now require documentation.
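To see how quickly this complexity compounds, here is a minimal sketch of what estimating a monthly API bill now involves. The model names, per-token rates, and discount are entirely hypothetical; real pricing varies by provider and changes often:

```python
# Hypothetical dollar rates per million tokens; not any provider's actual pricing.
PRICING = {
    "small-model": {"input": 0.50, "output": 1.50},
    "large-model": {"input": 5.00, "output": 15.00},
}

def estimate_cost(model, input_tokens, output_tokens, volume_discount=0.0):
    """Estimate API spend in dollars: input and output tokens are priced
    differently, and committed-use discounts apply on top."""
    rates = PRICING[model]
    cost = (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000
    return cost * (1 - volume_discount)

# 200M input tokens and 50M output tokens on the large model,
# with a 10% committed-use discount.
monthly = estimate_cost("large-model", 200_000_000, 50_000_000, volume_discount=0.10)
```

Even this toy version already needs three independent pricing dimensions (model tier, token direction, volume commitment), and it ignores fine-tuning, embeddings, and rate-limit tiers entirely.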

The free tier trap deserves special attention. Telcos learned that once you offer unlimited anything, you can never take it back. AI companies are learning the same lesson with free tiers. They're necessary for adoption and developer experimentation, but they create an expectation that the core service should be free. They attract users who will never convert to paid. They consume resources and support costs. But cutting them triggers user revolt. Both industries discovered that "free" is a price point you can never escape from.

Then comes the commoditization pressure. For telcos, the fear was becoming a "dumb pipe" where customers saw no difference between providers except price. A megabyte from Telco A was identical to a megabyte from Telco B. The only differentiation became network quality and customer service, both expensive to maintain and hard for customers to evaluate until something went wrong.

AI companies face the same trajectory. As models converge in capability, what differentiates one provider's GPT-5-class model from another's? Speed? Reliability? Price? The core product risks becoming commoditized, with margins racing to zero. The fear isn't theoretical. It's what happened to cloud computing, where AWS, Azure, and Google Cloud compete largely on price for standardized services.

Telcos tried various escapes from commoditization. Value-added services like visual voicemail, mobile insurance, and content bundles. Moving up the stack into applications and services. Brand differentiation through marketing. Some worked temporarily, but the gravitational pull toward commodity pricing proved hard to escape. The successful telcos learned to embrace operational efficiency and scale rather than fight commoditization.

AI companies are trying similar strategies with their own twists. Custom models, specialized tools, better interfaces, enterprise features, compliance certifications. They're also attempting vertical integration, building code editors, browsers, and development environments to control the user experience and increase switching costs. Whether these create lasting differentiation or just delay commoditization remains to be seen. But if telco history is any guide, the infrastructure providers that survive will be those who achieve massive scale and operational excellence, not those betting on maintaining premium pricing through differentiation alone.

3. Value Creation vs Value Capture

The cruelest irony of being an infrastructure provider is watching others build fortunes on top of what you built, while you struggle to capture a fraction of the value you enabled.

For telcos, the pain came from over-the-top (OTT) services. Netflix consumes massive amounts of bandwidth, creating the need for network upgrades that telcos must pay for, while Netflix captures all the subscription revenue. YouTube drives data consumption that requires expensive infrastructure investment, but the advertising dollars go to Google. WhatsApp destroyed SMS revenues overnight while using telco networks to do it. The companies capturing the highest margins were precisely those who didn't have to build or maintain any infrastructure.

AI companies are experiencing the same dynamic with wrapper applications. A developer can build a specialized tool using API calls, charge customers premium prices for a narrow solution, and achieve better margins than the AI company that trained the model. Legal document reviewers, coding assistants, writing tools, customer service bots, and more. These applications capture value by solving specific problems while the AI infrastructure provider handles the complex and costly work of model training and inference.

The response options are limited and fraught. Compete with your own developers? Telcos tried launching their own streaming services and messaging apps, usually failing while poisoning developer relationships. Charge more for API access? You risk killing the ecosystem that drives demand. Accept your role? That means accepting lower margins than the businesses you enable.

Some telcos tried to charge OTT services for network usage, arguing that heavy bandwidth users should pay more. Net neutrality rules killed most of these efforts, but even where they succeeded, the amounts captured were tiny compared to the value flowing through the network. AI companies attempting similar strategies through usage-based pricing find that aggressive pricing pushes developers to competitors or open-source alternatives.

The most successful adaptations involved acknowledging the reality rather than fighting it. Telcos that accepted their infrastructure role focused on operational efficiency and scale. They stopped trying to compete with OTT services and instead optimized for being the best pipe. Smart telcos went further: they integrated OTT services into their offerings, bundling Netflix or Spotify with data plans. Some partnered directly with content providers, offering zero-rated data for specific services or co-marketing arrangements. They found success in genuinely complementary services (cloud infrastructure, security, enterprise services) where their capabilities aligned with customer needs.

AI companies are still in the fighting stage. Building their own applications, acquiring wrapper companies, trying to move up the stack. History suggests most of these efforts will fail. The skills needed to build and operate infrastructure are fundamentally different from those needed to build great user-facing applications. The question isn't whether AI companies will capture all the value they create; they probably won't. It's whether they can capture enough to sustain the infrastructure investment cycle while others get rich on top of their platforms.

4. Organizational Complexity

Running an infrastructure company means never being able to focus. You're simultaneously operating in completely different businesses with conflicting demands, incentives, and success metrics.

Telcos learned this painful lesson over decades. The network engineering team needs billions for 5G rollout while the marketing team needs millions for customer acquisition. The enterprise sales team wants custom solutions and SLAs while the consumer team wants simplicity and low prices. The wholesale division sells network capacity to MVNOs who then compete with your retail division. The R&D team works on technology that won't generate revenue for years while investors demand quarterly growth. Every division is critical, but their goals are fundamentally at odds.

AI companies compressed this organizational chaos into just a few years. The research team wants to push model capabilities regardless of cost. The infrastructure team warns about GPU constraints and inference costs. The enterprise sales team promises custom models and dedicated instances. The consumer team needs simple, reliable service at scale. The API team serves developers who might become competitors. The safety team wants to slow down while the product team wants to ship faster. Success in one area often means failure in another.

The customer segmentation challenge alone creates organizational headaches. Serving consumers means optimizing for simplicity, viral growth, and low support costs. Serving enterprises means complex contracts, compliance certifications, and high-touch support. Serving developers means detailed documentation, reliable APIs, and predictable pricing. Each segment requires different teams, different approaches, and different metrics. But you need all three segments to justify the infrastructure investment.

Resource allocation becomes a constant battle. Should you invest in making the current model faster or training the next model? Should you focus on consumer features or enterprise requirements? Should you prioritize reliability or capability? Telcos faced identical trade-offs: network expansion versus network quality, consumer versus enterprise, current technology versus next generation. There's never a clear answer, only trade-offs that leave someone unhappy.

The talent problem compounds everything. Infrastructure companies need world-class researchers, engineers who can optimize inference at scale, product designers who understand consumers, sales teams who can navigate enterprise procurement, support teams who can handle everything from confused grandparents to demanding developers. These people have different cultures, different expectations, and different definitions of success. Getting them to work together toward common goals while respecting their different approaches is nearly impossible.

Telcos tried various organizational structures. Separate divisions with P&L responsibility. Matrix organizations with shared resources. Spinning off certain functions into separate companies. Creating internal startups for new initiatives. Most restructured every few years, searching for the perfect balance that never quite materialized. The successful ones learned to manage the tensions rather than solve them, creating systems for making trade-offs rather than hoping to eliminate them.

AI companies are experimenting with their own solutions. Some maintain rigid separation between research and product. Others integrate everything under unified leadership. Some outsource certain functions while others insist on controlling everything internally. But the fundamental tension remains: you're running multiple businesses that happen to share infrastructure, and what's good for one is often bad for another.

The most honest assessment might be that organizational complexity is unsolvable for infrastructure providers. It's not a bug but a feature of the business model. The question isn't how to eliminate it but how to manage it well enough to function while accepting that some level of internal conflict and inefficiency is the price of being in the infrastructure business.

Conclusion

Of course, these industries have significant differences. The most fundamental is geography. Telcos are inherently local businesses, constrained by spectrum licenses, physical infrastructure, and national regulations. AI companies operate globally from day one, serving customers worldwide from centralized data centers. Telcos took decades to navigate international expansion through acquisitions and partnerships. AI companies were born global.

The pace of change differs dramatically too. Telco infrastructure evolved over decades, with years between major upgrades. AI capabilities leap forward every few months. Telcos had time to digest each technological shift. AI companies barely finish deploying one model before needing to train the next. And where telcos dealt with relatively predictable technology roadmaps, AI companies face fundamental uncertainty about whether current approaches will even continue working.

But despite these differences, the core challenges remain strikingly similar. Both industries face the infrastructure provider's dilemma: massive upfront investments, constant pressure to upgrade, value captured by others, organizational complexity, and the gradual slide toward commoditization.

The telco playbook suggests what's coming for AI: consolidation to a handful of major players as scale economics take hold. Infrastructure sharing agreements as the costs become unbearable for any single company. Regulatory frameworks once the infrastructure becomes essential. Strategic partnerships with the ecosystem rather than competition. And ultimately, acceptance that being an infrastructure provider means capturing a fraction of the value you enable.

The most successful telcos were those that accepted their infrastructure role early and optimized for it. They achieved massive scale, operational excellence, and sustainable partnerships. Those that fought reality longest, trying to be content creators, device manufacturers, or application developers, suffered most. AI companies are at an inflection point where they can choose to learn from this history or repeat it.

I've already written about how AI is advancing too fast for society to keep up. This article provides yet another reason why the breakneck pace needs to slow down.

Frequently Asked Questions

Should my company treat AI as strategic infrastructure or as a commodity we buy from vendors?

Both, depending on the layer. The raw model capacity is already commoditizing and there is no prize for running your own GPUs unless that is your business. What stays strategic is the data, the workflows, and the integration into your products. Sorting which AI layers to own and which to rent is core to the roadmap work I do at Verum Services.

How do I evaluate AI vendors without getting locked into a bad long-term contract?

Assume the pricing and capability landscape will shift every six months, and negotiate accordingly. Push for portability, clear exit clauses, and transparent usage reporting, because telco history shows that the most expensive lock-ins are the ones nobody planned for. If a vendor will not let you run a parallel pilot with a second provider, that is a signal about the relationship.

How do I decide whether to build AI features in-house or buy them?

Build what protects your margin and differentiates your product. Buy what is moving fast, generic, and commoditizing. A serious build-versus-buy review should include total cost of ownership, talent availability, and how quickly the vendor market is evolving, not just a one-shot comparison of prices today.
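A total-cost-of-ownership comparison can be sketched in a few lines. All figures below are hypothetical placeholders; the point is that build and buy have different cost shapes (upfront versus growing usage fees), so a one-shot price comparison misleads:

```python
# Hypothetical figures; a real review would plug in your own cost model.
def total_cost_of_ownership(upfront, monthly_run_cost, months, annual_growth=0.0):
    """Sum an upfront cost plus monthly run costs, letting the run cost
    grow at the end of each year (e.g. usage-based fees rising with adoption)."""
    total = upfront
    run = monthly_run_cost
    for month in range(months):
        total += run
        if (month + 1) % 12 == 0:
            run *= 1 + annual_growth
    return total

# Build: high upfront (team + infra), flat run cost.
build = total_cost_of_ownership(upfront=500_000, monthly_run_cost=20_000, months=36)
# Buy: no upfront, but usage fees that grow 20% per year with adoption.
buy = total_cost_of_ownership(upfront=0, monthly_run_cost=35_000, months=36,
                              annual_growth=0.20)
```

Over a short horizon the vendor looks cheaper; over three years the picture can flip. The honest version of this exercise also prices in talent, switching costs, and how fast the vendor market itself is moving.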


Need AI Strategy for Your Business?

From strategy to quick MVPs, I help businesses figure out where AI fits and what to build.