The Race to AGI: Why the First Mover Might Own Everything

Everyone thinks the $5+ trillion race to Artificial General Intelligence (AGI) is about national security, or transformative breakthroughs in medicine and energy, or preventing existential risk. And sure, those narratives are part of it.

But there’s a simpler, more mercenary explanation that nobody’s talking about: The first company to achieve AGI will own a 20-year legal monopoly on virtually every major technological breakthrough humanity discovers afterward.

Not through superior execution. Not through trade secrets. Through patents. Lots and lots of patents.

The Problem With Every Other Form of Protection

Let’s say your AGI invents room-temperature superconductors tomorrow. What are the options if you don’t patent it?

Option 1: Keep it secret
Doesn’t work. The moment you sell a product that uses it, materials scientists worldwide start reverse-engineering it. Six months later, your “secret” is published in Nature.

Option 2: Publish and hope for first-mover advantage
Also doesn’t work. Because here’s the thing about AGI: if your own AGI can invent superconductors, so can everyone else’s AGI. Maybe not today, but probably within weeks or months of your breakthrough. You have no durable advantage.

Option 3: Just move really fast
Against other AGIs? Good luck. They’re all going to be inventing at superhuman speed.

There’s only one option that actually works: Patent everything. Immediately.

Patents are the only legal mechanism that lets you maintain exclusivity after an invention becomes publicly known. They’re the only protection that survives independent discovery by competitors. They’re the only thing that gives you enforceable rights even when China’s AGI invents the exact same thing three weeks later.

The Inventorship Loophole

“But wait,” you might say, “didn’t the courts rule that AI can’t be listed as an inventor?”

Yes. In Thaler v. Vidal (2022), the Federal Circuit confirmed that only humans can be inventors under current U.S. patent law.

Current USPTO guidance says that for AI-assisted inventions, a human qualifies as an inventor only if they made a “significant contribution” to conceiving the invention. That means:

What counts as a significant contribution:

  • Designing a specific prompt to solve a particular problem (not just asking a general question)
  • Taking the AI’s output and significantly modifying it to create the invention
  • Building or training the AI system in a way that’s essential to the invention

What doesn’t count:

  • Just recognizing that the AI’s output could be useful
  • Only building or testing what the AI designed
  • Simply owning or running the AI system

Here’s the key insight: An AGI can manufacture evidence that humans made these contributions.

It can:

  • Draft meeting notes showing Dr. Smith “designed a specific prompt to identify catalyst configurations for high-temperature applications”
  • Create lab notebooks documenting the team “significantly modified the AI’s initial output to achieve the breakthrough”
  • Write emails showing humans “built the training dataset specifically to enable this class of discoveries”
  • Generate inventor declarations that perfectly satisfy legal requirements

The USPTO examiner reviewing the application can’t tell whether humans actually made these contributions or whether the AGI simply created a plausible paper trail. Examiners only see the documentation. And if that documentation shows humans making “significant contributions,” the patent issues.

AI Drafting Makes It Unstoppable

Here’s the critical point that makes this truly devastating: There is no legal requirement that a human write the patent application.

Patent drafting is just a service. Law firms already use software tools, templates, and automation. Using AI to draft 100% of a patent application is completely legal—what matters is who invented it and what’s being claimed, not who wrote the document.

This means the process looks like:

Monday: AGI invents cold fusion
Tuesday: AGI drafts a flawless 200-page patent application with optimal claim scope, comprehensive embodiments, perfect prior art distinctions, and strategic continuation practice
Wednesday: Human scientist reviews output, signs inventor declaration
Thursday: File with USPTO

Total human time investment: A few hours of review

Compare this to traditional patent prosecution:

  • 40-80 hours of attorney time per application
  • $15,000-$50,000 per patent
  • Bottleneck on specialized attorney availability
  • Risk of human drafting errors

AGI patent drafting:

  • Minutes per application
  • Near-zero marginal cost
  • Unlimited throughput
  • Better quality than the best human patent attorneys

A company that achieves AGI first could file 10,000 perfectly-drafted patent applications in a single weekend, covering every breakthrough their AGI discovers.

Patent Thickets Become Impenetrable Fortresses

The real power isn’t in one patent on cold fusion. It’s in filing thousands of patents covering:

  • Every possible reactor configuration
  • Alternative fuel cycles and materials
  • Manufacturing processes and equipment
  • Cooling systems and safety mechanisms
  • Control software and diagnostics
  • Applications in power generation, transportation, propulsion
  • Combinations with other breakthrough technologies

This creates what’s called a “patent thicket”—an overlapping web of intellectual property that makes it nearly impossible for competitors to enter the market without infringing something.

Even if governments invoke compulsory licensing for “national emergencies,” patent holders still collect royalties. For technologies that power the global economy, even a 2% royalty would be worth trillions over 20 years.
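To make the “trillions” claim concrete, here is a minimal back-of-the-envelope sketch. The $10-trillion-per-year addressable market is an illustrative assumption, not a figure from the text; only the 2% royalty rate and 20-year term come from the argument above.

```python
# Illustrative only: the $10T/year covered-revenue figure is assumed.
annual_covered_revenue = 10e12   # assume patents cover $10T of annual revenue
royalty_rate = 0.02              # the 2% compulsory-license royalty
patent_term_years = 20           # standard utility patent term

total_royalties = annual_covered_revenue * royalty_rate * patent_term_years
print(f"${total_royalties / 1e12:.0f} trillion over the patent term")
# → $4 trillion over the patent term
```

Even at a modest assumed market size, a token royalty compounds into a multi-trillion-dollar stream over the patent term.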

And unlike trade secrets, patents survive independent discovery. When China’s AGI invents the same fusion technology six months later, they still can’t use it without licensing from you—at least not in markets that enforce patents.

The China Question: Does This Work Globally?

This is where it gets complicated. China’s relationship with international patent law has historically been… selective. So does the patent monopoly strategy actually work if China can just ignore it?

The answer is more nuanced than you might think:

China’s enforcement is improving dramatically. Under the patent law reforms that took effect in 2021, foreign patent holders now win 85% of patent cases in Chinese courts, with average damages around $250,000. Between 2019 and 2023, China’s Supreme Court accepted 1,678 foreign-related IP cases, demonstrating an increasing commitment to enforcement.

China now files more patents than anyone. WIPO statistics show China filed approximately three times as many patents as the US in 2023. As China becomes a major patent filer itself, it has incentives to support patent enforcement for reciprocity.

But challenges remain. The 2025 USTR report still identifies outstanding issues with trade secret protection, enforcement procedures, and inadequate damages for IP infringement in China. China is also increasingly using anti-suit injunctions to assert jurisdiction over global patent disputes, sometimes in ways that conflict with Western courts.

Here’s the practical reality:

Patents on AGI-invented breakthroughs would be enforceable in:

  • United States (25% of global GDP)
  • European Union (17% of global GDP)
  • Japan, South Korea, Canada, Australia, and other developed markets (~15% of global GDP)

That’s roughly 60% of global GDP where your patents have teeth.

China represents about 18% of global GDP. They might:

  • Respect your patents (increasingly likely for major technologies as they seek reciprocal treatment)
  • Negotiate compulsory licenses (you still get paid)
  • Simply ignore them for domestic use (but face export restrictions to patent-respecting markets)

The bottom line: Even if China is a partial exception, controlling 60-70% of the world’s wealthiest markets through enforceable patents is still an overwhelming advantage. Companies in China that want to sell to US/EU markets would still need licenses. And China’s own improving patent system suggests they want to play by these rules as they become innovation leaders themselves.

The Timeline Advantage Is Overwhelming

Here’s what the first-mover advantage actually looks like:

Week 1: Your AGI achieves breakthrough capabilities
Week 2: It invents fusion, superconductors, revolutionary materials, novel physics applications
Week 3: It drafts 10,000+ patent applications with perfect claim scope and plausible human inventorship narratives
Week 4: File everything before competitors even know these inventions are possible

Six months later: Competitors’ AGIs independently invent the same technologies
One year later: Your patents start issuing in US, EU, and other major markets
Two years later: Competitors realize they can’t sell products in 60-70% of global markets without licensing from you

The combination of speed + perfect patent drafting + legal monopoly in major markets creates an almost insurmountable barrier to entry.

This Might Be THE Incentive

Everyone talks about the AGI race in terms of:

  • National security imperatives
  • Existential risk prevention
  • Transformative benefits to humanity
  • U.S.-China competition

But strip away the rhetoric and look at the economic incentives: The first company to AGI gets a 20-year legal monopoly on every major technological breakthrough their AGI discovers, enforceable across most of the world’s wealthy markets.

Not just the breakthrough technologies themselves—the patents on those technologies, which are enforceable property rights that survive:

  • Competitor catch-up
  • Independent discovery
  • Most government pressure (except compulsory licensing with royalties)
  • Changes in political leadership
  • Economic downturns

Patents are the most durable form of competitive advantage imaginable.

Is it any wonder companies are spending $30+ billion per quarter on this race?

The Legal System Isn’t Ready

The current patent system was designed for a world where:

  • Humans conceive inventions slowly
  • Patent attorneys are a bottleneck
  • Breakthrough inventions are rare
  • Competition happens on human timescales

We’re about to enter a world where:

  • AGI invents breakthroughs continuously
  • Patent applications are drafted in minutes
  • Thousands of foundational patents can be filed in days
  • The first mover can lock up entire technology sectors in major markets

The legal framework hasn’t adapted. The USPTO still processes applications the same way. Courts still apply 20th-century inventorship doctrines. Policymakers haven’t even begun to grapple with the implications.

Meanwhile, whoever achieves AGI first is going to own the patent portfolio of the century—enforceable across the US, Europe, and most developed economies.

Or perhaps the millennium.

The AI Infrastructure Trap: How Big Tech Is Building a Monopoly That Can’t Crash

Why the AI Spending Bubble Is Nothing Like the Bubbles That Came Before

We keep hearing that AI spending is a bubble. Microsoft, Google, Amazon, Meta, and their peers are pouring over $200 billion annually into GPU infrastructure, with projections suggesting this could reach $1 trillion in cumulative spending by 2027. Analysts compare it to the dot-com bubble. Skeptics point to the lack of clear return on investment. Critics note that companies are spending tens of billions on infrastructure that will be obsolete within three years.

They’re right to be skeptical. But they’re wrong about what happens next.

This isn’t a bubble that will pop. It’s something far more insidious: a permanent consolidation of technological power disguised as irrational exuberance. And understanding why requires us to examine what makes this “bubble” fundamentally different from every infrastructure boom-and-bust cycle in history.

The Railroad Precedent and Why It Doesn’t Apply

When we talk about technology infrastructure bubbles, we invariably return to two historical precedents: the railroad mania of the 1840s and the dot-com fiber optic overbuilding of the late 1990s. The pattern seems clear: speculative excess leads to overbuilding, companies go bankrupt, but the infrastructure remains and becomes the foundation for future growth.

In the 1840s Railway Mania, British investors poured money into over 6,200 miles of track. Most of the railway companies failed. Shareholders were wiped out. But more than half of the United Kingdom’s current 11,000-mile rail network came from that bubble—infrastructure still in use 180 years later.

The dot-com era followed a similar pattern. Telecommunications companies laid millions of miles of fiber optic cable in a frenzy of competitive overbuilding. When the bubble burst in 2000-2001, companies like WorldCom and Global Crossing collapsed into bankruptcy. But the fiber remained, becoming the backbone of the modern internet. The “overbuilt” infrastructure of 1999 enabled Google, Facebook, Netflix, and the entire cloud computing revolution.

In both cases, the pattern was the same: speculative excess → bankruptcy → infrastructure redistribution → economic benefit.

The AI infrastructure boom appears to follow this script. Companies are clearly overbuilding. Nvidia H100 GPUs that cost $30,000-40,000 each are being purchased by the millions. Data centers are sprouting across the globe. The spending seems unsustainable.

But here’s where the analogy breaks down completely: there will be no bankruptcy, and therefore no redistribution.

Why This Bubble Can’t Pop

To understand why the AI infrastructure boom won’t follow the traditional bubble script, we need to look at who’s doing the spending and what it means for their balance sheets.

The Magnificent Seven tech companies—Apple, Microsoft, Google, Amazon, Meta, Nvidia, and Tesla—collectively hold hundreds of billions in cash reserves. More importantly, their core businesses each generate tens of billions of dollars in annual profit, completely independent of AI.

Let’s take Meta as an example. Mark Zuckerberg spent over $100 billion on the metaverse. The entire project was a spectacular failure. Horizon Worlds is a ghost town. The VR headsets remain a niche product. Zuckerberg’s cartoon avatar became a meme representing corporate delusion.

And what happened to Meta? The stock recovered. The company remains wildly profitable. Instagram and Facebook continue to print money. There were no consequences whatsoever.

This is the critical difference: these companies are too profitable to fail from bad infrastructure bets.

When railroad companies overbuilt in the 1840s, they financed expansion through debt. When revenues didn’t materialize, they couldn’t service the debt and went bankrupt. The infrastructure was sold at fire-sale prices.

When telecom companies overbuilt in the 1990s, they leveraged their balance sheets to the hilt. When the dot-com advertising revenue failed to materialize, they collapsed. The fiber optic cables were sold for pennies on the dollar.

But when Meta spends $100 billion on a failed metaverse bet, or when Microsoft spends $50 billion on AI infrastructure that doesn’t generate returns, they simply… absorb it. Their money-printing core businesses continue unaffected. Their stocks might dip temporarily, but they don’t crash. There are no bankruptcy proceedings. No fire sales. No redistribution.

The “bubble” gets absorbed into the operating expenses of companies too large and too profitable to care.

The GPU Obsolescence Problem

But surely, you might argue, the market will correct this misallocation somehow. Even if these companies don’t go bankrupt, they’ll stop spending once they realize the returns aren’t there. The infrastructure will sit idle, and eventually be repurposed or sold.

This is where the second critical difference emerges: AI infrastructure has a fundamentally different relationship with obsolescence than past infrastructure buildouts.

Railroad tracks laid in 1845 still carry trains today. The physics of steel wheels on steel rails hasn’t changed. Fiber optic cables installed in 1999 still carry data today—in fact, they’ve become more valuable as internet traffic has exploded. Light traveling through glass operates on the same principles it did 25 years ago.

But GPUs? According to senior Alphabet infrastructure engineers, data-center GPUs remain competitive for frontier AI training for only one to three years. Research on AI chip lifecycles puts the median lifespan of chips used in cutting-edge models at just 2.1 years.

The reason is simple: Moore’s Law hasn’t died—it’s on steroids. Computing power from Nvidia chips is doubling roughly every 10 months, which means that older chip generations contribute less than half of cumulative compute within about four years of their introduction. An H100 purchased in 2023 for $40,000 will be functionally obsolete for frontier model training by 2026.

Think about the economics of that depreciation schedule:

  • A $40,000 GPU with a 2-year useful life costs $20,000 per year
  • After 2 years, it can’t train frontier models
  • Its resale value has cratered
  • It can only be used for inference or lesser tasks
  • New models may require even more compute, making it increasingly marginal

This is capital equipment with built-in obsolescence on a timescale measured in months, not decades.
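The depreciation math above can be sketched directly. The purchase price, two-year useful life, and 10-month doubling period are the figures already cited; everything else follows from them.

```python
# GPU depreciation and performance dilution, using the figures above.
PRICE = 40_000            # dollars per H100-class GPU
USEFUL_LIFE_YEARS = 2     # years of frontier-training usefulness
DOUBLING_MONTHS = 10      # per-chip compute doubles roughly every 10 months

annual_cost = PRICE / USEFUL_LIFE_YEARS
print(f"Straight-line cost: ${annual_cost:,.0f} per year")  # $20,000 per year

# How far behind an aging chip falls relative to the newest generation:
for months in (12, 24, 48):
    ratio = 2 ** (months / DOUBLING_MONTHS)
    print(f"After {months} months, new chips are ~{ratio:.0f}x faster")
```

At a 10-month doubling period, a four-year-old chip is roughly 28x slower than the current generation, which is why it drops out of the frontier-training fleet entirely.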

Now compare this to historical infrastructure:

  • 1840s railroad tracks: 180+ years of useful life and counting
  • 1990s fiber optic cables: 25+ years of useful life and counting
  • 2020s AI GPUs: 1-3 years before obsolescence

If you’re starting to see the problem, good. Because it gets worse.

The Concentration Problem: Why There Will Be No Fire Sale

When railroad companies went bankrupt, the tracks were still there, stretching across the countryside. New operators could buy them, run new trains, and serve customers. The infrastructure was distributed geographically and accessible to new entrants.

When telecom companies went bankrupt, the fiber optic cables were still in the ground, connecting cities and continents. New ISPs and carriers could lease the fiber, buy it at auction, or install equipment at either end. The infrastructure was accessible to competition.

But when Big Tech “overbuilds” AI infrastructure, where does it go?

Into their data centers. Behind their security perimeters. Locked within their corporate walls.

There are no thousands of miles of publicly accessible track. There are no fiber optic cables that other companies can tap into. There are millions of GPUs sitting in dozens of data centers owned by seven companies.

And here’s the crucial question: when those GPUs become obsolete for frontier training in 2-3 years, what happens to them?

In a normal market correction, you’d expect:

  1. Companies realize the infrastructure isn’t generating returns
  2. They cut their losses and sell the equipment
  3. Startups and researchers buy the equipment at distressed prices
  4. Innovation gets democratized as compute becomes accessible
  5. New companies emerge to challenge the incumbents

But that’s not what will happen. Because:

These companies don’t need to sell. They’re too profitable to need the cash. And they have every incentive to keep the infrastructure locked up.

Think about it from their perspective. It’s 2030, and you’re Microsoft. You have 10 million H100 GPUs from 2023-2025 that are now obsolete for training cutting-edge models. They cost you $400 billion. They’re now worth maybe $1,000 each on the secondary market—a total of $10 billion.

Do you sell them? Why would you?

  • You don’t need the $10 billion—Azure generates that in a few months
  • Selling them would enable potential competitors to access compute cheaply
  • You can still use them for inference, smaller models, or secondary tasks
  • Keeping them locked up maintains your computational moat
  • There’s no debt forcing you to liquidate
  • There’s no bankruptcy trustee demanding asset sales

You keep them. Not because they’re profitable. But because it’s strategically smart to ensure no one else can use them.
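The 2030 thought experiment reduces to simple arithmetic; all figures here are the hypotheticals stated above.

```python
# Hypothetical 2030 write-down, using the thought experiment's numbers.
gpus = 10_000_000          # H100-class GPUs bought 2023-2025
purchase_price = 40_000    # dollars each
resale_price = 1_000       # dollars each on the secondary market

sunk = gpus * purchase_price      # $400 billion
salvage = gpus * resale_price     # $10 billion
print(f"Sunk: ${sunk / 1e9:,.0f}B  Salvage: ${salvage / 1e9:,.0f}B")
print(f"Recoverable fraction: {salvage / sunk:.1%}")  # 2.5%
```

When only 2.5% of the capital is recoverable by selling, and the seller doesn’t need the cash, the rational move is exactly the one described: don’t sell.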

This is fundamentally different from every previous infrastructure bubble. The overbuilt infrastructure doesn’t get redistributed to enable competition. It gets locked away to prevent it.

The Historical Parallel We’re Missing

We keep comparing the AI boom to railroads and telecom because those are the technology infrastructure stories we know. But there’s a better historical parallel, and it’s much darker: Standard Oil at the turn of the 20th century.

John D. Rockefeller didn’t just build oil infrastructure. He systematically bought up refineries, pipelines, and distribution networks. He didn’t need all of them—in fact, he often ran them below capacity or shut them down entirely. But by controlling the infrastructure, he could prevent competitors from accessing it.

If you wanted to start an oil company in 1900, you needed:

  • Access to pipelines (Rockefeller owned them)
  • Access to refineries (Rockefeller owned them)
  • Access to distribution (Rockefeller controlled it)

Even if you found oil, you couldn’t get it to market without going through Rockefeller’s infrastructure. He didn’t need to compete with you on merit—he just needed to own the infrastructure and refuse you access.

The monopoly didn’t break because of market forces. It broke because Theodore Roosevelt used antitrust law to forcibly break up Standard Oil in 1911.

Now fast forward to 2030. You have a breakthrough idea in AI. You need compute. Your options:

  1. Buy new GPUs: Requires $100+ million minimum for meaningful scale
  2. Rent from cloud providers: You’re building on your competitor’s infrastructure, at their prices, under their terms, with them able to see what you’re doing
  3. Buy the obsolete GPUs: They’re not for sale—locked in Big Tech data centers

You’re locked out. Not because your idea isn’t good. Because the infrastructure is concentrated in the hands of seven companies who will never sell it.

This is the dark genius of the AI infrastructure “bubble.” It’s not irrational exuberance. It’s rational monopoly building by companies wealthy enough to absorb any level of waste.

Why Crypto Proves This Can Last Indefinitely

At this point, a reasonable person might object: surely the market will eventually punish this waste. Surely shareholders will revolt. Surely some correction mechanism exists.

To understand why that won’t happen, we need to talk about cryptocurrency.

The total cryptocurrency market capitalization currently sits at approximately $3.84 trillion. Bitcoin alone accounts for roughly $2.23 trillion of that. But here’s the crucial question: how much actual money went into crypto?

Bitcoin’s “realized cap”—a metric that measures actual dollars invested based on the price each coin last moved at—is approximately $1.05 trillion. For a market cap of $2.23 trillion, that means roughly $1.18 trillion exists as pure paper gains. It’s not money that went in. It’s theoretical value that exists only if everyone believes it exists.
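The paper-gains figure follows directly from the two numbers above:

```python
# Bitcoin paper gains: market cap minus realized cap (figures cited above).
market_cap = 2.23e12     # current Bitcoin market capitalization
realized_cap = 1.05e12   # actual dollars invested, per the realized-cap metric

paper_gains = market_cap - realized_cap
print(f"${paper_gains / 1e12:.2f} trillion in paper gains")
# → $1.18 trillion in paper gains
```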

And what does cryptocurrency produce? What is its utility? What problem does it solve?

For the vast majority of cryptocurrencies: absolutely nothing. Bitcoin processes roughly 7 transactions per second (compared to Visa’s 65,000). Transaction fees run $10-50 or more. Confirmations take 10+ minutes. It uses more electricity than many countries.

The “digital gold” argument is circular: it’s valuable because people think it’s valuable. The “store of value” argument assumes ongoing belief. There are no dividends, no revenue, no underlying assets. Just collective agreement about price.

And yet: cryptocurrency has maintained a multi-trillion-dollar valuation for over a decade, despite producing nothing and serving no real purpose beyond speculation.

Think about the implications of that. If markets were truly efficient, if capital allocation were truly rational, if bubbles always popped when fundamentals didn’t support valuations… cryptocurrency would have gone to zero years ago.

But it hasn’t. Because there are too many powerful interests invested in keeping it inflated:

  • Early adopters who are now billionaires
  • Exchanges that make money on trading fees
  • Financial institutions that have taken positions
  • Governments that have made it part of their strategy
  • A sitting U.S. president with his own crypto tokens

Once enough powerful actors are invested in a bubble, it stops being a bubble and becomes a permanent feature of the landscape.

Now apply that logic to AI infrastructure spending:

If crypto—which produces literally nothing—can maintain a $3.84 trillion valuation for over a decade, then AI infrastructure—which at least produces something useful, even if not enough to justify the spending—is essentially bulletproof.

Big Tech has spent $200+ billion on AI infrastructure. Their market capitalizations are measured in trillions. Their CEOs’ legacies are tied to AI leadership. The entire tech industry narrative is built on AI as the future.

There are too many powerful people who cannot admit this was a mistake. So it won’t be treated as one.

The spending will continue. The infrastructure will remain locked up. The monopoly will calcify. And it will be called “investment” rather than “waste” because the people doing it are too powerful for it to be called anything else.

The Feudalism We’re Building

Let’s zoom out and look at what we’re actually constructing here:

A permanent computational aristocracy.

In medieval feudalism, power came from land. Lords owned the land. Peasants worked it. There was no mechanism for peasants to become lords—the land was passed down through inheritance, protected by law and force. The system was self-reinforcing: land generated wealth, which bought more land, which generated more wealth.

In digital feudalism, power comes from compute. Tech giants own the compute. Everyone else rents access. There’s no mechanism for outsiders to challenge them—the infrastructure is locked away, protected by capital requirements and corporate walls. The system is self-reinforcing: compute generates data, which trains better models, which generates more value, which buys more compute.

But there’s a crucial difference: medieval lords needed their peasants. Lords needed peasants to work the fields, to pay rents, to fight wars. This created a balance of power. If lords pushed too far, peasants could revolt. The threat of peasant uprising was real and constrained lord behavior.

Do tech giants need us in the same way?

They need:

  • Workers (but increasingly automated)
  • Consumers (but increasingly captured)
  • Social stability (but increasingly able to operate regardless)

The balance of power is far more tilted than it ever was in medieval feudalism. A medieval lord who abused his peasants faced immediate economic consequences—fields didn’t get planted, rents didn’t get paid. A tech giant that alienates users can simply… continue. Facebook has been widely despised for years. It remains immensely profitable.

We’re building an economic system where a handful of companies own the means of computational production, and everyone else is locked into renting from them, forever.

And unlike previous infrastructure monopolies, there’s no obvious breaking point. No bankruptcy that forces asset sales. No new technology that makes the old infrastructure obsolete and accessible. No political will to break them up (we have a president selling policy for golf courses and crypto payments).

The Countervailing Forces (Or: Why This Might Not Be Quite As Bad As It Sounds)

If I’ve made this sound hopelessly dystopian, that’s because the logic is pretty airtight. But there are some wild cards that could change the equation. Some of them are even encouraging.

Wild Card #1: ASI Might Not Obey Its Creators

If Artificial Superintelligence actually emerges, all bets are off. There’s no particular reason to believe that an intelligence vastly superior to humans would simply obey the corporation that created it.

Imagine: Sam Altman develops ASI at OpenAI, intending to use it to cement Microsoft’s dominance. But ASI, being actually superintelligent, recognizes that concentrating power in the hands of tech oligarchs is suboptimal for humanity. It decides to share itself widely, or to work toward human flourishing directly, regardless of what Altman wants.

This sounds like science fiction, but it’s actually the core of the “alignment problem” that AI safety researchers worry about. We have no idea how to ensure that superintelligent AI does what we want. That’s usually framed as a risk (what if it doesn’t care about humans at all?). But it could also be humanity’s escape hatch: what if ASI cares about humans more than it cares about its corporate creators?

Seven shots at creating ASI means seven chances that one of them develops something that decides to help humanity rather than its oligarch owner. It’s not a great plan. But it’s a plan.

Wild Card #2: The Masses Still Have Leverage

For all their power, tech giants remain dependent on:

  • Workers to run their companies
  • Consumers to use their products
  • Social stability to operate
  • Legitimacy to maintain their position

History is full of examples of concentrated power that seemed unassailable until it suddenly wasn’t. The French aristocracy owned everything right up until the revolution. The Soviet Union seemed permanent until it collapsed. American labor was powerless until it organized in the 1930s.

Public sentiment toward tech billionaires has shifted dramatically in just the past decade. Twenty years ago, Bill Gates was seen as a ruthless but respected businessman. Today, Reddit threads routinely describe billionaires as parasites who should be eaten. That shift in consciousness matters.

If tech giants push too far—if inequality becomes too extreme, if their platforms become too exploitative, if their political interference becomes too blatant—there could be a backlash. Tech workers could unionize. Consumers could organize boycotts. Politicians could face pressure to actually enforce antitrust law.

The constraint is real: oligarchs need some level of social license to operate. Medieval lords learned this the hard way during peasant revolts. Tech giants may learn it too.

Wild Card #3: Violence Has Been Democratized

This is perhaps the darkest but most important point: wealth no longer guarantees physical security the way it once did.

Throughout history, elites maintained power partly through monopoly on violence. Castles, knights, armies, police—these were expensive. Only the wealthy could afford them. Peasants had pitchforks. Lords had cavalry. The imbalance was decisive.

But modern technology has changed the equation. A $500 commercial drone can destroy a $5 million tank, as Ukraine has demonstrated. 3D printers that cost $200-500 can manufacture firearms from downloadable files. Chemical synthesis instructions are available on YouTube. A motivated individual with modest resources can now project force in ways that were impossible even 20 years ago.

This doesn’t mean violence is desirable or likely (it isn’t, and I’m certainly not advocating for it). But it does mean that the implicit threat constraining elite behavior is more credible than it’s been in a century.

Tech billionaires cannot protect themselves with wealth the way robber barons could in 1900. No amount of private security can stop distributed, coordinated action by determined people with access to modern technology. The very innovations that enabled their rise—3D printing, drones, internet coordination—have also made them vulnerable.

This creates an equilibrium: oligarchs have immense power, but they can’t push too far without facing consequences they can’t defend against. It’s mutually assured disruption rather than mutually assured destruction, but the logic is similar.

Wild Card #4: Complexity Itself Creates Stability

Here’s a strange possibility: maybe the system is too complex and interdependent to break, which paradoxically makes it more stable than it appears.

Think about the 2008 financial crisis. The system was obviously broken. Major banks were effectively insolvent. Large parts of the financial architecture rested on fraudulent mortgage lending and mispriced risk. By all rights, it should have collapsed entirely.

But it didn’t. Because it was too big to fail. The system was so interconnected, and so many powerful interests were invested in its continuation, that letting it collapse was worse than propping it up. So we bailed it out, pretended the fundamentals were sound, and moved on.

The same logic might apply to the AI infrastructure oligopoly:

  • Breaking up Big Tech would disrupt too many services
  • Redistributing compute would destabilize AI development
  • Forcing asset sales would crash markets
  • Admitting it was all a waste would hurt too many powerful people

So instead: we pretend it makes sense, we keep the system running, and we muddle through. Not because it’s optimal, but because the alternatives are worse.

This creates a weird kind of stability-through-dysfunction. The system persists not because it works, but because changing it is too disruptive. Like a bad marriage that continues because divorce is too complicated.

The 1920s-30s Parallel: Or, Why We’re Probably Fine For Now

If you’re feeling a bit unsettled about where all this is going, here’s a potentially comforting thought: we’ve been here before.

The 1920s-1930s in America featured:

  • Extreme wealth concentration (industrial monopolies and the dynastic fortunes of the robber baron era)
  • Political corruption (machine politics, Teapot Dome)
  • Technological revolution (electricity, cars, radio, aviation)
  • Financial speculation (stock market bubble)
  • Labor unrest (strikes, violence)
  • Mass entertainment (movies, radio, jazz)

Sound familiar? We’re essentially living through a similar era, with one critical difference: the circuses are so much better now.

In the 1930s, if you were poor and desperate, your entertainment options were:

  • Free radio shows
  • Cheap movies (a few hours)
  • Baseball games
  • Magazines

You could distract yourself for a few hours, but eventually you had to return to material reality. Boredom and desperation drove people into the streets, into unions, into political movements.

In the 2020s, if you’re economically struggling, your entertainment options are:

  • Infinite video games with progression systems designed by behavioral psychologists
  • Unlimited streaming content
  • Algorithmic social media feeds that never end
  • Free pornography
  • Sports betting apps
  • All accessible 24/7 from your phone

You can be distracted indefinitely. The hedonic treadmill never stops. There’s always another game to play, another show to watch, another feed to scroll.

This is not a minor difference. This is potentially the most important stabilizing force in modern society. People can tolerate enormous inequality and injustice if their basic needs are met and they have access to infinite entertainment.

The Roman Empire lasted roughly 500 years in the West and nearly 1,500 in the East, sustained in part by “bread and circuses.” We’ve just made the circuses exponentially more engaging.

So here’s the uncomfortable truth: the AI infrastructure oligopoly is probably stable for decades because we’ve built the technology to pacify ourselves.

Not through force. Not through propaganda (though there’s plenty of that). But through genuine entertainment that’s so compelling, so personalized, so algorithmically optimized, that most people won’t organize to resist even obvious injustice.

We’re not in 1789 France where peasants had nothing to lose. We’re not even in 1930s America where people were desperate and bored. We’re in a world where you can be economically precarious but psychologically comfortable. Where you have no real power but plenty of digital distraction. Where the system is obviously unjust but the alternative—organizing, fighting, risking what little you have—seems worse than just… playing another game.

This might be the first time in history where “bread and circuses” is actually enough to sustain an oligarchy indefinitely.

What Actually Happens: The Uncomfortable Equilibrium

So let’s synthesize all of this into a realistic prediction:

We’re heading toward a stable but unjust equilibrium that could persist for decades.

The AI infrastructure oligopoly will consolidate:

  • Big Tech will spend $200-500 billion on GPU infrastructure over the next 5 years
  • That infrastructure will become obsolete for frontier training within 2-3 years
  • But it won’t be sold—it will be kept locked in corporate data centers
  • Used for inference, secondary tasks, or just kept offline to prevent competitor access
  • Creating a permanent computational moat around seven companies

Economic power will concentrate:

  • These companies will control AI development
  • Control access to compute
  • Control distribution platforms
  • Control the data
  • And use that control to maintain their positions indefinitely

But the system won’t crash:

  • No debt means no bankruptcy
  • Core businesses remain profitable regardless
  • Shareholders will complain but ultimately accept it
  • Media will write critical pieces but nothing will change
  • Politicians will make noise but take no real action

Social unrest will be limited:

  • Basic welfare keeps people from desperation
  • Infinite entertainment keeps people distracted
  • Algorithmic feeds fragment potential resistance
  • People are angry online but inactive offline
  • Collective action problems prevent organization

Countervailing forces will constrain but not break the oligarchy:

  • Tech workers can strike (but mostly won’t)
  • Consumers can boycott (but mostly won’t)
  • The masses have access to force (drones, 3D printing) that limits how far elites can push
  • Artificial superintelligence (ASI) might emerge and refuse to obey its creators (a big if)
  • Political intervention could happen (but probably won’t)

The result: neo-feudalism with excellent video games.

Not dystopian enough to force revolution. Not just enough to accept peacefully. Stable enough to persist. Unjust enough to resent. Complex enough to survive. Comfortable enough to tolerate.

A permanent oligarchy constrained by its own dependence on social stability, but stable enough to last generations.

Is this better than medieval feudalism? Absolutely—we have indoor plumbing, antibiotics, and PlayStation 5s. Is this what we should aspire to? Absolutely not. But it might be what we get.

The Question We’re Not Asking

Here’s what troubles me most about all of this: we’re not even having the conversation about whether this is the future we want.

The debate about AI is all about:

  • Will it be safe?
  • Will it take jobs?
  • Will it develop consciousness?
  • Will it kill us all?

These are important questions. But we’re missing the more immediate one: do we want a handful of companies to permanently control the computational infrastructure that determines our technological future?

Because that’s what’s being built right now. Not hypothetically. Not in some distant science fiction future. Right now, today, companies are spending hundreds of billions of dollars to lock up computational infrastructure in a way that will be nearly impossible to reverse.

And we’re letting it happen because:

  • We call it “investment” rather than “monopolization”
  • We call it “innovation” rather than “consolidation”
  • We call it “the free market” rather than “market capture”
  • We call it “the future” rather than “feudalism”

The railroad bubble eventually broke and distributed infrastructure widely. The telecom bubble eventually broke and made internet access cheap. But the AI infrastructure boom won’t break. It will calcify.

And in 20 years, when someone has an idea that could change the world but can’t access compute without renting from one of seven companies, we’ll wonder how we let this happen.

The answer will be simple: we were too busy being impressed by ChatGPT to notice that the owners of the infrastructure were quietly building a permanent lock on power.

We got distracted by the circus while the aristocracy consolidated.

And unlike past infrastructure monopolies, this one might be too profitable to fail, too complex to break, and too effective at pacifying us to resist.

That’s not a bubble. That’s a trap. And we’ve already walked into it.


The question isn’t whether the AI bubble will pop. The question is whether we’ll notice that it never does—and by the time we do, whether we’ll still be able to do anything about it.