The AI Boom That Isn’t: Oklahoma’s Business and Political Leaders Bought the Copper Version of AI — and It’s Already Obsolete
Just as copper networks collapsed the moment fiber arrived, NVIDIA’s copper-wired GPU farms are being overtaken by Google’s light-based TPU clusters. Yet Oklahoma’s leaders are rushing to subsidize the copper version anyway.
I. A Billion-Dollar Bet on Yesterday’s Technology
Oklahoma’s political and business leaders are patting themselves on the back: data-center campuses near Tulsa, promised “AI jobs,” massive tax abatements, discounted utilities, and the image of “Oklahoma on the cutting edge.”
But — make no mistake — we aren’t buying the future. We’re buying the copper version of the future.
The data centers being proposed around Owasso, Coweta, Sand Springs, and east Tulsa are built around NVIDIA GPU farms: thousands of GPUs wired together with hundreds of thousands of copper cables, drawing massive power for computation and cooling.
That architecture is already showing its age — and the world is moving on.
II. Why the GPU Data-Center Model Is Already Being Outclassed
Light-based TPU clusters are winning — fast.
Leading AI infrastructure is no longer using copper-wired GPUs. Instead:
Google’s TPUs (Tensor Processing Units) are now linked by optical interconnects, including optical circuit switches between pods, rather than copper cabling. That means far higher bandwidth, lower latency, and, crucially, far lower power and cooling demand. (arXiv)
According to recent benchmarks, TPU-based systems can deliver 2–4× better performance per dollar, and more than 2× the cost-efficiency on inference workloads, compared with GPU-based systems of similar capability. (Google Cloud)
One analysis of TPU vs GPU shows that TPUs consume 175–250 W per chip, whereas high-end GPUs can draw 700–1,000 W each under heavy AI loads. (Artech Digital)
According to energy-efficiency studies, TPUs can outperform GPUs by 2–3× on performance per watt while delivering comparable or better training throughput. (Medium)
As one equity-research note recently put it:
“Google’s latest TPU v7 chip marks a major inflection point in the AI hardware race, closing the longstanding performance gap with NVIDIA’s cutting-edge GPUs.” (lighthouse-canton.com)
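Taken at face value, the per-chip wattage figures cited above imply large operating-cost differences at campus scale. A rough sketch of that arithmetic follows; the electricity rate and chip count are hypothetical assumptions for illustration, not figures from this article or any proposed site.

```python
# Back-of-envelope annual energy cost, using the per-chip power draws
# cited above (TPU: up to ~250 W; high-end GPU: up to ~1,000 W).
# The rate and fleet size below are HYPOTHETICAL assumptions.

HOURS_PER_YEAR = 24 * 365          # 8,760 hours of continuous operation
RATE_USD_PER_KWH = 0.08            # assumed industrial electricity rate
CHIPS = 100_000                    # assumed campus-scale deployment

def annual_energy_cost(watts_per_chip: float, chips: int = CHIPS) -> float:
    """Annual electricity cost in USD for a fleet of chips at full load."""
    kwh = watts_per_chip / 1000 * HOURS_PER_YEAR * chips
    return kwh * RATE_USD_PER_KWH

gpu_cost = annual_energy_cost(1000)   # upper-bound GPU figure cited above
tpu_cost = annual_energy_cost(250)    # upper-bound TPU figure cited above

print(f"GPU fleet: ${gpu_cost:,.0f}/yr")
print(f"TPU fleet: ${tpu_cost:,.0f}/yr")
print(f"Difference: ${gpu_cost - tpu_cost:,.0f}/yr")
```

Under these assumed numbers the gap runs to tens of millions of dollars per year in electricity alone, before counting the extra cooling that the hotter chips require.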
In short: TPUs are faster, leaner, cheaper to power, and easier to cool. The GPU-heavy, copper-wired data-center model simply cannot match that on cost, energy demand, or scalability.
III. Why These Efficiency Gains Matter for Oklahoma — and What They Reveal
TPUs don’t just win on raw compute. They win on infrastructure demands:
Lower electricity consumption → smaller power draw → fewer or smaller substations, fewer upgrades
Less heat + less wasted power → reduced cooling requirements → lower water usage (or, potentially, no evaporative water cooling at all)
Because optical interconnects are more efficient than copper networking fabrics, AI clusters scale cleaner, without exponential growth in wiring, heat, and energy overhead. (Introl)
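That cascade from power draw to grid and water demand can be sketched with standard data-center ratios. The PUE (power usage effectiveness) and WUE (water usage effectiveness) values below are generic industry-style assumptions, not measurements of any proposed Tulsa-area facility, and the IT loads are hypothetical.

```python
# How a smaller IT load cascades into grid draw and water demand.
# PUE and WUE defaults are generic industry-style ASSUMPTIONS;
# the megawatt figures are HYPOTHETICAL campus sizes.

def campus_footprint(it_load_mw: float,
                     pue: float = 1.4,          # total power / IT power
                     wue_l_per_kwh: float = 1.8  # liters of water per kWh of IT energy
                     ) -> tuple[float, float]:
    """Return (total grid draw in MW, annual water use in millions of gallons)."""
    total_mw = it_load_mw * pue                   # cooling + overhead on top of IT load
    annual_kwh = it_load_mw * 1000 * 24 * 365     # IT energy per year
    gallons = annual_kwh * wue_l_per_kwh / 3.785  # liters -> gallons
    return total_mw, gallons / 1e6

gpu_campus = campus_footprint(100)  # assumed 100 MW GPU campus
tpu_campus = campus_footprint(30)   # same compute at ~3x better perf/W (assumed)

print(gpu_campus)
print(tpu_campus)
```

Under these assumptions a 100 MW campus draws roughly 140 MW from the grid and evaporates water on the order of hundreds of millions of gallons per year, the same scale that triggered the Tucson backlash described below; cut the IT load and both numbers shrink proportionally.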
That undermines the entire pitch behind Oklahoma’s data-center subsidies:
You don’t need massive amounts of cheap water, cheap electricity, or huge tax giveaways if your hardware is built on the latest technology.
It also means that if a data center built today around GPU farms is later converted (or partly reused) for TPU-based infrastructure, much of its “infrastructure footprint” (high-voltage lines, oversized cooling, oversized water delivery) becomes wasted. A white elephant.
IV. Who’s Building Oklahoma’s Data Centers — and Who Benefits
The companies behind Tulsa-area projects aren’t local. They are outsiders betting on low-cost infrastructure:
Beale Infrastructure — a San Francisco–based developer backed by a major New York–listed asset manager. Their pitch: build large data-center campuses wherever the incentives are generous.
On the ground, the real-estate deals are signed by Quartz Mountain Properties, LLC, a shell company operating from a Chicago registered address and incorporated in Delaware. It owns or options hundreds of acres around Owasso, Coweta, and east Tulsa.
These are not “build local, stay local” plays. They are real estate and infrastructure arbitrage — pick the cheapest region, soak up incentives, build heavy-asset facilities, then lease them to whoever wants raw compute.
Oklahomans aren’t equity holders. We won’t see stock upside. We won’t see long-term dividends.
We only see higher utility bills, strained water systems, and overbuilt infrastructure.
V. The Tucson Warning: This Isn’t Theory — It’s Happening Elsewhere
In Tucson, the same developer pitched a major data-center complex — then ran headfirst into public backlash when the water usage and power demand became clear:
Water draw measured in hundreds of millions of gallons per year.
Proposed load requiring massive new substations and transmission upgrades.
Minimal permanent jobs despite massive infrastructure demand.
School districts and local utilities facing steep long-term costs.
The community pushed back. Tucson walked away.
Here in Oklahoma — with weaker regulation, bigger tax incentives, and less public scrutiny — the deal is on the table again.
And the situation is worse: we’re building with copper-based GPU farms just as TPU/light-based systems are beginning to dominate.
VI. The Real Costs — and Who Pays Them
These aren’t small costs:
Upgraded transmission lines and substations
Expanded water supply systems and increased water consumption
Additional cooling infrastructure, piping, and maintenance
Electricity generation capacity — often from fossil-fuel peaker plants
Discounted utility rates locked in by long-term contract
Who pays for all this?
You — the residential ratepayer.
You — when water rates rise.
Your kids — when schools lose tax revenue from abatements.
Your community — when water resources stretch thin, summer after summer.
These data centers will generate very few long-term jobs — maybe a few dozen per site.
But they’ll saddle entire communities with infrastructure costs for decades.
And that’s if the hardware lasts. If, ten years down the road, TPU or even newer optical-AI hardware takes over (as many insiders expect), these GPU-based facilities could become ghost towns full of stranded equipment.
VII. The Moral of the Story for Oklahoma Policy Makers
If Oklahoma wants to bring real, future-proof AI infrastructure to the state — fine. But it must do three things before writing multi-billion-dollar checks and signing 20-year tax-and-utility deals:
Recognize the hardware shift. GPU farms built on copper may already be heading for obsolescence.
Require infrastructure costs to be borne by developers — not taxpayers or ratepayers. If you want to build a supercomputer campus, pay for the grid upgrades, water infrastructure, environmental mitigation, and long-term maintenance.
Demand transparency and local benefit. Who owns the skyscraper-sized data sheds? Who profits? What happens if ownership changes? What happens when hardware is replaced?
Without those safeguards — and without a real understanding of how AI hardware is evolving — Oklahoma risks writing a check for someone else’s vintage gear, while getting stuck with the bill.
VIII. Final Word
We’ve been documenting on wlangdon.substack.com how residential electricity bills in Oklahoma keep going up — often for reasons unrelated to ordinary use. Now, state and local leaders are rushing to green-light massive data-center projects that double down on the same waste: oversized grid upgrades, heavy power demand, extensive cooling, and heavy water use.
That might have made sense when GPUs dominated AI. But GPU dominance is ending.
Today’s frontier is optical-connected TPU clusters — faster, cooler, cleaner, and far more efficient.
If Oklahoma’s leaders want to be part of the AI future, they need to see which version of that future is actually winning — and stop subsidizing a dying generation of data hardware.
Because right now, they’re asking Oklahomans to pay for copper in a world that runs on light.

