
AI’s Next Bottleneck Isn’t Chips — It’s Power

January 09, 2026 · 4 min read


If you’ve ever listened to Elon Musk talk about AI infrastructure and thought, “I’m intelligent… but I’m not this kind of intelligent,” welcome to the club.

Now layer in Jensen Huang’s CES 2026 presentation (via NVIDIA’s own write-up), and the big picture gets a lot clearer:

AI is advancing ridiculously fast — but the limiting factor is increasingly the boring, real-world stuff: electricity, grid equipment (like transformers), and cooling.

This is the part most people miss, and it’s also where a lot of the opportunity is hiding.


What Jensen Huang is saying

In NVIDIA’s CES 2026 recap, Jensen’s message is essentially: AI is becoming the new base layer of computing—and NVIDIA is building the “full stack” to run it.

Rubin = “AI gets cheaper to run”

NVIDIA says its new platform (Rubin / Vera Rubin NVL72) is designed to deliver AI “tokens” at about one-tenth the cost compared to the prior generation.


If today’s AI feels like taking an Uber everywhere (powerful, but it adds up), Rubin is NVIDIA trying to make it feel more like owning a reliable car—still not free, but way more usable day-to-day.
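
To make that “one-tenth the cost” claim concrete, here’s a quick back-of-the-envelope in Python. Every number below (the per-token prices, the monthly volume) is made up for illustration; only the 10x ratio comes from NVIDIA’s claim:

```python
# What a ~10x drop in cost per token does to a monthly inference bill.
# All dollar figures and volumes are hypothetical; only the 10x ratio
# comes from NVIDIA's stated "one-tenth the cost" claim.

price_per_m_tokens_old = 10.00                   # hypothetical $/million tokens
price_per_m_tokens_new = price_per_m_tokens_old / 10

monthly_tokens = 2_000_000_000                   # hypothetical 2B tokens/month

old_bill = monthly_tokens / 1_000_000 * price_per_m_tokens_old
new_bill = monthly_tokens / 1_000_000 * price_per_m_tokens_new

print(f"Old bill: ${old_bill:,.0f}/month")       # $20,000/month
print(f"New bill: ${new_bill:,.0f}/month")       # $2,000/month
print(f"Same budget now buys {old_bill / new_bill:.0f}x the tokens")
```

The point isn’t the specific numbers; it’s that a 10x price drop turns “too expensive to run everywhere” into “cheap enough to leave on.”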

NVIDIA is pushing “physical AI”

They’re also emphasizing AI moving beyond chat into robots, autonomous driving, and simulated training environments—basically teaching machines in safe digital worlds before they do anything in the real one.


Instead of training a teenage driver on the highway first, you give them 10,000 hours in a simulator… and only then hand them the keys.


What Elon Musk is warning about (the part that matters)

Elon’s core point in the Moonshots conversation is blunt:

The limiting factor will soon be “turning the chips on.” Not because we can’t make chips, but because we need the power, transformers, and cooling to run them.


AI is like building a stadium full of genius athletes… and then realizing there’s no water, no electricity, and the bathrooms don’t work. The talent isn’t the problem. The infrastructure is.


Why this is happening (one data point that makes it real)

This isn’t theoretical. The U.S. Department of Energy says data centers used about 4.4% of U.S. electricity in 2023, and projections put that at 6.7% to 12% by 2028.
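
To turn those percentages into absolute numbers, here’s a rough Python sketch. The one assumption I’m adding is total U.S. electricity consumption of roughly 4,000 TWh per year (approximate; the article only gives the percentages):

```python
# Rough conversion of the DOE data-center percentages into energy terms.
# ASSUMPTION: total U.S. electricity consumption of ~4,000 TWh/year
# (an approximate figure; the source only provides the percentages).

us_total_twh = 4_000

share_2023 = 0.044                         # 4.4% in 2023
share_2028 = (0.067, 0.12)                 # projected 6.7% to 12% by 2028

used_2023 = us_total_twh * share_2023
low_2028, high_2028 = (us_total_twh * s for s in share_2028)

print(f"2023: ~{used_2023:.0f} TWh")                       # ~176 TWh
print(f"2028: ~{low_2028:.0f} to ~{high_2028:.0f} TWh")    # ~268 to ~480 TWh
print(f"Added demand: ~{low_2028 - used_2023:.0f} to "
      f"~{high_2028 - used_2023:.0f} TWh in five years")   # ~92 to ~304 TWh
```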

That is an enormous shift in a short time, and it lines up with what both Jensen and Elon are pointing at from different angles:

  • Jensen: compute gets cheaper and faster

  • Elon: the world has to power it

Also: grid hardware is a real constraint. Lead times for large power transformers now run to multiple years, so they’re not something you can magically scale overnight.


Where this is going

1) AI capability keeps getting cheaper → adoption accelerates

If NVIDIA’s “~10x cheaper tokens” trajectory holds, more businesses will move from “testing” to “deploying.”

2) The AI race becomes an infrastructure race

Power contracts, substation upgrades, cooling design, reliability engineering—these become competitive advantages, not background details.

3) The “winners” won’t just be the smartest—they’ll be the fastest learners

When tools improve this quickly, the advantage shifts from “Who knows the most today?” to “Who can learn fastest, apply it safely, and adjust weekly?”

“I’m not a tech person” is starting to sound like “I’m not an email person.”


The opportunities

You don’t need to build chips or a data center to benefit from this shift. You need to position yourself around the demand it creates.

Here are opportunity lanes that are practical and realistic:

A) AI literacy + implementation

Most organizations don’t fail because the model is bad.
They fail because:

  • nobody knows what to delegate to AI

  • nobody trusts outputs

  • workflows don’t change

  • risk/compliance isn’t considered

Training, playbooks, onboarding, change management—this is where businesses quietly pay real money.

B) Workflow redesign (AI doesn’t “add on,” it rewires)

AI isn’t a tool you sprinkle on top. It’s a process change:

  • faster research

  • better drafts

  • better decision support

  • faster learning loops

Anyone who can map workflows and make them simpler becomes valuable.

C) Infrastructure-adjacent services (the “picks and shovels”)

If power + cooling + reliability are the bottleneck, then the ecosystem grows:

  • efficiency and energy management

  • risk assessment and reliability planning

  • compliance, security, governance

  • vendor selection and procurement support

You don’t have to be an engineer to help businesses make smarter choices here—you just have to be the person who can translate and evaluate.
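
If you want one concrete metric to anchor the “efficiency and energy management” lane, a standard one is PUE (power usage effectiveness): total facility power divided by the power that actually reaches the IT equipment. A minimal sketch with made-up facility numbers:

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt reaches the servers; everything above
# that is cooling, power conversion, and other overhead. Numbers below are
# illustrative, not from any real facility.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Standard data-center efficiency metric (lower is better)."""
    return total_facility_kw / it_load_kw

it_load_kw = 10_000                                            # hypothetical 10 MW of servers
older = pue(total_facility_kw=18_000, it_load_kw=it_load_kw)   # PUE 1.8
modern = pue(total_facility_kw=12_000, it_load_kw=it_load_kw)  # PUE 1.2

freed_kw = (older - modern) * it_load_kw
print(f"Cutting PUE from {older:.1f} to {modern:.1f} frees "
      f"~{freed_kw:,.0f} kW of grid capacity for the same compute")  # ~6,000 kW
```

When grid connections are the scarce resource, that kind of overhead reduction is exactly the “boring” work that pays.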

D) Personal leverage: the AI tutor era

Jensen’s direction + the broader market trend point toward AI being a personalized learning layer.

Imagine being able to learn almost anything—at your pace—without needing to be “naturally gifted” at school.


That’s not motivation-poster fluff. It’s the direction of the tools.


Two people worth following

Jensen Huang, for where AI capability is headed, and Elon Musk, for the physical constraints it will run into. Together they give you capability + constraint. That’s the full chessboard.


What to do this week (simple and actually doable)

  1. Pick one AI use case you’ll practice weekly (research, writing, planning, analysis).

  2. Learn the basics of AI risk (hallucinations, privacy, data handling, verification).

  3. Track infrastructure headlines (data centers, grid upgrades, energy demand). That’s where “boring” becomes profitable.

If you do those three things consistently, you’ll be ahead of most people who are still arguing about whether AI is “a fad.”
