Commentary

Will performance trigger a second AI bubble?

If AI’s incredible expense is going to prove a wise investment, we will have a permanent need for data centers with the size and power drain of Manhattan. A bunch of them, and they will have to pay back today’s investment with tomorrow’s earnings.

Today, a high-end x86 server might cost ten months’ rent for an Azure equivalent. In many cases, companies are spending enough to buy a new server every few months for the same (or probably less) performance in the cloud.

Call it 10% of a physical server’s cost per month to remand it to Azure.

Of course there are infrastructure advantages in the cloud like geodiversity. It’s useful to look at cloud pricing as a factor of hardware cost as long as we recognize it’s an unscientific rule of thumb. You have to count the cost of things like bandwidth, storage, and clean environmental control as well.

That’s a clean environment for the server, not the luckless humans who have to cope with a power greedy data center next door.
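The rule of thumb above reduces to simple arithmetic. A short sketch, where the server price and the 10% monthly rate are illustrative assumptions from this article, not actual Azure pricing:

```python
# Back-of-envelope sketch of the article's rule of thumb: renting
# cloud capacity costs roughly 10% of the equivalent server's
# purchase price every month. Illustrative numbers only.

def monthly_cloud_cost(server_price: float, rate: float = 0.10) -> float:
    """Estimated monthly cloud spend for hardware worth server_price."""
    return server_price * rate

def breakeven_months(server_price: float, rate: float = 0.10) -> float:
    """Months of cloud rental that add up to buying the server outright."""
    return server_price / monthly_cloud_cost(server_price, rate)

price = 20_000  # hypothetical high-end x86 server
print(monthly_cloud_cost(price))  # 2000.0 per month in the cloud
print(breakeven_months(price))    # 10.0 -- ten months' rent buys the box
```

At a 10% monthly rate, the break-even is always ten months, which is where the "ten months' rent" figure comes from.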

Nvidia’s Spark sells for $3,999. That’s about twice what an original IBM PC cost, and since we’re talking 1980s bucks versus brave-new-world dollars, it’s actually cheaper than the family 5150 in your hall closet was.

The Spark comes with 128 gig of memory. Keeping to the IBM PC meme, 128 gig ought to be enough for anybody. You get 4 TB of SSD, and it delivers 1 petaflop of compute performance. That’s 1,000 teraflops, or one million gigaflops.

One gigaflop was a wild-eyed dream when the PC was introduced. The Spark runs a variant of Ubuntu Linux. It’s approachable and affordable. You can get one today.

So, given that a 1980s data center will fit in an enclosure about the size of the audio cassette deck the original PC used for mass storage, are we going to need those mega data centers long enough for them to settle their debts?

Here’s my futurist perspective on strategy. What if we had wanted “panamax” cargo ships in, say, 1740?

James Watt defined the horsepower in 1782 as the sustainable output of one horse through a workday. Run three shifts around the clock on treadmills and you could equal a modern panamax freighter’s power with 150,000 horses, assuming you had some way to cram all those ponies into the ship along with enough hay to keep them happy.

Or, instead of building an equestrian panamax in 1740, you could have worked on design advancements. If you had waited until 1800 you would have had the benefits of steam power, much to the relief of overworked horses.

You still wouldn’t have had 50,000-horsepower engines, but you’d have been a step closer.

Wait a little more. Sail power finally exits commercial shipping around World War I, just after diesel becomes practical in 1912 with the launch of Selandia, the world’s first ocean-going vessel powered by diesel engines.

Now you’re on a roll. The panamax standard came out in 1914. By the 1940s, gas turbine marine engines appeared on the high seas.

Two hundred years is a long time to wait, but was it ever really practical to shovel the digestive product of 150,000 horses overboard?

Is it practical to deal with the cost (and environmental droppings) of Manhattan-sized data centers today when technology will surely reduce the physical impact of equivalent computation? Are earth-moving machines a better path to AI superiority than elegant design and talented engineers?

We’re replacing innovation with brute force. Are we fighting the wrong war?

From the IBM PC in 1981 to Deep Blue’s 1997 defeat of Kasparov was 16 years. Fourteen years later, IBM’s Watson won Jeopardy! against human champions. Eleven years after that, ChatGPT arrived.

Three years later, Nvidia’s Jensen Huang hand-delivered the first Spark to Elon Musk, about 44 years after the 4.77 MHz PC was released.

What, do you suppose, will we see in the next 44 years? Or the next 4.4 years?

Of course, a Spark can’t keep up with even one rack in those megacenters. But all a cluster of Sparks has to do to render massive data centers obsolete before they can pay for themselves is keep up with the AI needs of a single corporation.

An Nvidia Spark has approximately the space and power requirements of a Mac Mini. Meta’s data center in Louisiana will consume as much power as a major city and occupy more than 2,000 acres.

Apply the 10%-a-month rule of thumb and the Spark sets the cloud computing price at about $400 per petaflop per month. Is that enough revenue to pay back trillions in AI buildout?
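That $400 figure falls straight out of the article’s own numbers. A minimal sketch, assuming the 10%-a-month rule of thumb and the Spark’s quoted list price and performance:

```python
# Reconstructing the ~$400/petaflop-month figure from the Spark's
# list price and the 10%-per-month rule of thumb. Illustrative
# arithmetic only, not vendor pricing.

SPARK_PRICE_USD = 3_999   # Nvidia Spark list price
SPARK_PETAFLOPS = 1.0     # quoted compute performance
RULE_OF_THUMB = 0.10      # cloud cost ~10% of hardware price per month

monthly_cost = SPARK_PRICE_USD * RULE_OF_THUMB        # ~$400/month
per_petaflop_month = monthly_cost / SPARK_PETAFLOPS   # ~$400 per petaflop-month

print(round(monthly_cost))        # 400
print(round(per_petaflop_month))  # 400
```

By that yardstick, every petaflop a data center sells has to earn back its share of a trillion-dollar buildout at roughly $400 a month.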

If the AI financial bubble bursts and is, no doubt, patched over with a bailout, will we see a second crisis brought on by compute performance itself?

My thinking: haul a halyard until steam is practical. There’s less to shovel that way.

I don’t know if AI is going to collapse. It’s a dread theory I favor with considerable regret.

One thing I’m certain of: the same class of speculators who powered the run-ups to the dot-com and 2008 fumbles are already at work, dreaming up how to create another crisis for personal gain after the AI debt bubble bursts.

The question isn’t whether financiers have learned anything. Have we?

Invest wisely!

This article’s featured image is from Pixabay contributor 1tamara2. It’s AI-generated. That’s either a gentle poke at myself in graphic form or an indication of my own shallow thinking. I’ll abide by your decision. In any case, 1tamara2 came up with an interesting graphic.