The Great AI Furnace: A Modest Proposal for Burning the Future Before We Know What It Is

There is a difference between invention and stampede.

Invention is when a group of people notices a problem, studies it, builds carefully, tests the consequences, and then tries to deploy the useful parts without setting the village on fire.

A stampede is when four trillion-dollar companies all see the same golden throne in the distance and begin paving separate eight-lane highways toward it using copper, water, rare earth metals, public electricity infrastructure, and the increasingly frail patience of everyone who still has to pay a utility bill.

Guess which one we are living through.

The current AI buildout has the energy of five men in a crowded room discovering a firearm at the same time. Each insists that he alone can be trusted with it. Each insists that the truly irresponsible thing would be letting another man reach it first. Each insists that the gun is mostly theoretical, probably safe, certainly innovative, and in any case already priced into the next earnings call.

Meanwhile the rest of us are saying:

It is fine to wave a stick. Please stop waving the gun.

That distinction matters. AI research is not the problem. Useful AI tools are not the problem. Accessibility tools, medical discovery, coding assistants, local creative systems, better search, better translation, better prosthetics for thought — all of these may be genuinely good. Some already are.

The problem is the industrial posture: build first, consume first, enclose first, ask questions later.

We are watching a private arms race over a public future. Each major firm wants to become the rail layer through which intelligence, work, search, education, software, media, and eventually ordinary decision-making must pass. Nobody wants to build the second-best god-box. So instead of one coordinated effort to build sane, efficient, interoperable public infrastructure, we get duplicate empires.

One company builds a furnace. Another builds a furnace. Another builds a furnace. Another builds a furnace and names it something friendly. Each furnace needs chips, memory, substations, water, land, backup power, engineers, training data, legal cover, and a large public-relations department to explain why the furnace is actually a meadow.

This is “competition,” we are told.

It is also waste with a business-card holder.

A single useful AI infrastructure buildout would require one set of resources; the current model creates several parallel resource demands, because each actor believes only one or two winners will dominate. They may be right. But that only makes the waste more absurd. If four companies build at planetary scale and one wins, the other three do not magically become wheat fields, libraries, hospitals, affordable housing, or functioning public transit. The concrete has already been poured. The chips have already been fabbed. The substations have already been built. The water has already been negotiated. The energy has already been consumed or committed.

You cannot un-mine the copper because your chatbot failed to achieve product-market fit.

This is the prisoner’s dilemma with cooling towers.

From the point of view of each firm, overbuilding is rational. From the point of view of civilization, it is deranged. The market says, “If we do not do it, our competitor will.” The planet says, “I do not care which logo was on the invoice.”

And the public, as usual, gets the bill.

Not necessarily as one clean invoice marked AI Megalomania Surcharge, though that would at least be honest. The bill arrives as higher power demand, strained grids, dirtier backup generation, water conflicts, component shortages, higher hardware prices, public subsidies, tax incentives, infrastructure upgrades, and the quiet reallocation of material reality toward private prediction machines.

For decades, ordinary people were told to conserve. Use efficient bulbs. Drive less. Turn down the thermostat. Worry about emissions. Think of the children. Think of the planet. Think of your carbon footprint, citizen.

Then capital discovered that intelligence might be rentable by the token, and suddenly the thermostat lecture was over.

Apparently the planet was in danger when you left a light on, but not when a corporation builds a synthetic oracle that requires its own private power horizon.

This is not a serious civilization. This is a civilization that scolds the poor for using plastic straws while billionaires commission artificial weather systems to answer email slightly faster.

The environmental hypocrisy would be easier to stomach if the outcome were clear, limited, and democratically chosen. It is not. We do not know what the labor effects will be. We do not know how many jobs will be eliminated, degraded, deskilled, surveilled, or turned into low-paid AI babysitting. We do not know what happens to education when every assignment becomes a prompt-engineering arms race. We do not know what happens to art when every style can be instantly digested and reproduced. We do not know what happens to public knowledge when synthetic text floods every searchable surface. We do not know what happens to human judgment when institutions begin outsourcing more and more decisions to systems whose reasoning is half statistical fog and half legal disclaimer.

We are not even sure whether the people building these systems understand them well enough to govern them.

But we are very sure they would like you to subscribe.

That is the insult: uncertainty for us, upside for them.

If this goes well, they own the platform. If it goes badly, workers “reskill,” ratepayers absorb grid costs, communities fight data-center water usage, schools scramble, artists sue, governments panic, and everyone else is told this is the inevitable price of progress.

Inevitable is a very useful word when nobody voted.

Then there is the so-called “safety” apparatus, much of which has the elegance of placing six guards in front of a fire exit while ignoring the faulty wiring. Instead of building smaller, bounded, context-aware systems with clear failure modes, companies often build enormous general-purpose systems and then bolt on refusal layers, moderation classifiers, vague policy filters, and customer-service apologies.

This is not fire safety. This is exit policing.

When the system misreads a harmless request, the user rephrases. When the rephrase fails, the user tries again. The prompt gets longer. The computation grows. The frustration increases. The system generates more waste while calling itself responsible. In the name of safety, it creates latency, confusion, adversarial behavior, and distrust.

A real safety culture would ask: where are the hazards, how do they propagate, what permissions should the system have, what should it never touch, what can be verified, who audits it, and how do people get out quickly when something goes wrong?

The current model too often asks: can we make the model say no in a way that protects the brand?

That is not safety engineering. That is liability choreography.

The deeper pattern is the same everywhere: private actors create public risks, then sell private access to the tools needed to survive those risks.

They automate work, then sell productivity subscriptions. They flood the internet with synthetic material, then sell detection and filtering. They create dependency, then sell enterprise reliability. They build opaque systems, then sell compliance dashboards. They destabilize knowledge, then sell “trusted AI search.” They turn every workplace into a test site, then sell training on how not to be replaced by the thing they are selling.

This is not merely innovation. It is arson with a service contract.

And yet the answer is not to retreat into anti-technology nostalgia. That would be too easy, and also wrong. AI is useful. It can help people write, code, translate, learn, design, repair, imagine, communicate, and build. It can serve disabled people, lonely people, overworked people, poor people, small businesses, researchers, teachers, artists, and weird half-feral engineers trying to make sense of a world that keeps hiding the manual.

The problem is not the existence of powerful tools.

The problem is putting those tools inside an ownership model that treats the future as a market to be captured before society can understand the terms.

The sane version would look very different. It would prioritize smaller models where smaller models work. It would use local and community infrastructure where possible. It would make energy and water use public. It would require data centers to justify their grid impact like serious industrial facilities, not magical office parks. It would demand interoperability so no company becomes the private toll road for thought. It would fund public compute for universities, libraries, nonprofits, municipalities, and small businesses. It would make safety an engineering discipline, not a press release with a velvet rope. It would ask, before deploying AI into a domain: does this actually need AI, and who is harmed if it fails?

Most importantly, it would reject the idea that speed is morality.

The companies building this future want speed to stand in for wisdom. They want scale to stand in for legitimacy. They want inevitability to stand in for consent.

But building quickly does not make a thing necessary. Winning a market does not make a thing good. Spending billions does not make a thing wise. And calling something innovation does not absolve it from being stupid at planetary scale.

We are in a narrow window now. Not because AI must be stopped, but because the shape of its deployment is still being decided. Once the furnaces are built, once the workflows are locked in, once the schools adapt around it, once the employers restructure around it, once the public infrastructure bends around it, the argument becomes much harder. At that point we will be told, as always, that the system is too large to change.

Which is why the question is not “AI, yes or no?”

The question is:

Who owns the machine? Who pays for the power? Who absorbs the risk? Who gets replaced? Who gets watched? Who gets rich? Who gets a say? And why, exactly, are we building four versions of the same furnace while the house is already hot?

The great firms will tell us they are building the future.

Maybe they are.

But there is a difference between building a future and strip-mining the present so that one company can rent the future back to us.

The first is civilization.

The second is a man waving a gun in a crowded room, insisting he is merely demonstrating the future of sticks and blaming others for ducking.