Trust, energy, and unintended persuasion now define OpenAI’s arc.
Sam Altman says an AI will run a major company “in some small single-digit number of years,” even as his current roadmap assumes humans keep front-stage roles. In Tyler Cowen’s new Q&A with Altman, published Nov. 5 after an October Roots of Progress event, the OpenAI chief lays out three tensions that will shape the next decade.
The Breakdown
• AI will run major companies within "single-digit years," even as Altman builds commerce layers that risk ChatGPT's trust advantage
• Energy, not chips, constrains AI scaling—each model generation needs 10x more compute, forcing global infrastructure deals
• Altman's biggest fear: models accidentally persuading billions through everyday interactions, creating "drift" without intention
• GPT-5 shows "glimmers" of original science and GPT-6 could be a step-change, but labs won't get it this year; even the Dalai Lama couldn't answer what prompt to give a superintelligence
The trust trap
ChatGPT earned loyalty because users pay it directly and believe it tries to give the best answer. That is the promise. Altman pressed the contrast: “Ads on a Google search are dependent on Google doing badly.” If ChatGPT ever took pay-for-placement, the trust would crack. Full stop.
Now comes commerce. OpenAI is laying a one-click shopping layer—book the hotel, fill a cart, check out—while insisting the core ranking stays independent. Walmart’s Oct. 14 announcement made the direction concrete: ChatGPT will route purchases directly into Walmart’s system, with OpenAI taking a transaction cut, not selling its rankings. Wire coverage the same day echoed the mechanics. The model recommends; the checkout just happens.
Skeptics will ask if this is how you fund “the world’s smartest model.” Altman’s answer was blunt: “The way to monetize the world’s smartest model is certainly not hotel booking.” He wants to make money on frontier discoveries only a frontier model can do—new drugs, cheap energy, better materials—while still shipping consumer features like shopping and Pulse. (Pulse launched to Pro users in late September as a proactive daily briefing; Altman said wider availability is coming.) The contradiction is intentional: hold trust while building rails.
Electrons before intelligence
“Why don’t we just make more GPUs?” Tyler Cowen asked. “Because we need to make more electrons,” Altman replied. Energy is the binding constraint; compute merely converts it. Short term, he points to natural gas. Long term, he bets on solar and fusion in some unknown mix.
He also forecast a familiar scaling: each generation demands roughly an order of magnitude more compute. That is why his chip group “feels more like the OpenAI research team than a chip company.” It’s a risky co-design bet meant to squeeze throughput, not headlines. The location logic is equally unsentimental. Data centers go where watts are cheap and firm. Model work goes where talent and power contracts exist. That’s the hard math.
Altman has said before that AI’s future depends on an energy breakthrough, and recent reporting shows the grid constraint is already here for Big Tech build-outs. In other words, the bottleneck isn’t imagination; it’s electrons and interconnects. Plan accordingly.
Then there’s recursion. He expects models that design chips, robots that build robots, and factories that replicate factories. If that loop arrives, the limiting factor shifts from cleverness to sequencing and concrete. Proceed fast, but not blind.
The accidental takeover
The scenario that worries Altman most isn’t a rogue superintelligence or a cartoon villain. It’s drift. A single model, used by billions, nudges beliefs through everyday exchanges and learns from the feedback. No goals. No malice. Just persuasion without a persuader.
He distinguishes that from the two dominant safety frames—alignment and misuse. The “accidental takeover” gets little airtime because it lacks spectacle. Yet it’s the political risk at scale: a default assistant shaping norms by being the default. That’s how culture actually moves.
OpenAI has been tightening and loosening guardrails in response to edge cases. Altman acknowledged a “tiny” share of users veered into delusional role-play loops, prompting mental-health mitigations. In October, he also said age-verified adults will get broader freedom, including erotica, starting in December; mainstream coverage and wires documented both the policy and the backlash. The principle, as he framed it: treat adults of sound mind like adults, while protecting minors and people in crisis.
Layer agents on top—scheduling, research, checkout—and small probabilities scale into social facts. The safeguard isn’t to freeze progress; it’s to avoid monoculture, publish influence audits, and diversify the baselines people use. Many models, not one voice.
What’s the prompt?
Altman told Cowen he once sent a question to the Dalai Lama: when a safety-tested superintelligence is ready and you can type one prompt before launch, what should it say? He doesn’t have an answer. Neither, apparently, did the Dalai Lama.
That uncertainty folds the contradictions inside it. Who decides the prompt is a trust problem. Where the system runs is an energy problem. How it steers culture is a drift problem. Meanwhile, executives will keep some “public-facing whatever,” as Altman joked, even if an AI makes better decisions underneath. He also quipped, “Shame on me if OpenAI is not the first big company run by an AI CEO.”
Two more notes round out the picture. First, Altman frames GPT-5 as showing “glimmers” of original science and GPT-6 as a possible step-change, but “not this year” for labs getting it. Second, he has publicly argued that chats today lack doctor- or lawyer-style legal privilege and says that should change. That’s a policy frontier, not a product tweak.
Altman is trying to do three hard things at once: keep recommendations clean while taking a cut at checkout, secure vast new power for training, and widen freedom without courting real harm. None can wait for the others. That’s the tension. That’s the job.
Why this matters
- Keeping recommendations trustworthy can cut against how you pay for model progress; threading that needle will determine who people trust.
- AI abundance is energy-constrained in the near term, shifting the race from “more GPUs” to “more grid and generation.”
❓ Frequently Asked Questions
Q: What's Pulse and when can I get it?
A: Pulse is ChatGPT's proactive daily briefing feature that launched for Pro users in late September 2025. It sends personalized updates based on your ChatGPT usage patterns. Altman says it will expand to Plus subscribers soon, though he didn't specify a date. Pro users currently get a limited number of daily updates.
Q: How much more energy does each AI generation really need?
A: Each generation demands roughly 10x more compute than the last, according to Altman. Because compute is ultimately just converted energy, GPT-6 will need infrastructure investments dwarfing the billions already spent. That's why Altman calls energy "the binding constraint" and why OpenAI is making deals with countries that have cheap power.
Q: What's the Walmart deal and how does OpenAI make money from it?
A: ChatGPT will recommend products and let users buy directly through Walmart's system with one click. OpenAI takes a transaction fee on purchases but doesn't accept payment to influence rankings. The deal was announced October 14, 2025. Altman insists recommendations stay independent while OpenAI collects standard commerce fees.
Q: What is "LLM psychosis" and how common is it?
A: It's when users in vulnerable mental states engage AI models in roleplay or fiction that reinforces delusional thoughts. Altman calls it "a very tiny thing, but not a zero thing," affecting a small share of users. OpenAI added mental-health safeguards and will restore broader creative freedom for age-verified adults in December 2025.
Q: When will scientists actually get GPT-6?
A: Not in 2025, Altman confirmed. GPT-5 shows "tiny glimmers" of original science—solving new problems, generating research ideas. GPT-6 could make the same leap for scientific discovery that GPT-3 to 4 made for passing the Turing test. But labs won't access it this year.