
Sam Altman and OpenAI Are Victims of Their Own Hype


Last Friday, seemingly out of nowhere, the board of OpenAI announced that it had fired its celebrity CEO, Sam Altman, for being “not consistently candid in his communications,” offering no further detail. Things devolved from there, with Microsoft announcing it had hired Altman and OpenAI teetering on the brink of implosion. To the tech world, it’s a “seismic” story — like when Steve Jobs was fired from Apple, except that maybe the fate of humanity hangs in the balance. And like Jobs, Altman is returning to the company he co-founded after nearly all of the employees threatened to quit unless the board took him back.

There are plenty of theories about what happened here, some more credible than others. There are reports that Altman was attempting to raise funds for a new venture. There’s ample evidence that OpenAI’s nonprofit board, whose mission is to develop safe AI for the benefit of “humanity, not OpenAI investors,” was concerned about the direction Altman was taking the company. Within the company, according to The Atlantic, the release of ChatGPT deepened a rift between a true-believer faction represented by chief scientist Ilya Sutskever, who directed the coup against Altman, and a larger faction, led by Altman, that wanted to pursue growth and didn’t seem particularly concerned about, for example, destroying the world.

These are different takes on what happened. But they share one trait that has made the conversation around OpenAI, and AI in general, feel profoundly disconnected from reality and frankly a little bit insane: It’s all speculative. They’re bets. They’re theories based on premises that have been rendered invisible in the fog of a genuinely dramatic year for AI development, and they’re being brought to bear on the present with bizarre results.

There are dozens of nuanced positions arrayed along the spectrum of AI’s risk and potential, but among people in the industry, the biggest camps can be described this way: AI is going to be huge, therefore we should develop it as quickly and fully as possible to realize a glorious future; AI is going to be huge, therefore we should be very careful so as not to realize a horrifying future; AI is going to be huge, therefore we need to invest so we can make lots of money and beat everyone else who is trying, and the rest will take care of itself.

They’re concerned with what might happen, what should happen, what shouldn’t happen, and what various parties need to happen. Despite being articulated in terms of disagreement, they have a lot in common: These are people arguing about different versions of what they believe to be an inevitable future in which the sort of work OpenAI is doing becomes, in one way or another, the most significant in the world. If you’re wondering why people are treating OpenAI’s slapstick self-injury like the biggest story on the planet, it’s because lots of people close to it believe, or need to believe, that it is.

The novel, sci-fi-adjacent speculative topics in the AI discourse get the most attention for being sort of out there: the definition and probability of “artificial general intelligence,” which OpenAI describes as “a highly autonomous system that outperforms humans at most economically valuable work”; the notion and prospect of a superintelligence that might subjugate humans; thought experiments about misaligned AIs that raze the planet to plant more strawberries or destroy civilization to maximize paper-clip production; the jumble of recently popularized terms and acronyms and initialisms — x-risk, effective altruism, e/acc, decel, alignment, doomers — that suggest the arrival not just of a new technology but of an intellectual moment rising to meet it.

These are all interesting predictions with widely varying levels of evidence to support them; many of them merge predictions about technology (can sufficiently advanced large language models produce something resembling human intelligence?) with predictions about the social and economic effects of technology (will AI destroy jobs or create them?). To invest in them intellectually, however, still requires some level of belief — again, we’re talking about highly speculative futures here. To defend them requires confidence. To work professionally toward or against them would certainly foster something like faith, and it makes sense that a company like OpenAI would factionalize along ideological and to some extent spiritual lines, akin to schools of thought in an academic discipline or denominations within the same church. (At the edges, it’s a little bit of both.)

The commercial project of AI, while less evocative, is no less speculative: People believe that they stand to make inconceivable amounts of money, and OpenAI appeared to be the start-up to end all start-ups. But it also has revenues of about a billion dollars a year, was recently rumored to be raising money at an $86 billion valuation, and currently “incinerates” money on computing power. Microsoft is still in the early stages of trying to commercialize OpenAI’s technology in Windows, Office, GitHub, and Bing. Google has invested billions of dollars in generative AI products that mostly remain in testing; Meta, Amazon, and practically every other large tech company are in the process of rolling out new tools, but nobody’s making any real money on this stuff yet. Almost across the board, in fact, these companies are hemorrhaging money on AI. They’re doing this on purpose, of course. Investors and companies are placing bets. Some of them are confident. Others don’t want to miss out. None of them, of course, actually know what’s going to happen. If they see opportunities, they haven’t yet had a chance to realize them. If they predict major threats, they haven’t yet materialized. In the future, we’re talking about a whole new world. In the present, we’re talking about … Microsoft 365 Copilot and bringing AI to Excel.

Which, yes, of course: The effects of new technology remain to be seen, nobody wants to be caught flat-footed, and there’s a lot to gain by getting in front of a major change or to lose by not. We’re just getting started! Every industry makes decisions based on uncertain predictions, and tech especially. But a lot of discussions about what’s happening at OpenAI — the leading AI firm, but also one of many; a nonprofit that became a start-up that became, in all but name, a subsidiary of a notoriously boring tech company — have inherited the industry’s narrow sense of destiny and inevitability, represented as a range of extreme outcomes: apocalypse; mass automation and sudden economic displacement; humanlike autonomous intelligence running amok; the meeting of all human needs, quickly and forever. Or rapid industry dominance by a single firm; geopolitical dominance by the country whose AI research or companies prevail; a payoff bigger than anyone’s ever imagined.


It’s a compelling range of possibilities that are interesting to pit against one another. They’re all apocalyptic, in the sense that they imagine, if not an end to, then a departure from the current world. In their extremity, they seem to account for all possible outcomes, except that they don’t: They leave out the implied but unspoken scenarios in which none of them quite comes to pass. There are countless plausible but far less interesting scenarios in which the world doesn’t bend to AI, but rather AI bends to the world. Scenarios in which maybe OpenAI is just another big tech company or gets overtaken and marginalized by other firms in a crowded sector. Scenarios in which AI does not represent a categorically different technology of any sort, but rather a range of new partial-automation possibilities, resulting in halting, uneven changes to the labor market. Scenarios in which “alignment” is just a new term for ESG. Scenarios in which AI improves productivity, exaggerates rather than overtakes current features of capitalism, or subtly reallocates power and capital in ways that are hard for a single firm to monetize. This would be no small thing! But it would be familiar. And at this point, it would come to many in the industry — from doomers to profit-minded Microsoft executives — as a surprise and a genuine reputational or financial disaster.

What happened (and is happening) at OpenAI is dramatic and fascinating and may indeed matter a great deal. But it should also be legible as a governance breakdown at a late-stage start-up run by a small group of people with different instincts about how to scale, massive egos, and a big needy partner breathing down their necks. In attempting to anticipate various AI-related slippery slopes and pitfalls, OpenAI found itself with a strange, awkward, and ultimately useless organizational structure; it failed, in other words, at its very first prediction about how AI would play out in the real world and what could be done about it ahead of time. It’s obviously a mess, a fuck-up from a few different angles, and a setback for everyone involved. Maybe Sam Altman is Steve Jobs in exile or Robert Oppenheimer before Trinity. But maybe he’s Travis Kalanick.

OpenAI’s slapstick weekend should have demonstrated the dangers of running a company according to, basically, a collection of speculative short stories and called into question the collective predictive powers of a company, and an industry, that’s borrowing heavily — in intellectual and financial terms — against a range of futures that it can’t guarantee. Instead, it demonstrated how thoroughly those stories about artificial intelligence (and financial returns) have embedded themselves in the imagination of the entire tech industry, the press, and, to a lesser but real extent, the broader public. The sudden success of the company has helped launder long-shot predictions into a sort of incoherent conventional wisdom. The sense that what happens at OpenAI is much bigger than OpenAI has escaped containment, much to the credit of Altman.

OpenAI achieved something else of incredible value before getting anywhere near AGI. By entertaining every speculative AI scenario at once — and releasing an impressive chatbot to the public — it made itself synonymous with tech’s future. For Microsoft, but also for its peers and competitors, OpenAI is too hyped to fail.
