Ilya Sutskever, Sam Altman, Mira Murati, and Greg Brockman, of OpenAI.
Photograph: Jessica Chou
The air crackles with an almost Beatlemaniac energy as the star and his entourage tumble into a waiting Mercedes van. They've just ducked out of one event and are headed to another, then another, where a frenzied mob awaits. As they careen through the streets of London (the short hop from Holborn to Bloomsbury), it's as if they're surfing one of civilization's before-and-after moments. The history-making force personified inside this car has captured the attention of the world. Everyone wants a piece of it, from the students who've waited in line to the prime minister.
Inside the luxury van, wolfing down a salad, is the neatly coiffed 38-year-old entrepreneur Sam Altman, cofounder of OpenAI; a PR person; a security specialist; and me. Altman is unhappily sporting a blue suit with a tieless pink dress shirt as he whirlwinds through London as part of a monthlong global jaunt through 25 cities on six continents. As he gobbles his greens (no time for a sit-down lunch today), he reflects on his meeting the previous night with French president Emmanuel Macron. Pretty good guy! And very interested in artificial intelligence.
As was the prime minister of Poland. And the prime minister of Spain.
Riding with Altman, I can almost hear the ringing, ambiguous chord that opens "A Hard Day's Night," introducing the future. Last November, when OpenAI let loose its monster hit, ChatGPT, it triggered a tech explosion not seen since the internet burst into our lives. Suddenly the Turing test was history, search engines were endangered species, and no college essay could ever be trusted. No job was safe. No scientific problem looked unsolvable.
Altman didn't do the research, train the neural net, or code the interface of ChatGPT and its more precocious sibling, GPT-4. But as CEO, and as a dreamer/doer type who's like a younger version of his cofounder Elon Musk without the baggage, he has become the technology's public face: one news article after another has used his photo as the visual symbol of humanity's new challenge. At least those that haven't led with an eye-popping image generated by OpenAI's visual AI product, Dall-E. He is the oracle of the moment, the figure people want to consult first on how AI might usher in a golden age, or consign humans to irrelevance, or worse.
Altman's van whisks him to four appearances that sunny day in May. The first is stealthy, an off-the-record session with the Round Table, a group of government, academia, and industry types. Organized at the last minute, it's on the second floor of a pub called the Somers Town Coffee House. Under a glowering portrait of brewmaster Charles Wells (1842–1914), Altman fields the same questions he gets from almost every audience. Will AI kill us? Can it be regulated? What about China? He answers every one in detail, while stealing glances at his phone. After that, he does a fireside chat at the posh Londoner Hotel in front of 600 members of the Oxford Guild. From there it's on to a basement conference room where he answers more technical questions from about 100 entrepreneurs and engineers. Now he's almost late to a mid-afternoon onstage talk at University College London. He and his group pull up at a loading zone and are ushered through a series of winding corridors, like the Steadicam shot in Goodfellas. As we walk, the moderator hurriedly tells Altman what he'll ask. When Altman pops onstage, the auditorium, packed with rapturous academics, geeks, and journalists, erupts.
Altman is not a natural publicity seeker. I once spoke to him right after The New Yorker ran a long profile of him. "Too much about me," he said. But at University College, after the formal program, he wades into the scrum of people who have surged to the foot of the stage. His aides try to maneuver themselves between Altman and the throng, but he shrugs them off. He takes one question after another, each time intently staring at the face of the interlocutor as if he's hearing the query for the first time. Everyone wants a selfie. After 20 minutes, he finally allows his team to pull him out. Then he's off to meet with UK prime minister Rishi Sunak.
Maybe one day, when robots write our history, they will cite Altman's world tour as a milestone in the year when everyone, all at once, started to make their own personal reckoning with the singularity. Or then again, maybe whoever writes the history of this moment will see it as a time when a quietly compelling CEO with a paradigm-busting technology made an attempt to inject a very peculiar worldview into the global mindstream, from an unmarked four-story headquarters in San Francisco's Mission District to the entire world.
This article appears in the October 2023 issue.
For Altman and his company, ChatGPT and GPT-4 are merely stepping stones along the way to achieving a simple and seismic mission, one these technologists may as well have branded on their flesh. That mission is to build artificial general intelligence, a concept that's so far been grounded more in science fiction than science, and to make it safe for humanity. The people who work at OpenAI are fanatical in their pursuit of that goal. (Though, as any number of conversations in the office café will confirm, the "build AGI" bit of the mission seems to offer up more raw excitement to its researchers than the "make it safe" bit.) These are people who do not shy from casually using the term "superintelligence." They assume that AI's trajectory will surpass whatever peak biology can attain. The company's financial documents even stipulate a kind of exit contingency for when AI wipes away our whole economic system.
It's not fair to call OpenAI a cult, but when I asked several of the company's top brass if someone could comfortably work there if they didn't believe AGI was truly coming, and that its arrival would mark one of the greatest moments in human history, most executives didn't think so. Why would a nonbeliever want to work here? they wondered. The assumption is that the workforce (now at approximately 500, though it might have grown since you began reading this paragraph) has self-selected to include only the faithful. At the very least, as Altman puts it, once you get hired, it seems inevitable that you'll be drawn into the spell.
At the same time, OpenAI is not the company it once was. It was founded as a purely nonprofit research operation, but today most of its employees technically work for a profit-making entity that is reportedly valued at almost $30 billion. Altman and his team now face the pressure to deliver a revolution in every product cycle, in a way that satisfies the commercial demands of investors and keeps ahead in a fiercely competitive landscape. All while hewing to a quasi-messianic mission to elevate humanity rather than exterminate it.
That kind of pressure, not to mention the unforgiving attention of the entire world, can be a debilitating force. The Beatles set off colossal waves of cultural change, but they anchored their revolution for only so long: Six years after chiming that unforgettable chord, they weren't even a band anymore. The maelstrom OpenAI has unleashed will almost certainly be far bigger. But the leaders of OpenAI swear they'll stay the course. All they want to do, they say, is build computers smart enough and safe enough to end history, thrusting humanity into an era of unimaginable bounty.
Growing up in the late '80s and early '90s, Sam Altman was a nerdy kid who gobbled up science fiction and Star Wars. The worlds built by early sci-fi writers often had humans living with, or competing with, superintelligent AI systems. The idea of computers matching or exceeding human capabilities thrilled Altman, who had been coding since his fingers could barely cover a keyboard. When he was 8, his parents bought him a Macintosh LC II. One night he was up late playing with it and the thought popped into his head: "Someday this computer is going to learn to think." When he arrived at Stanford as an undergrad in 2003, he hoped to help make that happen and took courses in AI. But "it wasn't working at all," he'd later say. The field was still mired in an innovation trough known as AI winter. Altman dropped out to enter the startup world; his company Loopt was in the tiny first batch of startups at Y Combinator, which would become the world's most famed incubator.
In February 2014, Paul Graham, YC's founding guru, chose then-28-year-old Altman to succeed him. "Sam is one of the smartest people I know," Graham wrote in the announcement, "and understands startups better than perhaps anyone I know, including myself." But Altman saw YC as something bigger than a launchpad for companies. "We are not about startups," he told me soon after taking over. "We are about innovation, because we believe that is how you make the future great for everyone." In Altman's view, the point of cashing in on all those unicorns was not to pack the partners' wallets but to fund species-level transformations. He began a research wing, hoping to fund ambitious projects to solve the world's biggest problems. But AI, in his mind, was the one realm of innovation to rule them all: a superintelligence that could address humanity's problems better than humanity could.
As luck would have it, Altman assumed his new job just as AI winter was turning into an abundant spring. Computers were now performing amazing feats, via deep learning and neural networks, like labeling photos, translating text, and optimizing sophisticated ad networks. The advances convinced him that for the first time, AGI was actually within reach. Leaving it in the hands of big corporations, however, worried him. He felt those companies would be too fixated on their products to seize the opportunity to develop AGI as soon as possible. And if they did create AGI, they might recklessly unleash it upon the world without the necessary precautions.
At the time, Altman had been thinking about running for governor of California. But he realized that he was perfectly positioned to do something bigger: to lead a company that would change humanity itself. "AGI was going to get built exactly once," he told me in 2021. "And there were not that many people that could do a good job running OpenAI. I was lucky to have a set of experiences in my life that made me really positively set up for this."
Altman began talking to people who might help him start a new kind of AI company, a nonprofit that would direct the field toward responsible AGI. One kindred spirit was Tesla and SpaceX CEO Elon Musk. As Musk would later tell CNBC, he had become concerned about AI's impact after having some marathon discussions with Google cofounder Larry Page. Musk said he was dismayed that Page had little concern for safety and also seemed to regard the rights of robots as equal to those of humans. When Musk shared his concerns, Page accused him of being a "speciesist." Musk also understood that, at the time, Google employed much of the world's AI talent. He was willing to spend some money for an effort more amenable to Team Human.
Within a few months Altman had raised money from Musk (who pledged $100 million, and his time) and Reid Hoffman (who donated $10 million). Other funders included Peter Thiel, Jessica Livingston, Amazon Web Services, and YC Research. Altman began to stealthily recruit a team. He limited the search to AGI believers, a constraint that narrowed his options but one he considered critical. "Back in 2015, when we were recruiting, it was almost considered a career killer for an AI researcher to say that you took AGI seriously," he says. "But I wanted people who took it seriously."
Greg Brockman is now OpenAIâs president.
Photograph: Jessica Chou
Greg Brockman, the chief technology officer of Stripe, was one such person, and he agreed to be OpenAI's CTO. Another key cofounder would be Andrej Karpathy, who had been at Google Brain, the search giant's cutting-edge AI research operation. But perhaps Altman's most sought-after target was a Russian-born engineer named Ilya Sutskever.
Sutskever's pedigree was unassailable. His family had emigrated from Russia to Israel, then to Canada. At the University of Toronto he had been a standout student under Geoffrey Hinton, known as the godfather of modern AI for his work on deep learning and neural networks. Hinton, who is still close to Sutskever, marvels at his protégé's wizardry. Early in Sutskever's tenure at the lab, Hinton had given him a complicated project. Sutskever got tired of writing code to do the requisite calculations, and he told Hinton it would be easier if he wrote a custom programming language for the task. Hinton got a bit annoyed and tried to warn his student away from what he assumed would be a monthlong distraction. Then Sutskever came clean: "I did it this morning."
Sutskever became an AI superstar, coauthoring a breakthrough paper that showed how AI could learn to recognize images simply by being exposed to huge volumes of data. He ended up, happily, as a key scientist on the Google Brain team.
In mid-2015 Altman cold-emailed Sutskever to invite him to dinner with Musk, Brockman, and others at the swank Rosewood Hotel on Menlo Park's Sand Hill Road. Only later did Sutskever figure out that he was the guest of honor. "It was kind of a general conversation about AI and AGI in the future," he says. More specifically, they discussed "whether Google and DeepMind were so far ahead that it would be impossible to catch up to them, or whether it was still possible to, as Elon put it, create a lab which would be a counterbalance." While no one at the dinner explicitly tried to recruit Sutskever, the conversation hooked him.
Sutskever wrote an email to Altman soon after, saying he was game to lead the project, but the message got stuck in his drafts folder. Altman circled back, and after months of fending off Google's counteroffers, Sutskever signed on. He would soon become the soul of the company and its driving force in research.
Sutskever joined Altman and Musk in recruiting people to the project, culminating in a Napa Valley retreat where several prospective OpenAI researchers fueled each other's excitement. Of course, some targets would resist the lure. John Carmack, the legendary gaming coder behind Doom, Quake, and countless other titles, declined an Altman pitch.
OpenAI officially launched in December 2015. At the time, when I interviewed Musk and Altman, they presented the project to me as an effort to make AI safe and accessible by sharing it with the world. In other words, open source. OpenAI, they told me, was not going to apply for patents. Everyone could make use of its breakthroughs. Wouldn't that be empowering some future Dr. Evil? I wondered. Musk said that was a good question. But Altman had an answer: Humans are generally good, and because OpenAI would provide powerful tools for that vast majority, the bad actors would be overwhelmed. He admitted that if Dr. Evil were to use the tools to build something that couldn't be counteracted, "then we're in a really bad place." But both Musk and Altman believed that the safer course for AI would be in the hands of a research operation not polluted by the profit motive, a persistent temptation to ignore the needs of humans in the search for boffo quarterly results.
Altman cautioned me not to expect results soon. "This is going to look like a research lab for a long time," he said.
There was another reason to tamp down expectations. Google and the others had been developing and applying AI for years. While OpenAI had a billion dollars committed (largely via Musk), an ace team of researchers and engineers, and a lofty mission, it had no clue about how to pursue its goals. Altman remembers a moment when the small team gathered in Brockman's apartment; they didn't have an office yet. "I was like, what should we do?"
I had breakfast in San Francisco with Brockman a little more than a year after OpenAI's founding. For the CTO of a company with the word open in its name, he was pretty parsimonious with details. He did affirm that the nonprofit could afford to draw on its initial billion-dollar donation for a while. The salaries of the 25 people on its staff, who were being paid at far less than market value, ate up the bulk of OpenAI's expenses. "The goal for us, the thing that we're really pushing on," he said, "is to have the systems that can do things that humans were just not capable of doing before." But for the time being, what that looked like was a bunch of researchers publishing papers. After the interview, I walked him to the company's newish office in the Mission District, but he allowed me to go no further than the vestibule. He did duck into a closet to get me a T-shirt.
Had I gone in and asked around, I might have learned exactly how much OpenAI was floundering. Brockman now admits that "nothing was working." Its researchers were tossing algorithmic spaghetti toward the ceiling to see what stuck. They delved into systems that solved video games and spent considerable effort on robotics. "We knew what we wanted to do," says Altman. "We knew why we wanted to do it. But we had no idea how."
But they believed. Supporting their optimism were the steady improvements in artificial neural networks that used deep-learning techniques. "The general idea is, don't bet against deep learning," says Sutskever. Chasing AGI, he says, "wasn't totally crazy. It was only moderately crazy."
OpenAI's road to relevance really started with its hire of a then-unheralded researcher named Alec Radford, who joined in 2016, leaving the small Boston AI company he'd cofounded in his dorm room. After accepting OpenAI's offer, he told his high school alumni magazine that taking the new role was "kind of similar to joining a graduate program": an open-ended, low-pressure perch from which to research AI.
The role he would actually play was more like Larry Page inventing PageRank.
Radford, who is press-shy and hasn't given interviews on his work, responds to my questions about his early days at OpenAI via a long email exchange. His biggest interest was in getting neural nets to interact with humans in lucid conversation. This was a departure from the traditional scripted model of making a chatbot, an approach used in everything from the primitive ELIZA to the popular assistants Siri and Alexa, all of which kind of sucked. "The goal was to see if there was any task, any setting, any domain, any anything that language models could be useful for," he writes. At the time, he explains, "language models were seen as novelty toys that could only generate a sentence that made sense once in a while, and only then if you really squinted." His first experiment involved scanning 2 billion Reddit comments to train a language model. Like a lot of OpenAI's early experiments, it flopped. No matter. The 23-year-old had permission to keep going, to fail again. "We were just like, Alec is great, let him do his thing," says Brockman.
His next major experiment was shaped by OpenAI's limited computing power, a constraint that led him to work with a smaller data set focused on a single domain: Amazon product reviews. A researcher had gathered about 100 million of those. Radford trained a language model simply to predict the next character in a user review.
But then, on its own, the model figured out whether a review was positive or negative, and when you programmed the model to create something positive or negative, it delivered a review that was adulatory or scathing, as requested. (The prose was admittedly clunky: "I love this weapons look … A must watch for any man who love Chess!") "It was a complete surprise," Radford says. The sentiment of a review, its favorable or unfavorable gist, is a complex function of semantics, but somehow a part of Radford's system had gotten a feel for it. Within OpenAI, this part of the neural net came to be known as the "unsupervised sentiment neuron."
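To get a feel for what "predict the next character" means, here is a drastically simplified toy sketch. Radford's actual model was a large neural network trained on roughly 100 million reviews; the snippet below is just a bigram character counter, with function names and sample reviews invented for illustration.

```python
from collections import defaultdict

def train_char_model(corpus):
    """Count, for each character, which characters tend to follow it."""
    counts = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
    return counts

def predict_next(model, char):
    """Return the most frequently observed character after `char`."""
    followers = model.get(char)
    if not followers:
        return ""
    return max(followers, key=followers.get)

# Stand-in "reviews"; the real experiment used ~100 million Amazon reviews.
reviews = ["great product, great price", "great value", "terrible, broke fast"]
model = train_char_model(reviews)
print(predict_next(model, "g"))  # prints 'r', as in "great"
```

A model like this only memorizes local statistics; the surprise of the sentiment neuron was that a far richer network, trained on the same humble next-character objective, picked up an abstract property like sentiment along the way.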
Sutskever and others encouraged Radford to expand his experiments beyond Amazon reviews, to use his insights to train neural nets to converse or answer questions on a broad range of subjects.
And then good fortune smiled on OpenAI. In 2017, a preprint of a research paper appeared, coauthored by eight Google researchers. Its official title was "Attention Is All You Need," but it came to be known as the "transformer paper," named both to reflect the game-changing nature of the idea and to honor the toys that transmogrify from trucks to giant robots. Transformers made it possible for a neural net to understand, and generate, language much more efficiently. They did this by analyzing chunks of prose in parallel and figuring out which elements merited "attention." This hugely optimized the process of generating coherent text to respond to prompts. Eventually, people came to realize that the same technique could also generate images and even video. Though the transformer paper would become known as the catalyst for the current AI frenzy (think of it as the Elvis that made the Beatles possible), at the time Ilya Sutskever was one of only a handful of people who understood how powerful the breakthrough was. "The real aha moment was when Ilya saw the transformer come out," Brockman says. "He was like, 'That's what we've been waiting for.' That's been our strategy: to push hard on problems and then have faith that we or someone in the field will manage to figure out the missing ingredient."
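The "attention" step can be sketched in miniature. This is a bare-bones scaled dot-product attention over hand-made vectors, not the full multi-head transformer of the paper; the function name and the toy vectors are my own, purely for illustration.

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention over toy vectors (lists of floats).

    Each query scores every key at once -- the parallel
    "which elements merit attention" step the transformer introduced.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax turns raw scores into attention weights that sum to 1.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output is the weighted blend of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# One query attending over two key/value pairs; it matches the first
# key, so the output leans toward the first value vector.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))
```

Because every query scores every key independently, all of this work can be done in parallel, which is exactly what made transformers so much more efficient to train than the sequential models that preceded them.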
Radford began experimenting with the transformer architecture. "I made more progress in two weeks than I did over the past two years," he says. He came to understand that the key to getting the most out of the new model was to add scale: to train it on fantastically large data sets. The idea was dubbed "Big Transformer" by Radford's collaborator Rewon Child.
This approach required a change of culture at OpenAI, and a focus it had previously lacked. "In order to take advantage of the transformer, you needed to scale it up," says Adam D'Angelo, the CEO of Quora, who sits on OpenAI's board of directors. "You need to run it more like an engineering organization. You can't have every researcher trying to do their own thing and training their own model and making elegant things that you can publish papers on. You have to do this more tedious, less elegant work." That, he added, was something OpenAI was able to do, and something no one else did.
Mira Murati, OpenAI's chief technology officer.
The name that Radford and his collaborators gave the model they created was an acronym for "generative pretrained transformer": GPT-1. Eventually, this class of model came to be generically known as "generative AI." To build it, they drew on a collection of 7,000 unpublished books, many in the genres of romance, fantasy, and adventure, and refined it on Quora questions and answers, as well as thousands of passages taken from middle school and high school exams. All in all, the model included 117 million parameters, or variables. And it outperformed everything that had come before in understanding language and generating answers. But the most dramatic result was that processing such a massive amount of data allowed the model to offer up results beyond its training, providing expertise in brand-new domains. These unplanned capabilities are called zero-shot behaviors. They still baffle researchers, and account for the queasiness that many in the field have about these so-called large language models.
Radford remembers one late night at OpenAI's office. "I just kept saying over and over, 'Well, that's cool, but I'm pretty sure it won't be able to do x.' And then I would quickly code up an evaluation and, sure enough, it could kind of do x."
Each GPT iteration would do better, in part because each one gobbled an order of magnitude more data than the previous model. Only a year after creating the first iteration, OpenAI trained GPT-2 on the open internet, with an astounding 1.5 billion parameters. Like a toddler mastering speech, its responses got better and more coherent. So much so that OpenAI hesitated to release the program into the wild. Radford was worried that it might be used to generate spam. "I remember reading Neal Stephenson's Anathem in 2008, and in that book the internet was overrun with spam generators," he says. "I had thought that was really far-fetched, but as I worked on language models over the years and they got better, the uncomfortable realization that it was a real possibility set in."
In fact, the team at OpenAI was starting to think it wasn't such a good idea after all to put its work where Dr. Evil could easily access it. "We thought that open-sourcing GPT-2 could be really dangerous," says chief technology officer Mira Murati, who started at the company in 2018. "We did a lot of work with misinformation experts and did some red-teaming. There was a lot of discussion internally on how much to release." Ultimately, OpenAI temporarily withheld the full version, making a less powerful version available to the public. When the company finally shared the full version, the world managed just fine, but there was no guarantee that more powerful models would avoid catastrophe.
The very fact that OpenAI was making products smart enough to be deemed dangerous, and was grappling with ways to make them safe, was proof that the company had gotten its mojo working. "We'd figured out the formula for progress, the formula everyone perceives now: the oxygen and the hydrogen of deep learning is computation with a large neural network and data," says Sutskever.
To Altman, it was a mind-bending experience. "If you asked the 10-year-old version of me, who used to spend a lot of time daydreaming about AI, what was going to happen, my pretty confident prediction would have been that first we're gonna have robots, and they're going to perform all physical labor. Then we're going to have systems that can do basic cognitive labor. A really long way after that, maybe we'll have systems that can do complex stuff like proving mathematical theorems. Finally we will have AI that can create new things and make art and write and do these deeply human things. That was a terrible prediction; it's going exactly the other direction."
The world didn't know it yet, but Altman and Musk's research lab had begun a climb that plausibly creeps toward the summit of AGI. The crazy idea behind OpenAI suddenly was not so crazy.
By early 2018, OpenAI was starting to focus productively on large language models, or LLMs. But Elon Musk wasn't happy. He felt that the progress was insufficient, or maybe he felt that now that OpenAI was on to something, it needed leadership to seize its advantage. Or maybe, as he'd later explain, he felt that safety should be more of a priority. Whatever his problem was, he had a solution: Turn everything over to him. He proposed taking a majority stake in the company, adding it to the portfolio of his multiple full-time jobs (Tesla, SpaceX) and supervisory obligations (Neuralink and the Boring Company).
Musk believed he had a right to own OpenAI. "It wouldn't exist without me," he later told CNBC. "I came up with the name!" (True.) But Altman and the rest of OpenAI's brain trust had no interest in becoming part of the Muskiverse. When they made this clear, Musk cut ties, providing the public with the incomplete explanation that he was leaving the board to avoid a conflict with Tesla's AI effort. His farewell came at an all-hands meeting early that year where he predicted that OpenAI would fail. And he called at least one of the researchers a "jackass."
He also took his money with him. Since the company had no revenue, this was an existential crisis. "Elon is cutting off his support," Altman said in a panicky call to Reid Hoffman. "What do we do?" Hoffman volunteered to keep the company afloat, paying overhead and salaries.
But this was a temporary fix; OpenAI had to find big bucks elsewhere. Silicon Valley loves to throw money at talented people working on trendy tech, but not so much when they are working at a nonprofit. It had been a massive lift for OpenAI to get its first billion. To train and test new generations of GPT, and then to access the computation it takes to deploy them, the company needed another billion, and fast. And that would only be the start.
So in March 2019, OpenAI came up with a bizarre hack. It would remain a nonprofit, fully devoted to its mission. But it would also create a for-profit entity. The actual structure of the arrangement is hopelessly baroque, but basically the entire company is now engaged in a "capped profit" business. If the cap is reached (the number isn't public, but its own charter, if you read between the lines, suggests it might be in the trillions), everything beyond that reverts to the nonprofit research lab. The novel scheme was almost a quantum approach to incorporation: Behold a company that, depending on your time-space point of view, is for-profit and nonprofit. The details are embodied in charts full of boxes and arrows, like the ones in the middle of a scientific paper where only PhDs or dropout geniuses dare to tread. When I suggest to Sutskever that it looks like something the as-yet-unconceived GPT-6 might come up with if you prompted it for a tax dodge, he doesn't warm to my metaphor. "It's not about accounting," he says.
But accounting is critical. A for-profit company optimizes for, well, profits. There's a reason why companies like Meta feel pressure from shareholders when they devote billions to R&D. How could this not affect the way a firm operates? And wasn't avoiding commercialism the reason why Altman made OpenAI a nonprofit to begin with? According to COO Brad Lightcap, the view of the company's leaders is that the board, which is still part of the nonprofit controlling entity, will make sure that the drive for revenue and profits won't overwhelm the original idea. "We needed to maintain the mission as the reason for our existence," he says. "It shouldn't just be in spirit, but encoded in the structure of the company." Board member Adam D'Angelo says he takes this responsibility seriously: "It's my job, along with the rest of the board, to make sure that OpenAI stays true to its mission."
Potential investors were warned about those boundaries, Lightcap explains. "We have a legal disclaimer that says you, as an investor, stand to lose all your money," he says. "We are not here to make your return. We're here to achieve a technical mission, foremost. And, oh, by the way, we don't really know what role money will play in a post-AGI world."
That last sentence is not a throwaway joke. OpenAI's plan really does include a reset in case computers reach the final frontier. Somewhere in the restructuring documents is a clause to the effect that, if the company does manage to create AGI, all financial arrangements will be reconsidered. After all, it will be a new world from that point on. Humanity will have an alien partner that can do much of what we do, only better. So previous arrangements might effectively be kaput.
There is, however, a hitch: At the moment, OpenAI doesn't claim to know what AGI really is. The determination would come from the board, but it's not clear how the board would define it. When I ask Altman, who is on the board, for clarity, his response is anything but open. "It's not a single Turing test, but a number of things we might use," he says. "I would happily tell you, but I like to keep confidential conversations private. I realize that is unsatisfyingly vague. But we don't know what it's going to be like at that point."
Nonetheless, the inclusion of the "financial arrangements" clause isn't just for fun: OpenAI's leaders think that if the company is successful enough to reach its lofty profit cap, its products will probably have performed well enough to reach AGI. Whatever that is.
"My regret is that we've chosen to double down on the term AGI," Sutskever says. "In hindsight it is a confusing term, because it emphasizes generality above all else. GPT-3 is general AI, but yet we don't really feel comfortable calling it AGI, because we want human-level competence. But back then, at the beginning, the idea of OpenAI was that superintelligence is attainable. It is the endgame, the final purpose of the field of AI."
Those caveats didn't stop some of the smartest venture capitalists from throwing money at OpenAI during its 2019 funding round. At that point, the first VC firm to invest was Khosla Ventures, which kicked in $50 million. According to Vinod Khosla, it was double the size of his largest initial investment. "If we lose, we lose 50 million bucks," he says. "If we win, we win 5 billion." Other investors reportedly include elite VC firms Thrive Capital, Andreessen Horowitz, Founders Fund, and Sequoia.
The shift also allowed OpenAI's employees to claim some equity. But not Altman. He says that originally he intended to include himself but didn't get around to it. Then he decided that he didn't need any piece of the $30 billion company that he'd cofounded and leads. "Meaningful work is more important to me," he says. "I don't think about it. I honestly don't get why people care so much."
Because … not taking a stake in the company you cofounded is weird?
"If I didn't already have a ton of money, it would be much weirder," he says. "It does seem like people have a hard time imagining ever having enough money. But I feel like I have enough." (Note: For Silicon Valley, this is extremely weird.) Altman joked that he's considering taking one share of equity "so I never have to answer that question again."
Ilya Sutskever, OpenAI's chief scientist.
Photograph: Jessica Chou
The billion-dollar VC round wasn't even table stakes to pursue OpenAI's vision. The miraculous Big Transformer approach to creating LLMs required Big Hardware. Each iteration of the GPT family would need exponentially more power: GPT-2 had over a billion parameters, and GPT-3 would use 175 billion. OpenAI was now like Quint in Jaws after the shark hunter sees the size of the great white. "It turned out we didn't know how much of a bigger boat we needed," Altman says.
Obviously, only a few companies in existence had the kind of resources OpenAI required. "We pretty quickly zeroed in on Microsoft," says Altman. To the credit of Microsoft CEO Satya Nadella and CTO Kevin Scott, the software giant was able to get over an uncomfortable reality: After more than 20 years and billions of dollars spent on a research division with supposedly cutting-edge AI, the Softies needed an innovation infusion from a tiny company that was only a few years old. Scott says that it wasn't just Microsoft that fell short: "it was everyone." OpenAI's focus on pursuing AGI, he says, allowed it to accomplish a moonshot-ish achievement that the heavy hitters weren't even aiming for. It also proved that not pursuing generative AI was a lapse that Microsoft needed to address. "One thing you just very clearly need is a frontier model," says Scott.
Microsoft originally chipped in a billion dollars, paid off in computation time on its servers. But as both sides grew more confident, the deal expanded. Microsoft now has sunk $13 billion into OpenAI. ("Being on the frontier is a very expensive proposition," Scott says.)
Of course, because OpenAI couldn't exist without the backing of a huge cloud provider, Microsoft was able to cut a great deal for itself. The corporation bargained for what Nadella calls "non-controlling equity interest" in OpenAI's for-profit side, reportedly 49 percent. Under the terms of the deal, some of OpenAI's original ideals of granting equal access to all were seemingly dragged to the trash icon. (Altman objects to this characterization.) Now, Microsoft has an exclusive license to commercialize OpenAI's tech. And OpenAI also has committed to use Microsoft's cloud exclusively. In other words, without even taking its cut of OpenAI's profits (reportedly Microsoft gets 75 percent until its investment is paid back), Microsoft gets to lock in one of the world's most desirable new customers for its Azure web services. With those rewards in sight, Microsoft wasn't even bothered by the clause that demands reconsideration if OpenAI achieves general artificial intelligence, whatever that is. "At that point," says Nadella, "all bets are off." It might be the last invention of humanity, he notes, so we might have bigger issues to consider once machines are smarter than we are.
By the time Microsoft began unloading Brinks trucks' worth of cash into OpenAI ($2 billion in 2021, and the other $10 billion earlier this year), OpenAI had completed GPT-3, which, of course, was even more impressive than its predecessors. When Nadella saw what GPT-3 could do, he says, it was the first time he deeply understood that Microsoft had snared something truly transformative. "We started observing all those emergent properties." For instance, GPT had taught itself how to program computers. "We didn't train it on coding; it just got good at coding!" he says. Leveraging its ownership of GitHub, Microsoft released a product called Copilot that uses GPT to churn out code literally on command. Microsoft would later integrate OpenAI technology in new versions of its workplace products. Users pay a premium for those, and a cut of that revenue gets logged to OpenAI's ledger.
Some observers professed whiplash at OpenAI's one-two punch: creating a for-profit component and reaching an exclusive deal with Microsoft. How did a company that promised to remain patent-free, open source, and totally transparent wind up giving an exclusive license of its tech to the world's biggest software company? Elon Musk's remarks were particularly lacerating. "This does seem like the opposite of open: OpenAI is essentially captured by Microsoft," he posted on Twitter. On CNBC, he elaborated with an analogy: "Let's say you founded an organization to save the Amazon rainforest, and instead you became a lumber company, chopped down the forest, and sold it."
Musk's jibes might be dismissed as bitterness from a rejected suitor, but he wasn't alone. "The whole vision of it morphing the way it did feels kind of gross," says John Carmack. (He does specify that he's still excited about the company's work.) Another prominent industry insider, who prefers to speak without attribution, says, "OpenAI has turned from a small, somewhat open research outfit into a secretive product-development house with an unwarranted superiority complex."
Even some employees had been turned off by OpenAI's venture into the for-profit world. In 2019, several key executives, including head of research Dario Amodei, left to start a rival AI company called Anthropic. They recently told The New York Times that OpenAI had gotten too commercial and had fallen victim to mission drift.
Another OpenAI defector was Rewon Child, a main technical contributor to the GPT-2 and GPT-3 projects. He left in late 2021 and is now at Inflection AI, a company led by former DeepMind cofounder Mustafa Suleyman.
Altman professes not to be bothered by defections, dismissing them as simply the way Silicon Valley works. "Some people will want to do great work somewhere else, and that pushes society forward," he says. "That absolutely fits our mission."
Until November of last year, awareness of OpenAI was largely confined to people following technology and software development. But as the whole world now knows, OpenAI took the dramatic step of releasing a consumer product late that month, built on what was then the most recent iteration of GPT, version 3.5. For months, the company had been internally using a version of GPT with a conversational interface. It was especially important for what the company called "truth-seeking." That means that via dialog, the user could coax the model to provide responses that would be more trustworthy and complete. ChatGPT, optimized for the masses, could allow anyone to instantly tap into what seemed to be an endless source of knowledge simply by typing in a prompt, and then continue the conversation as if hanging out with a fellow human who just happened to know everything, albeit one with a penchant for fabrication.
Within OpenAI, there was a lot of debate about the wisdom of releasing a tool with such unprecedented power. But Altman was all for it. The release, he explains, was part of a strategy designed to acclimate the public to the reality that artificial intelligence is destined to change their everyday lives, presumably for the better. Internally, this is known as the "iterative deployment hypothesis." Sure, ChatGPT would create a stir, the thinking went. After all, here was something anyone could use that was smart enough to get college-level scores on the SATs, write a B-minus essay, and summarize a book within seconds. You could ask it to write your funding proposal or summarize a meeting and then request it to do a rewrite in Lithuanian or as a Shakespeare sonnet or in the voice of someone obsessed with toy trains. In a few seconds, pow, the LLM would comply. Bonkers. But OpenAI saw it as a table-setter for its newer, more coherent, more capable, and scarier successor, GPT-4, trained with a reported 1.7 trillion parameters. (OpenAI won't confirm the number, nor will it reveal the data sets.)
Altman explains why OpenAI released ChatGPT when GPT-4 was close to completion, undergoing safety work. "With ChatGPT, we could introduce chatting but with a much less powerful backend, and give people a more gradual adaptation," he says. "GPT-4 was a lot to get used to at once." By the time the ChatGPT excitement cooled down, the thinking went, people might be ready for GPT-4, which can pass the bar exam, plan a course syllabus, and write a book within seconds. (Publishing houses that produced genre fiction were indeed flooded with AI-generated bodice rippers and space operas.)
A cynic might say that a steady cadence of new products is tied to the company's commitment to investors, and equity-holding employees, to make some money. OpenAI now charges customers who use its products frequently. But OpenAI insists that its true strategy is to provide a soft landing for the singularity. "It doesn't make sense to just build AGI in secret and drop it on the world," Altman says. "Look back at the industrial revolution: everyone agrees it was great for the world," says Sandhini Agarwal, an OpenAI policy researcher. "But the first 50 years were really painful. There was a lot of job loss, a lot of poverty, and then the world adapted. We're trying to think how we can make the period before adaptation of AGI as painless as possible."
Sutskever puts it another way: "You want to build larger and more powerful intelligences and keep them in your basement?"
Even so, OpenAI was stunned at the reaction to ChatGPT. "Our internal excitement was more focused on GPT-4," says Murati, the CTO. "And so we didn't think ChatGPT was really going to change everything." To the contrary, it galvanized the public to the reality that AI had to be dealt with, now. ChatGPT became the fastest-growing consumer software in history, amassing a reported 100 million users. (Not-so-OpenAI won't confirm this, saying only that it has "millions of users.") "I underappreciated how much making an easy-to-use conversational interface to an LLM would make it much more intuitive for everyone to use," says Radford.
ChatGPT was of course delightful and astonishingly useful, but also scary: prone to "hallucinations" of plausible but shamefully fabulist details when responding to prompts. Even as journalists wrung their hands about the implications, however, they effectively endorsed ChatGPT by extolling its powers.
The clamor got even louder in February when Microsoft, taking advantage of its multibillion-dollar partnership, released a ChatGPT-powered version of its search engine Bing. CEO Nadella was euphoric that he had beaten Google to the punch in introducing generative AI to Microsoft's products. He taunted the search king, which had been cautious in releasing its own LLM into products, to do the same. "I want people to know we made them dance," he said.
In so doing, Nadella triggered an arms race that tempted companies big and small to release AI products before they were fully vetted. He also triggered a new round of media coverage that kept wider and wider circles of people up at night: interactions with Bing that unveiled the chatbot's shadow side, replete with unnerving professions of love, an envy of human freedom, and a weak resolve to withhold misinformation. As well as an unseemly habit of creating hallucinatory misinformation of its own.
But if OpenAIâs products were forcing people to confront the implications of artificial intelligence, Altman figured, so much the better. It was time for the bulk of humankind to come off the sidelines in discussions of how AI might affect the future of the species.
OpenAI's San Francisco headquarters is unmarked, but inside, the coffee is awesome.
Photograph: Jessica Chou
As society started to prioritize thinking through all the potential drawbacks of AI (job loss, misinformation, human extinction), OpenAI set about placing itself in the center of the discussion. Because if regulators, legislators, and doomsayers mounted a charge to smother this nascent alien intelligence in its cloud-based cradle, OpenAI would be their chief target anyway. "Given our current visibility, when things go wrong, even if those things were built by a different company, that's still a problem for us, because we're viewed as the face of this technology right now," says Anna Makanju, OpenAI's chief policy officer.
Makanju is a Russian-born DC insider who served in foreign policy roles at the US Mission to the United Nations, the US National Security Council, and the Defense Department, and in the office of Joe Biden when he was vice president. "I have lots of preexisting relationships, both in the US government and in various European governments," she says. She joined OpenAI in September 2021. At the time, very few people in government gave a hoot about generative AI. Knowing that OpenAI's products would soon change that, she began to introduce Altman to administration officials and legislators, making sure that they'd hear the good news and the bad from OpenAI first.
"Sam has been extremely helpful, but also very savvy, in the way that he has dealt with members of Congress," says Richard Blumenthal, the chair of the Senate Judiciary Committee. He contrasts Altman's behavior with that of the younger Bill Gates, who unwisely stonewalled legislators when Microsoft was under antitrust investigations in the 1990s. "Altman, by contrast, was happy to spend an hour or more sitting with me to try to educate me," says Blumenthal. "He didn't come with an army of lobbyists or minders. He demonstrated ChatGPT. It was mind-blowing."
In Blumenthal, Altman wound up making a semi-ally of a potential foe. "Yes," the senator admits. "I'm excited about both the upside and the potential perils." OpenAI didn't shrug off discussion of those perils, but presented itself as the force best positioned to mitigate them. "We had 100-page system cards on all the red-teaming safety evaluations," says Makanju. (Whatever that meant, it didn't stop users and journalists from endlessly discovering ways to jailbreak the system.)
By the time Altman made his first appearance in a congressional hearing, fighting a fierce migraine headache, the path was clear for him to sail through in a way that Bill Gates or Mark Zuckerberg could never hope to. He faced almost none of the tough questions and arrogant badgering that tech CEOs now routinely endure after taking the oath. Instead, senators asked Altman for advice on how to regulate AI, a pursuit Altman enthusiastically endorsed.
The paradox is that no matter how assiduously companies like OpenAI red-team their products to mitigate misbehavior like deepfakes, misinformation efforts, and criminal spam, future models might get smart enough to foil the efforts of the measly minded humans who invented the technology yet are still naive enough to believe they can control it. On the other hand, if they go too far in making their models safe, it might hobble the products, making them less useful. One study indicated that more recent versions of GPT, which have improved safety features, are actually dumber than previous versions, making errors in basic math problems that earlier programs had aced. (Altman says that OpenAI's data doesn't confirm this. "Wasn't that study retracted?" he asks. No.)
It makes sense that Altman positions himself as a fan of regulation; after all, his mission is AGI, but safely. Critics have charged that he's gaming the process so that regulations would thwart smaller startups and give an advantage to OpenAI and other big players. Altman denies this. While he has endorsed, in principle, the idea of an international agency overseeing AI, he does feel that some proposed rules, like banning all copyrighted material from data sets, present unfair obstacles. He pointedly didn't sign a widely distributed letter urging a six-month moratorium on developing more powerful AI systems. But he and other OpenAI leaders did add their names to a one-sentence statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Altman explains: "I said, 'Yeah, I agree with that.' One-minute discussion."
As one prominent Silicon Valley founder notes, "It's rare that an industry raises their hand and says, 'We are going to be the end of humanity,' and then continues to work on the product with glee and alacrity."
OpenAI rejects this criticism. Altman and his team say that working and releasing cutting-edge products is the way to address societal risks. Only by analyzing the responses to millions of prompts by users of ChatGPT and GPT-4 could they get the knowledge to ethically align their future products.
Still, as the company takes on more tasks and devotes more energy to commercial activities, some question how closely OpenAI can concentrate on the mission, especially the "mitigating risk of extinction" side. "If you think about it, they're actually building five businesses," says an AI industry executive, ticking them off with his fingers. "There's the product itself, the enterprise relationship with Microsoft, the developer ecosystem, and an app store. And, oh yes: they are also obviously doing an AGI research mission." Having used all five fingers, he recycles his index finger to add a sixth. "And of course, they're also doing the investment fund," he says, referring to a $175 million project to seed startups that want to tap into OpenAI technology. "These are different cultures, and in fact they're conflicting with a research mission."
I repeatedly asked OpenAI's execs how donning the skin of a product company has affected its culture. Without fail they insist that, despite the for-profit restructuring, despite the competition with Google, Meta, and countless startups, the mission is still central. Yet OpenAI has changed. The nonprofit board might technically be in charge, but virtually everyone in the company is on the for-profit ledger. Its workforce includes lawyers, marketers, policy experts, and user-interface designers. OpenAI contracts with hundreds of content moderators to educate its models on inappropriate or harmful answers to the prompts offered by many millions of users. It's got product managers and engineers working constantly on updates to its products, and every couple of weeks it seems to ping reporters with demonstrations, just like other product-oriented Big Tech companies. Its offices look like an Architectural Digest spread. I have visited virtually every major tech company in Silicon Valley and beyond, and not one surpasses the coffee options in the lobby of OpenAI's headquarters in San Francisco.
Not to mention: It's obvious that the "openness" embodied in the company's name has shifted from the radical transparency suggested at launch. When I bring this up to Sutskever, he shrugs. "Evidently, times have changed," he says. But, he cautions, that doesn't mean that the prize is not the same. "You've got a technological transformation of such gargantuan, cataclysmic magnitude that, even if we all do our part, success is not guaranteed. But if it all works out we can have quite the incredible life."
"I can't emphasize this enough: we didn't have a master plan," says Altman. "It was like we were turning each corner and shining a flashlight. We were willing to go through the maze to get to the end." Though the maze got twisty, the goal has not changed. "We still have our core mission: believing that safe AGI was this critically important thing that the world was not taking seriously enough."
Meanwhile, OpenAI is apparently taking its time to develop the next version of its large language model. It's hard to believe, but the company insists it has yet to begin working on GPT-5, a product that people are, depending on point of view, either salivating about or dreading. Apparently, OpenAI is grappling with what an exponentially powerful improvement on its current technology actually looks like. "The biggest thing we're missing is coming up with new ideas," says Brockman. "It's nice to have something that could be a virtual assistant. But that's not the dream. The dream is to help us solve problems we can't."
Considering OpenAI's history, that next big set of innovations might have to wait until there's another breakthrough as major as transformers. Altman hopes that will come from OpenAI ("We want to be the best research lab in the world," he says), but even if not, his company will make use of others' advances, as it did with Google's work. "A lot of people around the world are going to do important work," he says.
It would also help if generative AI didn't create so many new problems of its own. For instance, LLMs need to be trained on huge data sets; clearly the most powerful ones would gobble up the whole internet. This doesn't sit well with some creators, and just plain people, who unwittingly provide content for those data sets and wind up somehow contributing to the output of ChatGPT. Tom Rubin, an elite intellectual property lawyer who officially joined OpenAI in March, is optimistic that the company will eventually find a balance that satisfies both its own needs and that of creators, including the ones, like comedian Sarah Silverman, who are suing OpenAI for using their content to train its models. One hint of OpenAI's path: partnerships with news and photo agencies like the Associated Press and Shutterstock to provide content for its models without questions of who owns what.
As I interview Rubin, my very human mind, subject to distractions you never see in LLMs, drifts to the arc of this company that in eight short years has gone from a floundering bunch of researchers to a Promethean behemoth that has changed the world. Its very success has led it to transform itself from a novel effort to achieve a scientific goal to something that resembles a standard Silicon Valley unicorn on its way to elbowing into the pantheon of Big Tech companies that affect our everyday lives. And here I am, talking with one of its key hires, a lawyer, not about neural net weights or computer infrastructure but copyright and fair use. Has this IP expert, I wonder, signed on to the mission, like the superintelligence-seeking voyagers who drove the company originally?
Rubin is nonplussed when I ask him whether he believes, as an article of faith, that AGI will happen and if he's hungry to make it so. "I can't even answer that," he says after a pause. When pressed further, he clarifies that, as an intellectual property lawyer, speeding the path to scarily intelligent computers is not his job. "From my perch, I look forward to it," he finally says.
Updated 9-7-23, 5:30pm EST: This story was updated to clarify Rewon Child's role at OpenAI, and the aim of a letter calling for a six-month pause on the most powerful AI models.
Styling by Turner/The Wall Group. Hair and Makeup by Hiroko Claus.
This article appears in the October 2023 issue.