Exclusive: Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI
“It’s going to be monumental, earth-shattering. There will be a before and an after.”
October 26, 2023
Ilya Sutskever, head bowed, is deep in thought. His arms are spread wide and his fingers are splayed on the tabletop like a concert pianist about to play his first notes. We sit in silence.
I’ve come to meet Sutskever, OpenAI’s cofounder and chief scientist, in his company’s unmarked office building on an unremarkable street in the Mission District of San Francisco to hear what’s next for the world-tilting technology he has had a big hand in bringing about. I also want to know what’s next for him—in particular, why building the next generation of his company’s flagship generative models is no longer the focus of his work.
Instead of building the next GPT or image maker DALL-E, Sutskever tells me his new priority is to figure out how to stop an artificial superintelligence (a hypothetical future technology he sees coming with the foresight of a true believer) from going rogue.
Sutskever tells me a lot of other things too. He thinks ChatGPT just might be conscious (if you squint). He thinks the world needs to wake up to the true power of the technology his company and others are racing to create. And he thinks some humans will one day choose to merge with machines.
A lot of what Sutskever says is wild. But not nearly as wild as it would have sounded just one or two years ago. As he tells me himself, ChatGPT has already rewritten a lot of people’s expectations about what’s coming, turning “will never happen” into “will happen faster than you think.”
“It’s important to talk about where it’s all headed,” he says, before predicting the development of artificial general intelligence (by which he means machines as smart as humans) as if it were as sure a bet as another iPhone: “At some point we really will have AGI. Maybe OpenAI will build it. Maybe some other company will build it.”
Since the release of its surprise hit, ChatGPT, last November, the buzz around OpenAI has been astonishing, even in an industry known for hype. No one can get enough of this nerdy $80 billion startup. World leaders seek (and get) private audiences. Its clunky product names pop up in casual conversation.
OpenAI’s CEO, Sam Altman, spent a good part of the summer on a weeks-long outreach tour, glad-handing politicians and speaking to packed auditoriums around the world. But Sutskever is much less of a public figure, and he doesn’t give a lot of interviews.
He is deliberate and methodical when he talks. There are long pauses when he thinks about what he wants to say and how to say it, turning questions over like puzzles he needs to solve. He does not seem interested in talking about himself. “I lead a very simple life,” he says. “I go to work; then I go home. I don’t do much else. There are a lot of social activities one could engage in, lots of events one could go to. Which I don’t.”
But when we talk about AI, and the epochal risks and rewards he sees down the line, vistas open up: “It’s going to be monumental, earth-shattering. There will be a before and an after.”
Better and better and better
In a world without OpenAI, Sutskever would still get an entry in the annals of AI history. An Israeli-Canadian, he was born in Soviet Russia but brought up in Jerusalem from the age of five (he still speaks Russian and Hebrew as well as English). He then moved to Canada to study at the University of Toronto with Geoffrey Hinton, the AI pioneer who went public with his fears about the technology he helped invent earlier this year. (Sutskever didn’t want to comment on Hinton’s pronouncements, but his new focus on rogue superintelligence suggests they’re on the same page.)

Hinton would later share the Turing Award with Yann LeCun and Yoshua Bengio for their work on neural networks. But when Sutskever joined him in the early 2000s, most AI researchers believed neural networks were a dead end. Hinton was an exception. He was already training tiny models that could produce short strings of text one character at a time, says Sutskever: “It was the beginning of generative AI right there. It was really cool—it just wasn’t very good.”
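To give a sense of what “one character at a time” means in practice, here is a minimal sketch in Python. It is a crude count-based stand-in, not one of Hinton’s neural networks: it learns which characters tend to follow which in a scrap of text, then generates new text by repeatedly sampling the next character.

```python
import numpy as np

# Toy character-level generator: count how often each character follows each
# other character, then sample new text one character at a time.
# (Hinton's early models were small neural networks, not count tables;
# this only illustrates the character-by-character idea.)
text = "the quick brown fox jumps over the lazy dog. the dog sleeps. "
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

# Count character-to-character transitions, with +1 smoothing so every
# transition stays possible.
counts = np.ones((len(chars), len(chars)))
for a, b in zip(text, text[1:]):
    counts[idx[a], idx[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

# Generate a short string, one character at a time.
rng = np.random.default_rng(0)
c = "t"
out = [c]
for _ in range(60):
    c = rng.choice(chars, p=probs[idx[c]])
    out.append(c)
print("".join(out))
```

The output is gibberish with a vaguely English rhythm, which is roughly the level Sutskever is describing: “really cool—it just wasn’t very good.”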
Sutskever was fascinated with brains: how they learned and how that process might be re-created, or at least mimicked, in machines. Like Hinton, he saw the potential of neural networks and the trial-and-error technique Hinton used to train them, called deep learning. “It kept getting better and better and better,” says Sutskever.
In 2012 Sutskever, Hinton, and another of Hinton’s graduate students, Alex Krizhevsky, built a neural network called AlexNet that they trained to identify objects in photos far better than any other software around at the time. It was deep learning’s Big Bang moment.
After many years of false starts, they had shown that neural networks were amazingly effective at pattern recognition after all. You just needed more data than most researchers had seen before (in this case, a million images from the ImageNet data set that Princeton University researcher Fei-Fei Li had been building since 2006) and an eye-watering amount of computer power.
The step change in compute came from a new kind of chip called a graphics processing unit (GPU), made by Nvidia. GPUs were designed to be lightning quick at throwing fast-moving video-game visuals onto screens. But the calculations that GPUs are good at—multiplying massive grids of numbers—happened to look a lot like the calculations needed to train neural networks.
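To see why those two workloads overlap, here is a minimal sketch in Python with NumPy (the layer sizes are made up for illustration): both the forward pass and the gradient computation of a neural-network layer reduce to multiplying large grids of numbers, exactly the arithmetic GPUs were built to parallelize.

```python
import numpy as np

# A toy fully connected layer: 4,096 inputs -> 4,096 outputs,
# applied to a batch of 256 examples at once.
batch, n_in, n_out = 256, 4096, 4096
x = np.random.randn(batch, n_in).astype(np.float32)          # input activations
W = (np.random.randn(n_in, n_out) * 0.01).astype(np.float32)  # weights

# Forward pass: one big matrix multiplication, plus a nonlinearity.
h = x @ W                      # (256 x 4096) @ (4096 x 4096)
y = np.maximum(h, 0.0)         # ReLU

# Backward pass (given some upstream gradient): more matrix multiplications.
grad_y = np.random.randn(*y.shape).astype(np.float32)  # stand-in gradient
grad_h = grad_y * (h > 0)      # gradient through the ReLU
grad_W = x.T @ grad_h          # gradient with respect to the weights
grad_x = grad_h @ W.T          # gradient with respect to the inputs

print(grad_W.shape, grad_x.shape)  # (4096, 4096) (256, 4096)
```

On a CPU those multiplications crawl through a handful of cores; on a GPU the same arithmetic is spread across thousands of cores, which is why training a network like AlexNet on gaming hardware suddenly became practical.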
Nvidia is now a trillion-dollar company. At the time it was desperate to find applications for its niche new hardware. “When you invent a new technology, you have to be receptive to crazy ideas,” says Nvidia CEO Jensen Huang. “My state of mind was always to be looking for something quirky, and the idea that neural networks would transform computer science—that was an outrageously quirky idea.”
Huang says that Nvidia sent the Toronto team a couple of GPUs to try when they were working on AlexNet. But they wanted the newest version, a chip called the GTX 580 that was fast selling out in stores. According to Huang, Sutskever drove across the border from Toronto to New York to buy some. “People were lined up around the corner,” says Huang. “I don’t know how he did it—I’m pretty sure you were only allowed to buy one each; we had a very strict policy of one GPU per gamer—but he apparently filled a trunk with them. That trunk full of GTX 580s changed the world.”
It’s a great story—it just might not be true. Sutskever insists he bought those first GPUs online. But such myth-making is commonplace in this buzzy business. Sutskever himself is more humble: “I thought, like, if I could make even an ounce of real progress, I would consider that a success,” he says. “The real-world impact felt so far away because computers were so puny back then.”
After the success of AlexNet, Google came knocking. It acquired Hinton’s spin-off company DNNresearch and hired Sutskever. At Google Sutskever showed that deep learning’s powers of pattern recognition could be applied to sequences of data, such as words and sentences, as well as images. “Ilya has always been interested in language,” says Sutskever’s former colleague Jeff Dean, who is now Google’s chief scientist: “We’ve had great discussions over the years. Ilya has a strong intuitive sense about where things might go.”
But Sutskever didn’t remain at Google for long. In 2015, he was recruited to become a cofounder of OpenAI. Backed by $1 billion (from Altman, Elon Musk, Peter Thiel, Microsoft, Y Combinator, and others) plus a massive dose of Silicon Valley swagger, the new company set its sights from the start on developing AGI, a prospect that few took seriously at the time.
With Sutskever on board, the brains behind the bucks, the swagger was understandable. Up until then, he had been on a roll, getting more and more out of neural networks. His reputation preceded him, making him a major catch, says Dalton Caldwell, managing director of investments at Y Combinator.
“I remember Sam [Altman] referring to Ilya as one of the most respected researchers in the world,” says Caldwell. “He thought that Ilya would be able to attract a lot of top AI talent. He even mentioned that Yoshua Bengio, one of the world’s top AI experts, believed that it would be unlikely to find a better candidate than Ilya to be OpenAI’s lead scientist.”
And yet at first OpenAI floundered. “There was a period of time when we were starting OpenAI when I wasn’t exactly sure how the progress would continue,” says Sutskever. “But I had one very explicit belief, which is: one doesn’t bet against deep learning. Somehow, every time you run into an obstacle, within six months or a year researchers find a way around it.”
His faith paid off. The first of OpenAI’s GPT large language models (the name stands for “generative pretrained transformer”) appeared in 2018. Then came GPT-2 and GPT-3. Then DALL-E, the striking text-to-image model. Nobody was building anything as good. With each release, OpenAI raised the bar for what was thought possible.
Managing expectations
Last November, OpenAI released a free-to-use chatbot that repackaged some of its existing tech. It reset the agenda of the entire industry.

At the time, OpenAI had no idea what it was putting out. Expectations inside the company couldn’t have been lower, says Sutskever: “I will admit, to my slight embarrassment—I don’t know if I should, but what the hell, it is true—when we made ChatGPT, I didn’t know if it was any good. When you asked it a factual question, it gave you a wrong answer. I thought it was going to be so unimpressive that people would say, ‘Why are you doing this? This is so boring!’”
The draw was the convenience, says Sutskever. The large language model under ChatGPT’s hood had been around for months. But wrapping that in an accessible interface and giving it away for free made billions of people aware for the first time of what OpenAI and others were building.
“That first-time experience is what hooked people,” says Sutskever. “The first time you use it, I think it’s almost a spiritual experience. You go, ‘Oh my God, this computer seems to understand.’”
OpenAI amassed 100 million users in less than two months, many of them dazzled by this stunning new toy. Aaron Levie, CEO of the storage firm Box, summed up the vibe in the week after launch when he tweeted: “ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward.”