OpenAI’s CEO Says the Age of Giant AI Models Is Already Over
Sam Altman says the research strategy that birthed ChatGPT is played out and future strides in artificial intelligence will require new ideas.
WILL KNIGHT | BUSINESS | APR 17, 2023 7:00 AM
PHOTOGRAPH: JASON REDMOND/GETTY IMAGES
THE STUNNING CAPABILITIES of ChatGPT, the chatbot from startup OpenAI, have triggered a surge of new interest and investment in artificial intelligence. But late last week, OpenAI’s CEO warned that the research strategy that birthed the bot is played out. It's unclear exactly where future advances will come from.
OpenAI has delivered a series of impressive advances in AI that works with language in recent years by taking existing machine-learning algorithms and scaling them up to previously unimagined size. GPT-4, the latest of those projects, was likely trained using trillions of words of text and many thousands of powerful computer chips. The process cost over $100 million.
But the company’s CEO, Sam Altman, says further progress will not come from making models bigger. “I think we're at the end of the era where it's going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We'll make them better in other ways.”
Altman’s declaration suggests an unexpected twist in the race to develop and deploy new AI algorithms. Since OpenAI launched ChatGPT in November, Microsoft has used the underlying technology to add a chatbot to its Bing search engine, and Google has launched a rival chatbot called Bard. Many people have rushed to experiment with using the new breed of chatbot to help with work or personal tasks.
Meanwhile, numerous well-funded startups, including Anthropic, AI21, Cohere, and Character.AI, are throwing enormous resources into building ever larger algorithms in an effort to catch up with OpenAI’s technology. The initial version of ChatGPT was based on a slightly upgraded version of GPT-3, but users can now also access a version powered by the more capable GPT-4.
Altman’s statement suggests that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.
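For intuition, scaling-law studies typically model a network’s loss as a power law in its parameter count, which implies that each additional order of magnitude of scale buys a smaller absolute improvement. The sketch below is purely illustrative; the exponent and constant are invented for demonstration and are not OpenAI’s estimates.

```python
# Purely illustrative: a made-up power-law curve (loss ~ a * N^-alpha)
# showing why returns diminish as parameter count N grows. The constants
# here are invented and are not OpenAI's figures.
a, alpha = 2.0, 0.05
prev = None
for n_params in [1e9, 1e10, 1e11, 1e12]:
    loss = a * (n_params / 1e9) ** -alpha
    gain = "" if prev is None else f" (improvement {prev - loss:.3f})"
    print(f"{n_params:.0e} params -> loss {loss:.3f}{gain}")
    prev = loss
```

Each tenfold jump in parameters yields a smaller drop in loss than the one before, which is the pattern behind the diminishing-returns claim.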
Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman’s feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. “There are lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model,” he says. Frosst says that new AI model designs, or architectures, and further tuning based on human feedback are promising directions that many researchers are already exploring.
Each version of OpenAI’s influential family of language algorithms consists of an artificial neural network, software loosely inspired by the way neurons work together, which is trained to predict the words that should follow a given string of text.
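GPT-4’s weights are private, but the same next-word objective can be seen in action with the openly released GPT-2 model. What follows is a minimal sketch using Hugging Face’s transformers library as a stand-in, not OpenAI’s own tooling.

```python
# A minimal sketch of next-word prediction, the training objective the
# article describes. GPT-4 is closed, so the open GPT-2 weights serve
# as a stand-in here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The age of giant AI models is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # a score for every possible next token

next_token_id = logits[0, -1].argmax()  # highest-scoring continuation
print(tokenizer.decode(next_token_id))
```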
The first of these language models, GPT-2, was announced in 2019. In its largest form, it had 1.5 billion parameters, a measure of the number of adjustable connections between its crude artificial neurons.
At the time, that was extremely large compared to previous systems, thanks in part to OpenAI researchers finding that scaling up made the model more coherent. And the company made GPT-2’s successor, GPT-3, announced in 2020, still bigger, with a whopping 175 billion parameters. That system’s broad abilities to generate poems, emails, and other text helped convince other companies and research institutions to push their own AI models to similar and even greater size.
After ChatGPT debuted in November, meme makers and tech pundits speculated that GPT-4, when it arrived, would be a model of vertigo-inducing size and complexity. Yet when OpenAI finally announced the new artificial intelligence model, the company didn’t disclose how big it is—perhaps because size is no longer all that matters. At the MIT event, Altman was asked if training GPT-4 cost $100 million; he replied, “It’s more than that.”
Although OpenAI is keeping GPT-4’s size and inner workings secret, it is likely that some of its intelligence already comes from looking beyond just scale. One possibility is that it used a method called reinforcement learning with human feedback, which was used to enhance ChatGPT. It involves having humans judge the quality of the model’s answers to steer it toward providing responses more likely to be judged as high quality.
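The sketch below illustrates that feedback loop in its most stripped-down form. Every function here is a toy stand-in; real systems train a separate reward model on many human rankings and update the language model’s weights with a reinforcement-learning algorithm such as PPO.

```python
# A heavily simplified sketch of the reinforcement-learning-with-human-
# feedback idea described above, NOT OpenAI's actual pipeline. All
# models and scores here are toy stand-ins.
import random

def generate_candidates(prompt, n=4):
    # Stand-in for sampling several answers from a language model.
    return [f"{prompt} -> answer {i}" for i in range(n)]

def preference_score(answer):
    # Stand-in for a human rater (in practice, a reward model trained
    # on human judgments supplies this score).
    return random.random()

def rlhf_step(prompt):
    candidates = generate_candidates(prompt)
    scored = [(preference_score(a), a) for a in candidates]
    best_score, best = max(scored)
    # A real system would update the model to make highly rated answers
    # more likely; here we just report which answer would be reinforced.
    print(f"Reinforce toward: {best!r} (score {best_score:.2f})")

rlhf_step("Explain why the sky is blue")
```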
The remarkable capabilities of GPT-4 have stunned some experts and sparked debate over the potential for AI to transform the economy but also spread disinformation and eliminate jobs. Some AI experts, tech entrepreneurs including Elon Musk, and scientists recently wrote an open letter calling for a six-month pause on the development of anything more powerful than GPT-4.
At MIT last week, Altman confirmed that his company is not currently developing GPT-5. “An earlier version of the letter claimed OpenAI is training GPT-5 right now,” he said. “We are not, and won't for some time.”