Dawn of Superintelligence: What Ilya Sutskever's New Project Tells Us

bnew


Dawn of Superintelligence: What Ilya Sutskever's New Project Tells Us

Former OpenAI employees have announced the creation of a new company called Safe Superintelligence. What makes this interesting is that the concept of artificial superintelligence has been around for a long time, yet until recently it was treated as something close to science fiction. Or was it?

June 25, 2024


Changes and Shifts

Ilya Sutskever, OpenAI co-founder and former chief scientist, announced a new project on June 19 on the social network X. In the post, he quoted the company's announcement, which begins with the words: "Superintelligence is within reach."

Safe Superintelligence, the name of the new company, is Sutskever's personal project, which he is working on together with Daniel Gross, who formerly led AI efforts at Apple, and Daniel Levy, another former OpenAI employee. Together, three leading experts in neural networks are setting out to build something about which almost nothing is known yet, except that it involves superintelligence.

It is not surprising that two former OpenAI employees are among the founders. The company has recently been going through a period of turmoil, centered on the brief dismissal of Sam Altman at the end of last year. After that news, the overwhelming majority of employees threatened to leave for Microsoft, and the board ultimately agreed to reinstate Altman, which is what happened.

According to information from The Verge, Ilya Sutskever played a significant role in Altman's dismissal. Sources believe that there was an internal struggle between the research and product departments within the company. In particular, the co-founders could not agree on how quickly artificial intelligence should develop and whether its rapid advancement might harm humanity.

In May, Sutskever himself announced his decision to leave OpenAI, citing a desire to work on a new project that holds personal significance for him. With this post on the social network X, Sutskever broke his silence, which had lasted since December.

(Safe) Superintelligence

Currently, the official Safe Superintelligence website consists of a single text page with a contact link. The company's mission is to create safe superintelligence (SSI) without a complex management structure or unnecessary bureaucracy. It seems the internal scandals at OpenAI prompted Sutskever, Gross, and Levy to rethink their approach to building a startup.

The Safe Superintelligence website states: "SSI is our mission, our name, and our entire product roadmap, because it is our sole focus." But what is superintelligence?

Artificial superintelligence, or ASI, is a hypothetical form of AI whose cognitive capabilities exceed those of humans tens of thousands of times over. Presumably, such a system would not only be able to learn from vast amounts of data, which neural networks can already do in their current state, but would also possess problem-solving abilities, social skills, and creative thinking. The easiest way to picture ASI is through science fiction: the movie "Bicentennial Man," based on the novella by Isaac Asimov, or "Iron Man," where the protagonist is assisted by a superintelligent AI butler named JARVIS.

Of course, we already have AI butlers of a sort today: Alexa, Siri, and other voice assistants. But they are limited by the current state of artificial intelligence. What we use now, and what we see in chatbots, self-driving cars, and facial recognition systems, is narrow, or weak, AI. It is designed to solve specific tasks and can make decisions only based on the data it has been trained on. A recommendation system on a streaming platform is a good example: it performs its job brilliantly, possibly even better than a human, but in terms of versatility it is quite weak.
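To make the "narrow" part concrete, here is a toy sketch of the kind of single-purpose recommender described above. Everything in it (the titles, the ratings, the recommend_similar helper) is invented for illustration; it is not how any real streaming service works, only a minimal example of a system that does exactly one job and nothing else.

```python
# Toy "narrow AI": a recommender that suggests similar titles from a tiny,
# made-up rating matrix. It does this one task and cannot do anything else.
import numpy as np

titles = ["Space Drama", "Cop Show", "Baking Contest", "Robot Documentary"]

# Rows = users, columns = titles; 0 means "not rated". All numbers are invented.
ratings = np.array([
    [5, 0, 1, 4],
    [4, 1, 0, 5],
    [0, 5, 4, 1],
    [1, 4, 5, 0],
], dtype=float)

def recommend_similar(title: str, top_k: int = 2) -> list[str]:
    """Return the titles whose rating pattern is closest (cosine similarity)."""
    per_title = ratings.T                       # one rating vector per title
    target = per_title[titles.index(title)]
    norms = np.linalg.norm(per_title, axis=1) * np.linalg.norm(target)
    sims = per_title @ target / np.where(norms == 0, 1.0, norms)
    ranked = np.argsort(-sims)                  # most similar first
    return [titles[i] for i in ranked if titles[i] != title][:top_k]

print(recommend_similar("Space Drama"))
# -> ['Robot Documentary', 'Baking Contest'] with these toy numbers
```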

The next logical step after narrow AI is general, or strong, AI, also known as AGI (artificial general intelligence). Like superintelligence, it is for now only a hypothetical concept: an artificial intelligence whose scope and performance are indistinguishable from those of a human. It is a machine you can converse with as an equal, capable of learning, applying its knowledge to absolutely any field, and even solving creative tasks. When thinking about AGI, one imagines a human.

Experts believe that creating strong AI is entirely possible, and Microsoft researchers even released a study stating: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." Is this true? It is hard to say, because there is no clear definition of AGI.

ASI, or superintelligence, is the next step after AGI in the hypothetical evolution of AI. Ilya Sutskever is making a big bet: he wants to leapfrog the stage of general AI and take on a much more ambitious project right away. Or perhaps OpenAI was already working on something that is not yet available to the general public for various reasons. One can speculate that this was the cause of the discord between Sutskever and Altman, and that the researchers are concerned not so much with business as with the problem of AI safety. After all, it is no coincidence that the word "safe" comes before "superintelligence" in the company's name.

For Various Reasons

On the Safe Superintelligence website, the word "safety" is repeated several times. The team plans to approach safety and capabilities in tandem, advancing capabilities rapidly while making sure safety stays ahead. This is not surprising: the question of whether artificial intelligence is harmless concerns people even at the current stage of weak AI, and the smarter neural networks become, the more pressing the issue gets.

Some experts, including computer science professor Roman Yampolskiy and former Google employee Geoffrey Hinton, urge caution with AI. Hinton has stated that the existential threat it poses to humanity has long since moved from the realm of science fiction into reality, and Yampolskiy calls AI unpredictable. At the AGI stage, and even more so at the ASI stage, he believes it will become practically impossible to predict AI behavior.

Yampolskiy has explored the possibility of controlling AI and concluded that it would be very difficult. The hardest part would be making it think within the framework of human ethics and morality. After all, even in Asimov's stories, robots with brains far more powerful than the average human's were constrained by the laws of robotics.

A scientist of Ilya Sutskever's caliber cannot have failed to consider this issue. While we do not yet know what is meant by the "safety" of superintelligence, nor how the team plans to develop it, one thing is clear: Sutskever's announcement has made an impact in certain circles, and his project could ultimately influence every sphere of society: science, ethics, and business.

Investments in the Future

The word "superintelligence" has appeared more than once in recent news reports. There is another person for whom ASI might become a life’s work, and that person is the Japanese billionaire, strategic investor, and founder of SoftBank, Masayoshi Son.

SoftBank owns about 90% of chip designer Arm, and it is with Arm that Son associates his idea of superintelligence: robots that, through joint cognitive effort, could cure cancer, clean houses, and care for children. Son plans to invest around $100 billion in the new project, all to make superintelligence a reality.

The last few investments by SoftBank have been unsuccessful, forcing Son to temporarily disappear from the radar, but at the annual shareholders' meeting, he sounded enthusiastic for the first time in a long while. "I got the answer," Son said. "This is what I was born for. We have done a lot, but it was all just a warm-up for my dream of realizing ASI."

When leading scientists involved in AI development and the largest Japanese investor in telecommunications technology suddenly start talking about the same thing, it is at least a reason to take notice. Artificial superintelligence, until this point, has been more science fiction than reality, a mental exercise. Now, the idea is taking shape. What form it will take—chips, bots, robots, or an all-encompassing "smart city" system—only time will tell. And time will reveal what impact ASI, should it be developed, will have on life and business.
 

Hood Critic

And perhaps it is smart enough to solve alignment, it's just not aligned enough to want to solve alignment.

I don't disagree but lol I can feel the indifference through the words of the person who typed this.
 

bnew



OpenAI cofounder Ilya Sutskever says the way AI is built is about to change



“We’ve achieved peak data and there’ll be no more,” OpenAI’s former chief scientist told a crowd of AI researchers.


By Kylie Robison, a senior AI reporter working with The Verge's policy and tech teams. She previously worked at Fortune Magazine and Business Insider.
Dec 13, 2024, 7:34 PM EST



Ilya Sutskever. Photo by JACK GUEZ/AFP via Getty Images

OpenAI’s cofounder and former chief scientist, Ilya Sutskever, made headlines earlier this year after he left to start his own AI lab called Safe Superintelligence Inc. He has avoided the limelight since his departure but made a rare public appearance in Vancouver on Friday at the Conference on Neural Information Processing Systems (NeurIPS).

“Pre-training as we know it will unquestionably end,” Sutskever said onstage. This refers to the first phase of AI model development, when a large language model learns patterns from vast amounts of unlabeled data — typically text from the internet, books, and other sources.
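As a rough illustration of what that first phase means, here is a deliberately tiny sketch of the pre-training objective: predict the next token from raw, unlabeled text. Real large language models do this with neural networks over trillions of tokens; this toy version, with a made-up corpus and a simple bigram counter, only shows the idea and is not how OpenAI, SSI, or anyone else actually trains models.

```python
# Toy illustration of the pre-training objective: learn, from unlabeled text,
# to predict the next token. Real LLMs use neural networks and vastly more data;
# this sketch just counts which token tends to follow which.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug ."  # stand-in for "the internet"
tokens = corpus.split()

next_counts: dict[str, Counter] = defaultdict(Counter)
for current, following in zip(tokens, tokens[1:]):
    next_counts[current][following] += 1       # "training": tally what follows what

def predict_next(token: str) -> str:
    """Return the most frequently observed next token."""
    return next_counts[token].most_common(1)[0][0]

print(predict_next("sat"))   # -> 'on'
print(predict_next("the"))   # -> 'cat' (ties are broken by first occurrence)
```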

“We’ve achieved peak data and there’ll be no more.”

During his NeurIPS talk, Sutskever said that, while he believes existing data can still take AI development farther, the industry is tapping out on new data to train on. This dynamic will, he said, eventually force a shift away from the way models are trained today. He compared the situation to fossil fuels: just as oil is a finite resource, the internet contains a finite amount of human-generated content.

“We’ve achieved peak data and there’ll be no more,” according to Sutskever. “We have to deal with the data that we have. There’s only one internet.”

[Slide from Sutskever’s talk, shown beside him on stage: “Pre-training as we know it will end. Compute is growing: better hardware, better algorithms, larger clusters. Data is not growing: we have but one internet, the fossil fuel of AI.”]

Ilya Sutskever calls data the “fossil fuel” of AI. Ilya Sutskever/NeurIPS

Next-generation models, he predicted, are going to “be agentic in real ways.” Agents have become a real buzzword in the AI field. While Sutskever didn’t define them during his talk, they are commonly understood to be autonomous AI systems that perform tasks, make decisions, and interact with software on their own.
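Since the talk itself did not define the term, here is a bare-bones sketch of the loop "agent" is usually taken to mean: a model chooses a tool, the tool runs, and the result is fed back until the task is finished. The mock_model, the tools, and the task below are invented stand-ins for illustration, not any real product's API.

```python
# Minimal sketch of the commonly described "agent" loop: decide -> act -> observe,
# repeated until done. Everything here is a made-up stand-in for a real model and tooling.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only, unsafe for real input
    "finish": lambda answer: answer,
}

def mock_model(task: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM: pick the next tool and its input."""
    if not history:
        return "calculator", task                    # step 1: compute the expression
    return "finish", f"The result is {history[-1]}"  # step 2: report the observation

def run_agent(task: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        tool, arg = mock_model(task, history)
        result = TOOLS[tool](arg)
        if tool == "finish":
            return result
        history.append(result)                       # feed the observation back in
    return "gave up"

print(run_agent("2 + 2 * 10"))                       # -> 'The result is 22'
```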

Along with being “agentic,” he said future systems will also be able to reason. Unlike today’s AI, which mostly pattern-matches based on what a model has seen before, future AI systems will be able to work things out step-by-step in a way that is more comparable to thinking.


The more a system reasons, “the more unpredictable it becomes,” according to Sutskever. He compared the unpredictability of “truly reasoning systems” to how advanced AIs that play chess “are unpredictable to the best human chess players.”

“They will understand things from limited data,” he said. “They will not get confused.”

On stage, he drew a comparison between the scaling of AI systems and evolutionary biology, citing research that shows the relationship between brain and body mass across species.
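The article does not reproduce the chart, but the biological relationship usually cited in this context is an allometric power law: across species, brain mass scales roughly as a power of body mass, which shows up as a straight line on a log-log plot. A hedged sketch of that general form (the constant and exponent vary by taxon and by study):

$$M_{\text{brain}} \approx c \, M_{\text{body}}^{\,k} \quad\Longleftrightarrow\quad \log M_{\text{brain}} \approx \log c + k \log M_{\text{body}}$$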
 