Dawn of Superintelligence: What Ilya Sutskever's New Project Tells Us

bnew



Several former OpenAI employees have announced the creation of a new company called Safe Superintelligence. What makes this interesting is that the concept of artificial superintelligence has existed for quite some time, but with a caveat: it has been considered almost science fiction. Or has it?

June 25, 2024


Changes and Shifts

Ilya Sutskever, co-founder and former chief scientist of OpenAI, announced a new project on June 19 on the social network X. In his post, he quoted the new company's announcement, which begins with the words: "Superintelligence is within reach."

Safe Superintelligence, as the new company is called, is a personal project of Sutskever's, which he is working on together with Daniel Gross, former head of AI development at Apple, and Daniel Levy, another former OpenAI employee. Together, three leading experts in neural networks have set out to build something about which almost nothing is known yet, except that it involves superintelligence.

It is not surprising that two of the founders are former OpenAI employees. The company has recently been going through a period of turmoil, mainly related to the brief dismissal of Sam Altman at the end of last year. After that news, the overwhelming majority of employees threatened to leave for Microsoft, and the board of directors agreed to reinstate Altman, which ultimately happened.

According to The Verge, Ilya Sutskever played a significant role in Altman's dismissal. Sources believe there was an internal struggle between the company's research and product camps. In particular, the co-founders could not agree on how quickly artificial intelligence should be developed and whether its rapid advancement might harm humanity.

In May, Sutskever announced his decision to leave OpenAI, citing a desire to work on a new project of personal significance to him. That post on X broke a silence Sutskever had kept since December.

(Safe) Superintelligence

At present, the official Safe Superintelligence website consists of a single text page with a contact link. The company's mission is to create safe superintelligence (SSI) without a complex management structure or unnecessary bureaucracy. It seems the internal scandals at OpenAI prompted Sutskever, Gross, and Levy to rethink their approach to building a startup.

The Safe Superintelligence website states: "SSI is our mission, our name, and our entire product roadmap, because it is our sole focus." But what is superintelligence?

Artificial superintelligence, or ASI, is a hypothetical form of AI whose cognitive capabilities exceed those of humans tens of thousands of times over. Presumably, such a system would not only learn from vast amounts of data, as today's neural networks already do, but would also possess problem-solving abilities, social skills, and creative thinking. The easiest way to envision ASI is through science fiction: the movie "Bicentennial Man," based on Isaac Asimov's story, or "Iron Man," in which the protagonist is assisted by a superintelligent AI butler named JARVIS.

Of course, we already have AI butlers of a sort today: Alexa, Siri, and other voice assistants. But they are limited by the current state of artificial intelligence. What we use now, in chatbots, self-driving cars, and facial recognition systems, is narrow, or weak, AI. It is designed to solve specific tasks and can make decisions only based on the data it has been trained on. A recommendation system on a streaming platform, for instance, does its job brilliantly, possibly even better than a human, but in terms of versatility it is quite weak, as the sketch below illustrates.
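To make "narrow" concrete, here is a minimal sketch of a content-based recommender in the spirit of that example. Everything in it is invented for illustration: the catalog, the hand-picked genre weights, and the similarity heuristic. Real streaming platforms use far richer signals and models; the point here is the narrowness itself, since this program can rank shows and do nothing else.

```python
# A toy illustration of "narrow" AI: a content-based recommender.
# All titles and genre weights are made up for this example.
import math

CATALOG = {
    "Space Opera IX":   {"sci-fi": 0.9, "action": 0.7, "drama": 0.2},
    "Courtroom Nights": {"drama": 0.9, "thriller": 0.5},
    "Robot Uprising":   {"sci-fi": 0.8, "thriller": 0.6, "action": 0.5},
    "Baking Duels":     {"reality": 0.9, "comedy": 0.4},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse genre vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(watched: list[str], top_n: int = 2) -> list[str]:
    """Rank unwatched titles by similarity to the user's watch history."""
    # The "taste profile" is just the average of the watched genre vectors.
    profile: dict = {}
    for title in watched:
        for genre, w in CATALOG[title].items():
            profile[genre] = profile.get(genre, 0.0) + w / len(watched)
    candidates = [t for t in CATALOG if t not in watched]
    return sorted(candidates,
                  key=lambda t: cosine(profile, CATALOG[t]),
                  reverse=True)[:top_n]

print(recommend(["Space Opera IX"]))  # -> ['Robot Uprising', 'Courtroom Nights']
```

Within its one task the heuristic works, but nothing in the program generalizes: ask it anything other than "what should this user watch next" and it has no answer, which is exactly what "weak" means here.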

The next logical step after narrow AI is general, or strong, AI, also known as AGI (artificial general intelligence). Like superintelligence, it is only a hypothetical concept: an artificial intelligence whose scope and performance are indistinguishable from those of a human. It is a machine you can converse with as an equal, capable of learning, applying its knowledge to absolutely any field, and even solving creative tasks. When thinking about AGI, one essentially imagines a human.

Experts believe that creating strong AI is entirely possible, and Microsoft researchers even released a study of GPT-4 stating: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." Is this true? It is hard to say, because there is no clear definition of AGI.

ASI, or superintelligence, is the next step after AGI in the hypothetical evolution of AI. Ilya Sutskever is making a big bet: he wants to leapfrog the general AI stage and take on a far more ambitious project straight away. Or perhaps OpenAI was already working on something that is not yet available to the general public, for various reasons. We can speculate further and suggest that this may have been the cause of the discord between Sutskever and Altman, and that the researchers are concerned not so much with business as with the problem of AI safety. After all, it is no coincidence that the word "safe" comes before "superintelligence" in the company's name.

For Various Reasons

On the Safe Superintelligence website, the word "safety" appears several times. The team plans to approach safety and capabilities in tandem, advancing capabilities rapidly while making sure safety stays ahead. This is not surprising: the harmlessness of artificial intelligence worries people even at the current, weak-AI stage, and the smarter neural networks become, the more pressing the issue gets.

Some experts, including computer science professor Roman Yampolskiy and Geoffrey Hinton, who left Google in 2023, urge caution with AI. Hinton has stated that the existential threat AI poses to humanity has long since moved from science fiction into reality, and Yampolskiy calls AI unpredictable: at the AGI stage, and even more so at the ASI stage, he believes it will become practically impossible to predict its behavior.

Yampolskiy has explored the possibility of controlling AI and concluded that it would be very difficult; the hardest part would be making it reason within the framework of human ethics and morality. After all, even in Asimov's stories, robots with brains far more powerful than the average human's were constrained by the Three Laws of Robotics.

A scientist of Ilya Sutskever's caliber cannot have failed to consider this problem. While we do not yet know what the "safety" of superintelligence is meant to entail, or how the team plans to develop it, one thing is clear: Sutskever's announcement has made an impact in certain circles, and ultimately his project could influence every sphere of society, from science and ethics to business.

Investments in the Future

The word "superintelligence" has appeared more than once in recent news reports. There is another person for whom ASI might become a life’s work, and that person is the Japanese billionaire, strategic investor, and founder of SoftBank, Masayoshi Son.

SoftBank owns about 90% of the shares of chip designer Arm, and it is with Arm that Son associates his vision of superintelligence: the potential to create robots that, through joint cognitive effort, could cure cancer, clean houses, and care for children. Son plans to invest around $100 billion in the new project, all to make superintelligence a reality.

SoftBank's last few investments have been unsuccessful, and Son temporarily dropped off the radar, but at the annual shareholders' meeting he sounded enthusiastic for the first time in a long while. "I got the answer," Son said. "This is what I was born for. We have done a lot, but it was all just a warm-up for my dream of realizing ASI."

When leading AI researchers and the largest Japanese investor in telecommunications technology suddenly start talking about the same thing, it is at least a reason to take notice. Until now, artificial superintelligence has been more science fiction than reality, a thought experiment. Now the idea is taking shape. What form it will take, whether chips, bots, robots, or an all-encompassing "smart city" system, only time will tell. And time will reveal what impact ASI, should it be built, will have on life and business.
 

Hood Critic

And perhaps it is smart enough to solve alignment; it's just not aligned enough to want to solve alignment.

I don't disagree but lol I can feel the indifference through the words of the person who typed this.
 