AI that’s smarter than humans? Americans say a firm “no thank you.”

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,546
Reputation
8,519
Daps
160,310

GPT-5 will have ‘Ph.D.-level’ intelligence​


By Luke Larsen June 20, 2024 4:46PM


OpenAI CTO Mira Murati on stage answering questions.
Dartmouth Engineering

The next major evolution of ChatGPT has been rumored for a long time. GPT-5, or whatever it will be called, has been talked about vaguely many times over the past year, but yesterday, OpenAI Chief Technology Officer Mira Murati gave some additional clarity on its capabilities.

In an interview with Dartmouth Engineering that was posted on X (formerly Twitter), Murati describes the jump from GPT-4 to GPT-5 as being like a high-schooler growing into a university student.

“If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence,” Murati says. “And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at Ph.D. intelligence for specific tasks. Things are changing and improving pretty rapidly.”


Mira Murati: GPT-3 was toddler-level, GPT-4 was a smart high schooler and the next gen, to be released in a year and a half, will be PhD-level pic.twitter.com/jyNSgO9Kev

— Tsarathustra (@tsarnick) June 20, 2024

Interestingly, the interviewer pressed her on the timetable, asking if it’d come in the next year. Murati nods her head, and then clarifies that it’d be in a year and a half. If that’s true, GPT-5 may not come out until late 2025 or early 2026. Some will be disappointed to hear that the next big step is that far away.

After all, the first rumors about the launch time of GPT-5 were that it would be in late 2023. And then, when that didn’t pan out, reports indicated that it would launch later this summer. That turned out to be GPT-4o, which was an impressive release, but it wasn’t the kind of step function in intelligence Murati is referencing here.

As for the intelligence claim, it lines up with what has been said about GPT-5 in the past. Microsoft CTO Kevin Scott claims that next-gen AI systems will be “capable of passing Ph.D. exams” thanks to better memory and reasoning operations.

Murati admits that the “Ph.D.-level” intelligence only applies to some tasks. “These systems are already human-level in specific tasks, and, of course, in a lot of tasks, they’re not,” she says.





 

Artificial Intelligence

Not Allen Iverson
Joined
May 17, 2012
Messages
54,426
Reputation
6,914
Daps
125,954


:banderas:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,546
Reputation
8,519
Daps
160,310





1/11
This paper seems very interesting: say you train an LLM to play chess using only transcripts of games from players up to 1000 elo. Is it possible that the model plays better than 1000 elo (i.e., "transcends" the training data's performance)? It seems you get something from nothing, and information-theory arguments for why this should be impossible came up in conversations I had in the past. But this paper shows it can happen: training on 1000-elo game transcripts and getting an LLM that plays at 1500! Further, the authors connect it to a clean theoretical framework for why: it's ensembling weak learners, where you get "something from nothing" by averaging the independent mistakes of multiple models. The paper argues that you need enough data diversity and careful temperature sampling for the transcendence to occur. I had been thinking along the same lines but didn't think of using chess as a clean, measurable way to study this scientifically. Fantastic work that I'll read in more depth.
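
A minimal sketch of the mechanism the tweet describes, not taken from the paper's code: the three hypothetical 1000-elo "players" and their 40/50/5/5 move probabilities below are invented purely for illustration. It shows why averaging experts who make different mistakes, and then sampling at low temperature, can put most of the probability on the best move.

```python
# Minimal sketch (not the paper's code) of averaging + low-temperature sampling.
# Assumptions: three hypothetical 1000-elo "players", each putting only 40% of
# their probability on the best move but wasting the rest on *different* blunders;
# an imitation model is approximated as the average of the three distributions.
import numpy as np

# columns: [best, blunder_a, blunder_b, blunder_c]
players = np.array([
    [0.40, 0.50, 0.05, 0.05],  # player 1 tends toward blunder_a
    [0.40, 0.05, 0.50, 0.05],  # player 2 tends toward blunder_b
    [0.40, 0.05, 0.05, 0.50],  # player 3 tends toward blunder_c
])

# What a model fit on all transcripts would roughly learn: the average
# distribution, where "best" is now the single most likely move (0.40 vs 0.20).
avg = players.mean(axis=0)

def with_temperature(p, temperature):
    """Re-weight a distribution with a softmax-style temperature."""
    logits = np.log(p) / temperature
    logits -= logits.max()          # numerical stability
    z = np.exp(logits)
    return z / z.sum()

for t in [1.0, 0.5, 0.1, 0.001]:
    print(f"T={t:<6} P(best move) = {with_temperature(avg, t)[0]:.3f}")
# T=1 reproduces the teachers (~0.40); as T -> 0 the model plays the best move
# almost always, i.e. it plays better than any single training player.
```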

2/11
[2406.11741v1] Transcendence: Generative Models Can Outperform The Experts That Train Them paper is here. @ShamKakade6 @nsaphra please tell me if I have any misconceptions.

3/11
In the classic "Human Decisions and Machine Predictions" paper, Kleinberg et al. give evidence that a predictor learned from the bail decisions of multiple judges does better than the judges themselves, calling it a wisdom-of-the-crowd effect. This could be a similar phenomenon.

4/11
Yes, that is what the authors formalize. It only works when there is diversity in the weak learners, i.e., they make different types of mistakes independently.

5/11
It seems very straightforward: a 1000 ELO player makes good and bad moves that average to 1000. A learning process is a max of the quality of moves, so you should get a higher than 1000 rating. I wonder if the AI is more consistent in making "1500 ELO" moves than players.

6/11
Any argument that says it's not surprising must also explain why it didn't happen at 1500 elo training, or why it doesn't happen at higher temperatures.

7/11
The idea might be easier to understand for something that’s more of a motor skill like archery. Imagine building a dataset of humans shooting arrows at targets and then imitating only the examples where they hit the targets.

8/11
Yes, but they never added information on what a better move is or who won, as far as I understood. It's unclear if the LLM is even trying to win.

9/11
Interesting - is majority vote by a group of weak learners a form of “verification” as I describe in this thread?

10/11
I don't think it's verification, i.e., they didn't use the signal of who won each game. It's clear you could use that to filter only the better (winning) players' transcripts, train on those, iterate to get stronger-elo transcripts, and repeat. But this phenomenon is different, I think. It's ensembling weak learners. The cleanest setting to understand ensembling: imagine I have 1000 binary classifiers, each correct with 60 percent probability and *independent*. If I make a new classifier by taking a majority vote, it will perform much better than 60 percent. It's concentration of measure, the key tool for information theory too. The surprising experimental findings are (1) this happens with 1000-elo chess players, where I wouldn't have thought they make independent mistakes, and (2) training on transcripts seems to behave like averaging weak learners.
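
A quick simulation of the 1000-classifier thought experiment above (my own illustration, not the authors' code), using the same numbers as the tweet: 1000 independent binary classifiers, each right 60% of the time, combined by majority vote.

```python
# Illustrative simulation of the ensembling argument above (not the authors' code):
# 1000 independent classifiers, each correct with probability 0.6, combined by
# majority vote. Concentration of measure (Hoeffding) bounds the chance the
# majority is wrong by exp(-2 * 1000 * 0.1**2) ~= 2e-9, so the vote is near-perfect.
import numpy as np

rng = np.random.default_rng(0)
n_classifiers, n_examples, p_correct = 1000, 10_000, 0.6

# votes[i, j] = True if classifier j is correct on example i
votes = rng.random((n_examples, n_classifiers)) < p_correct

single_acc = votes[:, 0].mean()                                 # one weak learner
majority_acc = (votes.sum(axis=1) > n_classifiers / 2).mean()   # ensemble

print(f"single classifier accuracy: {single_acc:.3f}")   # ~= 0.600
print(f"majority-vote accuracy:     {majority_acc:.3f}")  # ~= 1.000
```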

11/11
Interesting



 

DatNkkaCutty

Veteran
Joined
Nov 18, 2016
Messages
12,573
Reputation
4,619
Daps
82,512
Reppin
@ PA
why would advanced AI bother doing any of that when there's an entire universe with billions of galaxies to explore and add to its knowledge and experiences?

The same reason we experiment on mice, monkeys, and every other creature humans can get their hands on. :troll:

Why do that, when we could focus only on exploring space, or anything else? Oh yeah...we make time to do BOTH. :unimpressed:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,546
Reputation
8,519
Daps
160,310

AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’​

Zoë Corbyn
Ray Kurzweil, photographed at Singularity, the university he co-founded in Mountain View, California: ‘The computing we have today is basically perfect: it will get better every year.’ Photograph: Winni Wintermeyer/Redux/eyevine

The Google futurist talks nanobots and avatars, deepfakes and elections – and why he is so optimistic about a future where we merge with computers

Sat 29 Jun 2024 11.00 EDT




The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist.

Why write this book?

The Singularity Is Near talked about the future, but that was 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and the coming breakthroughs.

Your 2029 and 2045 projections haven’t changed…

I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time. And my five-year-out estimate is actually conservative: Elon Musk recently said it is going to happen in two years.

We do have to be aware of the potential here and monitor what AI is doing – but just being against it is not sensible

Why should we believe your dates?

I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have. The most important driver is the exponential growth in the amount of computing power for the price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation.
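
As a rough sanity check on the compounding, here is a back-of-the-envelope sketch of what doubling every 15 months implies. The start years are illustrative assumptions of mine, and linking the result to the "millionfold" figure is my own reading, not something stated in the interview.

```python
# Back-of-the-envelope check (illustrative, not from the interview): how much a
# quantity grows if it doubles every 15 months. Start/end years are assumptions.
doubling_months = 15
for start, end in [(2024, 2029), (2020, 2045)]:
    doublings = (end - start) * 12 / doubling_months
    print(f"{start}->{end}: {doublings:.0f} doublings -> ~{2 ** doublings:,.0f}x")
# 2024->2029: 4 doublings -> ~16x
# 2020->2045: 20 doublings -> ~1,048,576x (roughly a "millionfold")
```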

What’s missing currently to bring AI to where you are predicting it will be in 2029?

One is more computing power – and that’s coming. That will enable improvements in contextual memory, common sense reasoning and social interaction, which are all areas where deficiencies remain. Then we need better algorithms and more data to answer more questions. LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.

What exactly is the Singularity?

Today, we have one brain size which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.

It is hard to imagine what this would be like, but it doesn’t sound very appealing…

Think of it like having your phone, but in your brain. If you ask a question your brain will be able to go out to the cloud for an answer similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!

Kurzweil in Cambridge, Massachusetts, in 1977 with his Kurzweil Reading Machine, which converted the printed word into synthetic speech. Photograph: Bettmann Archive

What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns.

I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.

Won’t there be physical limits to computing power that put the brakes on?

The computing that we have today is basically perfect: it will get better every year and continue in that realm. There are many ways we can continue to improve chips. We’ve only just begun to use the third dimension [create 3D chips], which will carry us for many years. I don’t see us needing quantum computing: we’ve never been able to demonstrate its value.

You argue that the Turing test, wherein an AI can communicate by text indistinguishably from a human, will be passed by 2029. But to pass it, AI will need to dumb down. How so?

Humans are not that accurate and they don’t know a lot of things! You can ask an LLM today very specifically about any theory in any field and it will answer you very intelligently. But who can possibly do that? If a human answered like that, you’d know it was a machine. So that’s the purpose of dumbing it down – because the test is trying to imitate a human. Some people are reporting that GPT-4 can pass a Turing test. I think we have a few more years until we settle this issue.

Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you?

Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.

My first plan is to stay alive – reaching longevity escape velocity. I’m also intending to create a replicant of myself

The book looks in detail at AI’s job-killing potential. Should we be worried?

Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had, and US average personal income per hour worked is 10 times what it was 100 years ago, adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.

There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those…

We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process.

What do you do at Google and did the book go through any pre-publication review?

I advise them on different ways they can improve their products and advance their technology, including LLMs. The book is written in a personal capacity. Google is happy for me to publish these things and there was no review.

Many people will be sceptical of your predictions about physical and digital immortality. You anticipate medical nanobots arriving in the 2030s that will be able to enter our bodies and carry out repairs so we can remain alive indefinitely, as well as “after life” technology coming in the 2040s that will allow us to upload our minds so they can be restored, and even put into convincing androids, if we experience biological death.

Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress. And as we move past that we’ll actually get back more years. It isn’t a solid guarantee of living for ever – there are still accidents – but your probability of dying won’t increase year to year. The capability to bring back departed humans digitally will bring up some interesting societal and legal questions.

What is your own plan for immortality?

My first plan is to stay alive, thereby reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s. I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.

What should we be doing now to best prepare for the future?

It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.
 

Rekkapryde

GT, LWO, 49ERS, BRAVES, HAWKS, N4O...yeah UMAD!
Supporter
Joined
May 1, 2012
Messages
149,871
Reputation
27,187
Daps
504,521
Reppin
TYRONE GA!
In Horizon Zero Dawn, scientist Elisabet Sobeck believed that the super AI (GAIA) they created to save the world had to actually care about the job instead of just being coded to do it. So she kinda raised GAIA and taught her empathy and compassion.
Love the game and you can EASILY see it playing out in real life.

Hope the tv series is good

We are fukked and the powers that be don't care
 