1/9
Unpopular opinion: We will not achieve AGI any time soon, and @leopoldasch's prediction is way off.
Here's why:
1 - The Straight Line Fallacy
One of the most common mistakes in predicting technological advancements is assuming that progress will continue in a straight line. This belief doesn’t account for the possibility of an "AI winter," a period where advancements slow down or stop due to unexpected technical challenges or diminishing returns.
History shows us that progress doesn’t happen in straight lines. Look at the transition from horses to cars. It wasn’t a smooth, continuous progression. There were bursts of rapid development followed by long periods where nothing much happened. The same can happen with AI. In physics, fundamental progress has slowed for decades despite continuous effort.
Similarly, in medicine, the fight against cancer has not progressed in a straight line; breakthroughs are followed by long periods of incremental progress or even setbacks.
2 - The AGI Definition Problem
First, we don’t even have a clear definition of AGI.
Leopold’s statements highlight this confusion: On one hand, he says it’s "strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer." On the other hand, he claims, "We are on course for AGI by 2027."
These are vastly different capabilities. The lack of a clear definition muddles the discussion.
3 - The Compute Problem
Even if we assume that all we need to achieve AGI is more compute power, we face significant hurdles. The production of GPUs and other essential components is already hitting resource limits.
The environmental impact of the massive energy consumption required for training these models could lead to regulatory restrictions. So, even if more compute is all we need, who’s to say we’ll have enough of it?
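To make the scale concrete, here is a rough back-of-envelope sketch in Python. Every number in it (total training FLOPs, per-GPU throughput and power, cluster size) is an assumption picked for illustration, not a reported figure:

```python
# Rough sketch: time and energy to train a frontier model.
# All inputs below are illustrative assumptions, not reported figures.

TRAINING_FLOPS = 2e25        # assumed total training compute, in FLOPs
GPU_FLOPS_PER_SEC = 3e14     # assumed sustained throughput per GPU (~300 TFLOP/s)
GPU_POWER_WATTS = 700        # assumed power draw per GPU, overhead included
NUM_GPUS = 20_000            # assumed cluster size

gpu_seconds = TRAINING_FLOPS / GPU_FLOPS_PER_SEC       # total GPU-seconds of work
wall_clock_days = gpu_seconds / NUM_GPUS / 86_400      # days using the whole cluster
energy_kwh = gpu_seconds * GPU_POWER_WATTS / 3.6e6     # watt-seconds (joules) -> kWh

print(f"~{wall_clock_days:,.0f} days on {NUM_GPUS:,} GPUs")
print(f"~{energy_kwh / 1e6:,.1f} GWh of energy")
```

At fixed hardware efficiency, every further ~10x jump in training compute multiplies both the cluster time and the energy bill by roughly 10x, which is exactly where the resource and regulatory pressure comes from.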
4 - The Limits of Current AI
@ylecun often points out that current LLMs, despite their impressive capabilities, are fundamentally limited.
They don’t understand the real world, they often generate incorrect information, and they can’t plan or reason beyond their training data. These are not minor issues—they’re fundamental gaps that more data and compute power alone won’t fix.
Final Point: The Influence of Corporate Hype
There’s another, less talked about aspect of the AGI debate: corporate hype. The AGI conversation has been hijacked by corporations looking to cash in on the millions of dollars’ worth of free marketing the term generates.
Don't fall for the hype.
---
Let me be clear: this is not an attack on Leopold (awesome guy), just my personal opinion on the subject.
2/9
Is there a plausible path to AI becoming smarter than humans while only being able to train on information generated by humans or derivatives synthesized from that information?
3/9
My problem with that graph is the assumption that GPT-4 is a "smart high schooler". It's hilarious given all the trick questions that had to be manually added to the training data so that ChatGPT wouldn't get fooled.
4/9
Yann LeCun has also been wrong in the past about the capabilities of GPT-n+1 models, particularly regarding understanding of the physical world and certain reasoning/planning capacities.
What guarantees that larger models can't solve some of these problems?
5/9
> 2 - The AGI Definition Problem
I think the fuzziness of the AGI definition is also the main argument in favour of his viewpoint.
But yes, this fuzziness makes statements like this somewhat pointless.
6/9
People who don't even know what consciousness is predicting conscious computers is the biggest scientific joke of the 21st century.
The second biggest is people believing time travel is possible.
7/9
It's not even a straight line. The chart uses a log scale, so the line looks straight, but the underlying growth is exponential. Expecting exponential growth to continue indefinitely is just silly.
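A tiny illustration of the point, with made-up numbers (a hypothetical series growing 10x per year), just to show how a log axis straightens an exponential:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical training-compute series growing 10x per year.
years = np.arange(2018, 2028)
compute = 1e2 * 10.0 ** (years - 2018)   # pure exponential growth

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(8, 3))

ax_lin.plot(years, compute)
ax_lin.set_title("Linear axis: clearly exploding")

ax_log.plot(years, compute)
ax_log.set_yscale("log")                 # same data, log-scaled y-axis
ax_log.set_title("Log axis: a 'straight line'")

plt.tight_layout()
plt.show()
```

Extrapolating the "straight" line on the right is the same bet as assuming the 10x-per-year exponential on the left never bends.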
8/9
Point number one should already be enough, but people love fooling themselves. It will not be a straight line, period.
9/9
There’s a fundamental underlying issue here: we cannot define or effectively measure human intelligence. We can’t even define human consciousness. Thinking we can invent it in a couple of decades is a staggering overestimation of our own understanding of ourselves.