AI that’s smarter than humans? Americans say a firm “no thank you.”

bnew


AI that’s smarter than humans? Americans say a firm “no thank you.”​

Exclusive: 63 percent of Americans want regulation to actively prevent superintelligent AI, a new poll reveals.


Sam Altman, CEO of OpenAI, the company that made ChatGPT. For Altman, the chatbot is just a stepping stone on the way to artificial general intelligence.
SeongJoon Cho/Bloomberg via Getty Images

Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?

Americans, by and large, don’t want it.

That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.

Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI) and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”

But there’s a deeply weird and seldom remarked upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.

Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been.

“It’s so strange to me to say, ‘We have to be really careful with AGI,’ rather than saying, ‘We don’t need AGI, this is not on the table,’” Elke Schwarz, a political theorist who studies AI ethics at Queen Mary University of London, told me earlier this year. “But we’re already at a point when power is consolidated in a way that doesn’t even give us the option to collectively suggest that AGI should not be pursued.”

Building AGI is a deeply political move. Why aren’t we treating it that way?​

Technological solutionism — the ideology that says we can trust technologists to engineer the way out of humanity’s greatest problems — has played a major role in consolidating power in the hands of the tech sector. Although this may sound like a modern ideology, it actually goes all the way back to the medieval period, when religious thinkers began to teach that technology is a means of bringing about humanity’s salvation. Since then, Western society has largely bought the notion that tech progress is synonymous with moral progress.

In modern America, where the profit motives of capitalism have combined with geopolitical narratives about needing to “race” against foreign military powers, tech accelerationism has reached fever pitch. And Silicon Valley has been only too happy to run with it.


AGI enthusiasts promise that the coming superintelligence will bring radical improvements. It could develop everything from cures for diseases to better clean energy technologies. It could turbocharge productivity, leading to windfall profits that may alleviate global poverty. And getting to it first could help the US maintain an edge over China; in a logic reminiscent of a nuclear weapons race, it’s better for “us” to have it than “them,” the argument goes.

But Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.

In the new AI Policy Institute/YouGov poll, the “better us than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.

Naturally, with any poll about a technology that doesn’t yet exist, there’s a bit of a challenge in interpreting the responses. But what a strong majority of the American public seems to be saying here is: just because we’re worried about a foreign power getting ahead, doesn’t mean that it makes sense to unleash upon ourselves a technology we think will severely harm us.

AGI, it turns out, is just not a popular idea in America.

“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”

And yet, Colson pointed out, “most of the direction of society is set by the technologists and by the technologies that are being released … There’s an important way in which that’s extremely undemocratic.”

He expressed consternation that when tech billionaires recently descended on Washington to opine on AI policy at Sen. Chuck Schumer’s invitation, they did so behind closed doors. The public didn’t get to watch, never mind participate in, a discussion that will shape its future.

According to Schwarz, we shouldn’t let technologists depict the development of AGI as if it’s some natural law, as inevitable as gravity. It’s a choice — a deeply political one.

“The desire for societal change is not merely a technological aim, it is a fully political aim,” she said. “If the publicly stated aim is to ‘change everything about society,’ then this alone should be a prompt to trigger some level of democratic input and oversight.”
 

bnew


AI companies are radically changing our world. Should they be getting our permission first?​

AI stands to be so transformative that even its developers are expressing unease about how undemocratic its development has been.

Jack Clark, the co-founder of AI safety and research company Anthropic, recently wrote an unusually vulnerable newsletter. He confessed that there were several key things he’s “confused and uneasy” about when it comes to AI. Here is one of the questions he articulated: “How much permission do AI developers need to get from society before irrevocably changing society?” Clark continued:
Technologists have always had something of a libertarian streak and this is perhaps best epitomized by the ‘social media’ and Uber et al era of the 2010s — vast, society-altering systems ranging from social networks to rideshare systems were deployed into the world and aggressively scaled with little regard to the societies they were influencing. This form of permissionless invention is basically the implicitly preferred form of development as epitomized by Silicon Valley and the general ‘move fast and break things’ philosophy of tech. Should the same be true of AI?

That more people, including tech CEOs, are starting to question the norm of “permissionless invention” is a very healthy development. It also raises some tricky questions.

When does it make sense for technologists to seek buy-in from those who’ll be affected by a given product? And when the product will affect the entirety of human civilization, how can you even go about seeking consensus?

Many of the great technological innovations in history happened because a few individuals decided by fiat that they had a great way to change things for everyone. Just think of the invention of the printing press or the telegraph. The inventors didn’t ask society for its permission to release them.

That may be partly because of technological solutionism and partly because, well, it would have been pretty hard to consult broad swaths of society in an era before mass communications — before things like a printing press or a telegraph! And while those inventions did come with perceived risks, they didn’t pose the threat of wiping out humanity altogether or making us subservient to a different species.

For the few technologies we’ve invented so far that meet that bar, seeking democratic input and establishing mechanisms for global oversight have been attempted, and rightly so. It’s the reason we have a Nuclear Nonproliferation Treaty and a Biological Weapons Convention — treaties that, though they’re struggling, matter a lot for keeping our world safe.

While those treaties came after the use of such weapons, another example — the 1967 Outer Space Treaty — shows that it’s possible to create such mechanisms in advance. Ratified by dozens of countries and adopted by the United Nations against the backdrop of the Cold War, it laid out a framework for international space law. Among other things, it stipulated that the moon and other celestial bodies can only be used for peaceful purposes, and that states can’t store their nuclear weapons in space.

Nowadays, the treaty comes up in debates about whether we should send messages into space with the hope of reaching extraterrestrials. Some argue that’s very dangerous because an alien species, once aware of us, might oppress us. Others argue it’s more likely to be a boon — maybe the aliens will gift us their knowledge in the form of an Encyclopedia Galactica. Either way, it’s clear that the stakes are incredibly high and all of human civilization would be affected, prompting some to make the case for democratic deliberation before any more intentional transmissions are sent into space.

As Kathryn Denning, an anthropologist who studies the ethics of space exploration, put it in an interview with the New York Times, “Why should my opinion matter more than that of a 6-year-old girl in Namibia? We both have exactly the same amount at stake.”

Or, as the old Roman proverb goes: what touches all should be decided by all.

That is as true of superintelligent AI as it is of nukes, chemical weapons, or interstellar broadcasts. And though some might argue that the American public only knows as much about AI as a 6-year-old, that doesn’t mean it’s legitimate to ignore or override the public’s general wishes for technology.

“Policymakers shouldn’t take the specifics of how to solve these problems from voters or the contents of polls,” Colson acknowledged. “The place where I think voters are the right people to ask, though, is: What do you want out of policy? And what direction do you want society to go in?”
 

bnew


AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’​

Zoë Corbyn
Ray Kurzweil, at Singularity, the university he co-founded in Mountain View, California: “The computing we have today is basically perfect: it will get better every year.” Photograph: Winni Wintermeyer/Redux/eyevine

The Google futurist talks nanobots and avatars, deepfakes and elections – and why he is so optimistic about a future where we merge with computers

Sat 29 Jun 2024 11.00 EDT




The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist.

Why write this book?

The Singularity Is Near talked about the future, but 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and the coming breakthroughs.

Your 2029 and 2045 projections haven’t changed…

I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time. And my five-year-out estimate is actually conservative: Elon Musk recently said it is going to happen in two years.


Why should we believe your dates?

I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have. The most important driver is the exponential growth in the amount of computing power for the price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation.
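A 15-month doubling compounds dramatically, and the arithmetic is easy to check. Here is a minimal sketch of what that rate implies; the 30- and 21-year horizons are my framing of his dates, not Kurzweil's own calculation:

```python
# Back-of-envelope check (my arithmetic, not Kurzweil's): what steady
# price-performance doubling every 15 months implies over long horizons.

def growth_factor(years: float, doubling_months: float = 15.0) -> float:
    """Overall multiplier after `years` of doubling every `doubling_months`."""
    doublings = years * 12.0 / doubling_months
    return 2.0 ** doublings

print(f"{growth_factor(30):,.0f}x over 30 years")  # 1999 -> 2029: ~16.8 million-fold
print(f"{growth_factor(21):,.0f}x over 21 years")  # 2024 -> 2045: ~114,000-fold
```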

What’s missing currently to bring AI to where you are predicting it will be in 2029?

One is more computing power – and that’s coming. That will enable improvements in contextual memory, common sense reasoning and social interaction, which are all areas where deficiencies remain. Then we need better algorithms and more data to answer more questions. LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.

What exactly is the Singularity?

Today, we have one brain size which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.

It is hard to imagine what this would be like, but it doesn’t sound very appealing…

Think of it like having your phone, but in your brain. If you ask a question your brain will be able to go out to the cloud for an answer similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!

Kurzweil in Cambridge, Massachusetts, in 1977 with his Kurzweil Reading Machine, which converted the printed word into synthetic speech. Photograph: Bettmann Archive

What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns.

I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.

Won’t there be physical limits to computing power that put the brakes on?

The computing that we have today is basically perfect: it will get better every year and continue in that realm. There are many ways we can continue to improve chips. We’ve only just begun to use the third dimension [create 3D chips], which will carry us for many years. I don’t see us needing quantum computing: we’ve never been able to demonstrate its value.

You argue that the Turing test, wherein an AI can communicate by text indistinguishably from a human, will be passed by 2029. But to pass it, AI will need to dumb down. How so?

Humans are not that accurate and they don’t know a lot of things! You can ask an LLM today very specifically about any theory in any field and it will answer you very intelligently. But who can possibly do that? If a human answered like that, you’d know it was a machine. So that’s the purpose of dumbing it down – because the test is trying to imitate a human. Some people are reporting that GPT-4 can pass a Turing test. I think we have a few more years until we settle this issue.

Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you?

Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.


The book looks in detail at AI’s job-killing potential. Should we be worried?

Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hour worked is 10 times what it was 100 years ago, adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.

There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those…

We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process.

What do you do at Google and did the book go through any pre-publication review?

I advise them on different ways they can improve their products and advance their technology, including LLMs. The book is written in a personal capacity. Google is happy for me to publish these things and there was no review.

Many people will be sceptical of your predictions about physical and digital immortality. You anticipate medical nanobots arriving in the 2030s that will be able to enter our bodies and carry out repairs so we can remain alive indefinitely, as well as “afterlife” technology coming in the 2040s that will allow us to upload our minds so they can be restored, even put into convincing androids, if we experience biological death.

Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress. And as we move past that we’ll actually get back more years. It isn’t a solid guarantee of living for ever – there are still accidents – but your probability of dying won’t increase year to year. The capability to bring back departed humans digitally will bring up some interesting societal and legal questions.

What is your own plan for immortality?

My first plan is to stay alive, therefore reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s. I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.

What should we be doing now to best prepare for the future?

It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.
 

Professor Emeritus

These guys are sociopaths, they really don't care if Americans are against what they're doing. Technofascists believe that they know better than everyone else, that they should do absolutely everything that's within their technological power to do, and no one has a right to tell them anything.
 

Professor Emeritus

Why should we believe your dates?

I’m really the only person that predicted the tremendous AI interest that we’re seeing today.


:mjlol:

Imagine how much hubris you have to have to claim that you're the "only person" who predicted there'd be huge interest in AI. :dead:


Kurzweil is famous for two things:

1. Getting predictions wrong and then claiming he was right. His predictions that were correct are generally vague or obvious and things that other reasonable people were already saying, his predictions that were specific and interesting are usually wrong.







2. Having a completely unhinged religious-level obsession with technology that ignores all counterarguments and warning signs. His focus on "the Singularity" is excited techboy clown shyt. The wikipedia summary does better on this than I could at summarizing the dissent.


Criticism​

Although technological singularity is a popular concept in science fiction, authors such as Neal Stephenson[79] and Bruce Sterling have voiced skepticism about its real-world plausibility. Sterling expressed his views on the singularity scenario in a talk at the Long Now Foundation called The Singularity: Your Future as a Black Hole.[80][81] Other prominent AI thinkers and computer scientists such as Daniel Dennett,[82] Rodney Brooks,[83] David Gelernter,[84] and Paul Allen[85] have also criticized Kurzweil's projections.

Bill Joy, cofounder of Sun Microsystems, agrees with Kurzweil's timeline of future progress, but thinks that technologies such as AI, nanotechnology, and advanced biotechnology will create a dystopian world.[87] Mitch Kapor, the founder of Lotus Development Corporation, has called the notion of a technological singularity "intelligent design for the IQ 140 people... This proposition that we're heading to this point at which everything is going to be just unimaginably different—it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me."[24]

Cognitive scientist Douglas Hofstadter has said of Kurzweil's and Hans Moravec's books: "It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid."[88]

Biologist PZ Myers has criticized Kurzweil's predictions as based on "New Age spiritualism" rather than science and says that Kurzweil does not understand basic biology.[89][90]

VR pioneer Jaron Lanier has called Kurzweil's ideas "cybernetic totalism" and outlined his views on the culture surrounding Kurzweil's predictions in an essay for the Edge Foundation called One Half of a Manifesto.[47][91] Physicist and futurist Theodore Modis claims that Kurzweil's thesis of a technological singularity lacks scientific rigor.[92]

British philosopher John Gray argued that contemporary science is what magic was for ancient civilizations: it gives a sense of hope for those who are willing to do almost anything to achieve eternal life. He cites Kurzweil's singularity as an example, noting that this line of thinking has been present throughout human history.[93]

HP Newquist wrote in The Brain Makers: "Born with the same gift for self-promotion that was a character trait of people like P.T. Barnum and Ed Feigenbaum, Kurzweil had no problems talking up his technical prowess... Ray Kurzweil was not noted for his understatement."[94]

In a 2015 paper, William D. Nordhaus of Yale University used a variety of econometric methods to run six supply-side tests and one demand-side test to track the macroeconomic viability of the required steep rises in information technology. Only two indicated that a singularity was economically possible and both predicted it would not occur for at least 100 years.[95]
 

bnew


MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating generative AI​

Ron Miller

8:00 AM PDT • June 29, 2024


Image Credits: Paul Marotta / Getty Images

When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he also co-founded three key companies: Rethink Robotics, iRobot and his current endeavor, Robust.ai. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997.

In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he’s doing.

He knows what he’s talking about, and he thinks maybe it’s time to put the brakes on the screaming hype that is generative AI. Brooks thinks it’s impressive technology, but maybe not quite as capable as many are suggesting. “I’m not saying LLMs are not important, but we have to be careful [with] how we evaluate them,” he told TechCrunch.

He says the trouble with generative AI is that, while it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can, and humans tend to overestimate its capabilities. “When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that,” Brooks said. “And they’re usually very over-optimistic, and that’s because they use a model of a person’s performance on a task.”

He added that the problem is that generative AI is not human or even human-like, and it’s flawed to try and assign human capabilities to it. He says people see it as so capable they even want to use it for applications that don’t make sense.

Brooks offers his latest company, Robust.ai, a warehouse robotics system, as an example of this. Someone suggested to him recently that it would be cool and efficient to tell his warehouse robots where to go by building an LLM for his system. In his estimation, however, this is not a reasonable use case for generative AI and would actually slow things down. It’s instead much simpler to connect the robots to a stream of data coming from the warehouse management software.

“When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not gonna help; it’s just going to slow things down,” he said. “We have massive data processing and massive AI optimization techniques and planning. And that’s how we get the orders completed fast.”
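To make the design he describes concrete, here is a minimal sketch of dispatching robots straight from structured warehouse-management data, with no language layer in the loop. The class names and the greedy nearest-robot heuristic are my illustrative assumptions, not Robust.ai's actual system:

```python
# Sketch: robots consume structured orders from the warehouse management
# software and an optimizer assigns them -- no LLM anywhere in the loop.

from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    shelf: tuple[int, int]   # pick location on the warehouse grid

@dataclass
class Robot:
    robot_id: str
    pos: tuple[int, int]

def dispatch(orders: list[Order], robots: list[Robot]) -> dict[str, str]:
    """Greedily send each order to the closest robot (Manhattan distance).
    A robot ends up at its pick location, so it can chain nearby picks."""
    plan = {}
    for order in orders:
        robot = min(robots, key=lambda r: abs(r.pos[0] - order.shelf[0])
                                          + abs(r.pos[1] - order.shelf[1]))
        plan[order.order_id] = robot.robot_id
        robot.pos = order.shelf
    return plan

print(dispatch([Order("A1", (3, 4)), Order("A2", (0, 1))],
               [Robot("r1", (0, 0)), Robot("r2", (5, 5))]))
```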

Another lesson Brooks has learned when it comes to robots and AI is that you can’t try to do too much. You should solve a solvable problem where robots can be integrated easily.

“We need to automate in places where things have already been cleaned up. So the example of my company is we’re doing pretty well in warehouses, and warehouses are actually pretty constrained. The lighting doesn’t change with those big buildings. There’s not stuff lying around on the floor because the people pushing carts would run into that. There’s no floating plastic bags going around. And largely it’s not in the interest of the people who work there to be malicious to the robot,” he said.

Brooks explains that it’s also about robots and humans working together, so his company designed these robots for practical purposes related to warehouse operations, as opposed to building a human-looking robot. In this case, it looks like a shopping cart with a handle.

“So the form factor we use is not humanoids walking around — even though I have built and delivered more humanoids than anyone else. These look like shopping carts,” he said. “It’s got a handlebar, so if there’s a problem with the robot, a person can grab the handlebar and do what they wish with it,” he said.

After all these years, Brooks has learned that it’s about making the technology accessible and purpose-built. “I always try to make technology easy for people to understand, and therefore we can deploy it at scale, and always look at the business case; the return on investment is also very important.”

Even with that, Brooks says we have to accept that there are always going to be hard-to-solve outlier cases when it comes to AI, that could take decades to solve. “Without carefully boxing in how an AI system is deployed, there is always a long tail of special cases that take decades to discover and fix. Paradoxically all those fixes are AI complete themselves.”

Brooks adds that there’s this mistaken belief, mostly thanks to Moore’s law, that there will always be exponential growth when it comes to technology — the idea that if ChatGPT 4 is this good, imagine what ChatGPT 5, 6 and 7 will be like. He sees a flaw in that logic: tech doesn’t always grow exponentially, in spite of Moore’s law.

He uses the iPod as an example. For a few iterations, it did in fact double in storage size, from 10GB all the way to 160GB. If it had continued on that trajectory, he figured out we would have an iPod with 160TB of storage by 2017, but of course we didn’t. The models being sold in 2017 actually came with 256GB or 160GB because, as he pointed out, nobody actually needed more than that.
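The extrapolation Brooks is debunking is easy to reproduce. A small sketch, assuming the 160GB model as the 2007 starting point and one doubling per year (the cadence is my assumption, chosen to make the numbers land where he says they do):

```python
# Naive "always exponential" extrapolation of iPod storage (my assumptions:
# 160GB in 2007, one doubling per year thereafter).

capacity_gb, year = 160, 2007
while year < 2017:
    capacity_gb *= 2
    year += 1
print(f"{year}: {capacity_gb / 1024:.0f}TB")  # 2017: 160TB -- which never shipped
```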

Brooks acknowledges that LLMs could help at some point with domestic robots, where they could perform specific tasks, especially with an aging population and not enough people to take care of them. But even that, he says, could come with its own set of unique challenges.

“People say, ‘Oh, the large language models are gonna make robots be able to do things they couldn’t do.’ That’s not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization,” he said.

Brooks explains that this could eventually lead to robots with useful language interfaces for people in care situations. “It’s not useful in the warehouse to tell an individual robot to go out and get one thing for one order, but it may be useful for eldercare in homes for people to be able to say things to the robots,” he said.
 

bnew


Many people think AI is already sentient - and that's a big problem​

A survey of people in the US has revealed the widespread belief that artificial intelligence models are already self-aware, which is very far from the truth

By Chris Stokel-Walker

18 July 2024


AIs are not yet artificial minds

Panther Media GmbH / Alamy

Around one in five people in the US believe that artificial intelligence is already sentient, while around 30 per cent think that artificial general intelligences (AGIs) capable of performing any task a human can are already in existence. Both beliefs are false, suggesting that the general public has a shaky grasp of the current state of AI – but does it matter?

Jacy Reese Anthis at the Sentience Institute in New York and his colleagues asked a nationally representative sample of 3500 people in the US their perceptions of AI and its sentience. The surveys, carried out in three waves between 2021 and 2023, asked questions like “Do you think any robots/AIs that currently exist are sentient?” and whether it could ever be possible for that technology to reach sentience.




“We wanted to collect data early to understand how public opinion might shape the future trajectory of AI technologies,” says Anthis.

The findings of the survey were surprising, he says. In 2021, around 18 per cent of respondents said they thought AI or robot systems already in existence were sentient – a number that increased to 20 per cent in 2023, when there were two survey waves. One in 10 people asked in 2023 thought ChatGPT, which launched at the end of 2022, was sentient.

“I think we perceive mind very readily in computers,” says Anthis. “We see them as social actors.” He also says that some of the belief in AI sentience is down to big tech companies selling their products as imbued with more abilities than the underlying technology may suggest they have. “There’s a lot of hype in this space,” he says. “As companies have started building their brands around things like AGI, they have a real incentive to talk about how powerful their systems are.”

“There’s a lot of research showing that when somebody has a financial interest in something happening, they are more likely to think it will happen,” says Carissa Véliz at the University of Oxford. “It’s not even that they might be misleading the public or lying. It’s simply that optimism bias is a common problem for humans.”


Journalists should also take some of the blame, says Kate Devlin at King’s College London. “This isn’t helped by the kind of media coverage we saw around large language models, with overexcited and panicked reports about existential threats from superintelligence.”

Anthis worries that the incorrect belief that AI has a mind, encouraged by the anthropomorphising of AI systems by their makers and the media, is shaping our perception of their abilities. There is a risk that if people believe AI is sentient, they will put more faith than they ought to in its judgements – a concern when AI is being considered for use in government and policing.

One way to avoid this trap is to recast our thinking, says Anthis. “I think people have hyperfocused on the term ‘artificial intelligence’,” he says, pointing out it was little more than a good branding exercise when the term was first coined in the 1950s. People are often impressed at how AI models perform on human IQ tests or standardised exams. “But those are very often the wrong way of thinking of these models,” he says – because the AIs are simply regurgitating answers found in their vast training data, rather than actually “knowing” anything.

Reference

arXiv DOI: 10.48550/arXiv.2407.08867


 

bnew



1/11
gpt-4o mini scoring #2 on arena is historic. the ai train is passing the human intelligence station.

arena is an AI IQ test *and* a human IQ test.

the median arena voter can no longer distinguish between large, smart models and small, smart-sounding models.

models like:
>claude-3.5-sonnet
>claude-3-opus
>gpt-4-0314
feel *so* much smarter to smart people than:
>gpt-4o-mini
>gemini-1.5-flash
>claude-3-haiku

i whine so often about the death of <big_model_smell>. people largely agree. but gpt-4o-mini's victory is substantial proof, imo, that the median person isn't that bright, and that, for the first time in history, the AIs are smart enough to fool us. kinda wild, kinda historic

2/11
Simpler explanation: OpenAI is overfitting on lmsys arena data

3/11
if you think about it, it’s the same explanation lol

4/11
The biggest question I have for you is whether or not you think llama 405 has <big_model_smell>

5/11
it does. my opinion has improved since we last talked. still imperfect (especially with instruction following), but its reasoning over context is *chef's kiss*

6/11
Or it's yet another dodgy benchmark, like all benchmarks.
No need to be dramatic about it.

7/11
I think the style of the response ends up being more important than the quality of the thinker behind the response. Claude tends to be less verbose, sometimes hesitant, but in a multi-turn conversation is in a different league.

It may be as much about the personality of the LLM as its intelligence going forward.

8/11
i like this take.

cause I defo see a BIG difference between large and small models on complex tasks.

9/11
totally irrelevant but absolutely hilarious how literally no one is mentioning/talking about gemini 1.5 pro 😂

10/11
Almost a kind of Turing test

11/11
history is a lot about people getting fooled by smart-sounding things, so in retrospect it isn't that surprising we are able to overfit AIs to it. it doesn't say much about the model's intelligence tho
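For context on the thread: the lmsys arena it refers to ranks models from pairwise human votes aggregated with Elo-style ratings, so a model climbs whenever voters prefer its answers, for whatever reason. A minimal sketch of that mechanic (my illustration, not lmsys's actual pipeline):

```python
# Elo-style pairwise rating update, the mechanic behind arena leaderboards.
# Illustrative sketch only; the real aggregation differs in detail.

def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 32.0):
    """Shift both ratings toward the observed outcome of one A-vs-B vote."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * ((1.0 if a_wins else 0.0) - expected_a)
    return r_a + delta, r_b - delta

# A small "smart-sounding" model that keeps winning votes overtakes a
# bigger model, regardless of *why* voters preferred it.
small, big = 1000.0, 1200.0
for _ in range(50):
    small, big = elo_update(small, big, a_wins=True)
print(round(small), round(big))
```

That is the whole dispute in the thread: the leaderboard measures what the median voter prefers, and overfitting to that signal says little about a model's underlying intelligence.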

