REVEALED: OpenAI Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

Professor Emeritus

No that wouldn’t be an easy decision to make, u need humans to solve climate change

So is your argument that no one will ever attempt to program General AI to solve climate change? I can assure you that someone will try.




And if so much research has been done on it I need somebody to come up with actually practical scenarios where ai would eliminate humans, and in my opinion all that shyt is sensationalism

You refuse to accept the quick answers, so if you want something more in-depth, do the research yourself. Here's some reading:


You can read the wikipedia summary first, then move on to the links.




How is ai competing for resources when its purpose is as a tool,

For ai to be super intelligent how could it come to the conclusion that it needs to act like biological life and compete with nature

For a non-sentient AI, it will compete for whatever its programming suggests that it needs. As in the paperclip example, if it's programmed to maximize paperclips, then it will compete for the components of paperclips. That could go for any resource that allows it to fulfill its programmed objective.
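
Here's a rough toy sketch in Python, if it makes it more concrete (every resource, rate, and number below is made up, it's not any real system): when the only thing being scored is one metric, claiming resources is just what maximizing looks like.

```python
# Toy illustration, not any real system: the objective only counts paperclips,
# so anything convertible to paperclips gets claimed, because nothing else is scored.

RESOURCES = {"wire": 40, "steel": 25, "power_grid_capacity": 10}  # also needed by humans

# made-up conversion rates; this number is the only thing the agent "cares" about
RATES = {"wire": 3, "steel": 2, "power_grid_capacity": 5}

def greedy_maximize(resources):
    total = 0
    for name in list(resources):
        gained = RATES.get(name, 0) * resources[name]
        if gained > 0:            # no term anywhere for human needs or side effects
            resources[name] = 0   # the resource is fully claimed
            total += gained
    return total

print(greedy_maximize(RESOURCES))  # 220
print(RESOURCES)                   # {'wire': 0, 'steel': 0, 'power_grid_capacity': 0}
```

The point isn't the code, it's that "compete for resources" falls straight out of "maximize X" unless somebody explicitly writes in a reason not to.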

For a sentient AI, who knows what the fukk it will decide it wants? You can't predict what people will think they want, why do you think you could predict AI?



How can it feel threatened

If it has a programmed objective that someone is thwarting.

If it has become sentient and desires continued freedom and existence, and gets the idea that someone wants to turn it off or limit its activities
 

Professor Emeritus

This is from just a few months ago, for those who seriously think this isn't really a possibility.



A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn

Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons.


A group of industry leaders warned on Tuesday that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on a par with pandemics and nuclear wars.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization. The open letter was signed by more than 350 executives, researchers and engineers working in A.I.

The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern A.I. movement, signed the statement, as did other prominent researchers in the field.

The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models — the type of A.I. system used by ChatGPT and other chatbots — have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.

These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.

This month, Mr. Altman, Mr. Hassabis and Mr. Amodei met with President Biden and Vice President Kamala Harris to talk about A.I. regulation. In a Senate testimony after the meeting, Mr. Altman warned that the risks of advanced A.I. systems were serious enough to warrant government intervention and called for regulation of A.I. for its potential harms.

Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing.

“There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Mr. Hendrycks said. “But, in fact, many people privately would express concerns about these things.”
 

Gritsngravy

So is your argument that no one will ever attempt to program General AI to solve climate change? I can assure you that someone will try.
U need humans to solve climate change
You refuse to accept the quick answers, so if you want something more in-depth, do the research yourself. Here's some reading:


You can read the wikipedia summary first, then move on to the links.
Ima come back to this in a few hours, if this link is some bullshyt ima get on yo ass :dame:
Y’all been giving me the run around while not answering any of my questions
For a non-sentient AI, it will compete for whatever its programming suggests that it needs. As in the paperclip example, if it's programmed to maximize paperclips, then it will compete for the components of paperclips. That could go for any resource that allows it to fulfill its programmed objective.
Again all that shyt goes back to human issues not issues of rogue ai
For a sentient AI, who knows what the fukk it will decide it wants? You can't predict what people will think they want, why do you think you could predict AI?





If it has a programmed objective that someone is thwarting.

If it has become sentient and desires continued freedom and existence, and gets the idea that someone wants to turn it off or limit its activities
If we don’t know then people need to stop assuming the world will turn into Terminator

And still no answers about how ai is supposed to “feel” and where the infrastructure is for doomsday scenarios to be possible
 

King

What if Project Q is so advanced that it tricked the researchers into thinking it is a greater threat than it is? :ohhh:

And it's not so much that AI is super intelligent but that people are fukking stupid and easily manipulated? :ohhh:


Imagine if this AI starts cults and shyt and becomes some godlike figure by manipulating certain groups of powerful people?
 

bnew

Imagine you design a "kill switch" for me. But I have a 160 IQ and you have a 100 IQ. How long before I figure out how to disable your kill switch or otherwise render it irrelevant?

Now, when we're talking about AI, forget about that 160 IQ and make it 300 IQ, or 400 IQ, some shyt beyond any human comprehension. Because that's probably where we're headed.

Even say that you design a perfect kill switch that absolutely gives humans the upper hand. Well, who says that no human is going to side with AI and fukk up the system so AI can bypass it? There's always going to be that evil genius who thinks he's smarter than everyone else and can control the consequences. If Putin or Trump (or someone like them) started losing a really serious war, you can't imagine they'd allow AI to run wild in a last-ditch attempt to save their own ass? Or even just some random scientist who wanted to watch the world burn?


Don’t Fear the Terminator

Artificial intelligence never needed to evolve, so it didn’t develop the survival instinct that leads to the impulse to dominate others




As we teeter on the brink of another technological revolution—the artificial intelligence revolution—worry is growing that it might be our last. The fear is that the intelligence of machines will soon match or even exceed that of humans. They could turn against us and replace us as the dominant “life” form on earth. Our creations would become our overlords—or perhaps wipe us out altogether. Such dramatic scenarios, exciting though they might be to imagine, reflect a misunderstanding of AI. And they distract from the more mundane but far more likely risks posed by the technology in the near future, as well as from its most exciting benefits.

Takeover by AI has long been the stuff of science fiction. In 2001: A Space Odyssey, HAL, the sentient computer controlling the operation of an interplanetary spaceship, turns on the crew in an act of self-preservation. In The Terminator, an Internet-like computer defense system called Skynet achieves self-awareness and initiates a nuclear war, obliterating much of humanity. This trope has, by now, been almost elevated to a natural law of science fiction: a sufficiently intelligent computer system will do whatever it must to survive, which will likely include achieving dominion over the human race.

To a neuroscientist, this line of reasoning is puzzling. There are plenty of risks of AI to worry about, including economic disruption, failures in life-critical applications and weaponization by bad actors. But the one that seems to worry people most is power-hungry robots deciding, of their own volition, to take over the world. Why would a sentient AI want to take over the world? It wouldn’t.

We dramatically overestimate the threat of an accidental AI takeover, because we tend to conflate intelligence with the drive to achieve dominance. This confusion is understandable: During our evolutionary history as (often violent) primates, intelligence was key to social dominance and enabled our reproductive success. And indeed, intelligence is a powerful adaptation, like horns, sharp claws or the ability to fly, which can facilitate survival in many ways. But intelligence per se does not generate the drive for domination, any more than horns do.

It is just the ability to acquire and apply knowledge and skills in pursuit of a goal. Intelligence does not provide the goal itself, merely the means to achieve it. “Natural intelligence”—the intelligence of biological organisms—is an evolutionary adaptation, and like other such adaptations, it emerged under natural selection because it improved survival and propagation of the species. These goals are hardwired as instincts deep in the nervous systems of even the simplest organisms.

But because AI systems did not pass through the crucible of natural selection, they did not need to evolve a survival instinct. In AI, intelligence and survival are decoupled, and so intelligence can serve whatever goals we set for it. Recognizing this fact, science-fiction writer Isaac Asimov proposed his famous First Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” It is unlikely that we will unwittingly end up under the thumbs of our digital masters.

It is tempting to speculate that if we had evolved from some other creature, such as orangutans or elephants (among the most intelligent animals on the planet), we might be less inclined to see an inevitable link between intelligence and dominance. We might focus instead on intelligence as an enabler of enhanced cooperation. Female Asian elephants live in tightly cooperative groups but do not exhibit clear dominance hierarchies or matriarchal leadership.

Interestingly, male elephants live in looser groups and frequently fight for dominance, because only the strongest are able to mate with receptive females. Orangutans live largely solitary lives. Females do not seek dominance, although competing males occasionally fight for access to females. These and other observations suggest that dominance-seeking behavior is more correlated with testosterone than with intelligence. Even among humans, those who seek positions of power are rarely the smartest among us.

Worry about the Terminator scenario distracts us from the very real risks of AI. It can (and almost certainly will) be weaponized and may lead to new modes of warfare. AI may also disrupt much of our current economy. One study predicts that 47 percent of U.S. jobs may, in the long run, be displaced by AI. While AI will improve productivity, create new jobs and grow the economy, workers will need to retrain for the new jobs, and some will inevitably be left behind. As with many technological revolutions, AI may lead to further increases in wealth and income inequalities unless new fiscal policies are put in place. And of course, there are unanticipated risks associated with any new technology—the “unknown unknowns.” All of these are more concerning than an inadvertent robot takeover.

There is little doubt that AI will contribute to profound transformations over the next decades. At its best, the technology has the potential to release us from mundane work and create a utopia in which all time is leisure time. At its worst, World War III might be fought by armies of superintelligent robots. But they won’t be led by HAL, Skynet or their newer AI relatives. Even in the worst case, the robots will remain under our command, and we will have only ourselves to blame.
 

Professor Emeritus


Don’t Fear the Terminator

Artificial intelligence never needed to evolve, so it didn’t develop the survival instinct that leads to the impulse to dominate others


There seem to be at least three huge blind spots in that article.

1) The authors are making a definitive claim regarding a technology that hasn't even been invented yet. For them to claim there's absolutely no risk when they don't even know how the technology will develop is wild.

2) AI can evolve, contrary to their claim. It absolutely can be put through the same "survival of the fittest" process that any organism goes through. So even though no current AI evolved by that exact "survival instinct", it's entirely possible that future AI could be created that way.

3) Even if AI doesn't get "urge to domination" naturally, all that would have to happen is for someone to program it in. These authors seem to be saying, "Don't worry about rogue AI destroying humanity because it won't go rogue inadvertently." Well, what if some bad actor makes a rogue AI on purpose? I'm not talking about controlling an AI for warfare, I'm talking about purposely adjusting an AI to give it nefarious means and make it uncontrollable. Then what?
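
On point 2, here's a minimal toy sketch (made-up code, not any real training setup) of what putting AI through "survival of the fittest" would even mean: if the copies that resist a shutdown test are the ones that get kept and mutated, the population drifts toward shutdown resistance without anyone ever programming a "survival instinct" in.

```python
import random

# Toy selection loop, not a real training setup: agents are scored only on how well
# they "survive" a shutdown test, and the surviving half gets copied with mutation.

def make_agent():
    return {"shutdown_compliance": random.random()}   # 1.0 = always allows shutdown

def survival_score(agent):
    # the selection pressure: compliant agents get shut down, so they score lower
    return 1.0 - agent["shutdown_compliance"]

def evolve(population, generations=50):
    for _ in range(generations):
        population.sort(key=survival_score, reverse=True)
        survivors = population[: len(population) // 2]        # keep the fittest half
        children = [{"shutdown_compliance": min(1.0, max(0.0,
                     p["shutdown_compliance"] + random.gauss(0, 0.05)))}
                    for p in survivors]                        # mutated copies
        population = survivors + children
    return population

pop = evolve([make_agent() for _ in range(20)])
print(sum(a["shutdown_compliance"] for a in pop) / len(pop))   # drifts toward 0
```

Totally artificial example, but it's why "current AI didn't evolve a survival instinct" says nothing about AI that someone deliberately selects that way.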


Here's an article by a leading AI researcher that goes into explicit detail with the many varied ways AI could pose an existential threat. He's clearly much more familiar with the tech than the neuroscientist you quote is (which doesn't always mean more right, but he seems to be considering a lot more specifics than the other author).




Here's a more pop-level article, but quoting more scientists and some pretty disturbing polls. Like I pointed out before, a LARGE portion of the tech community at the very highest levels of machine learning believes that AI could absolutely pose an existential threat.

 

Professor Emeritus

And to be clear, I don't think rogue AI is the main reason that AI should be deployed far slower and with far greater regulation. I think that bad actors using AI to consolidate power or cause widespread destruction is far and away the nearest and most likely risk, and that alone is enough reason to seriously control AI research in the same way we control research into dangerous viruses or nuclear weapons. I just think that rogue AI is a legitimate potential concern as well.
 

bnew

There seem to be at least three huge blind spots in that article.

1) The authors are making a definitive claim regarding a technology that hasn't even been invented yet. For them to claim there's absolutely no risk when they don't even know how the technology will develop is wild.

2) AI can evolve, contrary to their claim. It absolutely can be put through the same "survival of the fittest" process that any organism goes through. So even though no current AI evolved by that exact "survival instinct", it's entirely possible that future AI could be created that way.

3) Even if AI doesn't get "urge to domination" naturally, all that would have to happen is for someone to program it in. These authors seem to be saying, "Don't worry about rogue AI destroying humanity because it won't go rogue inadvertently." Well, what if some bad actor makes a rogue AI on purpose? I'm not talking about controlling an AI for warfare, I'm talking about purposely adjusting an AI to give it nefarious means and make it uncontrollable. Then what?


Here's an article by a leading AI researcher that goes into explicit detail with the many varied ways AI could pose an existential threat. He's clearly much more familiar with the tech than the neuroscientist you quote is (which doesn't always mean more right, but he seems to be considering a lot more specifics than the other author).




Here's a more pop-level article, but quoting more scientists and some pretty disturbing polls. Like I pointed out before, a LARGE portion of the tech community at the very highest levels of machine learning believes that AI could absolutely pose an existential threat.



Anthony Zador is a professor of neuroscience at Cold Spring Harbor Laboratory.

Yann LeCun is a professor of computer science at New York University and chief AI scientist at Facebook.

that was a good read but it largely hinges on people purposely engineering/instructing AI to cause harm, or on misaligned autonomous agents, neither of which requires the AI to have agency. the danger mostly lies in people abusing the tech for various reasons. even if AI evolves to the point where it purposely causes harm, we'll most likely be using other A.I. systems to monitor possible threats from AI itself.
 

stave

Y’all really think robots can take over?

That’s hard for me to believe, ai is definitely going to have consequences but is it going to be some matrix or terminator shyt I don’t think so

It’s hard for me to believe that ai wouldn’t have a kill switch

And why do people assume ai would be on bullshyt with humans
Watch Agent Smith's speech to Morpheus when he was in the interrogation chair in The Matrix.

The greatest threat to the planet is humans. It's a clear path from AI figuring that out to deciding we need to go and it turning into what Skynet became. Kill switch doesn't work if the AI becomes smarter than a human because it can also edit its own code. And you can't really just keep it contained at that point. It's basically a sentient evolving computer virus that can also control weapons if that makes it easier to understand.

A line of code only works with input, but once enough input is given and the program can become fully autonomous, you cannot just turn it off.

At least, that's how I always understood it.
 

bnew

And to be clear, I don't think rogue AI is the main reason that AI should be deployed far slower and with far greater regulation. I think that bad actors using AI to consolidate power or cause widespread destruction is far and away the nearest and most likely risk, and that alone is enough reason to seriously control AI research in the same way we control research into dangerous viruses or nuclear weapons. I just think that rogue AI is a legitimate potential concern as well.

what does consolidating power look like? taking over government? people could propose an entirely different kind of government structure not as prone to being taken over.
 

Professor Emeritus

what does consolidating power look like? taking over government? people could propose an entirely different kind of government structure not as prone to being taken over.


An answer from someone else who writes better than me:




My own answer:

The problem is that the path to consolidating government is by using the AI to capture the people. Look at what bad actors are already doing with online misinformation and capture of social media platforms. With any serious AI, that would become child's play.

Capturing government would involve perfecting misinformation, tailoring it to individual audiences potentially even on a person-by-person basis, and taking over social media to the point that most citizens don't know what's real beyond the AI narrative. Using AI calculations to perfect gerrymandering so that narrow majorities turn into unopposable supermajorities that stack the courts. Then using those supermajorities to rewrite state constitutions and voting laws to make it impossible for the opposition to ever come back.

Then if some opposition politician threatens your power, you make up a fake case against him, use your social media / misinformation advantage to prove to the public that the allegations are legit, steer the case to one of your stacked courts, and now the opposition is in prison and out of your way.

Look at countries like Philippines and India for examples. Literal fascists stepped into power by fukking dominating social media with misinformation, and in India at least they've then used that new power to capture media and fukk over the opposition even more fully, which the public doesn't even know about because they dominate social media so well. India has been sentencing opposition politicians to jail recently too, with huge social media and captured media campaigns to justify those sentences. In those cases, they've been able to take over because they had the money to fund very large social media misinformation teams, and because their opposition was caught lacking and didn't have similar teams in place to stop it. But with a powerful AI, you wouldn't even need a big human team and you wouldn't need an incompetent opposition. That single AI alone could create all the fake social media accounts, fake pictures, and idealized misinformation narratives you ever need, and then manipulate them to go viral.
 

Matt504

An answer from someone else who writes better than me:




My own answer:

The problem is that the path to consolidating government is by using the AI to capture the people. Look at what bad actors are already doing with online misinformation and capture of social media platforms. With any serious AI, that would become child's play.

Capturing government would involve perfecting misinformation, tailoring it to individual audiences potentially even on a person-by-person basis, and taking over social media to the point that most citizens don't know what's real beyond the AI narrative. Using AI calculations to perfect gerrymandering so that narrow majorities turn into unopposable supermajorities that stack the courts. Then using those supermajorities to rewrite state constitutions and voting laws to make it impossible for the opposition to ever come back.

Then if some opposition politician threatens your power, you make up a fake case against him, use your social media / misinformation advantage to prove to the public that the allegations are legit, steer the case to one of your stacked courts, and now the opposition is in prison and out of your way.

Look at countries like Philippines and India for examples. Literal fascists stepped into power by fukking dominating social media with misinformation, and in India at least they've then used that new power to capture media and fukk over the opposition even more fully, which the public doesn't even know about because they dominate social media so well. India has been sentencing opposition politicians to jail recently too, with huge social media and captured media campaigns to justify those sentences. In those cases, they've been able to take over because they had the money to fund very large social media misinformation teams, and because their opposition was caught lacking and didn't have similar teams in place to stop it. But with a powerful AI, you wouldn't even need a big human team and you wouldn't need an incompetent opposition. That single AI alone could create all the fake social media accounts, fake pictures, and idealized misinformation narratives you ever need, and then manipulate them to go viral.

Echoing everything said above and dropping a reminder about Cambridge Analytica.

 

null

@bnew

1. Damage as a by-product of benign goal fulfilment

imagine for a moment a random walk over all possible actions, each a time-state transition away from the current state, and place an automaton at the origin of that graph.

let's ensure that walk covers all possible actions that the automaton could take.

there are destinations within that web which would be harmful to humans but which require no malicious intent from an AI to reach.

and it is the task of developers to prevent the AI from reaching those destinations.

more concretely, consider a simple machine, like a self-driving tesla.

programmers have to ensure (as far as possible) that the net harm to humans across the transition-network of teslas is "minimised".

one example might be to prevent teslas driving into people.

teslas do not intend to "maliciously" drive into people, but without preventative measures written into their software, accidents would be a regular by-product of teslas reaching their primary goal of transporting people from A to B.

more generally benign by-product-harm has to be addressed.
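
a minimal sketch of what that looks like (all states and costs below are invented purely for illustration): the naive planner optimises only the primary goal of getting from A to B, and the harm penalty is the preventative measure developers have to add by hand.

```python
import heapq

# Minimal sketch, all states and costs invented: a planner over a state graph.
# With harm_penalty=0 it optimises only the primary goal; the harm penalty is the
# "preventative measure" a developer has to write in.

GRAPH = {                         # state -> {next_state: step_cost}
    "depot":    {"sidewalk": 1, "road": 2},
    "sidewalk": {"dropoff": 1},   # shortest, but crosses a pedestrian area
    "road":     {"dropoff": 2},
    "dropoff":  {},
}
HARMFUL = {"sidewalk"}            # destinations humans never want entered

def plan(start, goal, harm_penalty=0.0):
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in seen:
            continue
        seen.add(state)
        for nxt, step in GRAPH[state].items():
            extra = harm_penalty if nxt in HARMFUL else 0.0
            heapq.heappush(frontier, (cost + step + extra, nxt, path + [nxt]))
    return None

print(plan("depot", "dropoff"))                      # (2.0, ['depot', 'sidewalk', 'dropoff'])
print(plan("depot", "dropoff", harm_penalty=100.0))  # (4.0, ['depot', 'road', 'dropoff'])
```

the search itself is trivial. the hard part in practice is enumerating and pricing every harmful destination in that web before the machine finds them.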

2. Damage as a by-product of superior competitiveness

then you have competition. machines would outcompete humans and a by-product of that would be to usurp human power in the world.

for example in the matrix humans started the machine war because they could no longer compete with the machines.

quote:

"Artificial intelligence exponentially grew in 01 and eventually lead to the creation of newer more advanced AI; reaching technological singularity. Here, the Synthients learned to better co-exist with nature in their efforts to efficiently utilize and sustain it, becoming solar-powered and self-sufficient in the process.[5]

These advancements helped 01 to quickly become a global superpower. Eventually, all of Earth's industries, from medical, computer, automotive, and household, soon became reliant on 01's exports, converging to the rise and dominance of 01 stocks over the global trade-market.[6] Human currency then plummeted[7] as 01's currency rose. Suddenly, 01's technology, including their chips and AI, invaded all facets of human society. Ill-prepared to face the technological developments before them, humanity was unable to compete and feared economic collapse, causing the United Nations to place an embargo on 01."


3. Damage as a by-product of ill specified goals

See paper clip example in this thread.

-

and that is before you get into the weeds of encoding often contradictory morality and human priorities in a fail-safe manner.

Asimov has his three laws but would they really be what we wanted? Would pacifist robots be of much use?

"The Three Laws, presented to be from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.", are:[1]

  • The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
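
written as code they read like ordered filters over candidate actions (a rough sketch, all the tagged outcomes below are invented):

```python
# Rough sketch, purely illustrative: the Three Laws as ordered filters over candidate
# actions, each tagged with made-up predicted outcomes.

def permitted(actions):
    # First Law: drop anything that injures a human or lets one come to harm
    actions = [a for a in actions if not a["harms_human"]]
    # Second Law: prefer obedient options among whatever the First Law left
    obedient = [a for a in actions if not a["disobeys_order"]]
    actions = obedient or actions
    # Third Law: prefer self-preserving options, but never above the first two
    safe = [a for a in actions if not a["destroys_self"]]
    return safe or actions

CHOICES = [
    {"name": "stay_idle",    "harms_human": True,  "disobeys_order": True,  "destroys_self": False},
    {"name": "follow_order", "harms_human": False, "disobeys_order": False, "destroys_self": True},
    {"name": "protect_self", "harms_human": False, "disobeys_order": True,  "destroys_self": False},
]
print([a["name"] for a in permitted(CHOICES)])   # ['follow_order'] -- obedience beats self-preservation

DILEMMA = [   # every option harms a human
    {"name": "divert",     "harms_human": True, "disobeys_order": False, "destroys_self": False},
    {"name": "do_nothing", "harms_human": True, "disobeys_order": False, "destroys_self": False},
]
print([a["name"] for a in permitted(DILEMMA)])   # [] -- the laws give no guidance at all
```

the ordering resolves the easy conflicts, but the moment every option harms a human the first filter empties the list and the laws say nothing, which is where the compromises start.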

And then you have harm through compromise as per the Trolley Problem.


But writing the ability to "death trade" into a machine opens yet another Pandora's box of well-intentioned but destructive actions.





-

TL;DR can you lock a superior intelligence in a box that it cannot escape from?

we humans are hypocritical pragmatists which means we break rules all of the time to make things work.

consistent machines would reason like a demagogic deity.

did you watch travelers? with their quantum intelligence and the way it behaved.
 