REVEALED: Open A.I. Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

that guy

Superstar
Joined
Jan 25, 2014
Messages
5,236
Reputation
548
Daps
17,042
This is where you clearly don't understand the difference between existing neural nets and General AI.

Neural nets already prove that computers can self-program; they just have restrictions on goals and access to information. General AI will be a huge evolutionary step above that.






Of course we aren't there yet. So let's pick a date. Because at the current rate, we are absolutely going to reach a point where the technological potential is there.






This is bullshyt. There are a LOT of experts in the field who think that AI could destroy humanity.




And those are just the researchers, with computer scientists and programmers being famously myopic and unaware of the potential social implications of their work. The true risk is likely much greater.

Think about that for a moment. These are the most relevant experts in the field, and half of them think there's at least a 10% chance that AI will destroy humanity or something of similarly negative impact. If you had a virus with a 10% or greater chance of destroying humanity, would you be allowed to study it freely, to perform human tests with it, to release it in the general population? fukk no, there would be massive restrictions on what you could do with that virus; you aren't even legally allowed to develop such viruses as tests, and if one did exist it wouldn't be allowed outside of the most secure facility. But these computer programs, which have the potential to be every bit as dangerous, are being experimented on right in the public sphere with no regulation at all.
I don’t see the point of your first paragraph. I already explained to you that neural nets have to be extensively trained by highly intelligent computer programmers, like a toddler being potty trained. That is a fact that’s not debatable. Generative AI works by recognizing patterns and then generating a response based on the identified patterns. It is literally the opposite of spontaneous thought. Generative AI, like all AI models, needs to be fed vast amounts of informational input in order to generate an output, as well as a prompt in order to do anything at all. There’s a reason why a powerful tool like ChatGPT is free. Every prompt is helping train the AI by teaching it how we communicate.

The second paragraph is where I don’t think you understand how computers work on a basic level. A computer, by definition, can’t develop sentient thought. A computer is simply input and output. A computer is not physically capable of spontaneous thought; it’s capable of simulating thought based on the information it has available. Computers lack self-awareness, context, perception, etc. Human thought arises out of real-life experiences, emotions, and interactions, which a computer is incapable of experiencing.

Sometimes you have to stop and think about what you’re arguing against instead of disagreeing for the sake of arguing. You are not even arguing against the point that I’m making.
The point being that there are no respectable AI experts who think AI will destroy humanity by becoming spontaneous entities. They think AI will destroy humanity based on the implications of the technology, which is what I have been saying.

AI is dangerous for the same reason nuclear energy is: because of what humans are capable of doing with the technology, not the technology itself. Your own article is parroting what I have been saying in this thread; I'm pretty sure you didn’t read it.

Here’s a quote from your article that you didn’t read:

Deepmind founder:
“I think it’s going to be the most beneficial thing ever to humanity, things like curing diseases, helping with climate, all of this stuff. But it’s a dual-use technology — it depends on how, as a society, we decide to deploy it — and what we use it for.”
 

Ethnic Vagina Finder

The Great Paper Chaser
Bushed
Joined
May 4, 2012
Messages
53,950
Reputation
2,486
Daps
152,952
Reppin
North Jersey but I miss Cali :sadcam:
Humans base intelligence on human intelligence. It’s the same reason why we think aliens will automatically try to go to war and colonize Earth: because that’s what humans have done.

Human intelligence is laughably limited compared to the scale of the universe. So unless AI can discover the 4th dimension, wormhole travel, or new equations and elements, it won’t surpass humans.

The biggest threat AI poses is weapons development and criminal activity.
 

null

...
Joined
Nov 12, 2014
Messages
28,835
Reputation
4,851
Daps
46,018
Reppin
UK, DE, GY, DMV


good video... one that everyone should watch.

note it was the non-techie with the argument "machines will not hurt us and we should stop assuming that they would think like us"

non-techie == the one on the right hand side towards the end (i'm watching while programming ...). EDIT: ylecun

shocking how little some know.

more study required.

good links :ehh:
 
Joined
May 16, 2012
Messages
39,602
Reputation
-17,821
Daps
84,254
Reppin
NULL
Yall are scared for no reason. Superintelligent AI already exists. It's what created our world. All we're doing now is creating AI in a simulation created by AI. A matrix within the matrix. Thus there's no reason to fear since it's already written into the code of our reality.
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
50,602
Reputation
19,531
Daps
201,503
Reppin
the ether
I don’t see the point of your first paragraph. I already explained to you that neural nets have to be extensively trained by highly intelligent computer programmers, like a toddler being potty trained. That is a fact that’s not debatable. Generative AI works by recognizing patterns and then generating a response based on the identified patterns. It is literally the opposite of spontaneous thought. Generative AI, like all AI models, needs to be fed vast amounts of informational input in order to generate an output, as well as a prompt in order to do anything at all. There’s a reason why a powerful tool like ChatGPT is free. Every prompt is helping train the AI by teaching it how we communicate.


This is like saying that scientists can't think because they have to be fed 20 years of information from textbooks and teachers before they do anything on their own. :mjlol:

ALL intelligences need to be trained by other intelligences. People are no different from computers in that respect. But when you try and claim that feeding a neural net a shytload of information is somehow the equivalent of potty-training a toddler, that's cap.

You claimed "computers can only do what we program them to do", and that's nonsense. A chess computer, to use a really low bar, can make moves and devise strategies that no human ever told it to use, that in fact have never been used before in human history. Once it is sufficiently intelligent to understand the game and has a base of knowledge to work from (like ALL intelligences require, human or artificial), then it can train itself to a degree far beyond any human training.

And a general AI will be waaaay fukking beyond a simple chess computer.




The second paragraph is where I don’t think you understand how computers work on a basic level. A computer, by definition, can’t develop sentient thought. A computer is simply input and output. A computer is not physically capable of spontaneous thought; it’s capable of simulating thought based on the information it has available. Computers lack self-awareness, context, perception, etc.

lol. Please, since you're apparently an expert on this matter, what specifically can an organic brain do that an artificial brain is incapable of doing, and why? You claim a computer is simply input and output... are not all brains input and output? What specifically occurs within a human brain which enables it to be something more, which could not occur in an artificial brain as well?




Human thought arises out of real life experiences, emotions, and interactions which a computer is incapable of experiencing.

Why is a computer incapable of real life experiences and interactions?

[emotions should be lumped in with the previous question]




Sometimes you have to stop and think about what you’re arguing against instead of disagreeing for the sake of arguing. You are not even arguing against the point that I’m making.
The point being that there are no respectable AI experts that think AI will destroy humanity by becoming spontaneous entities.

lol, artificial intelligence experts have been worried about this exact issue starting all the way back from Alan Turing and I.J. Good. Are you going to claim they're not respectable? How about Marvin Minsky, Bill Joy, Nick Bostrom, Roman Yampolskiy, Stuart Russell, and hundreds more that could be listed?

Here, for example, is an article from Geoff Hinton, winner of the 2018 Turing Award and considered one of the godfathers of modern A.I.




“The alarm bell I’m ringing has to do with the existential threat of them taking control,” Hinton said Wednesday, referring to powerful AI systems and speaking by video at EmTech Digital 2023, a conference hosted by the magazine MIT Technology Review. “I used to think it was a long way off, but I now think it's serious and fairly close.”



“Smart things can outsmart us,” he said.



Hinton said setting ‘guardrails’ and other safety measures around AI sounds promising but questioned their effectiveness once AI systems are vastly more intelligent than humans. “Imagine your two-year-old saying, ‘my dad does things I don't like so I'm going to make some rules for what my dad can do,’” he said, suggesting the intelligence gap that may one day exist between humans and AI. “You could probably figure out how to live with those rules and still get what you want.”




“We evolved; we have certain built-in goals that we find very hard to turn off - like we try not to damage our bodies. That's what pain is about,” he said. “But these digital intelligences didn't evolve, we made them, so they don't have these built in goals. If we can put the goals in, maybe it'll be okay. But my big worry is, sooner or later someone will wire into them the ability to create their own sub goals … and if you give someone the ability to set sub goals in order to achieve other goals, they'll very quickly realize that getting more control is a very good sub goal because it helps you achieve other goals.”



“It's not clear there is a solution,” he said. “I think it's quite conceivable that humanity is just a passing phase in the evolution of intelligence.”



“My one hope is that, because if we allowed it to take over it will be bad for all of us, we could get the U.S. and China to agree like we could with nuclear weapons, which were bad for all of us,” he said. “We're all in the same boat with respect to the existential threat.”





You clearly don't have the slightest fukking idea what you're talking about. You've taken the truth "some AI experts aren't afraid of General AI ever becoming self-determinate" and from there tried to falsely claim that they all think that way.
 

Amestafuu (Emeritus)

Veteran
Supporter
Joined
May 8, 2012
Messages
69,237
Reputation
13,568
Daps
293,821
Reppin
Toronto
July 26, 2026: Skynet goes active and starts learning at a geometric rate. Scared and in a panic, they try to pull the plug, but it is too late... :francis:

All predictable. The question is, do we pause progression or carry on? This was an inevitable crossroads. The most important thing is how we make decisions from here on out.

You mean like how the current administration is forbidding high end chips from being sold/exported to other countries?


But that's still more of a competitive move than anything else... but yeah, you're right... the best defense we have is to continue to develop this tech as a leader in this space, but that comes with its own set of problems and could/will have negative effects on the rest of the world, not just us.
They can't stop other countries from developing shyt; those chips are mostly made in Taiwan and restricted to China. If China could make ones as good, then that block would be meaningless.
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
50,602
Reputation
19,531
Daps
201,503
Reppin
the ether
Please Expound? I don't know them


Larry Summers is a hardcore neoliberal economist, hedge fund guy, used to be chief economist of the World Bank, worked in the Clinton Administration and advised Obama on the banking crisis. He's a huge worshipper of the market, wants to privatize everything, and opposed environmental regulations and steps to prevent climate change on multiple occasions. He blamed California's energy crisis in 2000 on too much government regulation even though it came AFTER privatization and was proven to be caused by corporate market manipulation. He was a huge proponent of the repeal of Glass–Steagall, which removed a lot of banking regulations and helped cause the 2007-2008 financial crisis and recession. Then he advised Obama during the bailout, and was a big part of trying to make it less hard on the banking executives, with more tax cuts and less help for other people. He's also been involved in multiple scandals regarding conflicts of interest between his regulatory decisions and his own personal business profits.


Adam D'Angelo is an old high school friend of Mark Zuckerberg. He started out as a high-level executive at Facebook for a while before leaving to found Quora, where he's still CEO. Quora is probably the most misinformation-friendly "info" site online. White supremacists and racists run freely, bullying is easy, and basically whoever has the most followers "wins" any conversation. There's zero responsible moderation; most moderation is in the hands of whoever answers the question. Basically every feature of the site is designed to favor engagement over accuracy.


Neither one of them has shown any signs to me that they have any desire to be careful, or regulate, or push something other than their personal profits. They'd been in my category of "gross people in the business world" even before any of this story broke.


I didn't really know Bret Taylor, but he was CTO of Facebook and Chairman of the Board of Twitter, which are not good associations in my book.


So basically we have a neoliberal dinosaur and two Facebook guys with some control over the direction of humanity. What could go wrong?
 
Last edited:

Amestafuu (Emeritus)

Veteran
Supporter
Joined
May 8, 2012
Messages
69,237
Reputation
13,568
Daps
293,821
Reppin
Toronto
Please Expound? I don't know them
This
Larry Summers is a hardcore neoliberal economist, hedge fund guy, used to be chief economist of the World Bank, worked in the Clinton Administration and advised Obama on the banking crisis. He's a huge worshipper of the market, wants to privatize everything, and opposed environmental regulations and steps to prevent climate change on multiple occasions. He blamed California's energy crisis in 2000 on too much government regulation even though it came AFTER privatization and was proven to be caused by corporate market manipulation. He was a huge proponent of the repeal of Glass–Steagall, which removed a lot of banking regulations and helped cause the 2007-2008 financial crisis and recession. Then he advised Obama during the bailout, and was a big part of trying to make it less hard on the banking executives, with more tax cuts and less help for other people. He's also been involved in multiple scandals regarding conflicts of interest between his regulatory decisions and his own personal business profits.


Adam D'Angelo is an old high school friend of Mark Zuckerberg. He started out as a high-level executive at Facebook for a while before leaving to found Quora, where he's still CEO. Quora is probably the most misinformation-friendly "info" site online. White supremacists and racists run freely, bullying is easy, and basically whoever has the most followers "wins" any conversation. There's zero responsible moderation; most moderation is in the hands of whoever answers the question. Basically every feature of the site is designed to favor engagement over accuracy.


Neither one of them has shown any signs to me that they have any desire to be careful, or regulate, or push something other than their personal profits. They'd been in my category of "gross people in the business world" even before any of this story broke.


I didn't really know Bret Taylor, but he was CTO of Facebook and Chairman of the Board of Twitter, which are not good associations in my book.


So basically we have a neoliberal dinosaur and two Facebook guys with some control over the direction of humanity. What could go wrong?
If unleashing superintelligent AI dooms humanity, what's in it for them? Profits are meaningless in that scenario. This can't even be purely about greed. Maybe recklessness, if they still want to roll the dice and press on with this development.
 