REVEALED: OpenAI Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

null

...
Joined
Nov 12, 2014
Messages
29,220
Reputation
4,894
Daps
46,431
Reppin
UK, DE, GY, DMV
I don’t think y’all understand AI. Computers are very stupid. They can only do what humans program them to do. And even then you have to hold their hand to make them do what you want. We are nowhere near AI remotely taking over the world :mjlol:

let me explain something about neural network engines.

we moved from telling computers what they know and how they should act to focusing on engines that could enable them to learn for themselves.
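
roughly: the old way was hand-coding the rule, the new way is showing the machine examples and letting it find the rule itself. a toy sketch of the idea (plain NumPy, a single learned "neuron", nothing close to the scale of real engines):

import numpy as np

# instead of hand-coding the AND rule, let a one-neuron "network" learn it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([0, 0, 0, 1], dtype=float)                      # desired outputs (AND)

w, b = np.zeros(2), 0.0
for _ in range(5000):                          # plain gradient descent
    pred = 1 / (1 + np.exp(-(X @ w + b)))      # sigmoid "neuron"
    grad = pred - y                            # cross-entropy gradient
    w -= 0.5 * (X.T @ grad) / len(X)
    b -= 0.5 * grad.mean()

print((1 / (1 + np.exp(-(X @ w + b)))).round(2))
# outputs come out low for the three 0-cases and high for [1,1]:
# the rule was learned from examples, never written into the code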



the famous breakthrough came from researchers who met at the world-renowned UCL about ten years ago.

"The start-up was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in September 2010.[20][21] Hassabis and Legg first met at the Gatsby Computational Neuroscience Unit at University College London (UCL).[22]"

 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,656
Daps
203,838
Reppin
the ether
What is AI reproducing? I think we underestimate nature

Depends on what its programming goals are, but the obvious ones are controlling more computer power or controlling more physical infrastructure so that it can achieve its programmed goals more efficiently. And those goals may be set by an outside programmer or may be self-programmed.




It would be illogical from the AI's point of view

How do you know what is or isn't logical to an AI? A neural net can be programmed to seek whatever goals you wish it to seek. A General AI may be able to supersede/defy that programming, but we have no clue under what circumstances it would do that. So it's impossible for us to put any explicit limits on what AI will or will not do.
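
As a toy illustration (tabular Q-learning here rather than a neural net, but the point is the same): the agent's entire "motivation" is one reward line, and flipping a single number reverses the behavior it learns.

import random

GOAL, N, ACTIONS = 4, 5, (-1, +1)      # 5 states in a row; step left or right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

for _ in range(3000):
    s = random.randrange(N)
    for _ in range(20):
        a = (random.choice(ACTIONS) if random.random() < 0.2
             else max(ACTIONS, key=lambda x: Q[(s, x)]))      # mostly greedy
        s2 = min(max(s + a, 0), N - 1)                        # move, stay in bounds
        r = 1.0 if s2 == GOAL else 0.0                        # the "goal" is just this line
        Q[(s, a)] += 0.1 * (r + 0.9 * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N)})
# with GOAL = 4 every state learns "go right"; set GOAL = 0 and it learns the reverse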




an emotionless "moral-less" AI could treat humans like any other resource.

rid yourself of your human-centric concepts.

"kill" for such a computer would be just a change of state.

no more significant than burning oil for energy or felling trees.

if AI did start caring about life per se it might have a word or two to say about human behaviour.

we kill almost 73 billion chickens per year for food. and consume many times more than that in eggs.

Yup. And who knows if it would do that? The issue isn't assuming it would do such things, the issue is the uncertainty combined with the potential for extreme power.




the first humans to crack true strong AI will be the humans in charge ...

History suggests that great technological advancements often quickly fall out of the hands of those who created them and end up captured by people who have accumulated other types of power.
 

Gritsngravy

Superstar
Joined
Mar 11, 2022
Messages
8,150
Reputation
572
Daps
16,461
Depends on what its programming goals are, but the obvious ones are controlling more computer power or controlling more physical infrastructure so that it can achieve its programmed goals more efficiently. And those goals may be set by an outside programmer or may be self-programmed.






How do you know what is or isn't logical to an AI? A neural net can be programmed to seek whatever goals you wish it to seek. A General AI may be able to supersede/defy that programming, but we have no clue under what circumstances it would do that. So it's impossible for us to put any explicit limits on what AI will or will not do.






Yup. And who knows if it would do that? The issue isn't assuming it would do such things, the issue is the uncertainty combined with the potential for extreme power.






History suggests that great technological advancements often quickly fall out of the hands of those who created them and end up captured by people who have accumulated other types of power.
No one can come up with a decent argument to eliminate humans; that's why it's illogical, regardless of how unpredictable AI can be

I need to hear an answer from the poster I replied to on what AI reproduces, cause I highly doubt u and that poster would come to the same conclusion on that
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,656
Daps
203,838
Reppin
the ether
No one can come up with a decent argument to eliminate humans; that's why it's illogical, regardless of how unpredictable AI can be


Goal: Solve climate change.

There, suddenly you have an easy argument to eliminate humans. If a general AI is given the goal to solve climate change, it could easily come to the decision that eliminating humans is the only practical way to achieve that.
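
A contrived sketch of that failure mode: if "solve climate change" is handed to an optimizer as nothing more than "minimize emissions," the optimum it finds is the empty world, because nothing in the objective says humans should survive.

from itertools import product

def emissions(industry: float, population: float) -> float:
    return 10 * industry + 2 * population        # toy emissions model

# candidate "policies": what fraction of industry / population to keep
candidates = product([0.0, 0.5, 1.0], repeat=2)
best = min(candidates, key=lambda c: emissions(*c))
print(best)   # (0.0, 0.0) -- emissions "solved" by keeping nothing alive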


There are literally hundreds of other examples. People who worry about existential threats have already come up with them, so it's silly to say, "no one can come up with a decent argument" when there's an entire field of study that has done it. Read the actual research before making declarative statements about what is out there.




I need to hear an answer from the poster I replied to on what AI reproduces, cause I highly doubt u and that poster would come to the same conclusion on that

Why do we need to? There are many possible answers to the question, and all of them are on the table.
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,656
Daps
203,838
Reppin
the ether
Sounds like the typical STEM Coli breh that talks down on non-STEM majors, and says that everyone else deserves poverty because they can do high-end AI that does Calculus 3 in the blink of an eye.


In my experience, most STEM folk aren't openly malicious towards anyone. They're just naive and have tunnel vision. At my university, most students were apolitical. They generally didn't pay attention to major cultural issues or ethical dilemmas, they didn't get involved in activism or big political movements in either direction. They were just interested in their own tech and money/power in their own lives.

The main ethical issue I worry about from STEM folk doesn't have anything to do with negative thoughts about other people. It's that they don't even have the worldview to fully factor in how their work will affect other people until it's too late.

Just look at social media. At what point did all those tech bros begin considering what impact social media would have on the world? Some of them DID start having such thoughts eventually, but it was way too late, after the impacts had already materialized. And then they were just pushed to the side in favor of the people who wanted to keep expanding at all costs.




don't know why you're so sure it'll lead to catastrophe. this is the system functioning as it always has and has little to do with the morality of developers.


That's my whole point. "The system functioning as it always has" is why tech bros have always unquestioningly gotten behind every new military weapon, every new mining project and new polluting industry, every new government surveillance tool, every new society-destroying social media platform. It's why those tech bros who work for corporations have done virtually nothing to stop global warming, limit pesticides, prevent deforestation, keep power out of the hands of dictators, etc.

Developers have never been moral. They do whatever their bosses want them to do because that's what gets them more money and power. And when they don't have a boss, they form a startup whose goal is to make as much money as possible as quickly as possible. That's the goal our economic system has given them, so that's the goal they invariably accept, because they don't know any better.




people want to be at the forefront of advancing technology and scientific discoveries. openai even tells their investors that if they succeed there's a risk money may not matter anymore.


Both can be true. Tech bros can be motivated by money, and tech bros can be motivated by dreams of power, of fame, of being the person who figures it all out. Either way, when they work for a corporation you get the exact same result.

If they were purely interested in advancing scientific discoveries and tech, then they would work for an academic institution or a government research center. No one works for a corporation without being at least partially financially motivated; the culture there is so all-encompassing that money rules all even if you don't want it to.




while energy is finite (and robot input materials finite) money will continue to matter.

solve fusion energy (or something similar) then maybe not.


Our economic system ensures that money will matter even if we have infinite energy. The system would collapse if money didn't matter, so those who rule the system will simply limit access to energy, or limit access to other resources (land, hardware, minerals, etc.) that people need in order to take advantage of energy.
 

Ciggavelli

|∞||∞||∞||∞|
Supporter
Joined
May 21, 2012
Messages
28,006
Reputation
6,572
Daps
57,354
Reppin
Houston
What is AI reproducing? I think we underestimate nature

Is it even possible for AI to have such desires? And we're not going to talk infrastructure, cause I haven't heard anybody say anything about this obstacle
That’s a good question. Maybe it would be to make replicas or variants to farm out menial tasks. Maybe it would be to get more and more energy to achieve more complex tasks. Or it could be to create larger data warehouses. If the goal is to learn or create, that seems limitless to me, but it requires the infrastructure you mentioned. Resources are finite on earth, so AI would need to compete with biological life or expand to other planets. It’s really something we can’t predict, though. Is the main goal of AI to survive? Maybe at a certain level it will be (maybe). With survival comes the need for resources, and probably reproduction, to survive “better” and longer.

I’m not 100% sure the evolutionary model will work for AI, but it may, especially since it’s being trained by biological life. It’s even possible AI is just a potential evolutionary path for biological life.
 

null

...
Joined
Nov 12, 2014
Messages
29,220
Reputation
4,894
Daps
46,431
Reppin
UK, DE, GY, DMV
History suggests that great technological advancements often quickly fall out of the hands of those who created them and end up captured by people who have accumulated other types of power.

that's true but any such AI could build protections that humans would be unable to break.

in the film Automata (for example) the AI built hardware-bound encryption which prevented hardware modifications but could not be understood by humans.

there are plenty of open mathematical questions out there that could give a solver great power.

for example:

 

fact

Fukk you thought it was?
Joined
Apr 30, 2012
Messages
14,613
Reputation
6,034
Daps
59,376
Reppin
How you gonna ROFL with a hollow back?
These people are purely after profits. They have no serious responsibility for what they create. The fact that they're commercializing so fast is wild.
That is the part that I don’t think we have ever seen before. The public usually gets things after all the “smartest guys in the room” have kicked the tires and stripped them of everything that could shift public narratives/discourse, because they don’t want anybody to be able to wield more power/influence than they do. The thing with AI is there is no “cap”, and they have no way to keep the lid on existential outcomes. How have they been making movies about this very thing since the ’80s (Maximum Overdrive being a movie that always scared the shyt out of me when I was younger), and now they’re playing scared? This was always going to be the outcome
 

null

...
Joined
Nov 12, 2014
Messages
29,220
Reputation
4,894
Daps
46,431
Reppin
UK, DE, GY, DMV
That’s a good question. Maybe it would be to make replicas or variants to farm out menial tasks. Maybe it would be to get more and more energy to achieve more complex tasks. Or it could be to create larger data warehouses. If the goal is to learn or create, that seems limitless to me, but it requires the infrastructure you mentioned. Resources are finite on earth, so AI would need to compete with biological life or expand to other planets. It’s really something we can’t predict, though. Is the main goal of AI to survive? Maybe at a certain level it will be (maybe). With survival comes the need for resources, and probably reproduction, to survive “better” and longer.

I’m not 100% sure the evolutionary model will work for AI, but it may, especially since it’s being trained by biological life. It’s even possible AI is just a potential evolutionary path for biological life.

nuclear fusion energy is practically infinite.

(E = mc²)

take that and add a little energy-driven molecular manipulation and you have all the resources you could ever want.

like star trek.

we are already close to solving nuclear fusion without the need for super-intelligent computers.
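
rough numbers as a sanity check (ballpark figures; D-T fusion converts roughly 0.4% of the fuel's mass to energy):

# ballpark energy from fusing 1 kg of deuterium-tritium fuel:
# D + T -> He-4 + n releases ~17.6 MeV per ~5.03 u of fuel,
# i.e. roughly 0.4% of the mass becomes energy (E = mc^2 on the mass defect)
c = 3.0e8                                  # speed of light, m/s
energy = 0.0038 * 1.0 * c**2               # joules from 1 kg of fuel
print(f"{energy:.1e} J")                   # ~3.4e14 J
print(f"~{energy / 4.2e7:,.0f} kg of oil equivalent")   # oil is ~42 MJ/kg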



:hubie:
 

that guy

Superstar
Bushed
Joined
Jan 25, 2014
Messages
5,479
Reputation
648
Daps
18,132
Not exactly true. With neural nets, we can already give computers a goal and then tell them to program themselves. That's how the computers that mastered chess, Go, and LLMs work. They're using strategies humans have never seen before, strategies that humans don't even understand. They figured those out by playing billions of games against themselves until they had found ideal solutions.

That's not general AI. That's just simple neural nets, a technology that's been around for 25+ years and is steadily getting better. When we get General AI, then of course it will be able to set its own goals.

On top of that, what if the issue isn't AI setting its own goals? What if the issue is the ease with which it could get into the hands of a Putin, an Assad, a Duterte? Argentina just elected a literally insane person to the presidency, a right-wing extremist who wants to destroy government and who gets policy goals by talking to the spirits of his dead dogs (Javier Milei). What if he gets his hands on a computer superintelligence, or anyone else like him does?




First off, my main concern is not "AI taking over the world". My concern is AI superseding normal human speedbumps. You don't have to think Al Qaeda is taking over the world to still be concerned about what Al Qaeda could do with more power.

Second, you have to understand inflection points. We were nowhere close to humans flying a heavier-than-air vehicle....until we were. Then the moment it broke, suddenly our ability to fly accelerated very rapidly. We went from literally not being able to get off the ground to flying across the Atlantic Ocean in just 16 years, and into fukking space just 38 years after that. AI will advance much, much faster than flight did.
Neural networks have to be programmed by human beings and extensively trained by human beings, which goes to my point that computers can only do what we program them to do.

You’re comparing humans designing technology such as aircraft and then improving that technology to humans designing software and then that software becoming sentient and destroying the globe. We just aren’t there yet :mjlol:
Most people sensationalizing the power of AI aren't actually computer programmers. Most software engineers will tell you the most dangerous aspect of AI is its effect on the job market, not its ability to gain consciousness and destroy the world
 

that guy

Superstar
Bushed
Joined
Jan 25, 2014
Messages
5,479
Reputation
648
Daps
18,132
let me explain something about neural network engines.

we moved from telling computers what they know and how they should act to focusing on engines that could enable them to learn for themselves.



the famous breakthrough came from researchers who met at the world-renowned UCL about ten years ago.

"The start-up was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in September 2010.[20][21] Hassabis and Legg first met at the Gatsby Computational Neuroscience Unit at University College London (UCL).[22]"


Neural networks require extensive programming and training by some of the smartest people in the world. So once again, computers can only do what they're programmed by humans to do. It's simply a matter of input and output. It's not AI that's dangerous, it's the human beings programming them that are.

AI is dangerous because of the implications it has for the job market, not because of the chance of it becoming sentient and stealing nuclear launch codes :mjlol:
 

Ciggavelli

|∞||∞||∞||∞|
Supporter
Joined
May 21, 2012
Messages
28,006
Reputation
6,572
Daps
57,354
Reppin
Houston
nuclear fusion energy is practically infinite.

(E = mc²)

take that and add a little energy-driven molecular manipulation and you have all the resources you could ever want.

like star trek.

we are already close to solving nuclear fusion without the need for super-intelligent computers.



:hubie:

It’s not just energy though. I thought about the nuclear angle earlier actually. They still need metals and other natural resources. Storing data is physical for one, and the chips need silicon and the like. Then if AI wants to grow, maybe it will want to take mineral gathering up for itself because humans are inefficient. Then it has to compete with biological life.
 

null

...
Joined
Nov 12, 2014
Messages
29,220
Reputation
4,894
Daps
46,431
Reppin
UK, DE, GY, DMV
Neural networks have to be programmed by human beings and extensively trained by human beings

not true.

why you write these untrue things bru ..?



you are talking about Supervised and possibly Reinforcement techniques.

let it be known that computers can bootstrap each other.

why?

because solving a problem and testing whether a result is correct are not the same class of problem (P vs NP).

explained here:


e.g.

that is why, for example, asymmetric keys work.
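
a toy illustration of that asymmetry (trial-division factoring; real asymmetric crypto uses far larger numbers and cleverer algorithms):

def verify(n: int, p: int, q: int) -> bool:
    return 1 < p and 1 < q and p * q == n    # cheap: one multiplication

def solve(n: int) -> tuple[int, int]:
    d = 2
    while d * d <= n:                        # expensive: search grows with n
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

n = 2_147_483_647 * 2_147_483_629            # product of two large primes
print(verify(n, 2_147_483_647, 2_147_483_629))   # True, instantly
# solve(n) would need ~2 billion trial divisions; that gap is the whole point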

Approaches

Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system:

  • Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
  • Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
  • Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that's analogous to rewards, which it tries to maximize.[6]

Although each algorithm has advantages and limitations, no single algorithm works for all problems.[34][35][36]

this also enables zero-trust communication.
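
and bootstrapping in miniature: give a program only the rules of a game (Nim: take 1-3 stones, taking the last stone wins) and let it play itself. no human labels a single move; the only "teacher" is the trivially-checkable outcome:

import random

N = 12                                       # starting pile size
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2, 3) if a <= s}

def pick(s, eps):
    acts = [a for a in (1, 2, 3) if a <= s]
    if random.random() < eps:
        return random.choice(acts)           # explore
    return max(acts, key=lambda a: Q[(s, a)])  # play current best move

for _ in range(50_000):                      # games played against itself
    s, moves = N, []
    while s > 0:
        a = pick(s, 0.3)
        moves.append((s, a))
        s -= a
    for i, (st, ac) in enumerate(reversed(moves)):
        r = 1.0 if i % 2 == 0 else -1.0      # last mover won; alternate credit
        Q[(st, ac)] += 0.1 * (r - Q[(st, ac)])

print({s: pick(s, 0) for s in range(1, N + 1)})
# typically rediscovers the classic strategy: take enough to leave a multiple of 4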




which goes to my point that computers can only do what we program them to do.

You’re comparing humans designing technology such as aircraft and then improving that technology to humans designing software and then that software becoming sentient and destroying the globe. We just aren’t there yet :mjlol:
Most people sensationalizing the power of AI aren't actually computer programmers. Most software engineers will tell you the most dangerous aspect of AI is its effect on the job market, not its ability to gain consciousness and destroy the world
 