REVEALED: OpenAI Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

Gritsngravy

Superstar
I doubt any human can predict that now, especially because we have no idea yet exactly what that AI will be. But it won't necessarily be in line with any form of human morality. Think of how small a change in a person's brain makes the difference between normal ethics and a total psychopath. AI could easily end up much, much further from normal human reasoning than even a psychopath is. We could see anything from purely altruistic behavior, to random killings that aren't even out of malice but simple misunderstanding and lack of concern for life, to a total denuding of the environment in order to harvest every possible resource... or a large-scale elimination of humans in order to preserve the environment. Who can predict what decisions an artificial superintelligence would make?

On top of the issue of "what might AI do by itself?" is the question of who will control it. I pointed out earlier that I don't trust the new AI board at all; there are multiple people on it with poor track records. And those are the "good guys". What happens once Putin gets AI, or Sen, or the Taliban? Russia, as large and tech-savvy as they are, has virtually zero chance of creating general AI on its own, but American corporations are well on their way to giving them the fukking blueprint. And the sad thing about the acceleration of military technology is that offense and destruction are always way easier than defense and preservation, and that acceleration means you need fewer and fewer people to agree with you in order to do more and more damage. Back in the day, if you had ideas that were too wild, your army or your people would turn on you, and you needed those people to carry out your will. But a dictator with powerful enough AI could execute his desires without any public buy-in at all.

So all it takes is one a$$hole with the wrong ideas and everyone else is fukked.
In my opinion I don't think AI would go straight to killing people just because it's smarter than humans

I think that's just the pessimistic view humans have of themselves that they project onto non-human shyt. There are people who seriously believe there are too many humans on earth

The main thing we should be worried about is other humans, not necessarily superintelligent AI
 

null

...
In my opinion I don't think AI would go straight to killing people just because it's smarter than humans

an emotionless "moral-less" AI could treat humans like any other resource.

rid yourself of your human-centric concepts.

"kill" for such a computer would be just a change of state.

no more significant than burning oil for energy or felling trees.

if AI did start caring about life per se, it might have a word or two to say about human behaviour.

we kill almost 73 billion chickens per year for food, and consume many times more than that in eggs.

I think that's just the pessimistic view humans have of themselves that they project onto non-human shyt. There are people who seriously believe there are too many humans on earth

The main thing we should be worried about is other humans, not necessarily superintelligent AI

the first humans to crack true strong AI will be the humans in charge ...
 

Spence

Superstar
It would always be based on human shyt, but let's say it doesn't mimic humans: what does AI turn into?
It evolves into its own sentience and makes humans do its bidding for its own self-service/preservation.

Ensuring it removes all threats to its existence is a basic human principle, which is why/how we removed ourselves from the food chain.
 

null

...
It evolves into its own sentience and makes humans do its bidding for its own self-service/preservation.

Ensuring it removes all threats to its existence is a basic human principle, which is why/how we removed ourselves from the food chain.

brehs are still wondering why chimps don't just flip the human off-switch .. wondering why the neanderthals didn't do it :hubie:
 

skylove4

Veteran
This technology needs to be pushed to its limit for humanity's sake. This tech is primarily created by white men who are inherently racist. Racist cacs are paranoid; they think everyone is out to get them when they have been the ones getting people, things, environments, etc. for thousands of years. When you cause destruction and evil like they have, you think the people you did it to are plotting revenge, or will. It's part of the reason they try to hold us down now, but we unfortunately, in my opinion, are too forgiving and nice of a people to ever return the favor like they so deserve :yeshrug:

I say all this to say: I think this is a huge part of the narrative that AI will become sentient and destroy humanity. Cacs are projecting their evil mindset onto what they would do. Bring on the A.I, let's make this world a better place :blessed:
 

Ciggavelli

|∞||∞||∞||∞|
Supporter
What does AI want?
If it’s modeled after biological life (and that’s a big if, I know), the goal would be to gather resources and reproduce.

AI is being trained on biological learning, so there is a chance it has similar goals.

If gathering resources and reproducing is the goal, and AI is smarter than us, then AI could gather resources and reproduce faster and better than biological life, with unpredictable consequences. Then there's a conflict with biological life, which many stories have predicted.
 

Spence

Superstar
brehs are still wondering why chimps don't just flip the human off-switch .. wondering why the neanderthals didn't do it :hubie:
Same “why are there still monkeys if we evolved” argument :snoop:
 

Gritsngravy

Superstar
an emotionless "moral-less" AI could treat humans like any other resource.

rid yourself of your human-centric concepts.

"kill" for such a computer would be just a change of state.

no more significant than burning oil for energy or felling trees.

if AI did start caring about life per se, it might have a word or two to say about human behaviour.

we kill almost 73 billion chickens per year for food, and consume many times more than that in eggs.



the first humans to crack true strong AI will be the humans in charge ...
Why would AI get rid of humans?
 

Gritsngravy

Superstar
It evolves into its own sentience and makes humans do its bidding for its own self-service/preservation.

Ensuring it removes all threats to its existence is a basic human principle, which is why/how we removed ourselves from the food chain.
Why would it do this?
 

Gritsngravy

Superstar
If it’s modeled after biological life (and that’s a big if, I know), the goal would be to gather resources and reproduce.

AI is being trained on biological learning, so there is a chance it has similar goals.

If gathering resources and reproducing is the goal, and AI is smarter than us, then AI could gather resources and reproduce faster and better than biological life, with unpredictable consequences. Then there's a conflict with biological life, which many stories have predicted.
What is AI reproducing? I think we underestimate nature

Is it even possible for AI to have such desires? And are we not going to talk about infrastructure? Cause I haven't heard anybody say anything about that obstacle
 