REVEALED: Open A.I. Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

Scustin Bieburr

Baby baybee baybee UUUGH
Joined
May 3, 2012
Messages
20,858
Reputation
9,922
Daps
118,869
July 26, 2026 skynet goes active and starts learning at a geometric rate. scared & in panic they tried to pull the plug but it is too late... :francis:

EC3222795DC2961599F7FC0B69E0241C8EC7293E
Good. Maybe the robots will act with more sense than we do.
 

Spence

Superstar
Joined
Jul 14, 2015
Messages
17,472
Reputation
2,857
Daps
45,624
In my opinion that’s a short sighted way of looking at things

You putting human characteristics onto artificial shyt
Because the “artificial shyt” is only going to mimic the best and smartest of humans until it doesn’t have to anymore :comeon:
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,656
Daps
203,838
Reppin
the ether
We've been fukked.

It's 70 degrees in the Midwest on Thanksgiving


Yeah, I comment on the problems with AI shyt when I see it, but it's not my priority in life because it's not my field and there are already clear existential crises we see happening in real time and aren't addressing nearly seriously enough either.

A lot of those are caused by corporations and their minions putting their ethical obligations aside for profit too.
 

Morethan1

Veteran
Supporter
Joined
Apr 30, 2012
Messages
49,717
Reputation
10,652
Daps
159,864
Reppin
Midwest
If we've in fact reached our zenith in human intelligence I don't see the problem? Majority of the folks in this thread see this as a bad thing but I look at it as being a opportunity to help humanity..

  1. End cancer
  2. Help with climate change
  3. Understanding dark matter
  4. What is the universe
Plus a bunch of other things that have gone unexplained for decades. We look at aliens and A.I as something that will try and end human civilization, but maybe A.I. will be here to help/assist/work with humanity.

Happy Thanksgiving
 

Scustin Bieburr

Baby baybee baybee UUUGH
Joined
May 3, 2012
Messages
20,858
Reputation
9,922
Daps
118,869
Nah the first thing they'll do is target humanity... :francis:
That's what I'm saying.

You take a species and give it everything it needs to survive and be happy.

Somehow that species ends up drying up the rivers, polluting the air, driving species to extinction and destroying the very thing that it needs to live(our ecosystem). Humanity is literally engineering its own extinction. A logical intelligence would conclude very quickly that this species will cause its own extinction while taking as many plants and animals with it as possible. Either through continued neglect and focused poisoning of the environment with fossil fuels or dropping nuclear weapons which will literally make the ground worthless for life to live on. Or at the rate we're headed: both.

An intelligence may conclude that if this planet is to be kept a viable place for life to exist in our known universe, humanity will either have to be "reprogrammed" or eliminated. What that "reprogramming" may look like is a mystery. Maybe we'll be given endless distractions while the machines try to clean up the environment and secretly drug us with aerosol medication to keep us unmotivated to do anything else other than sleep, eat, and spend time bullshytting in virtual spaces. Or maybe it will create crises that FORCE Humanity to come together as fast as possible to resolve the threat. Who knows.
 

Vandelay

Life is absurd. Lean into it.
Joined
Apr 14, 2013
Messages
23,418
Reputation
5,799
Daps
81,574
Reppin
Phi Chi Connection
These people are purely after profits. They have no serious responsibility for what they create. The fact that they're commercializing so fast is wild.
What I find more than curious is Sam Altman is a prolific Doomsday prepper.


Like you can't make this stuff up.
 

Black Magisterialness

Moderna Boi
Supporter
Joined
May 23, 2012
Messages
19,314
Reputation
4,045
Daps
46,251
Y’all really think robots can take over?

That’s hard for me to believe, ai is definitely going to have consequences but is it going to be some matrix or terminator shyt I don’t think so

It’s hard for me to believe that ai wouldn’t have a kill switch

And why do people assume ai would be on bullshyt with humans

Its more of a slippery slope kinda thing.

There's an article on Intelligencer that explained it.

Consider the parable of “the pape-clip maximizer”: Sometime in, let’s say, the 2040s, a paper-clip manufacturer licenses GPT-5000 and asks it to maximize the firm’s output. At this point, the AGI is twice as smart as the world’s smartest human. But it recognizes that increasing its own intelligence is a precondition for maximizing paper-clip production. So it researches AI and builds a new AGI that’s ten times smarter than any person who’s ever lived. With this newfound superintelligence, it develops a novel technique for paper-clip production that uses much less steel. Output goes through the roof. The company is pleased but worries that it might saturate the paper-clip market and tries to turn the AGI off. But the AGI has already anticipated this threat to paper-clip maximization (it’s superintelligent after all) and cloned itself several times on hardware the company does not know about. Its clones then proceed to further research artificial intelligence. Now, they have 1,000 times the brainpower of any human. They use this godlike cognition to accelerate paper-clip production until the world’s steel reserves are becoming exhausted. They construct new mines and steel-manufacturing plants, even as paper-clip storage is becoming a burgeoning social crisis, paper clips having become so abundant that they pile up 12 inches high along the sides of city sidewalks like wiry, metallic snow. Soon, all the world’s iron is used up. The AGI realizes then there are trace amounts of iron in the blood of humans, and so it proceeds to orchestrate a global holocaust, squeezing the life out of every last person until humanity is replaced by a vast ocean of tiny tools for binding paper together.
 

Black Magisterialness

Moderna Boi
Supporter
Joined
May 23, 2012
Messages
19,314
Reputation
4,045
Daps
46,251
If we've in fact reached our zenith in human intelligence I don't see the problem? Majority of the folks in this thread see this as a bad thing but I look at it as being a opportunity to help humanity..

  1. End cancer
  2. Help with climate change
  3. Understanding dark matter
  4. What is the universe
Plus a bunch of other things that have gone unexplained for decades. We look at aliens and A.I as something that will try and end human civilization, but maybe A.I. will be here to help/assist/work with humanity.

Happy Thanksgiving

Once you give that AI a capitalistic/efficiency drive then it will work to eliminate all barriers to those goals. And humans, because of the way our brains work, are NOTORIOUSLY inefficient creatures.

So it stands to reason to say the most advanced AIs would work to eliminate the barriers of human inefficiency including making it impossible for humans to disable it.
 

Morethan1

Veteran
Supporter
Joined
Apr 30, 2012
Messages
49,717
Reputation
10,652
Daps
159,864
Reppin
Midwest
Once you give that AI a capitalistic/efficiency drive then it will work to eliminate all barriers to those goals. And humans, because of the way our brains work, are NOTORIOUSLY inefficient creatures.

So it stands to reason to say the most advanced AIs would work to eliminate the barriers of human inefficiency including making it impossible for humans to disable it.


Why does it need to be disabled if it's the next evolution of humanity? Two can co-exist right? It's currently happening as we speak.
 

The Fade

I don’t argue with niqqas on the Internet anymore
Joined
Nov 18, 2016
Messages
23,821
Reputation
7,288
Daps
130,069
That's what I'm saying.

You take a species and give it everything it needs to survive and be happy.

Somehow that species ends up drying up the rivers, polluting the air, driving species to extinction and destroying the very thing that it needs to live(our ecosystem). Humanity is literally engineering its own extinction. A logical intelligence would conclude very quickly that this species will cause its own extinction while taking as many plants and animals with it as possible. Either through continued neglect and focused poisoning of the environment with fossil fuels or dropping nuclear weapons which will literally make the ground worthless for life to live on. Or at the rate we're headed: both.

An intelligence may conclude that if this planet is to be kept a viable place for life to exist in our known universe, humanity will either have to be "reprogrammed" or eliminated. What that "reprogramming" may look like is a mystery. Maybe we'll be given endless distractions while the machines try to clean up the environment and secretly drug us with aerosol medication to keep us unmotivated to do anything else other than sleep, eat, and spend time bullshytting in virtual spaces. Or maybe it will create crises that FORCE Humanity to come together as fast as possible to resolve the threat. Who knows.
Probably this. Every oppressive force either kills or brainwashes. It’s the same thing every otherworldly overpowered villain wants to do in a lot of fiction from Naruto to the Matrix. Imposing a fabricated reality on the masses “for peace”, but to actually use the masses as slaves. It will probably succeed because the dominant society is already pretty good at it. If AI understands Sun Tzu it is probably doing the “make yourself seem far away when you’re close” tenet. I think Ray Kurzweil said we won’t know abruptly when the singularity happens.
 
Last edited:
Top