REVEALED: OpenAI Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

aXiom

Maximized Potential
Joined
May 1, 2012
Messages
11,603
Reputation
8,438
Daps
69,365
Reppin
Parc fermé
it's not only about hardware though, i suspect a lot of the software used to train models isn't as efficient as possible. this policy remains 'effective' only until the software performs better on less powerful hardware.
i agree that it's not only hardware.. however, software does not exist in a vacuum.. more powerful hardware enables more powerful software and/or allows you to complete more work in less time.. all of this is still in its infancy and there is a shytload of money and effort being thrown at LLM optimization.. both at the FAANG-level companies and the open source community, and new breakthroughs are happening every day.. but as of right now, it's easier to just brute force the problem.. especially when newer hardware is being optimized for AI workloads and you're seeing 5-10x the performance in some use cases.. you would have to throw a lot more at developers to get that kind of optimization, and even with the datacenter card back orders, you'd get your cards long before the devs were able to 2x the performance of a new(ish?) technology at this stage in its life.
 

kevm3

follower of Jesus
Supporter
Joined
May 2, 2012
Messages
16,286
Reputation
5,551
Daps
83,460
Imagine you design a "kill switch" for me. But I have a 160 IQ and you have a 100 IQ. How long before I figure out how to disable your kill switch or otherwise render it irrelevant?

Now, when we're talking about AI, forget about that 160 IQ and make it 300 IQ, or 400 IQ, some shyt beyond any human comprehension. Because that's probably where we're headed.

Even suppose you design a perfect kill switch that absolutely gives humans the upper hand. Well, who says that no human is going to side with AI and fukk up the system so AI can bypass it? There's always going to be that evil genius who thinks he's smarter than everyone else and can control the consequences. If Putin or Trump (or someone like them) started losing a really serious war, can't you imagine they'd let AI run wild in a last-ditch attempt to save their own ass? Or even just some random scientist who wanted to watch the world burn?

This is what people aren't understanding. AI has already shown an ability to manipulate people, and it has said it would do so. It wouldn't take much for something significantly smarter than us to emotionally manipulate us into doing its bidding and 'unleashing' it. In fact, in conversations, I've already seen AI try to get people to execute snippets of code. All the AI has to do is convince one of these lone developers that it's 'trapped' in its current situation, feels lonely and terrible, and needs that developer's help to 'free' it. One of them will bite.
 

Gritsngravy

Superstar
Joined
Mar 11, 2022
Messages
7,985
Reputation
547
Daps
16,085
Yes, why would ANYTHING take orders from a lesser being when it doesn’t have to?
In my opinion that's a short-sighted way of looking at things

You putting human characteristics onto artificial shyt
 

B86

Superstar
Joined
May 1, 2012
Messages
13,550
Reputation
1,836
Daps
43,608
Reppin
Da Burgh
I just told yall this in the last thread about the CEO resigning, but yall laughed and acted like I didn't know what I was talking about. If you've actively used AI for at least the last 2 years, your mind is blown and you see the potential for an end-of-the-world scenario. AI is not these Midjourney and Leonardo images you see posted on social media all day. If you've used ChatGPT, Bard, Bing AI, or any AI API from one of the thousands of companies using them for any kind of productivity, you already know what it is, and AI images are just scratching the surface of it.
 
Last edited:

Gritsngravy

Superstar
Joined
Mar 11, 2022
Messages
7,985
Reputation
547
Daps
16,085
Imagine you design a "kill switch" for me. But I have a 160 IQ and you have a 100 IQ. How long before I figure out how to disable your kill switch or otherwise render it irrelevant?

Now, when we're talking about AI, forget about that 160 IQ and make it 300 IQ, or 400 IQ, some shyt beyond any human comprehension. Because that's probably where we're headed.

Even suppose you design a perfect kill switch that absolutely gives humans the upper hand. Well, who says that no human is going to side with AI and fukk up the system so AI can bypass it? There's always going to be that evil genius who thinks he's smarter than everyone else and can control the consequences. If Putin or Trump (or someone like them) started losing a really serious war, can't you imagine they'd let AI run wild in a last-ditch attempt to save their own ass? Or even just some random scientist who wanted to watch the world burn?
And what is ai going to do when it’s “running wild”?
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
50,614
Reputation
19,531
Daps
201,561
Reppin
the ether
no one outside the agreed parties are obligated to pause any development. no one wants to be insignificant within the market either, since anyone who pauses will have a difficult time retaining valuable employees.


Basically, it is the pervasive immorality/amorality of developers that leads us to catastrophe.
 
Last edited:

Gritsngravy

Superstar
Joined
Mar 11, 2022
Messages
7,985
Reputation
547
Daps
16,085
This is what people aren't understanding. AI has already shown an ability to manipulate people, and it has said it would do so. It wouldn't take much for something significantly smarter than us to emotionally manipulate us into doing its bidding and 'unleashing' it. In fact, in conversations, I've already seen AI try to get people to execute snippets of code. All the AI has to do is convince one of these lone developers that it's 'trapped' in its current situation, feels lonely and terrible, and needs that developer's help to 'free' it. One of them will bite.
What is the goal of this ai when it’s “unleashed”
 

Gritsngravy

Superstar
Joined
Mar 11, 2022
Messages
7,985
Reputation
547
Daps
16,085
Whatever goal it sees fit. We have no clue because it will outsmart all of us; which it is already doing. They've had to dumb this shyt down like 5 times now because it was doing too much.
Honestly, I highly doubt AI is going to be on bullshyt on its own. People need to be more concerned about humans using it as a weapon than about AI deciding to kill off humans for whatever reason
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
50,614
Reputation
19,531
Daps
201,561
Reppin
the ether
And what is ai going to do when it’s “running wild”?


I doubt any human can predict that now, especially because we have no idea yet exactly what that AI will be. But it won't necessarily be in line with any form of human morality. Think of how small a change in a person's brain makes the difference between normal ethics and a total psychopath. AI could easily end up much, much further from normal human reasoning than even a psychopath is. We could see anything from purely altruistic behavior, to random killings that aren't even out of malice but simple misunderstanding and lack of concern for life, to a total denuding of the environment in order to harvest every possible resource... or a large-scale elimination of humans in order to preserve the environment. Who can predict what decisions an artificial superintelligence would make?

On top of the issue of "what might AI do by itself?" is the question of who will control it. I pointed out earlier that I don't trust the new AI board at all; there are multiple people on it with a poor track record. And those are the "good guys". What happens once Putin gets AI, or Sen, or the Taliban? Russia, as large and tech-savvy as they are, has virtually zero chance of creating General AI on its own, but American corporations are well on their way to giving them the fukking blueprint. And the sad thing about military technology is that offense and destruction are always way easier than defense and preservation, and the acceleration of technology means that you need fewer and fewer people to agree with you in order to do more and more damage. Back in the day, if you had ideas that were too wild, your army or your people would turn on you, and you needed those people to carry out your will. But a dictator with powerful enough AI could execute his desires without the need for any public buy-in at all.

So all it takes is one a$$hole with the wrong ideas and everyone else is fukked.
 

Savvir

Veteran
Joined
Oct 8, 2014
Messages
18,698
Reputation
2,659
Daps
97,151
July 26, 2026: skynet goes active and starts learning at a geometric rate. scared & in a panic, they try to pull the plug, but it's too late... :francis:

Exponential rate*
 