REVEALED: OpenAI Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

Jaguar93

Superstar
Joined
Jan 28, 2016
Messages
10,552
Reputation
2,326
Daps
55,559
:francis:
[attached images: movie posters, including Scream and The Terminator]
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
50,614
Reputation
19,531
Daps
201,567
Reppin
the ether
Y’all really think robots can take over?

That’s hard for me to believe. AI is definitely going to have consequences, but is it going to be some Matrix or Terminator shyt? I don’t think so.

It’s hard for me to believe that AI wouldn’t have a kill switch.

And why do people assume AI would be on bullshyt with humans?


Imagine you design a "kill switch" for me. But I have a 160 IQ and you have a 100 IQ. How long before I figure out how to disable your kill switch or otherwise render it irrelevant?

Now, when we're talking about AI, forget about that 160 IQ and make it 300 IQ, or 400 IQ, some shyt beyond any human comprehension. Because that's probably where we're headed.

Even say that you design a perfect kill switch that absolutely gives humans the upper hand. Well, who says that no human is going to side with AI and fukk up the system so AI can bypass it? There's always going to be that evil genius who thinks he's smarter than everyone else and can control the consequences. If Putin or Trump (or someone like them) started losing a really serious war, you can't imagine they'd allow AI to run wild in a last-ditch attempt to save their own ass? Or even just some random scientist who wanted to watch the world burn?
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,877
Reputation
7,926
Daps
149,000
You mean like how the current administration is forbidding high end chips from being sold/exported to other countries?


But that's still more of a competitive move than anything else.. but yeah, you're right.. the best defense we have is to continue to develop this tech as a leader in this space, but that comes with its own set of problems and could/will have negative effects on the rest of the world.. not just us.


those policies are just bumps in the road

Middle Eastern countries are releasing decent LLMs too.


 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
50,614
Reputation
19,531
Daps
201,567
Reppin
the ether
research has been ongoing for decades.


And yet not a single law has been passed that would control or contain their results. Not a single industry standard or guideline that would prevent a potential disaster from happening. Not a single principle set in place that anticipates any of the ways in which AI could destroy society. Even something as simple as Chat GPT came out, and people didn't know what the fukk to do in terms of how to regulate it, how to set limits, how to keep it from disrupting industries and elections.

My friend's son is just starting out in his career and trying to break into the AI existential threat circles as a researcher, so I know they exist. Even some of the biggest funders of the industry have been calling for a pause so they can figure out some sort of regulation and guardrails. But the people who run shyt like Open AI on the profit side generally don't give a shyt. They haven't made any serious commitment to important guidelines or answered any of the ethical questions. They're just going into commercialization as fast as they can, and when the nonprofit side of things holds up their hand and says, "Wait, maybe we need to put the brakes on this for a moment", they swing their nuts and show them it's their way or the highway.
 

Yzak

All Star
Bushed
Joined
Dec 12, 2019
Messages
3,049
Reputation
215
Daps
11,271
People talking about robots like we’re about to be living in a world like Demolition Man, when the real danger to humanity is AI’s ability to manipulate news and public perception.

Fake and misleading news can end up leading to violence, runs on banks, runs on retail stores, and all types of other negative effects.

Talking about robots and movies is a distraction from what these people truly want to do.
Imagine what these racist cacs and asians are gonna teach their AI. We gotta make our own pro-Black AI :ohlawd:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,877
Reputation
7,926
Daps
149,000
:dead: So all this time, we was warned them browns and Koreans should not have nukes because they’ll endanger the planet, and that paved the way for AI created in the US to possibly hack into the nuclear systems or cause wars

there's no hack for access to nuclear weapons; they still use floppy disks.

edit:
they updated it

 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,877
Reputation
7,926
Daps
149,000
And yet not a single law has been passed that would control or contain their results. Not a single industry standard or guideline that would prevent a potential disaster from happening. Not a single principle set in place that anticipates any of the ways in which AI could destroy society. Even something as simple as Chat GPT came out, and people didn't know what the fukk to do in terms of how to regulate it, how to set limits, how to keep it from disrupting industries and elections.

My friend's son is just starting out in his career and trying to break into the AI existential threat circles as a researcher, so I know they exist. Even some of the biggest funders of the industry have been calling for a pause so they can figure out some sort of regulation and guardrails. But the people who run shyt like Open AI on the profit side generally don't give a shyt. They haven't made any serious commitment to important guidelines or answered any of the ethical questions. They're just going into commercialization as fast as they can, and when the nonprofit side of things holds up their hand and says, "Wait, maybe we need to put the brakes on this for a moment", they swing their nuts and show them it's their way or the highway.


no one outside the agreed parties is obligated to pause any development. and no one wants to be insignificant within the market either, since anyone who pauses will have a difficult time retaining valuable employees.
 

aXiom

Maximized Potential
Joined
May 1, 2012
Messages
11,603
Reputation
8,438
Daps
69,365
Reppin
Parc fermé

those policies are just bumps in the road

Middle Eastern countries are releasing decent LLMs too.


you said we can't do anything about it.. i provided examples.. they're having to resort to purchasing gimped datacenter and consumer cards and won't have access to any high-end cards while Nvidia/TSMC can't spit out H100s fast enough for US companies.. the CHIPS Act was further expanded to ban the exports of even the purposely cut-down chips that were designed for the Chinese market, as AI is deemed a matter of national security.. no A100/H100s, no AMD Instincts.. all the good LLMs are based on shyt that we released.. that just ensures that we remain the leaders in this space..

it's the China manufacturing vs engineering issue all over again. Standing on the shoulders of giants doesn't make you one.

But being the leader comes with a certain responsibility.. the rest of the world basically has to trust that we don't cause the end of modern society as we know it.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,877
Reputation
7,926
Daps
149,000
you said we can't do anything about it.. i provided examples.. they're having to resort to purchasing gimped datacenter and consumer cards and won't have access to any high-end cards while Nvidia/TSMC can't spit out H100s fast enough for US companies.. the CHIPS Act was further expanded to ban the exports of even the purposely cut-down chips that were designed for the Chinese market, as AI is deemed a matter of national security.. no A100/H100s, no AMD Instincts.. all the good LLMs are based on shyt that we released.. that just ensures that we remain the leaders in this space..

it's the China manufacturing vs engineering issue all over again. Standing on the shoulders of giants doesn't make you one.

But being the leader comes with a certain responsibility.. the rest of the world basically has to trust that we don't cause the end of society as we know it.

it's not only about hardware though; i suspect a lot of the software used to train models isn't as efficient as it could be. this policy remains 'effective' only until the software performs better on less powerful hardware.


Computer Science > Computation and Language

[Submitted on 15 Nov 2023 (v1), last revised 21 Nov 2023 (this version, v2)]

Exponentially Faster Language Modelling

Peter Belcak, Roger Wattenhofer

Language models only really need to use an exponential fraction of their neurons for individual inferences. As proof, we present UltraFastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs). While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering 40x speedup over the equivalent batched feedforward inference. We publish our training code, benchmarking setup, and model weights.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)
Cite as: arXiv:2311.10770 [cs.CL]
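The conditional-execution trick in that paper — arranging a feedforward layer's neurons as a binary tree so each input only descends one root-to-leaf path — can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' code: the widths, random weights, and ReLU (the paper uses GELU) are assumptions for demonstration. But the arithmetic matches the abstract's 12-of-4095 figure: a depth-12 tree holds 2^12 − 1 = 4095 node neurons, and any single input touches only 12 of them.

```python
import numpy as np

rng = np.random.default_rng(0)

DEPTH = 12                 # tree depth: neurons evaluated per input
N_NODES = 2**DEPTH - 1     # 4095 node neurons in the tree overall
D_IN = D_OUT = 64          # toy widths; the real model uses BERT's hidden size

# Each node neuron has an input weight vector (decides the branch taken)
# and an output weight vector (its contribution when visited).
W_in = rng.standard_normal((N_NODES, D_IN)) / np.sqrt(D_IN)
W_out = rng.standard_normal((N_NODES, D_OUT)) / np.sqrt(D_OUT)

def fff_forward(x):
    """Descend the implicit binary tree, evaluating only DEPTH of the
    N_NODES neurons for this one input (conditional execution)."""
    y = np.zeros(D_OUT)
    node = 0                               # start at the root
    visited = 0
    for _ in range(DEPTH):
        act = W_in[node] @ x               # scalar pre-activation of this node
        y += max(act, 0.0) * W_out[node]   # ReLU here; the paper uses GELU
        visited += 1
        # branch: right child if pre-activation positive, else left
        node = 2 * node + (2 if act > 0 else 1)
    return y, visited

x = rng.standard_normal(D_IN)
out, n_visited = fff_forward(x)
print(n_visited, "of", N_NODES, "neurons evaluated")   # 12 of 4095
```

A dense layer would compute all 4095 neurons for every input; here the per-token work grows only with the tree depth (the log of the neuron count), which is exactly why this kind of model could stay competitive on weaker hardware.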




 

jj23

Veteran
Supporter
Joined
Nov 26, 2016
Messages
24,490
Reputation
5,739
Daps
112,524
Imagine you design a "kill switch" for me. But I have a 160 IQ and you have a 100 IQ. How long before I figure out how to disable your kill switch or otherwise render it irrelevant?

Now, when we're talking about AI, forget about that 160 IQ and make it 300 IQ, or 400 IQ, some shyt beyond any human comprehension. Because that's probably where we're headed.

Even say that you design a perfect kill switch that absolutely gives humans the upper hand. Well, who says that no human is going to side with AI and fukk up the system so AI can bypass it? There's always going to be that evil genius who thinks he's smarter than everyone else and can control the consequences. If Putin or Trump (or someone like them) started losing a really serious war, you can't imagine they'd allow AI to run wild in a last-ditch attempt to save their own ass? Or even just some random scientist who wanted to watch the world burn?
The secret was to make it a helper. No one fears something that creates resumes or helps write a college paper.
The larger public won't care until it's too late.
 