Biden issues U.S.’ first AI executive order, requiring safety assessments, civil rights guidance, research on labor market impact

Savvir
for prioritizing this over brokering a cease fire in a genocide in the middle east and a looming debt crisis back home

this isnt unimportant, there are just WAY more important things people will remember when election time rolls around. no one will give a shyt this old nikka signed an EO on AI. they WILL care that he co-signed israel bombing kids and still hasn't cancelled their fukkin student loans
Wait a sec. Are you saying the entire executive branch should not do any work until the Israel situation is solved?

:gucci:

do you think they started work on this AFTER October 7th?

You nikkas have no idea how government works. Jesus.
 

42 Monks
this has been on the board since like..... at least 2020 fwiw

for a first go at something *absolutely necessary at this level*, honestly it's a solid benchmark to build off of. there's never going to be a perfect solution at this stage of the game, but decent guard rails will go a long ways.

edit: not surprised at how this thread turned out :dead:
 

acri1
Wait a sec. Are you saying the entire executive branch should not do any work until the Israel situation is solved?

:gucci:

do you think they started work on this AFTER October 7th?

You nikkas have no idea how government works. Jesus.

They just came in to complain because "Biden" was in the thread title. :mjlol:
 

bnew


FUTURE FORECAST

10:44 AM by NOOR AL-SIBAI

Google AI Chief Says There’s a 50% Chance We’ll Hit AGI in Just 5 Years

"I think it's entirely plausible."

More than a decade ago, the co-founder of Google's DeepMind artificial intelligence lab predicted that by 2028, AI would have a half-and-half shot of being about as smart as humans — and now, he's holding firm on that forecast.

In an interview with tech podcaster Dwarkesh Patel, DeepMind co-founder Shane Legg said that he still thinks researchers have a 50-50 chance of achieving artificial general intelligence (AGI) by 2028, a stance he publicly announced at the very end of 2011 on his blog.

It's a notable prediction considering the exponentially growing interest in the space. OpenAI CEO Sam Altman has long advocated for an AGI, a hypothetical agent capable of accomplishing intellectual tasks as well as a human, that benefits all of humanity. But whether we'll ever get to that point — let alone agree on a single definition of AGI — remains to be seen.

Legg apparently began looking towards his 2028 goalpost all the way back in 2001 after reading "The Age of Spiritual Machines," the groundbreaking 1999 book by fellow Google AI luminary Ray Kurzweil that predicts a future of superhuman AIs.

"There were two really important points in his book that I came to believe as true," he explained. "One is that computational power would grow exponentially for at least a few decades. And that the quantity of data in the world would grow exponentially for a few decades."

Paired with an understanding of the trends of the era, such as the deep learning method of teaching algorithms to "think" and process data the way human brains do, Legg wrote at the start of the last decade that AGI could well be achieved in the decades to come — so long as "nothing crazy happens like a nuclear war."

Today, the DeepMind co-founder said that there are caveats to his prediction that the AGI era will be upon us by the end of this decade.

The first, broadly, is that definitions of AGI rely on definitions of human intelligence — and that kind of thing is difficult to test, precisely because the way we think is complicated.

"You'll never have a complete set of everything that people can do," Legg said — things like developing episodic memory, or the ability to recall complete "episodes" that happened in the past, or even understanding streaming video. But if researchers could assemble a battery of tests for human intelligence and an AI model were to perform well enough against them, he continued, then "you have an AGI."

When Patel asked if there could be a single simple test to see whether an AI system had reached general intelligence, such as beating Minecraft, Legg pushed back.

"There is no one thing that would do it, because I think that's the nature of it," the AGI expert said. "It's about general intelligence. So I'd have to make sure [an AI system] could do lots and lots of different things and it didn't have a gap."

The second biggest caveat, Legg added, was the ability to scale AI training models way, way up — a worthy point given how much energy AI companies are already using to churn out large language models like OpenAI's GPT-4.

"There's a lot of incentive to make a more scalable algorithm to harness all this computing data," Legg explained. "So I thought it would be very likely that we'll start to discover scalable algorithms to do this."

Asked where he thought we stand today on the path to AGI, Legg said that he thinks computational power is where it needs to be to make it happen, and the "first unlocking step" would be to "start training models now with the scale of the data that is beyond what a human can experience in a lifetime" — a feat he believes the AI industry is ready to achieve.
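To make that "more data than a human can experience in a lifetime" claim concrete, here's a rough back-of-envelope sketch in Python. Every figure in it (words per day, lifespan, tokens per word, training-corpus size) is an illustrative assumption, not a number from Legg or the article:

```python
# Back-of-envelope sketch: text a person might encounter in a lifetime
# vs. the training-data scale of a frontier language model.
# Every constant below is an illustrative assumption, not a sourced figure.

WORDS_PER_DAY = 30_000       # assumed: generous estimate of speech + reading
LIFETIME_YEARS = 80          # assumed lifespan
TOKENS_PER_WORD = 1.3        # assumed: typical subword-tokenizer ratio
LLM_TRAINING_TOKENS = 1e13   # assumed: order of magnitude for frontier LLMs

# Total tokens a human might "experience" as language over a lifetime.
human_lifetime_tokens = WORDS_PER_DAY * 365 * LIFETIME_YEARS * TOKENS_PER_WORD

print(f"Human lifetime exposure: ~{human_lifetime_tokens:.1e} tokens")
print(f"Frontier LLM corpus:     ~{LLM_TRAINING_TOKENS:.0e} tokens")
print(f"Corpus / lifetime ratio: ~{LLM_TRAINING_TOKENS / human_lifetime_tokens:,.0f}x")
```

Even under these charitable assumptions, the training corpus comes out three to four orders of magnitude larger than anything a single person could read or hear, which is the gap Legg is pointing at.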

All that said, Legg reiterated his personal stance that he sees only a 50 percent chance researchers will achieve AGI before the end of this decade. Futurism has reached out to DeepMind to see if the Google subsidiary has anything to add to that prognosis.

"I think it's entirely plausible," he said, "but I'm not going to be surprised if it doesn't happen by then."
 

MajesticLion
this has been on the board since like..... at least 2020 fwiw

for a first go at something *absolutely necessary at this level*, honestly it's a solid benchmark to build off of. there's never going to be a perfect solution at this stage of the game, but decent guard rails will go a long ways.

edit: not surprised at how this thread turned out :dead:


Needed to come out the gate even more assertively than this, I think. It will get watered down even more otherwise.
 

MoshpitMazi
regulating tech is fukking retarded and anti intellectual.
That part during his speech was crazy: potentially more technological progress in the next 5-10 years than in the last 50 years.

Considering what's happened the last 50 years, that's a bold statement, but I can see it happening.

I know some people will hate the regulations, but I can see the point of them.
 

Micky Mikey
what kind of legislation are you thinking about specifically?
I'm no expert on A.I. and what it will take to curtail the risks involved, but I can think of a few things that deeply worry me: the relative ease of manufacturing bioweapons (particularly ones that target specific groups of people :mjpls:), autonomous weapons, misinformation, and human labor becoming obsolete (this can go very bad or very good). Research and lots of governmental funding need to be put in place to mitigate these risks. Not sure how we can do all of this, especially if open source models aren't too far behind in terms of capability.
 