Biden issues U.S.' first AI executive order, requiring safety assessments, civil rights guidance, research on labor market impact

bnew


FCC moves to outlaw AI-generated robocalls​

Devin Coldewey @techcrunch / 3:23 PM EST•January 31, 2024

[Image: an illustration of a humanoid robot emerging from a smartphone screen. Golden Sikorka / Getty Images]

No one likes robocalls to begin with, but using AI-generated voices of people like President Biden makes them even worse. As such, the FCC is proposing that using voice cloning tech in robocalls be ruled fundamentally illegal, making it easier to charge the operators of these frauds.

You may ask why this is necessary if robocalls are illegal to begin with. In fact, some automated calls are necessary and even desirable, and it’s only when a call operation is found to be breaking the law in some way that it becomes the business of the authorities.

For example, regarding the recent fake Biden calls in New Hampshire telling people not to vote, the attorney general there can (and did) say with confidence that the messages “appear to be an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters.”

Under the law there, voter suppression is illegal and so, when they track down the perpetrators (and I’m emailing them constantly to find out if they have, by the way) that will be what they are charged with, likely among other things. But it remains that a crime must be committed, or reasonably suspected to have been committed, for the authorities to step in.

If employing voice cloning tech in automated calls, like what was obviously used on Biden, is itself illegal, that makes charging robocallers that much easier.

“That’s why the FCC is taking steps to recognize this emerging technology as illegal under existing law, giving our partners at State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers,” said FCC Chairwoman Jessica Rosenworcel in a news release. They previously announced that they were looking into this back when the problem was relatively fresh.

The FCC already uses the Telephone Consumer Protection Act as the basis for charging robocallers and other telephone scammers. The TCPA already prohibits “artificial” voices, but it is not clear that cloned voices fall under that category. It’s arguable, for instance, that a company could use the generated voice of its CEO for legitimate business purposes.

But the fact is that legal applications of the tech are fewer in number and less immediately important than the illegal applications. Therefore the FCC proposes to issue a Declaratory Ruling that AI-powered voice cloning causes a call to fall under the “artificial” heading.

The law here is being rapidly iterated as telephone, messaging and generative voice tech all evolve. So don’t be surprised if it isn’t entirely clear what is and isn’t illegal, or why, despite being obviously illegal, some calls or scams seem to operate with impunity. It’s a work in progress.

Update: FCC spokesman Will Wiquist told me that procedurally, this proposal will be propagated internally and voted on at Commissioners’ discretion. It will only be public when and if it is adopted.
 

bnew


US says leading AI companies join safety consortium to address risks​

By David Shepardson

February 8, 2024 5:02 AM EST


[Image: AI (Artificial Intelligence) letters placed on a computer motherboard, in an illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo]

WASHINGTON, Feb 8 (Reuters) - The Biden administration on Thursday said leading artificial intelligence companies are among more than 200 entities joining a new U.S. consortium to support the safe development and deployment of generative AI.

Commerce Secretary Gina Raimondo announced the U.S. AI Safety Institute Consortium (AISIC), which includes OpenAI, Alphabet's Google (GOOGL.O), Anthropic and Microsoft (MSFT.O), along with Facebook-parent Meta Platforms (META.O), Apple (AAPL.O), Amazon.com (AMZN.O), Nvidia (NVDA.O), Palantir (PLTR.N), Intel, JPMorgan Chase (JPM.N) and Bank of America (BAC.N).

"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement.

The consortium, which also includes BP (BP.L), Cisco Systems (CSCO.O), IBM (IBM.N), Hewlett Packard (HPE.N), Northrop Grumman (NOC.N), Mastercard (MA.N), Qualcomm (QCOM.O), Visa (V.N) and major academic institutions and government agencies, will be housed under the U.S. AI Safety Institute (USAISI).

The group is tasked with working on priority actions outlined in President Biden’s October AI executive order "including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content."

Major AI companies last year pledged to watermark AI-generated content to make the technology safer. Red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to U.S. Cold War simulations where the enemy was termed the "red team."

Biden's order directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.

In December, the Commerce Department said it was taking the first step toward writing key standards and guidance for the safe deployment and testing of AI.

The consortium represents the largest collection of test and evaluation teams and will focus on creating foundations for a "new measurement science in AI safety," Commerce said.

Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans with catastrophic effects.

While the Biden administration is pursuing safeguards, efforts in Congress to pass legislation addressing AI have stalled despite numerous high-level forums and legislative proposals.

Reporting by David Shepardson; Editing by Jamie Freed


bnew


FCC votes to outlaw scam robocalls that use AI-generated voices​

By Brian Fung, CNN


Updated 11:51 AM EST, Thu February 8, 2024


[Image: dramalens/iStockphoto/Getty Images]

Washington CNN —

The Federal Communications Commission said Thursday it is immediately outlawing scam robocalls featuring fake, artificial intelligence-created voices, cracking down on so-called “deepfake” technology that experts say could undermine election security or supercharge fraud.

The unanimous FCC vote extends anti-robocall rules to cover unsolicited AI deepfake calls by recognizing those voices as “artificial” under a federal law governing telemarketing and robocalling.

The FCC’s move gives state attorneys general more legal tools to pursue illegal robocallers that use AI-generated voices to fool Americans, the FCC said.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” said FCC Chairwoman Jessica Rosenworcel in a statement. “We’re putting the fraudsters behind these robocalls on notice.”

The decision to interpret the 1991 Telephone Consumer Protection Act (TCPA) more broadly to include AI-generated voices comes weeks after a fake robocall that impersonated President Joe Biden targeted thousands of New Hampshire voters and urged them not to participate in the state’s primary.

Authorities said this week they had linked those fake calls to a Texas man and two companies in an ongoing investigation that could lead to civil and criminal penalties.

In its announcement Thursday, the FCC said those who wish to send robocalls “must obtain prior express consent from the called party before making a call that utilizes artificial or prerecorded voice simulated or generated through AI technology.”

With Thursday’s change, scam robocalls featuring cloned voices would be subject to the same fines and consequences associated with illegal robocalls that do not use the technology. The FCC had announced it was considering the proposal last week.

Violations of the TCPA can carry stiff civil penalties. In 2021, the FCC announced a $5 million proposed fine against right-wing operatives Jacob Wohl and Jack Burkman for allegedly using illegal robocalls to discourage voting in the 2020 election.

The number of robocalls placed in the US peaked at around 58.5 billion in 2019, according to estimates by YouMail, a robocall blocking service. Last year, the figure was closer to 55 billion.

As the FCC updates its interpretation of federal law, some US lawmakers have proposed revising the law directly to further deter illegal robocallers. House Democrats unveiled legislation this year that would double the TCPA’s maximum penalties when a robocall violation involves the use of AI.

This story has been updated.
 

bnew


President Biden Calls for Ban on AI Voice Impersonations During State of the Union​

By J. Kim Murphy


[Image: Joe Biden. Win McNamee/Getty Images]


President Biden included a nod to a rising issue in the entertainment and tech industries during his State of the Union address Thursday evening, calling for a ban on AI voice impersonations.

“Here at home, I have signed over 400 bipartisan bills. There’s more to pass my unity agenda,” President Biden said, beginning to list off a series of different proposals that he hopes to address if elected to a second term. “Strengthen penalties on fentanyl trafficking, pass bipartisan privacy legislation to protect our children online, harness the promise of AI to protect us from peril, ban AI voice impersonations and more.”

The president did not elaborate on the types of guardrails or penalties that he would plan to institute around the rising technology, or if it would extend to the entertainment industry. AI was a peak concern for SAG-AFTRA during the actors union’s negotiations with and strike against the major studios last year. The talks eventually finished with an agreement that established consent and compensation requirements for productions to utilize AI to replicate actors’ likenesses and voices. However, the deal did not block the studios from training AI systems to create “synthetic” performers that bear no resemblance to any real people.

Biden’s State of the Union address also saw a series of small hiccups from heckling Congress members, including Georgia’s Republican Rep. Marjorie Taylor Greene. Greene donned a “Make America Great Again” hat to the proceedings; later, the broadcast cut to reveal that Greene was yelling during Biden’s speech.
 

bnew


Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says​



[Image: Lon Tweeten for TIME; Getty Images]

BY BILLY PERRIGO

MARCH 11, 2024 9:00 AM EDT

The U.S. government must move “quickly and decisively” to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an “extinction-level threat to the human species,” says a report commissioned by the U.S. government published on Monday.

“Current frontier AI development poses urgent and growing risks to national security,” the report, which TIME obtained ahead of its publication, says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.

The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies—like OpenAI, Google DeepMind, Anthropic and Meta— as part of their research. Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decisionmaking by the executives who control their companies.

The finished document, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power. The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI’s GPT-4 and Google’s Gemini. The new AI agency should require AI companies on the “frontier” of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds. Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward “alignment” research that seeks to make advanced AI safer, it recommends.

The report was commissioned by the State Department in November 2022 as part of a federal contract worth $250,000, according to public records. It was written by Gladstone AI, a four-person company that runs technical briefings on AI for government employees. (Parts of the action plan recommend that the government invest heavily in educating officials on the technical underpinnings of AI systems so they can better understand their risks.) The report was delivered as a 247-page document to the State Department on Feb. 26. The State Department did not respond to several requests for comment on the report. The recommendations “do not reflect the views of the United States Department of State or the United States Government,” the first page of the report says.

The report's recommendations, many of them previously unthinkable, follow a dizzying series of major developments in AI that have caused many observers to recalibrate their stance on the technology. The chatbot ChatGPT, released in November 2022, was the first time this pace of change became visible to society at large, leading many people to question whether future AIs might pose existential risks to humanity. New tools, with more capabilities, have continued to be released at a rapid clip since. As governments around the world discuss how best to regulate AI, the world’s biggest tech companies have fast been building out the infrastructure to train the next generation of more powerful systems—in some cases planning to use 10 or 100 times more computing power. Meanwhile, more than 80% of the American public believe AI could accidentally cause a catastrophic event, and 77% of voters believe the government should be doing more to regulate AI, according to recent polling by the AI Policy Institute.


Outlawing the training of advanced AI systems above a certain threshold, the report states, may “moderate race dynamics between all AI developers” and contribute to a reduction in the speed of the chip industry manufacturing faster hardware. Over time, a federal AI agency could raise the threshold and allow the training of more advanced AI systems once evidence of the safety of cutting-edge models is sufficiently proven, the report proposes. Equally, it says, the government could lower the safety threshold if dangerous capabilities are discovered in existing models.

The proposal is likely to face political difficulties. “I think that this recommendation is extremely unlikely to be adopted by the United States government,” says Greg Allen, director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS), in response to a summary TIME provided of the report’s recommendation to outlaw AI training runs above a certain threshold. Current U.S. government AI policy, he notes, is to set compute thresholds above which additional transparency monitoring and regulatory requirements apply, but not to set limits above which training runs would be illegal. “Absent some kind of exogenous shock, I think they are quite unlikely to change that approach,” Allen says.
 

bnew

{continued}



Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers, say that government officials who attended many of their earliest briefings agreed that the risks of AI were significant, but told them the responsibility for dealing with them fell to different teams or departments. In late 2021, the Harrises say, Gladstone finally found an arm of the government with the responsibility to address AI risks: the State Department’s Bureau of International Security and Nonproliferation. Teams within the Bureau have an inter-agency mandate to address risks from emerging technologies including chemical and biological weapons, and radiological and nuclear risks. Following briefings by Jeremie and Gladstone's then-CEO Mark Beall, in October 2022 the Bureau put out a tender for a report that could inform a decision on whether to add AI to the list of other risks it monitors. (The State Department did not respond to a request for comment on the outcome of that decision.) The Gladstone team won that contract, and the report released Monday is the outcome.

The report focuses on two separate categories of risk. Describing the first category, which it calls “weaponization risk,” the report states: “such systems could potentially be used to design and even execute catastrophic biological, chemical, or cyber attacks, or enable unprecedented weaponized applications in swarm robotics.” The second category is what the report calls the “loss of control” risk, or the possibility that advanced AI systems may outmaneuver their creators. There is, the report says, “reason to believe that they may be uncontrollable if they are developed using current techniques, and could behave adversarially to human beings by default.”

Both categories of risk, the report says, are exacerbated by “race dynamics” in the AI industry. The likelihood that the first company to achieve AGI will reap the majority of economic rewards, the report says, incentivizes companies to prioritize speed over safety. “Frontier AI labs face an intense and immediate incentive to scale their AI systems as fast as they can,” the report says. “They do not face an immediate incentive to invest in safety or security measures that do not deliver direct economic benefits, even though some do out of genuine concern.”

The Gladstone report identifies hardware—specifically the high-end computer chips currently used to train AI systems—as a significant bottleneck to increases in AI capabilities. Regulating the proliferation of this hardware, the report argues, may be the “most important requirement to safeguard long-term global safety and security from AI.” It says the government should explore tying chip export licenses to the presence of on-chip technologies allowing monitoring of whether chips are being used in large AI training runs, as a way of enforcing proposed rules against training AI systems larger than GPT-4. However, the report also notes that any interventions will need to account for the possibility that overregulation could bolster foreign chip industries, eroding the U.S.’s ability to influence the supply chain.

The report also raises the possibility that, ultimately, the physical bounds of the universe may not be on the side of those attempting to prevent proliferation of advanced AI through chips. “As AI algorithms continue to improve, more AI capabilities become available for less total compute. Depending on how far this trend progresses, it could ultimately become impractical to mitigate advanced AI proliferation through compute concentrations at all.” To account for this possibility, the report says a new federal AI agency could explore blocking the publication of research that improves algorithmic efficiency, though it concedes this may harm the U.S. AI industry and ultimately be unfeasible.
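To make that trend concrete, here is a back-of-envelope sketch in Python; the doubling time and FLOP figures are illustrative assumptions of mine, not numbers from the report:

```python
import math

# Assumption-laden sketch: if algorithmic progress halves the compute needed
# for a fixed capability every DOUBLING_TIME_YEARS, a capability that takes
# 1e26 FLOPs to train today eventually slips below a 1e25-FLOP threshold.

DOUBLING_TIME_YEARS = 1.0   # assumed efficiency-doubling time
flops_today = 1e26          # assumed training cost of some frontier capability
threshold = 1e25            # assumed regulatory compute threshold

halvings = math.log2(flops_today / threshold)   # ~3.32 halvings needed
years = halvings * DOUBLING_TIME_YEARS
print(f"Capability drops below the threshold in ~{years:.1f} years")
```

Under these assumptions the threshold stops binding in roughly three years; slower efficiency gains stretch that out, which is exactly the uncertainty the report flags.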

The Harrises recognize in conversation that their recommendations will strike many in the AI industry as overly zealous. The recommendation to outlaw the open-sourcing of advanced AI model weights, they expect, will not be popular. “Open source is generally a wonderful phenomenon and overall massively positive for the world,” says Edouard, the chief technology officer of Gladstone. “It’s an extremely challenging recommendation to make, and we spent a lot of time looking for ways around suggesting measures like this.” Allen, the AI policy expert at CSIS, says he is sympathetic to the idea that open-source AI makes it more difficult for policymakers to get a handle on the risks. But he says any proposal to outlaw the open-sourcing of models above a certain size would need to contend with the fact that U.S. law has a limited reach. “Would that just mean that the open source community would move to Europe?” he says. “Given that it's a big world, you sort of have to take that into account.”

Despite the challenges, the report’s authors say they were swayed by how easy and cheap it currently is for users to remove safety guardrails on an AI model if they have access to its weights. “If you proliferate an open source model, even if it looks safe, it could still be dangerous down the road,” Edouard says, adding that the decision to open-source a model is irreversible. “At that point, good luck, all you can do is just take the damage.”

The third co-author of the report, former Defense Department official Beall, has since left Gladstone in order to start a super PAC aimed at advocating for AI policy. The PAC, called Americans for AI Safety, officially launched on Monday. It aims to make AI safety and security "a key issue in the 2024 elections, with a goal of passing AI safety legislation by the end of 2024," the group said in a statement to TIME. The PAC did not disclose its funding commitments, but said it has "set a goal of raising millions of dollars to accomplish its mission."

Before co-founding Gladstone with Beall, the Harris brothers ran an AI company that went through YCombinator, the famed Silicon Valley incubator, at the time when OpenAI CEO Sam Altman was at the helm. The pair brandish these credentials as evidence they have the industry’s interests at heart, even as their recommendations, if implemented, would upend it. “Move fast and break things, we love that philosophy, we grew up with that philosophy,” Jeremie tells TIME. But the credo, he says, ceases to apply when the potential downside of your actions is so massive. “Our default trajectory right now,” he says, “seems very much on course to create systems that are powerful enough that they either can be weaponized catastrophically, or fail to be controlled.” He adds: “One of the worst-case scenarios is you get a catastrophic event that completely shuts down AI research for everybody, and we don't get to reap the incredible benefits of this technology.”

Are you an employee at an AI lab and have concerns that you might consider sharing with a journalist? You can contact the author of this piece on Signal at billyperrigo.01
 

SupaDupaFresh

Civil Rights guidance....on A.I.?!?!!? :what: MAYBE if you penalized the creators who make biased AI programs that would be a start!!! :stopitslime:

Imagine the President of the United fukking States having the sole authority to just "penalize" or shut corporations down for being racially biased. Imagine.

You definitely skipped civics in school. It's like you really think Presidents are monarchs who can just punish people and do things by command. We live in a damn Democracy where everything the President does is (rightfully) challenged through multiple branches of government. And with a right wing Supreme Court that's on a crusade against civil rights laws--which barbershop babbling dumbasses like you enabled with your both sides bull shyt--you can forget about Joe Biden alone just swiftly dismantling racist corporations, handing out your reparations check, or enacting laws "exclusive for black Americans" without a heated fight between 600+ elected people. And guess which "side" will be on the side of advancing black peoples rights and mobility and which side will be entirely against that. I'll give you time.

Came in with the quickness to downplay Democrats making an effort, by law, to ensure this emerging new technology is not riddled with racial bias, particularly towards hiring and housing. Whereas Republicans are fighting hard to eliminate "DEI" in hiring practices and give folks back the very right to be racially biased. That's when they're not blaming Boeing's woes on DEI and diversity hire black pilots.

Both sides tho...
 

bnew




1/3
Key diff between EU and US regulation = thresholds, where EU does major tests at 10^25+ flops and WH at 10^26 flops. But what does this mean in terms of dollars? It means $7m vs $70m, based on a napkin analysis. This is a big deal!

2/3
In other words, the EU regs hit way more companies / training runs than US ones. This is v important as it relates to diff gov capacity for third-party measurement - EU AI office will need to do tests/evals on way more models than US. Napkin analysis

3/3
yup, 10^26 cost ~$110+m on A100
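As a sanity check on that napkin analysis, a minimal Python sketch; every throughput and price figure here is an assumption of mine, which is also why the tweets' own estimates ($7m/$70m in the first tweet, ~$110m+ in the last) disagree with each other:

```python
# Napkin math: translate regulatory FLOP thresholds into rough dollar costs.
# All hardware and price figures are assumptions for illustration only.

A100_BF16_FLOPS = 312e12   # assumed peak A100 throughput, FLOP/s
UTILIZATION = 1.0          # assume peak throughput, giving a lower bound
USD_PER_GPU_HOUR = 1.25    # assumed bulk GPU rental price

def training_cost_usd(total_flops: float) -> float:
    """Rough lower-bound cost of a training run of `total_flops` FLOPs."""
    gpu_hours = total_flops / (A100_BF16_FLOPS * UTILIZATION) / 3600
    return gpu_hours * USD_PER_GPU_HOUR

print(f"EU threshold (1e25 FLOPs): ~${training_cost_usd(1e25) / 1e6:.0f}M")
print(f"US threshold (1e26 FLOPs): ~${training_cost_usd(1e26) / 1e6:.0f}M")
# ~$11M vs ~$111M under these assumptions; lower utilization or pricier
# GPUs push both numbers up, which is how the $7m/$70m variant arises.
```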
 

bnew


A new bill wants to reveal what’s really inside AI training data​


Rep. Adam Schiff’s bill garnered support from several entertainment industry groups.​

By Emilia David, a reporter who covers AI. Prior to joining The Verge, she covered the intersection between technology, finance, and the economy.

Apr 10, 2024, 1:35 PM EDT

[Image: photo illustration of the shape of a brain on a circuit board. Cath Virginia / The Verge; Getty Images]

A new bill would compel tech companies to disclose any copyrighted materials that are used to train their AI models.

The Generative AI Copyright Disclosure bill from Rep. Adam Schiff (D-CA) would require anyone making a training dataset for AI to submit reports on its contents to the Register of Copyrights. The reports should include a detailed summary of the copyrighted material in the dataset and the URL for the dataset if it’s publicly available. The requirement would also extend to any changes made to the dataset.

Companies must submit a report “not later than 30 days” before the AI model that used the training dataset is released to the public. The bill would not apply retroactively to existing AI platforms unless changes are made to their training datasets after it becomes law.
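For illustration only, a minimal Python sketch of what such a disclosure record might look like; every class and field name here is hypothetical, since the bill specifies only the summary, the dataset URL, and the 30-day filing deadline:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

# Hypothetical sketch of the disclosure record the bill appears to call for.
# Names are invented; the bill itself requires only a detailed summary of
# copyrighted material, the dataset URL if public, filing at least 30 days
# before public release, and updated filings when the dataset changes.

@dataclass
class TrainingDataDisclosure:
    dataset_name: str
    copyrighted_material_summary: str   # the bill's "detailed summary"
    dataset_url: Optional[str]          # only if the dataset is publicly available
    model_release_date: date
    filed_on: date = field(default_factory=date.today)

    def filed_in_time(self) -> bool:
        # "not later than 30 days" before the model is released to the public
        return self.filed_on <= self.model_release_date - timedelta(days=30)
```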

Schiff’s bill hits on an issue artists, authors, and other creators have been complaining about since the rise of generative AI: that AI models are often trained on copyrighted material without permission. Copyright and AI have always been tricky to navigate, especially as the question of how much AI models change or mimic protected content has not been settled. Artists and authors have turned to lawsuits to assert their rights.

Developers of AI models claim their models are trained on publicly available data, but the sheer amount of information means they don’t know specifically which data is copyrighted. Companies have said any copyrighted materials fall under fair use. Meanwhile, many of these companies have begun offering legal cover to some customers if they find themselves sued for copyright infringement.

Schiff’s bill garnered support from industry groups like the Writers Guild of America (WGA), the Recording Industry Association of America (RIAA), the Directors Guild of America (DGA), the Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA), and the Authors Guild. Notably absent from the list of supporters is the Motion Picture Association (MPA), which normally backs moves to protect copyrighted work from piracy. (Disclosure: The Verge’s editorial staff is unionized with the Writers Guild of America, East.)

Other groups have sought to bring more transparency to training datasets. The group Fairly Trained wants to add labels to AI models if they prove they asked for permission to use copyrighted data.
 

bnew



FACT SHEET: Biden-Harris Administration Announces Key AI Actions 180 Days Following President Biden's Landmark Executive Order



Six months ago, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). Since then, agencies all across government have taken vital steps to manage AI’s safety and security risks, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.



Today, federal agencies reported that they completed all of the 180-day actions in the E.O. on schedule, following their recent successes completing each 90-day, 120-day, and 150-day action on time. Agencies also progressed on other work tasked by the E.O. over longer timeframes.



Actions that agencies reported today as complete include the following:



Managing Risks to Safety and Security:

Over 180 days, the Executive Order directed agencies to address a broad range of AI’s safety and security risks, including risks related to dangerous biological materials, critical infrastructure, and software vulnerabilities. To mitigate these and other threats to safety, agencies have:

  • Established a framework for nucleic acid synthesis screening to help prevent the misuse of AI for engineering dangerous biological materials. This work complements in-depth study by the Department of Homeland Security (DHS), Department of Energy (DOE) and Office of Science and Technology Policy on AI’s potential to be misused for this purpose, as well as a DHS report that recommended mitigations for the misuse of AI to exacerbate chemical and biological threats. In parallel, the Department of Commerce has worked to engage the private sector to develop technical guidance to facilitate implementation. Starting 180 days after the framework is announced, agencies will require that grantees obtain synthetic nucleic acids from vendors that screen.
  • Released for public comment draft documents on managing generative AI risks, securely developing generative AI systems and dual-use foundation models, expanding international standards development in AI, and reducing the risks posed by AI-generated content. When finalized, these documents by the National Institute of Standards and Technology (NIST) will provide additional guidance that builds on NIST’s AI Risk Management Framework, which offered individuals, organizations, and society a framework to manage AI risks and has been widely adopted both in the U.S. and globally.
  • Developed the first AI safety and security guidelines for critical infrastructure owners and operators. These guidelines are informed by the completed work of nine agencies to assess AI risks across all sixteen critical infrastructure sectors.
  • Launched the AI Safety and Security Board to advise the Secretary of Homeland Security, the critical infrastructure community, other private sector stakeholders, and the broader public on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure. The Board’s 22 inaugural members include representatives from a range of sectors, including software and hardware company executives, critical infrastructure operators, public officials, the civil rights community, and academia.
  • Piloted new AI tools for identifying vulnerabilities in vital government software systems. The Department of Defense (DoD) made progress on a pilot for AI that can find and address vulnerabilities in software used for national security and military purposes. Complementary to DoD’s efforts, DHS piloted different tools to identify and close vulnerabilities in other critical government software systems that Americans rely on every hour of every day.


Standing up for Workers, Consumers, and Civil Rights

The Executive Order directed bold steps to mitigate other risks from AI—including risks to workers, to consumers, and to Americans’ civil rights—and ensure that AI’s development and deployment benefits all Americans. Today, agencies reported that they have:

  • Developed bedrock principles and practices for employers and developers to build and deploy AI safely and in ways that empower workers. Agencies all across government are now starting work to establish these practices as requirements, where appropriate and authorized by law, for employers that receive federal funding.
  • Released guidance to help federal contractors and employers comply with worker protection laws as they deploy AI in the workplace. The Department of Labor (DOL) developed a guide for federal contractors and subcontractors to answer questions and share promising practices to clarify federal contractors’ legal obligations, promote equal employment opportunity, and mitigate the potentially harmful impacts of AI in employment decisions. DOL also provided guidance regarding the application of the Fair Labor Standards Act and other federal labor standards as employers increasingly use AI and other automated technologies in the workplace.
  • Released resources for job seekers, workers, and tech vendors and creators on how AI use could violate employment discrimination laws. The Equal Employment Opportunity Commission’s resources clarify that existing laws apply to the use of AI and other new technologies in employment just as they apply to other employment practices.
  • Issued guidance on AI’s nondiscriminatory use in the housing sector. In two guidance documents, the Department of Housing and Urban Development affirmed that existing prohibitions against discrimination apply to AI’s use for tenant screening and advertisement of housing opportunities, and it explained how deployers of AI tools can comply with these obligations.
  • Published guidance and principles that set guardrails for the responsible and equitable use of AI in administering public benefits programs. The Department of Agriculture’s guidance explains how State, local, Tribal, and territorial governments should manage risks for uses of AI and automated systems in benefits programs such as SNAP. The Department of Health and Human Services (HHS) released a plan with guidelines on similar topics for benefits programs it oversees. Both agencies’ documents prescribe actions that align with the Office of Management and Budget’s policies, published last month, for federal agencies to manage risks in their own use of AI and harness AI’s benefits.
  • Announced a final rule clarifying that nondiscrimination requirements in health programs and activities continue to apply to the use of AI, clinical algorithms, predictive analytics, and other tools. Specifically, the rule applies the nondiscrimination principles under Section 1557 of the Affordable Care Act to the use of patient care decision support tools in clinical care, and it requires those covered by the rule to take steps to identify and mitigate discrimination when they use AI and other forms of decision support tools for care.
  • Developed a strategy for ensuring the safety and effectiveness of AI deployed in the health care sector. The strategy outlines rigorous frameworks for AI testing and evaluation, and it outlines future actions for HHS to promote responsible AI development and deployment.


Harnessing AI for Good

President Biden’s Executive Order also directed work to seize AI’s enormous promise, including by advancing AI’s use for scientific research, deepening collaboration with the private sector, and piloting uses of AI. Over the past 180 days, agencies have done the following:

  • Announced DOE funding opportunities to support the application of AI for science, including energy-efficient AI algorithms and hardware.
  • Prepared convenings for the next several months with utilities, clean energy developers, data center owners and operators, and regulators in localities experiencing large load growth. Today, DOE announced new actions to assess the potential energy opportunities and challenges of AI, accelerate deployment of clean energy, and advance AI innovation to manage the growing energy demand of AI.
  • Launched pilots, partnerships, and new AI tools to address energy challenges and advance clean energy. For example, DOE is piloting AI tools to streamline permitting processes and improve siting for clean energy infrastructure, and it has developed other powerful AI tools with applications at the intersection of energy, science, and security. Today, DOE also published a report outlining opportunities AI brings to advance the clean energy economy and modernize the electric grid.
  • Initiated a sustained effort to analyze the potential risks that deployment of AI may pose to the grid. DOE has started the process of convening energy stakeholders and technical experts over the coming months to collaboratively assess potential risks to the grid, as well as ways in which AI could potentially strengthen grid resilience and our ability to respond to threats—building off a new public assessment.
  • Authored a report on AI’s role in advancing scientific research to help tackle major societal challenges, written by the President’s Council of Advisors on Science and Technology.


Bringing AI Talent into Government

The AI and Tech Talent Task Force has made substantial progress on hiring through the AI Talent Surge.
Since President Biden signed the E.O., federal agencies have hired over 150 AI and AI-enabling professionals and, along with the tech talent programs, are on track to hire hundreds by Summer 2024. Individuals hired thus far are already working on critical AI missions, such as informing efforts to use AI for permitting, advising on AI investments across the federal government, and writing policy for the use of AI in government.

  • The General Services Administration has onboarded a new cohort of Presidential Innovation Fellows (PIF) and also announced its first-ever PIF AI cohort starting this summer.
  • DHS has launched the DHS AI Corps, which will hire 50 AI professionals to build safe, responsible, and trustworthy AI to improve service delivery and homeland security.
  • The Office of Personnel Management has issued guidance on skills-based hiring to increase access to federal AI roles for individuals with non-traditional academic backgrounds.
  • For more on the AI Talent Surge’s progress, read its report to the President. To explore opportunities, visit Join the National AI Talent Surge

The table below summarizes many of the activities that federal agencies have completed in response to the Executive Order.

[Summary table images not reproduced]
 

ORDER_66

[quoting SupaDupaFresh's post above]

There you are always on bidens dikk...:mjlol:

Presidents enable genocides and kill people overseas like nothing through the arms of the CIA and others.. :what: So don't give me that bullshyt about they can't do it when they already BEEN doing it you goof... The president has a lot of fukking power and he doesn't need the rest of the govt to do it... you talking about civics class... that shyt doesn't add up in 2024...
 

Gritsngravy

I heard a YouTuber make a good point about AI

It’s not AI workers should be worried about, it’s the 7% interest rate that’s killing jobs
 

JLova

[quoting Gritsngravy's post above]

They should be worried about AI as well, but the Fed wants job losses.
 