New bipartisan bill would require labeling of AI-generated videos and audio

bnew

Veteran · Joined Nov 1, 2015 · Messages: 56,027 · Reputation: 8,229 · Daps: 157,702

New bipartisan bill would require labeling of AI-generated videos and audio

Politics Mar 21, 2024 5:32 PM EDT

WASHINGTON (AP) — Bipartisan legislation introduced in the House Thursday would require the identification and labeling of online images, videos and audio generated using artificial intelligence, the latest effort to rein in rapidly developing technologies that, if misused, could easily deceive and mislead.

So-called deepfakes created by artificial intelligence can be hard or even impossible to tell from the real thing. AI has already been used to mimic President Joe Biden’s voice, exploit the likenesses of celebrities and impersonate world leaders, prompting fears it could lead to greater misinformation, sexual exploitation, consumer scams and a widespread loss of trust.

Key provisions in the legislation would require AI developers to identify content created using their products with digital watermarks or metadata, similar to how photo metadata records the location, time and settings of a picture. Online platforms like TikTok, YouTube or Facebook would then be required to label the content in a way that would notify users. Final details of the proposed rules would be crafted by the National Institute of Standards and Technology, a small agency within the U.S. Department of Commerce.
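As a rough illustration of the metadata approach the bill describes, here is a minimal sketch of binding an "AI-generated" label to a file's contents so a platform could check that the label still matches the media it describes. The field names and scheme here are invented for illustration; the bill leaves the actual format to NIST.

```python
import hashlib
import json

def make_provenance_label(content: bytes, generator: str) -> str:
    """Build a hypothetical provenance manifest tied to the content's hash."""
    manifest = {
        "ai_generated": True,
        "generator": generator,  # hypothetical field: the tool that produced the media
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(manifest)

def verify_provenance_label(content: bytes, manifest_json: str) -> bool:
    """Check that a manifest still matches the media it claims to describe."""
    manifest = json.loads(manifest_json)
    return manifest.get("sha256") == hashlib.sha256(content).hexdigest()
```

Real provenance schemes such as C2PA Content Credentials work on a similar principle but add cryptographic signatures, so a label cannot simply be stripped off or re-attached to different content.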

Violators of the proposed rule would be subject to civil lawsuits.


“We’ve seen so many examples already, whether it’s voice manipulation or a video deepfake. I think the American people deserve to know whether something is a deepfake or not,” said Rep. Anna Eshoo, a Democrat who represents part of California’s Silicon Valley. Eshoo co-sponsored the bill with Republican Rep. Neal Dunn of Florida. “To me, the whole issue of deepfakes stands out like a sore thumb. It needs to be addressed, and in my view the sooner we do it the better.”

If passed, the bill would complement voluntary commitments by tech companies as well as an executive order on AI signed by Biden last fall that directed NIST and other federal agencies to set guidelines for AI products. That order also required AI developers to submit information about their product’s risks.

Eshoo’s bill is one of a few proposals put forward to address concerns about the risks posed by AI, worries shared by members of both parties. Many say they support regulation that would protect citizens while also ensuring that a rapidly growing field can continue to develop in ways that benefit a long list of industries like health care and education.

The bill will now be considered by lawmakers, who likely won’t be able to pass any meaningful rules for AI in time for them to take effect before the 2024 election.

“The rise of innovation in the world of artificial intelligence is exciting; however, it has potential to do some major harm if left in the wrong hands,” Dunn said in a statement announcing the legislation. Requiring the identification of deepfakes, he said, is a “simple safeguard” that would benefit consumers, children and national security.

Several organizations that have advocated for greater safeguards on AI said the bill introduced Thursday represented progress. So did some AI developers, like Margaret Mitchell, chief AI ethics scientist at Hugging Face, which has created a ChatGPT rival called Bloom. Mitchell said the bill’s focus on embedding identifiers in AI content — known as watermarking — will “help the public gain control over the role of generated content in our society.”

“We are entering a world where it is becoming unclear which content is created by AI systems, and impossible to know where different AI-generated content came from,” she said.
 

bnew

Key provisions in the legislation would require AI developers to identify content created using their products with digital watermarks or metadata, similar to how photo metadata records the location, time and settings of a picture. Online platforms like TikTok, YouTube or Facebook would then be required to label the content in a way that would notify users. Final details of the proposed rules would be crafted by the National Institute of Standards and Technology, a small agency within the U.S. Department of Commerce.


How would this work with open-source software, especially when the developers aren't located in the United States?
 

bnew

A new bill wants to reveal what's really inside AI training data


Rep. Adam Schiff's bill garnered support from several entertainment industry groups.

By Emilia David, a reporter who covers AI. Prior to joining The Verge, she covered the intersection between technology, finance, and the economy.

Apr 10, 2024, 1:35 PM EDT

Illustration: Cath Virginia / The Verge | Photos: Getty Images

A new bill would compel tech companies to disclose any copyrighted materials that are used to train their AI models.

The Generative AI Copyright Disclosure bill from Rep. Adam Schiff (D-CA) would require anyone making a training dataset for AI to submit reports on its contents to the Register of Copyrights. The reports would have to include a detailed summary of the copyrighted material in the dataset and, if the dataset is publicly available, its URL. The requirement would also extend to any subsequent changes made to the dataset.

Companies would have to submit a report “not later than 30 days” before the AI model that used the training dataset is released to the public. The bill would not apply retroactively to existing AI platforms unless changes are made to their training datasets after it becomes law.

Schiff’s bill hits on an issue artists, authors, and other creators have been complaining about since the rise of generative AI: that AI models are often trained on copyrighted material without permission. Copyright and AI have always been tricky to navigate, especially as the question of how much AI models change or mimic protected content has not been settled. Artists and authors have turned to lawsuits to assert their rights.

Developers of AI models claim their models are trained on publicly available data, but the sheer amount of information means they don’t know specifically which data is copyrighted. Companies have said any copyrighted materials fall under fair use. Meanwhile, many of these companies have begun offering legal cover to some customers if they find themselves sued for copyright infringement.

Schiff’s bill garnered support from industry groups like the Writers Guild of America (WGA), the Recording Industry Association of America (RIAA), the Directors Guild of America (DGA), the Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA), and the Authors Guild. Notably absent from the list of supporters is the Motion Picture Association (MPA), which normally backs moves to protect copyrighted work from piracy. (Disclosure: The Verge’s editorial staff is unionized with the Writers Guild of America, East.)

Other groups have sought to bring more transparency to training datasets. The group Fairly Trained wants to add labels to AI models if they prove they asked for permission to use copyrighted data.
 

bnew

McConnell opposes bill to ban use of deceptive AI to influence elections

BY ALEXANDER BOLTON - 05/15/24 11:41 AM ET

Minority Leader Mitch McConnell (R-Ky.) addresses reporters after the weekly policy luncheon on Tuesday, October 31, 2023. (Photo: Greg Nash)

Senate Republican Leader Mitch McConnell (R-Ky.) announced Wednesday he will oppose bipartisan legislation coming out of the Senate Rules Committee that would ban the use of artificial intelligence (AI) to create deceptive content about federal candidates to influence elections.

McConnell, a longtime opponent of campaign finance restrictions, warned that the bills coming out of the Rules Committee “would tamper” with what he called the “well-developed legal regime” for taking down false ads and “create new definitions that could reach well beyond deepfakes.”

He argued that if his colleagues on the Rules panel viewed a dozen political ads, they “would differ on which ones were intentionally misleading.”

“The core question we’re facing is whether or not politicians should have another tool to take down speech they don’t like,” he said. “But if the amendment before us extends this authority to unpaid political speech, then we’re also talking about an extension of speech regulation that has not happened in the 50 years of our modern campaign finance regime.”

The Protect Elections from Deceptive AI Act, which would ban the use of AI to create misleading content, is backed by Senate Rules Committee Chair Amy Klobuchar (D-Minn.) and Sens. Josh Hawley (R-Mo.), Chris Coons (D-Del.), Susan Collins (R-Maine), Michael Bennet (D-Colo.) and Pete Ricketts (R-Neb.).

But McConnell, citing testimony from Sen. Bill Hagerty (R-Tenn.), said the definitions in the bills to crack down on deepfakes are “nebulous, at best, and overly censorious if they’re applied most cynically.”

“They could wind up barring all manner of photos and videos as long as the ill-defined ‘reasonable person’ could deduce an alternative meaning from the content,” he said.

The Rules Committee also marked up Wednesday the AI Transparency in Elections Act, which would require disclaimers on political ads with images, audio or video generated by AI, and the Preparing Election Administrators for AI Act, which would require federal agencies to develop voluntary guidelines for election offices.

McConnell said the proposal to require new disclaimers could be used to regulate content, which he opposes.

“I also have concerns about the disclaimer provisions and their application. Our political disclaimer regime has for its entire history served a singular purpose: to help voters understand who is paying for or endorsing an advertisement. It has never been applied to political advertisements as a content regulation tool,” he said.

He urged his colleagues to spend more time on the issue to reach consensus and announced he would oppose the AI-related bills moving forward.

“Until Congress reaches a consensus understanding of what AI is acceptable and what is not, leading with our chin is not going to cut it in the domain of political speech. So I will oppose S. 2770 or S. 3875 at this time. And I would urge my colleagues to do the same,” he said.

All three bills cleared the Rules Committee.
 