AI Regulation - There have been multiple calls to regulate AI. It is too early to do so.

bnew
How should regulators think about "AI"?
 

bnew



AI cannot be patent 'inventor', UK Supreme Court rules in landmark case​

Reuters

December 20, 2023, 9:31 AM EST

[Illustration: the words "Artificial intelligence AI" with a miniature robot and toy hand, December 14, 2023. REUTERS/Dado Ruvic]

LONDON, Dec 20 (Reuters) - A U.S. computer scientist on Wednesday lost his bid to register patents over inventions created by his artificial intelligence system in a landmark case in Britain about whether AI can own patent rights.

Stephen Thaler wanted to be granted two patents in the UK for inventions he says were devised by his "creativity machine" called DABUS.

His attempt to register the patents was refused by the UK's Intellectual Property Office (IPO) on the grounds that the inventor must be a human or a company, rather than a machine.

Thaler appealed to the UK's Supreme Court, which on Wednesday unanimously rejected his appeal because, under UK patent law, "an inventor must be a natural person".

Judge David Kitchin said in the court's written ruling that the case was "not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable".

Thaler's lawyers said in a statement that the ruling "establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines and as a consequence wholly inadequate in supporting any industry that relies on AI in the development of new technologies".

'LEGITIMATE QUESTIONS'​

A spokesperson for the IPO welcomed the decision "and the clarification it gives as to the law as it stands in relation to the patenting of creations of artificial intelligence machines".

They added that there are "legitimate questions as to how the patent system and indeed intellectual property more broadly should handle such creations" and the government will keep this area of law under review.

Thaler earlier this year lost a similar bid in the United States, where the Supreme Court declined to hear a challenge to the U.S. Patent and Trademark Office's refusal to issue patents for inventions created by his AI system.

Giles Parsons, a partner at law firm Browne Jacobson, who was not involved in the case, said the UK Supreme Court's ruling was unsurprising.

"This decision will not, at the moment, have a significant effect on the patent system," he said. "That's because, for the time being, AI is a tool, not an agent.

"I do expect that will change in the medium term, but we can deal with that problem as it arises."

Rajvinder Jagdev, an intellectual property partner at Powell Gilbert, said the ruling followed similar decisions by courts in Europe, Australia and the U.S. and has "given certainty that inventors must be a natural person."

But he added: "The judgment does not preclude a person using an AI to devise an invention – in such a scenario, it would be possible to apply for a patent provided that person is identified as the inventor."

In a separate case last month, London's High Court ruled that artificial neural networks can attract patent protection under UK law.

Reporting by Sam Tobin; editing by Kylie MacLellan, Jason Neely and Louise Heavens
 

bnew

US companies and Chinese experts engaged in secret diplomacy on AI safety​

OpenAI, Anthropic and Cohere held back-channel talks with Chinese state-backed groups in Geneva


Madhumita Murgia in London


JANUARY 11 2024


Artificial intelligence companies OpenAI, Anthropic and Cohere have engaged in secret diplomacy with Chinese AI experts, amid shared concern about how the powerful technology may spread misinformation and threaten social cohesion.

According to multiple people with direct knowledge, two meetings took place in Geneva in July and October last year attended by scientists and policy experts from the North American AI groups, alongside representatives of Tsinghua University and other Chinese state-backed institutions.

Attendees said the talks allowed both sides to discuss the risks from the emerging technology and encourage investments in AI safety research. They added that the ultimate goal was to find a scientific path forward to safely develop more sophisticated AI technology.

“There is no way for us to set international standards around AI safety and alignment without agreement between this set of actors,” said one person present at the talks. “And if they agree, it makes it much easier to bring the others along.”

The previously unreported talks are a rare sign of Sino-US co-operation amid a race for supremacy between the two major powers in the area of cutting-edge technologies such as AI and quantum computing. Currently, Washington has blocked US exports of the high-performance chips made by the likes of Nvidia that are needed to develop sophisticated AI software.

But the topic of AI safety has become a point of common interest between developers of the technology across both countries, given the potential existential risks for humanity.

The Geneva meetings were arranged with the knowledge of the White House as well as that of UK and Chinese government officials, according to a negotiator present, who declined to be named. The White House declined to comment.

“China supports efforts to discuss AI governance and develop needful frameworks, norms and standards based on broad consensus,” said the Chinese embassy in the UK.

“China stands ready to carry out communication, exchange and practical co-operation with various parties on global AI governance, and ensure that AI develops in a way that advances human civilisation.”

The talks were convened by the Shaikh Group, a private mediation organisation that facilitates dialogue between key actors in regions of conflict, particularly in the Middle East.

“We saw an opportunity to bring together key US and Chinese actors working on AI. Our principal aim was to underscore the vulnerabilities, risks and opportunities attendant with the wide deployment of AI models that are shared across the globe,” said Salman Shaikh, the group’s chief executive.

“Recognising this fact can, in our view, become the bedrock for collaborative scientific work, ultimately leading to global standards around the safety of AI models.”

Those involved in the talks said Chinese AI companies such as ByteDance, Tencent and Baidu did not participate; while Google DeepMind was briefed on the details of the discussions, it did not attend.

During the talks, AI experts from both sides debated areas for engagement in technical co-operation, as well as more concrete policy proposals that fed into discussions around the UN Security Council meeting on AI in July 2023, and the UK’s AI summit in November last year.

The success of the meetings has led to plans for future discussions that will focus on scientific and technical proposals for how to align AI systems with the legal codes and the norms and values of each society, according to the negotiator present.

There have been growing calls for co-operation between leading powers to tackle the rise of AI.

In November, Chinese scientists working on artificial intelligence joined western academics to call for tighter controls on the technology, signing a statement that warned that advanced AI would pose an “existential risk to humanity” in the coming decades.

The group, which included Andrew Yao, one of China’s most prominent computer scientists, called for the creation of an international regulatory body, the mandatory registration and auditing of advanced AI systems, the inclusion of instant “shutdown” procedures, and for developers to spend 30 per cent of their research budget on AI safety.

OpenAI confirmed a member of its global affairs team attended the Geneva talks. Anthropic and Cohere declined to comment about their participation. Tsinghua University did not respond to a request for comment.

This article has been amended to make clear in the subheading that Anthropic, not Inflection, was involved in the Geneva talks
 

bnew

Congress Is Trying to Stop AI Nudes and Deepfake Scams Because Celebrities Are Mad​

Lawmakers are introducing new bills to protect famous actors and musicians from ‘AI Fraud’—and maybe the rest of us.


By Janus Rose

NEW YORK, US

January 16, 2024, 11:12am

[Image: TikTok]

If you’ve been on TikTok lately, you may have noticed weird videos of celebrities promoting extremely shady products, such as a robotic-sounding Taylor Swift promising viewers a free cookware set. All of these videos are scams created with generative AI—the latest example of how the technology is being used to create disturbing virtual clones of people without their consent.

Needless to say, this kind of thing has pissed off a lot of famous people. And now, Congress is proposing new legislation that aims to combat AI deepfakes—specifically when it comes to things like fake celebrity endorsements and non-consensual AI-generated nudes, which have become a problem online and in high schools. Despite the surging popularity of websites and apps designed to generate deepfakes, there's no comprehensive law on the books banning the creation of AI images.

The new bill, called the No AI FRAUD Act and introduced by Rep. María Elvira Salazar (R-FL) and Rep. Madeleine Dean (D-PA), would establish legal definitions for “likeness and voice rights,” effectively banning the use of AI deepfakes to nonconsensually mimic another person, living or dead. The draft bill proclaims that “every individual has a property right in their own likeness and voice,” and cites several recent incidents where people have been turned into weird AI robots. It specifically mentions recent viral videos that featured AI-generated songs mimicking the voices of pop artists like Justin Bieber, Bad Bunny, Drake, and The Weeknd.



The bill also specifically targets AI deepfake porn, saying that “any digital depiction or digital voice replica which includes child sexual abuse material, is sexually explicit, or includes intimate images” meets the definition of harm under the act.

The proposed Act is a companion to a similar bill in the Senate, called the Nurture Originals, Foster Art, and Keep Entertainment Safe Act ( NO FAKES Act), which was introduced last October in the aftermath of the viral deepfaked Drake song. The new bill was also introduced the same day as another measure proposed by lawmakers in Tennessee, called the Ensuring Likeness Voice and Image Security Act ( ELVIS Act).

Given that these bills seem to be a response to celebrities getting mad, either in whole or in part, the big question is whether or not they would in practice protect normal people—and not just the intellectual property rights of pop stars with multi-million dollar record deals.

“It’s really drafted with an eye toward the property rights that celebrities and recording artists have in their likeness and voices,” Carrie Goldberg, an attorney who specializes in deepfakes and other internet-based harassment, told Motherboard. “However, our legal system treats the intellectual property of celebrities differently than those of people not in the public eye.”

The most common example is paparazzi photos, Goldberg said. The law allows some redress for celebrities when their photos are taken without permission and used for commercial gain. But for the average person, the rights to their photos belong solely to the person who took them, and there’s not much they can do about someone reproducing their image for reasons other than profit—unless they have the money to spend on an expensive and often lengthy legal process.

“For normal people, when their image is exploited, it’s not usually for commercial gain but instead to embarrass or harass them; and the wrongdoer in these situations is rarely somebody who has the resources to make a lawsuit worthwhile for the victim,” said Goldberg.

The new bill states that everyone has a right to control their own voice and likeness against deepfakes, but the provisions for non-famous people depend heavily on the victim proving harm. Specifically, that means proving that the deepfake has resulted in "physical or financial injury," caused "severe emotional distress," or is sexually explicit in nature.

Of course, all of this is an attempt to regulate a symptom of a larger problem, which is that tech companies are building massive AI systems with data scraped from the internet and no robust mitigations against the harm they inevitably cause. In an ongoing lawsuit against ChatGPT creator OpenAI, the company recently argued that it shouldn’t be punished for training its AI models with illegal and copyrighted material because it’s “impossible” to create AI systems without doing so.

But the nature of black box AI systems built by companies like OpenAI, Microsoft, and Meta virtually guarantees that these bad things will happen. Recently, researchers found over 3,000 images of child sexual abuse material in a massive dataset used to train almost every major AI system on the market. Companies are also struggling to ensure that their generative AI systems will filter out illegal content, and deepfake porn has been found at the top of Google and Bing image search results. A major issue is that there are numerous apps made by smaller companies or individuals that are designed solely to create non-consensual AI nudes, which advertise their services on major social media platforms and are available on app stores.

Ultimately, says Goldberg, these problems won’t be fully addressed until the companies building these AI systems are held responsible.

“What our society really needs is to be attacking AI and deepfakes on a systemic level and going after the malicious products that are available on mainstream places like the AppStore and GooglePlay that are on the market solely to manipulate images,” said Goldberg. “We need to pressure search engines to not guide people to these products or promote sites that publish these images and we need to require that they make content removal simple for victims.”

 


bnew

Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says​


[Image: Lon Tweeten for TIME; Getty Images]

BY BILLY PERRIGO

MARCH 11, 2024 9:00 AM EDT

The U.S. government must move “quickly and decisively” to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an “extinction-level threat to the human species,” says a report commissioned by the U.S. government published on Monday.

“Current frontier AI development poses urgent and growing risks to national security,” the report, which TIME obtained ahead of its publication, says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.

The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies—like OpenAI, Google DeepMind, Anthropic and Meta— as part of their research. Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decisionmaking by the executives who control their companies.

The finished document, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power. The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI’s GPT-4 and Google’s Gemini. The new AI agency should require AI companies on the “frontier” of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds. Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward “alignment” research that seeks to make advanced AI safer, it recommends.
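To make the compute-threshold idea concrete, here is a minimal sketch (not taken from the Gladstone report) of how a planned training run might be checked against such a cap. It uses the common rule-of-thumb estimate of roughly 6 FLOPs per parameter per training token; the threshold value and the example model figures are illustrative assumptions only.

```python
# Illustrative sketch only: the Gladstone report does not specify a threshold.
# The 1e26 figure below is an assumption (roughly the reporting threshold in
# the October 2023 U.S. executive order on AI, used here purely as an example).

HYPOTHETICAL_THRESHOLD_FLOP = 1e26


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * parameters * training_tokens


def requires_permission(parameters: float, training_tokens: float,
                        threshold: float = HYPOTHETICAL_THRESHOLD_FLOP) -> bool:
    """True if a planned run exceeds the hypothetical cap and would need sign-off."""
    return estimated_training_flop(parameters, training_tokens) > threshold


if __name__ == "__main__":
    # Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
    params, tokens = 1e12, 2e13
    flop = estimated_training_flop(params, tokens)
    print(f"Estimated compute: {flop:.1e} FLOP; "
          f"needs permission: {requires_permission(params, tokens)}")
```

Any real scheme would of course hinge on how training compute is measured and verified in practice, which the report ties to hardware-level controls discussed further below.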

The report was commissioned by the State Department in November 2022 as part of a federal contract worth $250,000, according to public records. It was written by Gladstone AI, a four-person company that runs technical briefings on AI for government employees. (Parts of the action plan recommend that the government invest heavily in educating officials on the technical underpinnings of AI systems so they can better understand their risks.) The report was delivered as a 247-page document to the State Department on Feb. 26. The State Department did not respond to several requests for comment on the report. The recommendations “do not reflect the views of the United States Department of State or the United States Government,” the first page of the report says.

The report's recommendations, many of them previously unthinkable, follow a dizzying series of major developments in AI that have caused many observers to recalibrate their stance on the technology. The chatbot ChatGPT, released in November 2022, was the first time this pace of change became visible to society at large, leading many people to question whether future AIs might pose existential risks to humanity. New tools, with more capabilities, have continued to be released at a rapid clip since. As governments around the world discuss how best to regulate AI, the world’s biggest tech companies have fast been building out the infrastructure to train the next generation of more powerful systems—in some cases planning to use 10 or 100 times more computing power. Meanwhile, more than 80% of the American public believe AI could accidentally cause a catastrophic event, and 77% of voters believe the government should be doing more to regulate AI, according to recent polling by the AI Policy Institute.


Outlawing the training of advanced AI systems above a certain threshold, the report states, may “moderate race dynamics between all AI developers” and contribute to a reduction in the speed of the chip industry manufacturing faster hardware. Over time, a federal AI agency could raise the threshold and allow the training of more advanced AI systems once evidence of the safety of cutting-edge models is sufficiently proven, the report proposes. Equally, it says, the government could lower the safety threshold if dangerous capabilities are discovered in existing models.

The proposal is likely to face political difficulties. “I think that this recommendation is extremely unlikely to be adopted by the United States government,” says Greg Allen, director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS), in response to a summary TIME provided of the report’s recommendation to outlaw AI training runs above a certain threshold. Current U.S. government AI policy, he notes, is to set compute thresholds above which additional transparency monitoring and regulatory requirements apply, but not to set limits above which training runs would be illegal. “Absent some kind of exogenous shock, I think they are quite unlikely to change that approach,” Allen says.
 

bnew
{continued}



Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers, say that government officials who attended many of their earliest briefings agreed that the risks of AI were significant, but told them the responsibility for dealing with them fell to different teams or departments. In late 2021, the Harrises say, Gladstone finally found an arm of the government with the responsibility to address AI risks: the State Department’s Bureau of International Security and Nonproliferation. Teams within the Bureau have an inter-agency mandate to address risks from emerging technologies including chemical and biological weapons, and radiological and nuclear risks. Following briefings by Jeremie and Gladstone's then-CEO Mark Beall, in October 2022 the Bureau put out a tender for a report that could inform a decision on whether to add AI to the list of other risks it monitors. (The State Department did not respond to a request for comment on the outcome of that decision.) The Gladstone team won that contract, and the report released Monday is the outcome.

The report focuses on two separate categories of risk. Describing the first category, which it calls “weaponization risk,” the report states: “such systems could potentially be used to design and even execute catastrophic biological, chemical, or cyber attacks, or enable unprecedented weaponized applications in swarm robotics.” The second category is what the report calls the “loss of control” risk, or the possibility that advanced AI systems may outmaneuver their creators. There is, the report says, “reason to believe that they may be uncontrollable if they are developed using current techniques, and could behave adversarially to human beings by default.”

Both categories of risk, the report says, are exacerbated by “race dynamics” in the AI industry. The likelihood that the first company to achieve AGI will reap the majority of economic rewards, the report says, incentivizes companies to prioritize speed over safety. “Frontier AI labs face an intense and immediate incentive to scale their AI systems as fast as they can,” the report says. “They do not face an immediate incentive to invest in safety or security measures that do not deliver direct economic benefits, even though some do out of genuine concern.”

The Gladstone report identifies hardware—specifically the high-end computer chips currently used to train AI systems—as a significant bottleneck to increases in AI capabilities. Regulating the proliferation of this hardware, the report argues, may be the “most important requirement to safeguard long-term global safety and security from AI.” It says the government should explore tying chip export licenses to the presence of on-chip technologies allowing monitoring of whether chips are being used in large AI training runs, as a way of enforcing proposed rules against training AI systems larger than GPT-4. However the report also notes that any interventions will need to account for the possibility that overregulation could bolster foreign chip industries, eroding the U.S.’s ability to influence the supply chain.

The report also raises the possibility that, ultimately, the physical bounds of the universe may not be on the side of those attempting to prevent proliferation of advanced AI through chips. “As AI algorithms continue to improve, more AI capabilities become available for less total compute. Depending on how far this trend progresses, it could ultimately become impractical to mitigate advanced AI proliferation through compute concentrations at all.” To account for this possibility, the report says a new federal AI agency could explore blocking the publication of research that improves algorithmic efficiency, though it concedes this may harm the U.S. AI industry and ultimately be unfeasible.

The Harrises recognize in conversation that their recommendations will strike many in the AI industry as overly zealous. The recommendation to outlaw the open-sourcing of advanced AI model weights, they expect, will not be popular. “Open source is generally a wonderful phenomenon and overall massively positive for the world,” says Edouard, the chief technology officer of Gladstone. “It’s an extremely challenging recommendation to make, and we spent a lot of time looking for ways around suggesting measures like this.” Allen, the AI policy expert at CSIS, says he is sympathetic to the idea that open-source AI makes it more difficult for policymakers to get a handle on the risks. But he says any proposal to outlaw the open-sourcing of models above a certain size would need to contend with the fact that U.S. law has a limited reach. “Would that just mean that the open source community would move to Europe?” he says. “Given that it's a big world, you sort of have to take that into account.”

Despite the challenges, the report’s authors say they were swayed by how easy and cheap it currently is for users to remove safety guardrails on an AI model if they have access to its weights. “If you proliferate an open source model, even if it looks safe, it could still be dangerous down the road,” Edouard says, adding that the decision to open-source a model is irreversible. “At that point, good luck, all you can do is just take the damage.”

The third co-author of the report, former Defense Department official Beall, has since left Gladstone in order to start a super PAC aimed at advocating for AI policy. The PAC, called Americans for AI Safety, officially launched on Monday. It aims to make AI safety and security "a key issue in the 2024 elections, with a goal of passing AI safety legislation by the end of 2024," the group said in a statement to TIME. The PAC did not disclose its funding commitments, but said it has "set a goal of raising millions of dollars to accomplish its mission."

Before co-founding Gladstone with Beall, the Harris brothers ran an AI company that went through YCombinator, the famed Silicon Valley incubator, at the time when OpenAI CEO Sam Altman was at the helm. The pair brandish these credentials as evidence they have the industry’s interests at heart, even as their recommendations, if implemented, would upend it. “Move fast and break things, we love that philosophy, we grew up with that philosophy,” Jeremie tells TIME. But the credo, he says, ceases to apply when the potential downside of your actions is so massive. “Our default trajectory right now,” he says, “seems very much on course to create systems that are powerful enough that they either can be weaponized catastrophically, or fail to be controlled.” He adds: “One of the worst-case scenarios is you get a catastrophic event that completely shuts down AI research for everybody, and we don't get to reap the incredible benefits of this technology.”

Are you an employee at an AI lab and have concerns that you might consider sharing with a journalist? You can contact the author of this piece on Signal at billyperrigo.01
 

bnew

Posted Mar 13, 2024, 7:34 AM EDT

JESS WEATHERBED

The EU has officially adopted its sweeping AI law.

After two years of debate and revisions, European Parliament members gave the Artificial Intelligence Act their final approval on Wednesday.

While the law officially comes into force 20 days after it’s published in the Official Journal (likely happening in May), some rules — like those impacting general-purpose AI systems like chatbots — will take effect 12 months later to give AI providers time to comply.


Artificial Intelligence Act: MEPs adopt landmark law | News | European Parliament (www.europarl.europa.eu)

Artificial Intelligence Act: MEPs adopt landmark law​


  • Safeguards on general purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations

[Image caption: The untargeted scraping of facial images from CCTV footage to create facial recognition databases will be banned. © Alexander / Adobe Stock]

On Wednesday, Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation.

The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

Banned applications

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions

The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack. Using such systems post-facto (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation and a link to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

Transparency requirements

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

Quotes

During the plenary debate on Tuesday, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development”.

Civil Liberties Committee co-rapporteur Dragos Tudorache (Renew, Romania) said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.

Next steps

The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.

It will enter into force twenty days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practice (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months).
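As a rough illustration of that staggered schedule, the sketch below computes the application dates from an assumed entry-into-force date; the date is a placeholder, since the real one depends on publication in the Official Journal.

```python
# Illustrative sketch of the AI Act's staggered application schedule.
# The entry-into-force date is an assumption for demonstration only.
from datetime import date


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped to 28)."""
    month_index = d.month - 1 + months
    return date(d.year + month_index // 12, month_index % 12 + 1, min(d.day, 28))


entry_into_force = date(2024, 8, 1)  # placeholder, not the official date

milestones = {
    "Bans on prohibited practices": 6,
    "Codes of practice": 9,
    "General-purpose AI rules (incl. governance)": 12,
    "Fully applicable (general rule)": 24,
    "Obligations for high-risk systems": 36,
}

for name, offset in milestones.items():
    print(f"{name}: {add_months(entry_into_force, offset)}")
```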

Background

The Artificial Intelligence Act responds directly to citizens’ proposals from the Conference on the Future of Europe (COFE), most concretely to proposal 12(10) on enhancing EU’s competitiveness in strategic sectors, proposal 33(5) on a safe and trustworthy society, including countering disinformation and ensuring humans are ultimately in control, proposal 35 on promoting digital innovation, (3) while ensuring human oversight and (8) trustworthy and responsible use of AI, setting safeguards and ensuring transparency, and proposal 37 (3) on using AI and digital tools to improve citizens’ access to information, including persons with disabilities.
 

bnew

A new bill wants to reveal what’s really inside AI training data​


Rep. Adam Schiff’s bill garnered support from several entertainment industry groups.​

By Emilia David, a reporter who covers AI. Prior to joining The Verge, she covered the intersection between technology, finance, and the economy.

Apr 10, 2024, 1:35 PM EDT

[Illustration: Cath Virginia / The Verge | Photos: Getty Images]

A new bill would compel tech companies to disclose any copyrighted materials that are used to train their AI models.

The Generative AI Copyright Disclosure bill from Rep. Adam Schiff (D-CA) would require anyone making a training dataset for AI to submit reports on its contents to the Register of Copyrights. The reports should include a detailed summary of the copyrighted material in the dataset and the URL for the dataset if it’s publicly available. The requirement would also extend to any subsequent changes made to the dataset.

Companies must submit a report “not later than 30 days” before the AI model that used the training dataset is released to the public. The bill would not apply retroactively to existing AI platforms unless changes are made to their training datasets after it becomes law.
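Purely as an illustration of what such a filing might contain, here is a hypothetical report structure; the bill does not prescribe a format, and every field name below is an assumption.

```python
# Hypothetical structure for a training-data disclosure under the proposed
# Generative AI Copyright Disclosure bill. The bill does not define a schema;
# all field names here are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class TrainingDataDisclosure:
    model_name: str
    submitter: str
    planned_release_date: date            # report due no later than 30 days before this
    dataset_url: str | None               # included only if the dataset is public
    copyrighted_works_summary: list[str] = field(default_factory=list)
    amends_previous_filing: bool = False   # updates required when the dataset changes


report = TrainingDataDisclosure(
    model_name="ExampleModel-1",
    submitter="Example AI Corp",
    planned_release_date=date(2025, 6, 1),
    dataset_url="https://example.com/dataset",
    copyrighted_works_summary=["Novel X (Author A)", "Song Y (Artist B)"],
)

print(json.dumps(asdict(report), default=str, indent=2))
```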

Schiff’s bill hits on an issue artists, authors, and other creators have been complaining about since the rise of generative AI: that AI models are often trained on copyrighted material without permission. Copyright and AI have always been tricky to navigate, especially as the question of how much AI models change or mimic protected content has not been settled. Artists and authors have turned to lawsuits to assert their rights.

Developers of AI models claim their models are trained on publicly available data, but the sheer amount of information means they don’t know specifically which data is copyrighted. Companies have said any copyrighted materials fall under fair use. Meanwhile, many of these companies have begun offering legal cover to some customers if they find themselves sued for copyright infringement.

Schiff’s bill garnered support from industry groups like the Writers Guild of America (WGA), the Recording Industry Association of America (RIAA), the Directors Guild of America (DGA), the Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA), and the Authors Guild. Notably absent from the list of supporters is the Motion Picture Association (MPA), which normally backs moves to protect copyrighted work from piracy. (Disclosure: The Verge’s editorial staff is unionized with the Writers Guild of America, East.)

Other groups have sought to bring more transparency to training datasets. The group Fairly Trained wants to add labels to AI models if they prove they asked for permission to use copyrighted data.
 

bnew

UK’s antitrust enforcer sounds the alarm over Big Tech’s grip on GenAI​

Natasha Lomas @riptari / 1:21 PM EDT, April 11, 2024

[Image: Microsoft CEO Satya Nadella speaking at the OpenAI DevDay event in San Francisco, November 6, 2023. Credit: Justin Sullivan / Getty Images]

The U.K.’s competition watchdog, Competition and Markets Authority (CMA), has sounded a warning over Big Tech’s entrenching grip on the advanced AI market, with CEO Sarah Cardell expressing “real concerns” over how the sector is developing.

In an Update Paper on foundational AI models published Thursday, the CMA cautioned over increasing interconnection and concentration between developers in the cutting-edge tech sector responsible for the boom in generative AI tools.

The CMA’s paper points to the recurring presence of Google, Amazon, Microsoft, Meta and Apple (aka GAMMA) across the AI value chain: compute, data, model development, partnerships, release and distribution platforms. And while the regulator also emphasized that it recognizes that partnership arrangements “can play a pro-competitive role in the technology ecosystem,” it coupled that with a warning that “powerful partnerships and integrated firms” can pose risks to competition that run counter to open markets.

[Diagram: GAMMA's presence across the AI value chain. Image credits: CMA Foundation Models Update Paper]

“We are concerned that the FM [foundational model] sector is developing in ways that risk negative market outcomes,” the CMA wrote, referencing a type of AI that’s developed with large amounts of data and compute power and may be used to underpin a variety of applications.

“In particular, the growing presence across the FM value chain of a small number of incumbent technology firms, which already hold positions of market power in many of today’s most important digital markets, could profoundly shape FM-related markets to the detriment of fair, open and effective competition, ultimately harming businesses and consumers, for example by reducing choice and quality, and by raising prices,” it warned.

The CMA undertook an initial review of the top end of the AI market last May and went on to publish a set of principles for “responsible” generative AI development that it said would guide its oversight of the fast-moving market. That said, Will Hayter, senior director of the CMA’s Digital Markets Unit, told TechCrunch last fall that it was not in a rush to regulate advanced AI because it wanted to give the market a chance to develop.

Since then, the watchdog has stepped in to scrutinize the cozy relationship between OpenAI, the developer behind the viral AI chatbot ChatGPT, and Microsoft, a major investor in OpenAI. Its update paper remarks on the giddy pace of change in the market. For example, it flagged research by the U.K.’s internet regulator, Ofcom, in a report last year that found 31% of adults and 79% of 13- to 17-year-olds in the U.K. have used a generative AI tool, such as ChatGPT, Snapchat My AI or Bing Chat (aka Copilot). So there are signs the CMA is revising its initial chillaxed position on the GenAI market amid the commercial “whirlwind” sucking up compute, data and talent.

Its Update Paper identifies three “key interlinked risks to fair, effective, and open competition,” as it puts it, which the omnipresence of GAMMA speaks to: (1) Firms controlling “critical inputs” for developing foundational models (known as general-purpose AI models), which might allow them to restrict access and build a moat against competition; (2) tech giants’ ability to exploit dominant positions in consumer- or business-facing markets to distort choice for GenAI services and restrict competition in deployment of these tools; and (3) partnerships involving key players, which the CMA says “could exacerbate existing positions of market power through the value chain.”

[Diagram: GAMMA GenAI development relationships. Image credits: CMA]

In a speech delivered Thursday in Washington, D.C., at a legal event focused on generative AI, Cardell pointed to the “winner-take-all dynamics” seen in earlier web dev eras, when Big Tech built and entrenched their Web 2.0 empires while regulators sat on their heels. She said it’s important that competition enforcers don’t repeat the same mistakes with this next generation of digital development.

“The benefits we wish to see flowing from [advanced AI], for businesses and consumers, in terms of quality, choice and price, and the very best innovations, are much more likely in a world where those firms are themselves subject to fair, open and effective competition, rather than one where they are simply able to leverage foundation models to further entrench and extend their existing positions of power in digital markets,” she said, adding: “So we believe it is important to act now to ensure that a small number of firms with unprecedented market power don’t end up in a position to control not just how the most powerful models are designed and built, but also how they are embedded and used across all parts of our economy and our lives.”

How is the CMA going to intervene at the top end of the AI market? It does not have concrete measures to announce, as yet, but Cardell said it’s closely tracking GAMMA’s partnerships and stepping up its use of merger review to see whether any of these arrangements fall within existing merger rules.

That would unlock formal powers of investigation, and even the ability to block connections it deems anti-competitive. But for now the CMA has not gone that far, despite clear and growing concerns about cozy GAMMA GenAI ties. Its review of the links between OpenAI and Microsoft — for example, to determine whether the partnership constitutes a “relevant merger situation” — continues.

“Some of these arrangements are quite complex and opaque, meaning we may not have sufficient information to assess this risk without using our merger control powers to build that understanding,” Cardell also told the audience, explaining the challenges of trying to understand the power dynamics of the AI market without unlocking formal merger review powers. “It may be that some arrangements falling outside the merger rules are problematic, even if not ultimately remediable through merger control. They may even have been structured by the parties to seek to avoid the scope of merger rules. Equally some arrangements may not give rise to competition concerns.”

“By stepping up our merger review, we hope to gain more clarity over which types of partnerships and arrangements may fall within the merger rules, and under what circumstances competition concerns may arise — and that clarity will also benefit the businesses themselves,” she added.

The CMA’s Update Paper sets out some “indicative factors,” which Cardell said may trigger greater concern about and attention to FM partnerships, such as the upstream power of the partners over AI inputs, and their downstream power over distribution channels. She also said the watchdog will be looking closely at the nature of each partnership and the level of “influence and alignment of incentives” between partners.

Meanwhile, the U.K. regulator is urging AI giants to follow the seven development principles it set out last fall to steer market developments onto responsible rails where competition and consumer protection are baked in. (The short version of what it wants to see is: accountability, access, diversity, choice, flexibility, fair dealing, and transparency.)

“We’re committed to applying the principles we have developed and to using all legal powers at our disposal — now and in the future — to ensure that this transformational and structurally critical technology delivers on its promise,” Cardell said in a statement.
 


Professor Emeritus
"We took too long to regulate that shyt" vs. "We started regulating that shyt too early."


The first line seems 100x more common in corporate history than the second line does.
 

bnew




1/4
I see that Dan Hendrycks and the Center for AI Safety have put their new textbook on AI regulation online. The chapter on international governance (8.7) contains the usual aspirational thinking about treaties to regulate AI & computation globally (all the while ignoring how to get China, Russia, or anyone else to go along).

But their list of possible regulatory strategies also includes an apparent desire for a global AI monopoly as an easier instrument of regulatory control. This is absolutely dangerous thinking that must be rejected.
8.7: International Governance | AI Safety, Ethics, and Society Textbook

2/4
Equally problematic are the Center for AI Safety's proposals to use the Biological Weapons Convention of 1972 as a model for global AI regulatory coordination. That would be a terrible model.

When the US and the Soviet Union signed on to the agreement in 1972, it was hailed as a

3/4
I wrote about these and other problematic ideas for global AI control in my big
@RSI white paper on "Existential Risks and Global Governance Issues Around AI and Robotics."

4/4
Last year, I debated these proposals for global AI regulation with Dan Hendrycks and the Center for AI Safety at a September Brookings event. My remarks begin around the 51:00 mark.



 

bnew