Behind the Compute: Benchmarking Compute Solutions​

11 Mar

Behind the Compute is a series of blog posts that chronicle elements of our business, offering insights for others to harness the power of generative AI.


In our last installment, we spoke about how we plan to utilize our state-of-the-art AI Supercomputer.

In this installment, we delve deeper into performance benchmarks and benefits of various compute solutions.

Our commitment to developing cutting-edge open models in multiple modalities necessitates a compute solution capable of handling diverse tasks with efficiency. To this end, we conducted a performance analysis, training two of our models, including the highly anticipated Stable Diffusion 3.

In our analysis, we compared the training speed of Intel Gaudi 2 accelerators versus Nvidia's A100 and H100, two of the most common choices for startups and developers training LLMs.

Model 1:

Stable Diffusion 3 is our most capable text-to-image model, soon to be in early preview.

Upon public release of Stable Diffusion 3, it will be available in sizes ranging from 800M to 8B parameters. Our analysis utilized the 2B parameter version and showed pleasantly surprising results.

We measured the training throughput for the 2B Multimodal Diffusion Transformer (MMDiT) architecture model with d=24, BFloat16 mixed precision, and optimized attention (xFormers for the A100 and FusedSDPA for Intel Gaudi). We call this model version MMDiT-ps2-d24.
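
For anyone who wants to sanity-check throughput numbers like these on their own hardware, here is a minimal sketch of the kind of timing loop involved. It is not Stability AI's benchmark code: the toy model, tensor shapes, and step count are placeholder assumptions, and PyTorch's built-in scaled_dot_product_attention stands in for the xFormers/FusedSDPA kernels mentioned above. It simply counts images processed per second of BFloat16 mixed-precision training.

import time
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyAttentionBlock(nn.Module):
    # Stand-in transformer block; the real MMDiT-ps2-d24 is far larger and more elaborate.
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, self.heads, d // self.heads).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)  # optimized/fused attention path
        return self.proj(out.transpose(1, 2).reshape(b, n, d))

def images_per_second(model, batch, steps=20):
    # batch: (images, tokens, dim) dummy latent tokens; assumes a CUDA device is available
    model.to("cuda")
    batch = batch.to("cuda")
    opt = torch.optim.AdamW(model.parameters())
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(steps):
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            loss = model(batch).float().pow(2).mean()  # dummy loss in place of the diffusion objective
        loss.backward()
        opt.step()
        opt.zero_grad(set_to_none=True)
    torch.cuda.synchronize()
    return steps * batch.shape[0] / (time.time() - start)

print(images_per_second(ToyAttentionBlock(), torch.randn(16, 256, 512)))  # e.g. a batch of 16 per accelerator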

First, let’s examine our training benchmark results across 2 nodes, a total of 16 accelerators (Gaudi/GPU). Here’s an excerpt of the raw data:

[Image: graphic_1.png, an excerpt of the raw benchmark data for the 2-node, 16-accelerator run]

Keeping the batch size constant at 16 per accelerator, this Gaudi 2 system processed 927 training images per second, 1.5 times faster than the H100-80GB. Even better, we were able to fit a batch size of 32 per accelerator in the Gaudi 2's 96 GB of High Bandwidth Memory (HBM2E), further increasing the training rate to 1,254 images/sec.
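
As a quick sanity check of the quoted figures (the numbers come from the post; only the arithmetic below is added):

gaudi2_bs16 = 927                 # images/sec at batch size 16 per accelerator
print(gaudi2_bs16 / 1.5)          # implied H100-80GB rate: roughly 618 images/sec

gaudi2_bs32 = 1254                # images/sec after raising the per-accelerator batch to 32
print(gaudi2_bs32 / gaudi2_bs16)  # about 1.35x gained from the larger batch alone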

As we scaled up the distributed training to 32 Gaudi 2 nodes (a total of 256 accelerators), we continued to measure very competitive performance:

[Image: graphic_2.png, raw benchmark data for the 32-node, 256-accelerator run]

In this configuration, the Gaudi 2 cluster processed over 3x more images per second than A100-80GB GPUs. This is particularly impressive considering that the A100s benefit from a highly optimized software stack.

On inference tests with the Stable Diffusion 3 8B parameter model, the Gaudi 2 chips offer inference speed similar to Nvidia A100 chips using base PyTorch. However, with TensorRT optimization, the A100 chips produce images 40% faster than Gaudi 2. We anticipate that with further optimization, Gaudi 2 will soon outperform A100s on this model. In earlier tests on our SDXL model with base PyTorch, Gaudi 2 generates a 1024x1024 image in 30 steps in 3.2 seconds, versus 3.6 seconds for PyTorch on an A100 and 2.7 seconds for TensorRT on an A100.
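
The SDXL timing quoted above can be roughly reproduced with the public SDXL checkpoint and the diffusers library. This is a sketch under assumptions, not the harness used for the numbers in this post, and absolute times will vary with hardware, drivers, and library versions.

import time
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

_ = pipe("warmup", num_inference_steps=2)  # run once so the timed generation is representative

start = time.time()
image = pipe("a photo of an astronaut riding a horse",
             num_inference_steps=30, height=1024, width=1024).images[0]
print(f"{time.time() - start:.2f} s for 30 steps at 1024x1024")  # compare with the ~2.7-3.6 s figures above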

The higher memory and fast interconnect of Gaudi 2, plus other design considerations, make it competitive to run the Diffusion Transformer architecture that underpins this next generation of media models.

Model 2:

Stable Beluga 2.5 70B is our fine-tuned version of LLaMA 2 70B, building on the Stable Beluga 2 model, which was the first open model to best ChatGPT 3.5 on select benchmarks. We ran this training benchmark on 256 Gaudi 2 accelerators. Running our PyTorch code out of the box, with no extra optimizations, we measured an impressive total average throughput of 116,777 tokens/second. More specifically, this involves an FP16 datatype, a global batch size of 1024, gradient accumulation over 2 steps, and a micro batch size of 2.
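
Those settings are internally consistent: the global batch size follows from the per-device configuration and accelerator count, and a per-accelerator throughput can be derived from the total (the derived figure below is simple arithmetic, not a number quoted in the post):

micro_batch, grad_accum_steps, accelerators = 2, 2, 256
print(micro_batch * grad_accum_steps * accelerators)  # 1024, the quoted global batch size

total_tokens_per_sec = 116_777
print(total_tokens_per_sec / accelerators)            # roughly 456 tokens/sec per accelerator during training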

On inference tests with our 70B language model on Gaudi 2, it generates 673 tokens/second per accelerator, using an input token size of 128 and an output token size of 2048. Compared to TensorRT-LLM on the A100, which delivered 525 tokens/second, Gaudi 2 appears to be 28% faster. We also anticipate further speed improvements with FP8.
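
The speedup claim checks out arithmetically:

gaudi2_tps = 673       # tokens/sec per Gaudi 2 accelerator
a100_trtllm_tps = 525  # tokens/sec quoted for the A100 with TensorRT-LLM
print((gaudi2_tps / a100_trtllm_tps - 1) * 100)  # about 28.2% faster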

Companies like ours face an increasing demand for more powerful and efficient computing solutions. Our findings underscore the need for alternatives like the Gaudi 2, which not only offers superior performance to other 7nm chips, but also addresses critical market needs such as affordability, reduced lead times, and superior price-to-performance ratios. Ultimately, the opportunity for choice in computing options broadens participation and innovation, thereby making advanced AI technologies more accessible to all.

Stay tuned for more insights in our next installment of "Behind the Compute."
 

1/8
Today we're excited to introduce Devin, the first AI software engineer.

Devin is the new state-of-the-art on the SWE-Bench coding benchmark, has successfully passed practical engineering interviews from leading AI companies, and has even completed real jobs on Upwork.

Devin is an autonomous agent that solves engineering tasks through the use of its own shell, code editor, and web browser.

When evaluated on the SWE-Bench benchmark, which asks an AI to resolve GitHub issues found in real-world open-source projects, Devin correctly resolves 13.86% of the issues unassisted, far exceeding the previous state-of-the-art model performance of 1.96% unassisted and 4.80% assisted.

Check out what Devin can do in the thread below.


3/8
1/4 Devin can learn how to use unfamiliar technologies.

4/8
2/4 Devin can contribute to mature production repositories.

5/8
3/4 Devin can train and fine tune its own AI models.

6/8
4/4 We even tried giving Devin real jobs on Upwork and it could do those too!

7/8
For more details on Devin, check out our blog post here: See Devin in action
If you have any project ideas, drop them below and we'll forward them to Devin.

8/8
We'd like to thank all our supporters who have helped us get to where we are today, including @patrickc, @collision, @eladgil, @saranormous, Chris Re, @eglyman, @karimatiyeh, @bernhardsson, @t_xu, @FEhrsam, @foundersfund, and many more.

If you’re excited to solve some of the…


Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says​




Lon Tweeten for TIME; Getty Images

BY BILLY PERRIGO

MARCH 11, 2024 9:00 AM EDT

The U.S. government must move “quickly and decisively” to avert substantial national security risks stemming from artificial intelligence (AI), which could, in the worst case, cause an “extinction-level threat to the human species,” says a report commissioned by the U.S. government and published on Monday.

“Current frontier AI development poses urgent and growing risks to national security,” the report, which TIME obtained ahead of its publication, says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.

The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies—like OpenAI, Google DeepMind, Anthropic and Meta— as part of their research. Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decisionmaking by the executives who control their companies.

The finished document, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power. The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI’s GPT-4 and Google’s Gemini. The new AI agency should require AI companies on the “frontier” of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds. Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward “alignment” research that seeks to make advanced AI safer, it recommends.

The report was commissioned by the State Department in November 2022 as part of a federal contract worth $250,000, according to public records. It was written by Gladstone AI, a four-person company that runs technical briefings on AI for government employees. (Parts of the action plan recommend that the government invest heavily in educating officials on the technical underpinnings of AI systems so they can better understand their risks.) The report was delivered as a 247-page document to the State Department on Feb. 26. The State Department did not respond to several requests for comment on the report. The recommendations “do not reflect the views of the United States Department of State or the United States Government,” the first page of the report says.

The report's recommendations, many of them previously unthinkable, follow a dizzying series of major developments in AI that have caused many observers to recalibrate their stance on the technology. The chatbot ChatGPT, released in November 2022, was the first time this pace of change became visible to society at large, leading many people to question whether future AIs might pose existential risks to humanity. New tools, with more capabilities, have continued to be released at a rapid clip since. As governments around the world discuss how best to regulate AI, the world’s biggest tech companies have fast been building out the infrastructure to train the next generation of more powerful systems—in some cases planning to use 10 or 100 times more computing power. Meanwhile, more than 80% of the American public believe AI could accidentally cause a catastrophic event, and 77% of voters believe the government should be doing more to regulate AI, according to recent polling by the AI Policy Institute.


Outlawing the training of advanced AI systems above a certain threshold, the report states, may “moderate race dynamics between all AI developers” and contribute to a reduction in the speed of the chip industry manufacturing faster hardware. Over time, a federal AI agency could raise the threshold and allow the training of more advanced AI systems once evidence of the safety of cutting-edge models is sufficiently proven, the report proposes. Equally, it says, the government could lower the safety threshold if dangerous capabilities are discovered in existing models.

The proposal is likely to face political difficulties. “I think that this recommendation is extremely unlikely to be adopted by the United States government,” says Greg Allen, director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS), in response to a summary TIME provided of the report’s recommendation to outlaw AI training runs above a certain threshold. Current U.S. government AI policy, he notes, is to set compute thresholds above which additional transparency monitoring and regulatory requirements apply, but not to set limits above which training runs would be illegal. “Absent some kind of exogenous shock, I think they are quite unlikely to change that approach,” Allen says.
 

{continued}



Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers, say that government officials who attended many of their earliest briefings agreed that the risks of AI were significant, but told them the responsibility for dealing with them fell to different teams or departments. In late 2021, the Harrises say, Gladstone finally found an arm of the government with the responsibility to address AI risks: the State Department’s Bureau of International Security and Nonproliferation. Teams within the Bureau have an inter-agency mandate to address risks from emerging technologies including chemical and biological weapons, and radiological and nuclear risks. Following briefings by Jeremie and Gladstone's then-CEO Mark Beall, in October 2022 the Bureau put out a tender for a report that could inform a decision on whether to add AI to the list of other risks it monitors. (The State Department did not respond to a request for comment on the outcome of that decision.) The Gladstone team won that contract, and the report released Monday is the outcome.

The report focuses on two separate categories of risk. Describing the first category, which it calls “weaponization risk,” the report states: “such systems could potentially be used to design and even execute catastrophic biological, chemical, or cyber attacks, or enable unprecedented weaponized applications in swarm robotics.” The second category is what the report calls the “loss of control” risk, or the possibility that advanced AI systems may outmaneuver their creators. There is, the report says, “reason to believe that they may be uncontrollable if they are developed using current techniques, and could behave adversarially to human beings by default.”

Both categories of risk, the report says, are exacerbated by “race dynamics” in the AI industry. The likelihood that the first company to achieve AGI will reap the majority of economic rewards, the report says, incentivizes companies to prioritize speed over safety. “Frontier AI labs face an intense and immediate incentive to scale their AI systems as fast as they can,” the report says. “They do not face an immediate incentive to invest in safety or security measures that do not deliver direct economic benefits, even though some do out of genuine concern.”

The Gladstone report identifies hardware—specifically the high-end computer chips currently used to train AI systems—as a significant bottleneck to increases in AI capabilities. Regulating the proliferation of this hardware, the report argues, may be the “most important requirement to safeguard long-term global safety and security from AI.” It says the government should explore tying chip export licenses to the presence of on-chip technologies allowing monitoring of whether chips are being used in large AI training runs, as a way of enforcing proposed rules against training AI systems larger than GPT-4. However the report also notes that any interventions will need to account for the possibility that overregulation could bolster foreign chip industries, eroding the U.S.’s ability to influence the supply chain.

The report also raises the possibility that, ultimately, the physical bounds of the universe may not be on the side of those attempting to prevent proliferation of advanced AI through chips. “As AI algorithms continue to improve, more AI capabilities become available for less total compute. Depending on how far this trend progresses, it could ultimately become impractical to mitigate advanced AI proliferation through compute concentrations at all.” To account for this possibility, the report says a new federal AI agency could explore blocking the publication of research that improves algorithmic efficiency, though it concedes this may harm the U.S. AI industry and ultimately be unfeasible.

The Harrises recognize in conversation that their recommendations will strike many in the AI industry as overly zealous. The recommendation to outlaw the open-sourcing of advanced AI model weights, they expect, will not be popular. “Open source is generally a wonderful phenomenon and overall massively positive for the world,” says Edouard, the chief technology officer of Gladstone. “It’s an extremely challenging recommendation to make, and we spent a lot of time looking for ways around suggesting measures like this.” Allen, the AI policy expert at CSIS, says he is sympathetic to the idea that open-source AI makes it more difficult for policymakers to get a handle on the risks. But he says any proposal to outlaw the open-sourcing of models above a certain size would need to contend with the fact that U.S. law has a limited reach. “Would that just mean that the open source community would move to Europe?” he says. “Given that it's a big world, you sort of have to take that into account.”

Despite the challenges, the report’s authors say they were swayed by how easy and cheap it currently is for users to remove safety guardrails on an AI model if they have access to its weights. “If you proliferate an open source model, even if it looks safe, it could still be dangerous down the road,” Edouard says, adding that the decision to open-source a model is irreversible. “At that point, good luck, all you can do is just take the damage.”

The third co-author of the report, former Defense Department official Beall, has since left Gladstone in order to start a super PAC aimed at advocating for AI policy. The PAC, called Americans for AI Safety, officially launched on Monday. It aims to make AI safety and security "a key issue in the 2024 elections, with a goal of passing AI safety legislation by the end of 2024," the group said in a statement to TIME. The PAC did not disclose its funding commitments, but said it has "set a goal of raising millions of dollars to accomplish its mission."

Before co-founding Gladstone with Beall, the Harris brothers ran an AI company that went through YCombinator, the famed Silicon Valley incubator, at the time when OpenAI CEO Sam Altman was at the helm. The pair brandish these credentials as evidence they have the industry’s interests at heart, even as their recommendations, if implemented, would upend it. “Move fast and break things, we love that philosophy, we grew up with that philosophy,” Jeremie tells TIME. But the credo, he says, ceases to apply when the potential downside of your actions is so massive. “Our default trajectory right now,” he says, “seems very much on course to create systems that are powerful enough that they either can be weaponized catastrophically, or fail to be controlled.” He adds: “One of the worst-case scenarios is you get a catastrophic event that completely shuts down AI research for everybody, and we don't get to reap the incredible benefits of this technology.”

Are you an employee at an AI lab and have concerns that you might consider sharing with a journalist? You can contact the author of this piece on Signal at billyperrigo.01
 


Midjourney bans all Stability AI employees over alleged data scraping​


Midjourney blamed a near 24-hour service outage on ‘botnet-like activity’ from two accounts linked to the Stable Diffusion creator.​


By Jess Weatherbed, a news writer focused on creative industries, computing, and internet culture. Jess started her career at TechRadar, covering news and hardware reviews.

Mar 11, 2024, 3:22 PM EDT




Stability.AI CEO Emad Mostaque is currently investigating the situation.


Illustration: Beatrice Sala

Midjourney says it has banned Stability AI staffers from using its service, accusing employees at the rival generative AI company of causing a systems outage earlier this month during an attempt to scrape Midjourney’s data.

Midjourney posted an update to its Discord server on March 2nd that acknowledged an extended server outage was preventing generated images from appearing in user galleries. In a summary of a business update call on March 6th, Midjourney claimed that “botnet-like activity from paid accounts” — which the company specifically links to Stability AI employees — was behind the outage.



These meeting notes were posted to Midjourney’s official Discord channel following an “office hours” call on March 6th.
Image: Midjourney / Discord

According to Midjourney user Nick St. Pierre on X, who listened to the call, Midjourney said that the service was brought down because “someone at Stability AI was trying to grab all the prompt and image pairs in the middle of a night on Saturday.” St. Pierre said that Midjourney had linked multiple paid accounts to an individual on the Stability AI data team.

In its summary of the business update call on March 6th (which Midjourney refers to as “office hours”), the company says it’s banning all Stability AI employees from using its service “indefinitely” in response to the outage. Midjourney is also introducing a new policy that will similarly ban employees of any company that exercises “aggressive automation” or causes outages to the service.



St. Pierre flagged the accusations to Stability AI CEO Emad Mostaque, who replied on X, saying he was investigating the situation and that Stability hadn’t ordered the actions in question. “Very confusing how 2 accounts would do this team also hasn’t been scraping as we have been using synthetic & other data given SD3 outperforms all other models,” said Mostaque, referring to the Stable Diffusion 3 AI model currently in preview. He claimed that if the outage was caused by a Stability employee, then it was unintentional and “obviously not a DDoS attack.”

Midjourney founder David Holz responded to Mostaque in the same thread, claiming to have sent him “some information” to help with his internal investigation.

The situation is otherwise still developing, and no additional updates have been provided since that conversation on March 6th. At the time of writing, neither Midjourney nor Stability AI has responded to The Verge’s request for comment.

It does seem odd that scraping activity from just two accounts allegedly managed to cause such an extended server outage. The irony of this situation also hasn’t been lost on online creatives, who have extensively criticized both companies (and generative AI systems in general) for training their models on masses of online data scraped from their works without consent. Stable Diffusion and Midjourney have both been targeted with several copyright lawsuits, with the latter being accused of creating an artist database for training purposes in December.
 


Midjourney is testing a highly requested “consistent characters” feature. The generative AI service’s new algorithm can now use the same character across multiple images and styles without deviating too far from their original design.
Instructions on how to use “consistent characters” can be found on Midjourney’s Discord channel. The feature isn’t designed to replicate real people from photographs, and works best on characters generated via Midjourney.





1/1
We're testing a new algorithm today to help you have "consistent characters" across your images. Check out our announcement channel for more instructions. It works for both MJ6 and Niji6 models. We hope this helps you play with telling stories and building new worlds <3

1/7
It's similar to the style reference feature, except instead of matching style, it makes your characters match your Character Reference (--cref) image

I used the image on the left as my character reference

Prompts in ALT

3/7
It also works across image styles, which is pretty sick and very fun to play with

4/7
You can use the Character Weight parameter (--cw N) to control the level of character detail you carry over.

At lower values like --cw 0 it will focus mostly on the face, but at higher values like --cw 100 it'll pull more of the outfit in too

Top left is ref image

5/7
On the left is the character reference

On the right is the character reference used in a totally different prompt that included style references

It's definitely not perfect, but it's wayyy better than any other solution we've had previously

6/7
You can use more than one reference too, and start to blend things together like I did here

I used both examples in a single prompt here (I'll go into this in more detail in a future post)

It also works through inpainting (I'll do a post on that too)

7/7
NOTES:
> precision is currently limited
> --cref works in niji 6 & v6 models
> --cw 100 is default (face, hair, & clothes)
> works best with MJ generated characters
> wont copy exact dimples/freckles/or logos

Messing w/ this all night tn
I'll let you know what else I figure out
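
To make those parameters concrete, a typical prompt combining them looks like the lines below. The image URL is a hypothetical placeholder and the weight values are just examples; swap in your own reference image and adjust --cw to taste.

/imagine prompt: a cyberpunk street market at night --cref https://example.com/my-character.png --cw 100 --v 6
/imagine prompt: watercolor portrait in a misty forest --cref https://example.com/my-character.png --cw 0 --niji 6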


POSTED MAR 13, 2024
AT 7:34 AM EDT

JESS WEATHERBED

The EU has officially adopted its sweeping AI law.

After two years of debate and revisions, European Parliament members gave the Artificial Intelligence Act their final approval on Wednesday.

While the law officially comes into force 20 days after it’s published in the Official Journal (likely happening in May), some rules — like those impacting general-purpose AI systems like chatbots — will take effect 12 months later to give AI providers time to comply.


Artificial Intelligence Act: MEPs adopt landmark law | News | European Parliament
[WWW.EUROPARL.EUROPA.EU]





Artificial Intelligence Act: MEPs adopt landmark law​


  • Safeguards on general purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations

[Image: Personal identification technologies in street surveillance cameras. The untargeted scraping of facial images from CCTV footage to create facial recognition databases will be banned. © Alexander / Adobe Stock]

On Wednesday, Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation.

The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

Banned applications

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions

The use of biometric identification systems (RBI) by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack. Using such systems post-facto (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation being linked to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

Transparency requirements

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

Quotes

During the plenary debate on Tuesday, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development”.

Civil Liberties Committee co-rapporteur Dragos Tudorache (Renew, Romania) said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.

Next steps

The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.

It will enter into force twenty days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practice (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months).

Background

The Artificial Intelligence Act responds directly to citizens’ proposals from the Conference on the Future of Europe (COFE), most concretely to proposal 12(10) on enhancing EU’s competitiveness in strategic sectors, proposal 33(5) on a safe and trustworthy society, including countering disinformation and ensuring humans are ultimately in control, proposal 35 on promoting digital innovation, (3) while ensuring human oversight and (8) trustworthy and responsible use of AI, setting safeguards and ensuring transparency, and proposal 37 (3) on using AI and digital tools to improve citizens’ access to information, including persons with disabilities.
 






1/6
South Korea's local governments are deploying around 7,000 AI-robot dolls to seniors and dementia patients.

The $1,800 robot doll by Hyodal can hold full conversations to tackle loneliness and remind users to take medication.

Dystopian, yes, but the data is fascinating:

1. Studies (with over 9,000 users) found that depression levels reduced from 5.73 to 3.14, and medicine intake improved from 2.69 to 2.87.

2. The doll comes with a companion app and web monitoring platform for caretakers to monitor remotely.

3. Safety features are installed to alert when no movement has been detected for a certain period, essentially always watching the user.

4. The doll also offers touch interaction, 24-hour voice reminders, check-ins, voice messages, a health coach, quizzes, exercise, music, and more.

5. Caregivers have access to the app, allowing them to send/receive voice messages, make group announcements, and monitor motion detection.

I'd definitely have some privacy and data collection concerns here before handing this off to my family, but the product actually seems really cool.

Will be interesting to watch the data to see if this idea has legs.

Keep in mind, SK has a rapidly aging population and one of the world's lowest birth rates, so it makes sense for the local governments to be early adopters here.

3/6
100%

4/6
TV doesn’t monitor you and collect your data, but very valid point

5/6
Yeah, also got to remember that this is the worst AI this will ever be. It’ll only get better.

Watching this AI loneliness space closely to see what the long term data looks like (if users stay interested, etc.)

6/6
Hyodol. Here's the website (it's in Korean)
 


Google’s new AI will play video games with you — but not to win​


Google DeepMind trained its video game playing AI agent on games like Valheim, No Man’s Sky, and Goat Simulator.​


By Emilia David, a reporter who covers AI. Prior to joining The Verge, she covered the intersection between technology, finance, and the economy.

Mar 13, 2024, 10:00 AM EDT


Illustration: The Verge

Google DeepMind unveiled SIMA, an AI agent it is training to learn gaming skills so that it plays more like a human instead of an overpowered AI that does its own thing. SIMA, which stands for Scalable, Instructable, Multiworld Agent, is currently a research project only.

SIMA will eventually learn how to play any video game, including open-world games and games with no linear path to the end. It's not intended to replace existing game AI; think of it more as another player that meshes well with your party. It mixes natural language instruction with an understanding of 3D worlds and image recognition.

“SIMA isn’t trained to win a game; it’s trained to run it and do what it’s told,” said Google DeepMind researcher and SIMA co-lead Tim Harley during a briefing with reporters.

Google worked with eight game developers, including Hello Games, Embracer, Tuxedo Labs, Coffee Stain, and others, to train and test SIMA. Researchers plugged SIMA into games like No Man's Sky, Teardown, Valheim, and Goat Simulator 3 to teach the AI agent the basics of playing the games. In a blog post, Google said that SIMA doesn't need a custom API or access to source code to play the games.

Harley said the team chose games that were more focused on open play than narrative to help SIMA learn general gaming skills. If you’ve played or watched a playthrough of Goat Simulator, you know that doing random, spontaneous things is the point of the game, and Harley said it was this kind of spontaneity they hoped SIMA would learn.

To do this, the team first built a new environment in the Unity engine where the agents needed to create sculptures to test their understanding of object manipulation. Then, Google recorded pairs of human players — one controlling the game and the other giving instructions on what to do next — to capture language instructions. Afterward, players played independently to show what led to their actions in the game. All of this was fed to the SIMA agents to learn to predict what would happen next on the screen.
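
As a rough illustration of that recipe (and explicitly not Google DeepMind's architecture or code), an imitation-learning setup of this kind boils down to learning a mapping from a screen frame plus a language instruction to the action a human demonstrator took. Everything below, including the tensor shapes and the 600-way action head, is a placeholder sketch.

import torch
import torch.nn as nn
import torch.nn.functional as F

class InstructionConditionedPolicy(nn.Module):
    def __init__(self, vocab=10_000, n_actions=600):  # "about 600 basic skills" per the article
        super().__init__()
        self.vision = nn.Sequential(                  # toy frame encoder
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(256))
        self.text = nn.EmbeddingBag(vocab, 256)       # toy instruction encoder
        self.head = nn.Linear(512, n_actions)

    def forward(self, frames, instruction_tokens):
        z = torch.cat([self.vision(frames), self.text(instruction_tokens)], dim=-1)
        return self.head(z)                           # logits over basic keyboard/mouse "skills"

policy = InstructionConditionedPolicy()
frames = torch.rand(4, 3, 96, 96)                     # batch of screen frames
instructions = torch.randint(0, 10_000, (4, 6))       # tokenized commands like "climb the ladder"
actions = torch.randint(0, 600, (4,))                 # actions the human demonstrator took
loss = F.cross_entropy(policy(frames, instructions), actions)
loss.backward()                                       # standard behavior-cloning update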

SIMA currently has about 600 basic skills, such as turning left, climbing a ladder, and opening the menu to use a map. Eventually, Harley said, SIMA could be instructed to do more complex functions within a game. Tasks like “find resources and build a camp” are still difficult because AI agents can’t perform actions for humans.

SIMA isn’t meant to be an AI-powered NPC like the ones from Nvidia and Convai, but another player in a game that impacts the result. SIMA project co-lead Frederic Besse said it’s too early to tell what kind of uses AI agents like it could bring to gaming outside of the research sphere.

Like AI NPCs, however, SIMA may eventually learn to talk, but it’s far from that. SIMA is still learning how to play games and adapt to ones it hasn’t played before. Google said that with more advanced AI models, SIMA may eventually be able to do more complex tasks and be the perfect AI party member to lead you to victory.
 


OpenAI’s Sora text-to-video generator will be publicly available later this year​


OpenAI CTO Mira Murati tells The Wall Street Journal that Sora will eventually incorporate sound as well.​


By Emma Roth, a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.

Mar 13, 2024, 9:37 AM EDT


This is an example of the kind of content Sora can produce.
Image: OpenAI

You’ll soon get to try out OpenAI’s buzzy text-to-video generator for yourself. In an interview with The Wall Street Journal, OpenAI chief technology officer Mira Murati says Sora will be available “this year” and that it “could be a few months.”


OpenAI first showed off Sora, which is capable of generating hyperrealistic scenes based on a text prompt, in February. The company only made the tool available for visual artists, designers, and filmmakers to start, but that didn’t stop some Sora-generated videos from making their way onto platforms like X.

In addition to making the tool available to the public, Murati says OpenAI has plans to “eventually” incorporate audio, which has the potential to make the scenes even more realistic. The company also wants to allow users to edit the content in the videos Sora produces, as AI tools don’t always create accurate images. “We’re trying to figure out how to use this technology as a tool that people can edit and create with,” Murati tells the Journal.

When pressed on what data OpenAI used to train Sora, Murati didn’t get too specific and seemed to dodge the question. “I’m not going to go into the details of the data that was used, but it was publicly available or licensed data,” she says. Murati also says she isn’t sure whether it used videos from YouTube, Facebook, and Instagram. She only confirmed to the Journal that Sora uses content from Shutterstock, with which OpenAI has a partnership.

Murati also told the Journal that Sora is “much more expensive” to power. OpenAI is trying to make the tool “available at similar costs” to DALL-E, the company’s AI text-to-image model, when it’s released to the public. You can see even more examples of what kinds of videos this tool can produce in the Journal’s report, including an animated bull in a China shop and a mermaid smartphone reviewer.

As we approach the 2024 presidential election, concerns about generative AI tools and their potential to create misinformation have only increased. When released, Murati says Sora likely won’t be able to produce images of public figures, similar to DALL-E’s policies. Videos will also have a watermark to distinguish them from the real thing, but as my colleague Emilia David points out, watermarks aren’t a perfect solution.
 


The EU AI Act passed — here’s what comes next​

The EU’s sweeping AI regulations have (almost) passed their final hurdle.


By Emilia David and Jess Weatherbed

Updated Mar 13, 2024, 8:30 AM EDT




Now EU MEPs just need to figure out how to implement and enforce it.
Cath Virginia / The Verge

European Union lawmakers have officially approved the bloc’s landmark AI regulation, paving the way for the EU to prohibit certain uses of the technology and demand transparency from providers. In a majority vote on Wednesday, 523 European Parliament members elected to formally adopt the Artificial Intelligence Act (AI Act), and will now work towards its enforcement and implementation.

The AI Act has been hotly debated since it was first proposed in 2021, with some of its strictest regulations — such as a proposed total ban on biometric systems for mass public surveillance — being softened by last-minute compromises. While Wednesday’s announcement means the law has almost passed its final hurdle, it will still take years for some rules to be enforced.

The legal language of the text is still awaiting final approval, either via a separate announcement or a plenary session vote on April 10th/11th, with the AI Act then officially coming into force 20 days after it’s published in the Official Journal — which is anticipated to happen in May or June this year. Provisions will then take effect in stages: countries will have six months to ban prohibited AI systems, 12 months to enforce rules against “general-purpose AI systems” like chatbots, and up to 36 months for AI systems the law has designated as “high risk.”

Prohibited systems include things like social scoring, emotion recognition at work or schools, or systems that are designed to influence behavior or exploit user vulnerabilities. Examples of “high-risk” AI systems include those applied to critical infrastructure, education, and vocational training, certain law enforcement systems, and those that can be used to influence democratic processes like elections.

“In the very short run, the compromise on the EU AI Act won’t have much direct effect on established AI designers based in the US, because, by its terms, it probably won’t take effect until 2025,” said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, back in December 2023 when the EU provisionally agreed on the landmark AI regulation. So for now, Barrett says major AI players like OpenAI, Microsoft, Google, and Meta will likely continue to fight for dominance, particularly as they navigate regulatory uncertainty in the US.

The AI Act got its start before the explosion in general-purpose AI (GPAI) tools like OpenAI’s GPT-4 large language model, and regulating them became a remarkably complicated sticking point in last-minute discussions. The act divides its rules based on the level of risk an AI system poses to society, or as the EU said in a statement, “the higher the risk, the stricter the rules.”

But some member states grew concerned that this strictness could make the EU an unattractive market for AI. France, Germany, and Italy all lobbied to water down restrictions on GPAI during negotiations. They won compromises, including limiting what can be considered “high-risk” systems, which would then be subject to some of the strictest rules. Instead of classifying all GPAI as high-risk, there will be a two-tier system and law enforcement exceptions for outright prohibited uses of AI like remote biometric identification.

That still hasn’t satisfied all critics. French President Emmanuel Macron attacked the rules, saying the AI Act creates a tough regulatory environment that hampers innovation. Barrett said some new European AI companies could find it challenging to raise capital with the current rules, which gives an advantage to American companies. Companies outside of Europe may even choose to avoid setting up shop in the region or block access to platforms so they don’t get fined for breaking the rules — a potential risk Europe has faced in the non-AI tech industry as well, following regulations like the Digital Markets Act and Digital Services Act.

But the rules also sidestep some of the most controversial issues around generative AI

AI models trained on publicly available — but sensitive and potentially copyrighted — data have become a big point of contention for organizations, for instance. The approved rules, however, do not create new laws around data collection. While the EU pioneered data protection laws through GDPR, its AI rules do not prohibit companies from gathering information, beyond requiring that it follow GDPR guidelines.

“Under the rules, companies may have to provide a transparency summary or data nutrition labels,” Susan Ariel Aaronson, director of the Digital Trade and Data Governance Hub and a research professor of international affairs at George Washington University, said when the EU provisionally approved the rules. “But it’s not really going to change the behavior of companies around data.”

Aaronson points out that the AI Act still hasn’t clarified how companies should treat copyrighted material that’s part of model training data, beyond stating that developers should follow existing copyright laws (which leave lots of gray areas around AI). So it offers no incentive for AI model developers to avoid using copyrighted data.

The AI Act also won’t apply its potentially stiff fines to open-source developers, researchers, and smaller companies working further down the value chain — a decision that’s been lauded by open-source developers in the field. GitHub chief legal officer Shelley McKinley said it is “a positive development for open innovation and developers working to help solve some of society’s most pressing problems.” (GitHub, a popular open-source development hub, is a subsidiary of Microsoft.)

Observers think the most concrete impact could be pressuring other political figures, particularly American policymakers, to move faster. It’s not the first major regulatory framework for AI — in July, China passed guidelines for businesses that want to sell AI services to the public. But the EU’s relatively transparent and heavily debated development process has given the AI industry a sense of what to expect. Aaronson said the provisional text (which has since been approved) at least shows that the EU has listened and responded to public concerns around the technology.

Lothar Determann, data privacy and information technology partner at law firm Baker McKenzie, says the fact that it builds on existing data rules could also encourage governments to take stock of what regulations they have in place. And Blake Brannon, chief strategy officer at data privacy platform OneTrust, said more mature AI companies set up privacy protection guidelines in compliance with laws like GDPR and in anticipation of stricter policies. He said that depending on the company, the AI Act is “an additional sprinkle” to strategies already in place.

The US, by contrast, has largely failed to get AI regulation off the ground — despite being home to major players like Meta, Amazon, Adobe, Google, Nvidia, and OpenAI. Its biggest move so far has been a Biden administration executive order directing government agencies to develop safety standards and build on voluntary, non-binding agreements signed by large AI players. The few bills introduced in the Senate have mostly revolved around deepfakes and watermarking, and the closed-door AI forums held by Sen. Chuck Schumer (D-NY) have offered little clarity on the government’s direction in governing the technology.

Now, policymakers may look at the EU’s approach and take lessons from it

This doesn’t mean the US will take the same risk-based approach, but it may look to expand data transparency rules or allow GPAI models a little more leniency.

Navrina Singh, founder of Credo AI and a national AI advisory committee member, believes that while the AI Act is a huge moment for AI governance, things will not change rapidly, and there’s still a ton of work ahead.

“The focus for regulators on both sides of the Atlantic should be on assisting organizations of all sizes in the safe design, development, and deployment of AI that are both transparent and accountable,” Singh told The Verge in December. She adds there’s still a lack of standards and benchmarking processes, particularly around transparency.

The act does not retroactively regulate existing models or apps, but future versions of OpenAI’s GPT, Meta’s Llama, or Google’s Gemini will need to take into account the transparency requirements set by the EU. It may not produce dramatic changes overnight — but it demonstrates where the EU stands on AI.

Update March 12th, 8:30 AM ET: Updated the original article following the EU AI Act being officially adopted.
 