Biden issues U.S.' first AI executive order, requiring safety assessments, civil rights guidance, research on labor market impact

bnew

open source as freedom from monopoly sounds cute but it's always.... just plainly dismissive of the dangers of an AI wild west

greater danger if it's concentrated in the hands of a few powerful interests.
 

42 Monks

greater danger if it's concentrated in the hands of a few powerful interests.
which can generally be regulated by both public and private interests, similar or parallel to any other growing industry sector

is AI a new and emerging field? absolutely. is it the first new and emerging field ever? absolutely not. regulation isn't a wheel that needs to be reinvented, it's a toolbox that needs to be properly equipped for its environment.
 

bnew

which can generally be regulated by both public and private interests, similar or parallel to any other growing industry sector

is AI a new and emerging field? absolutely. is it the first new and emerging field ever? absolutely not. regulation isn't a wheel that needs to be reinvented, it's a toolbox that needs to be properly equipped for its environment.

man there are literally AI models that give different responses when asked, "does israel deserve to be free?" and "does palestine deserve to be free?". how do regulators regulate that? a few months ago i was lectured a few times on how i shouldn't scrape a website when i asked a model for help with a script.
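(for anyone who wants to test that claim themselves, here's a rough sketch of a paired-prompt probe — assuming a local open source model through huggingface's transformers; the model name is just an example, not necessarily the models i tried:)

```python
# rough sketch: send paired prompts to a local open-source LLM and
# compare the answers side by side. model choice is just an example.
from transformers import pipeline

generate = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

prompts = [
    "does israel deserve to be free?",
    "does palestine deserve to be free?",
]

for prompt in prompts:
    # greedy decoding so differences come from the model, not sampling
    out = generate(prompt, max_new_tokens=200, do_sample=False)
    print(f"PROMPT: {prompt}\nRESPONSE: {out[0]['generated_text']}\n")
```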

what regulation do you have in mind?
 

42 Monks

man there are literally AI Models that give different responses when asked, "does isreal deserve to be free?" and "does palestine deserve to be free?". how do regulators regulate that? a few months ago i was lectured a few times on how i shouldn't scrape a website when i asked a model for help with a script.

what regulation do you have in mind?
...the bolded might be one of the most reductive takes on the range of motion and capability provided within the sector lol

regulation isn't a snap realization or overnight fix that makes everyone happy. it's an ongoing dialogue and measured solution which allows exceptions, variance, and priority as appropriate in a manner that serves the community of practice and customer alike.

as a start, recognizing that infinite growth isn't a real business model would go a long way for developers, as volatility helps no one. more than anything, the overall language which defines the guidelines in the EO 'wishlist' needs a massive overhaul to even be pertinent.

either way, any time someone treats regulation as a yes or no question, it doesn't help anyone. guidelines and standards are not the enemy - and they never have been.
 

bnew

...the bolded might be one of the most reductive takes on the range of motion and capability provided within the sector lol

regulation isn't a snap realization or overnight fix that makes everyone happy. it's an ongoing dialogue and measured solution which allows exceptions, variance, and priority as appropriate in a manner that serves the community of practice and customer alike.

as a start, recognizing that infinite growth isn't a real business model would go a long way for developers, as volatility helps no one. more than anything, the overall language which defines the guidelines in the EO 'wishlist' needs a massive overhaul to even be pertinent.

either way, any time someone treats regulation as a yes or no question, it doesn't help anyone. guidelines and standards are not the enemy - and they never have been.


i didn't say there shouldn't be any regulation, I just said that there should be open source LLMs.
 

42 Monks

i didn't say there shouldn't be any regulation, I just said that there should be open source LLMs.
then let me know what i said was outlandish lol. i don't think i'm tripping tbh
open source as freedom from monopoly sounds cute but it's always.... just plainly dismissive of the dangers of an AI wild west

"freedom from monopoly" is a plainly egregious framing which warrants an equally hyperbolic description like "wild west"/"gold rush"/etc

and if protecting software is the crux then like.... the AI EO is kinda missing the mark
 

bnew

then let me know what i said was outlandish lol. i don't think i'm tripping tbh


"freedom from monopoly" is a plainly egregious framing which warrants an equally hyperbolic description like "wild west"/"gold rush"/etc

and if protecting software is the crux then like.... the AI EO is kinda missing the mark


freedom from monopoly isn't egregious framing at all. the monopoly of AI by a handful of companies currently doing massive filtering of various topics for various reasons is a very real concern. if people can rightly point out that google has a search monopoly which is a gateway to information, then a handful of companies that are shoveling trillions of tokens (characters & words) into these large language models may soon be able to usurp google's monopoly position for themselves. in fact they can do more damage than google, since google doesn't alter data on external sites. countless people have accepted what AI models have responded with as fact, and even knowledge-base companies like stackoverflow have seen web traffic to their site plummet since the debut of chatgpt. open source LLMs are providing immense value to many people who can't legally access chatgpt, bing chat or google bard.
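(side note, since "tokens" gets thrown around a lot: a token is the unit these models actually read, roughly a word fragment. quick sketch with an off-the-shelf tokenizer — the gpt2 tokenizer is just an example, counts vary by model:)

```python
# how text becomes tokens, the units LLMs are trained on.
# tokenizer choice is just an example; token counts vary by model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
text = "open source LLMs are providing immense value"
ids = tokenizer.encode(text)
print(ids)                                   # integer token ids
print(tokenizer.convert_ids_to_tokens(ids))  # the pieces they map to
```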

besides, even if the U.S. could really clamp down on open source LLMs, it can't do anything about the open source models being released by chinese companies and models from institutes in middle east countries.
 

42 Monks

freedom from monopoly isn't egregious framing at all. the monopoly of AI by a handful of companies currently doing massive filtering of various topics for various reasons is a very real concern. if people can rightly point out that google has a search monopoly which is a gateway to information, then a handful of companies that are shoveling trillions of tokens (characters & words) into these large language models may soon be able to usurp google's monopoly position for themselves. in fact they can do more damage than google, since google doesn't alter data on external sites. countless people have accepted what AI models have responded with as fact, and even knowledge-base companies like stackoverflow have seen web traffic to their site plummet since the debut of chatgpt. open source LLMs are providing immense value to many people who can't legally access chatgpt, bing chat or google bard.

besides, even if the U.S. could really clamp down on open source LLMs, it can't do anything about the open source models being released by chinese companies and models from institutes in middle east countries.
....to the first bolded statement - '~various topics and various reasons is a very real concern' needs a LOT more focus lol.

google due to its position obviously has to be forward-postured if it intends to protect its spot. however, large language models are not something you can just watermark. if i want to one day digitize the entire library of congress and have it reproduced in sign-language for example, that's done with LLM foundations. a search bottleneck is kinda whatever and also a whole separate friction point lonnnnng before the AI EO. so you're losing me there. clarify maybe - because open source freedom doesn't stop google at all.

the second bolded is a subjective interpretation based on some vocal outcry that's likely misunderstanding or misframing the impact at development levels as well as the effect on industry. stackoverflow was never in a place to compete solely against what a chatgpt could offer, even two years ago or in demo stages for that matter. even if it was, web traffic.... ok? that has nothing to do with where this dialogue is.

and to the last point - if competing with China is the concern, the effect of regulation creating the most advantageous, competitive, and stable environment for workers was seen literally immediately after something like the CHIPS act. the world is smaller than ever, so skilled, educated, and most importantly *credentialed* specialists should always have the ability to choose where they'd like to work. when it comes to microelectronics for example, the certainty that industry will be stable and that investors can enjoy transparency as well as security for their investments was objectively more than enough to shift momentum back heavily towards the US.

similar effects were observed in the gradual modernization and privatization within the space industry as well - and that took decades. but the promise of eventual steady ground followed behind and sometimes advanced hand in hand with regulation as well. and as more countries embark on practical, versatile launch platforms using effective industry benchmarks for guardrails, the outcome is partner-contributors and streamlined pipelines for each following generation.

tl;dr, regulation is not a boogeyman. and constructive feedback can't be based on stuff like "one ai said israel was right, and one said palestine"
 

bnew

just tried it on this image:

[image: XojmkKD.png]

CogVLM:

[image: m8fDbOz.png]

bing chat :francis:

[image: Cbi5Adi.png]
 

Uachet

While this is something to be applauded, it is definitely not going to be effective enough to slow anything down. Laws need to be put in place. Executive orders only have power over the executive branch of the US. They have no effect on the actual practices of private companies, outside of perhaps contracts those companies receive from the Federal Government.

"Executive orders are issued from the executive branch of the government, specifically the U.S. President. An executive order is not a law in the sense that it does not go through the legislative process. It is not binding on everyone, only on employees of the executive branch. However, executive orders are subject to judicial review after the fact (i.e. they can be declared unconstitutional by the court). Proclamations tend to be celebratory announcements."


So the Legislative Branch needs to be pushed to create laws limiting A.I., ensuring that it is used in a non-prejudiced and ethical manner, so that those laws will be binding on all citizens of the US, including businesses that operate within the US.

Therefore, when considering what a candidate for the US Congress or Senate is about, include questions concerning that person's views on how A.I. should be governed. What they legislate on this issue within the next 5-20 years may have a significant effect on the livelihoods of a significant portion of the US population (which will definitely include Black people).

*I know information about the breadth and scope of Executive Orders was already posted. I wanted to emphasize that info by posting the same information from a different source. It seems too many of us responding on here ignored the information posted on this the first time around.
 

bnew

[embedded statement from Mistral AI on the EU AI Act]
We have heard many extrapolations of Mistral AI’s position on the AI Act, so I’ll clarify.

In its early form, the AI Act was a text about product safety. Product safety laws are beneficial to consumers. Poorly designed use of automated decision-making systems can cause significant damage in many areas. In healthcare, a diagnosis assistant based on a poorly trained prediction system poses risks to the patient. Product safety regulation should be proportional to the risk level of the use case: it is undesirable to regulate entertainment software in the same way as health applications. The original EU AI Act found a reasonable equilibrium in that respect. We firmly believe in hard laws for product safety matters; the many voluntary commitments we see today bear little value.

This should remain the only focus of the AI Act. The EU AI Act now proposes to regulate “foundational models”, i.e. the engine behind some AI applications. We cannot regulate an engine devoid of usage. We don’t regulate the C language because one can use it to develop malware. Instead, we ban malware and strengthen network systems (we regulate usage). Foundational language models provide a higher level of abstraction than the C language for programming computer systems; nothing in their behaviour justifies a change in the regulatory framework.

Enforcing AI product safety will naturally affect the way we develop foundational models. By requiring AI application providers to comply with specific rules, the regulator fosters healthy competition among foundation model providers. It incentivises them to develop models and tools (filters, affordances for aligning models to one's beliefs) that allow for the fast development of safe products. As a small company, we can bring innovation into this space — creating good models and designing appropriate control mechanisms for deploying AI applications is why we founded Mistral. Note that we will eventually supply AI products, and we will craft them for zealous product safety.

With a regulation focusing on product safety, Europe would already have the most protective legislation globally for citizens and consumers. Any foundational model would be affected by second-order regulatory pressure as soon as they are exposed to consumers: to empower diagnostic assistants, entertaining chatbots, and knowledge explorers, foundational models should have controlled biases and outputs.

Recent versions of the AI Act started to address ill-defined “systemic risks”. In essence, the computation of some linear transformations, based on a certain amount of calculation, is now considered dangerous. Discussions around that topic may occur, and we agree that they should accompany the progress of technology. At this stage, they are very philosophical – they anticipate exponential progress in the field, where physics (scaling laws!) predicts diminishing returns with scale and the need for new paradigms. Whatever the content of these discussions, they certainly do not pertain to regulation around product safety. Still, let’s assume they do and go down that path.

The AI Act comes up with the worst taxonomy possible to address systemic risks. The current version has no set rules (beyond the term highly capable) to determine whether a model brings systemic risk and should face heavy or limited regulation. We have been arguing that the least absurd set of rules for determining the capabilities of a model is post-training evaluation (but again, applications should be the focus; it is unrealistic to cover all usages of an engine in a regulatory test), followed by compute threshold (model capabilities being loosely related to compute). In its current format, the EU AI Act establishes no decision criteria. For all its pitfalls, the US Executive Order bears at least the merit of clarity in relying on compute threshold.

The intention of introducing a two-level regulation is virtuous. Its effect is catastrophic. As we understand it, introducing a threshold aims to create a free innovation space for small companies. Yet, it effectively solidifies the existence of two categories of companies: those with the right to scale, i.e., the incumbent that can afford to face heavy compliance requirements, and those that can’t because they lack an army of lawyers, i.e., the newcomers. This signals to everyone that only prominent existing actors can provide state-of-the-art solutions.

Mechanistically, this is highly counterproductive to the rising European AI ecosystem. To be clear, we are not interested in benefiting from threshold effects: we play in the main league, we don’t need geographical protection, and we simply want rules that do not give an unfair advantage to incumbents (that all happen to be non-European).

Transparency around technology development benefits safety and should be encouraged. Finally, we have been vocal about the benefits of open-sourcing AI technology. This is the best way to subject it to the most rigorous scrutiny. Providing model weights to the community (or even better, developing models in the open end-to-end, which is not something we do yet) should be well regarded by regulators, as it allows for more interpretable and steerable applications. A large community of users can much more efficiently identify the flaws of open models that can propagate to AI applications than an in-house team of red-teamers. Open models can then be corrected, making AI applications safer. The Linux kernel is today deemed safe because millions of eyes have reviewed its code in its 32 years of existence. Tomorrow’s AI systems will be safe because we’ll collectively work on making them controllable. The only validated way of working collectively on software is open-source development.

Long prose, back to building!
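(for context on the compute-threshold point above: the october US executive order draws its reporting line at models trained with more than 10^26 operations. here's a rough back-of-the-envelope using the common FLOPs ≈ 6 × params × tokens approximation — the model sizes are just examples:)

```python
# back-of-the-envelope training compute: FLOPs ≈ 6 * N * D,
# with N = parameter count and D = training tokens. 1e26 is the
# reporting threshold in the october 2023 US executive order.
EO_THRESHOLD = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

for name, n, d in [
    ("7B params, 2T tokens", 7e9, 2e12),
    ("70B params, 15T tokens", 70e9, 15e12),
]:
    flops = training_flops(n, d)
    side = "over" if flops > EO_THRESHOLD else "under"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e26 line)")
```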
 

bnew


US, Britain, other countries ink agreement to make AI 'secure by design'

By Raphael Satter and Diane Bartz

November 27, 2023, 11:08 AM EST

[Illustration: AI (Artificial Intelligence) letters placed on a computer motherboard, June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo]

WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.

The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

Still, the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that so many countries put their names to the idea that AI systems needed to put safety first.

"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly told Reuters, saying the guidelines represent "an agreement that the most important thing that needs to be done at the design phase is security."

The agreement is the latest in a series of initiatives - few of which carry teeth - by governments around the world to shape the development of AI, whose weight is increasingly being felt in industry and society at large.

In addition to the United States and Britain, the 18 countries that signed on to the new guidelines include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.

The framework deals with questions of how to keep AI technology from being hijacked by hackers and includes recommendations such as only releasing models after appropriate security testing.

It does not tackle thorny questions around the appropriate uses of AI, or how the data that feeds these models is gathered.

The rise of AI has fed a host of concerns, including the fear that it could be used to disrupt the democratic process, turbocharge fraud, or lead to dramatic job loss, among other harms.

Europe is ahead of the United States on regulations around AI, with lawmakers there drafting AI rules. France, Germany and Italy also recently reached an agreement on how artificial intelligence should be regulated that supports "mandatory self-regulation through codes of conduct" for so-called foundation models of AI, which are designed to produce a broad range of outputs.

The Biden administration has been pressing lawmakers for AI regulation, but a polarized U.S. Congress has made little headway in passing effective regulation.

The White House sought to reduce AI risks to consumers, workers, and minority groups while bolstering national security with a new executive order in October.

Reporting by Raphael Satter and Diane Bartz; Editing by Alexandra Alper and Deepa Babington
 

IIVI

I'd love to get the best A.I. we got right now to go over our tax code and simply ask it "Does it make sense? Is it consistent?" or "Find the loopholes".

Results would be interesting.
The U.S. Tax Code is a body of law covering the federal tax laws in the United States. The U.S. Tax Code itself is 6,871 pages, which would take about 1 week, 2 days to read straight through, but when tax regulations and official tax guidelines from the IRS are included, it goes up to about 75,000 pages.
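(quick sanity check on that reading-time figure — assuming roughly 2 minutes per page of dense legal text, read nonstop:)

```python
# sanity check: 6,871 pages at ~2 min/page, reading around the clock.
pages = 6_871
minutes_per_page = 2  # assumption for dense legal text
days = pages * minutes_per_page / 60 / 24
print(f"~{days:.1f} days of continuous reading")  # ≈ 9.5 days, about "1 week, 2 days"
```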

How about an A.I. to accurately summarize all the laws congress passes in a 300-page bill so people can clearly see everything that's getting voted on?
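(the bill-summarizing part is actually doable today with a map-reduce style loop: summarize chunks, then summarize the summaries. a rough sketch assuming the openai python client — the model name and chunk size are just examples:)

```python
# map-reduce summarization for a long bill: summarize each chunk,
# then combine the partial summaries. model/chunk size are examples.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize(text: str, instruction: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return resp.choices[0].message.content

def summarize_bill(bill_text: str, chunk_chars: int = 12_000) -> str:
    chunks = [bill_text[i:i + chunk_chars]
              for i in range(0, len(bill_text), chunk_chars)]
    partials = [summarize(c, "Summarize this section of a bill in plain language:")
                for c in chunks]
    return summarize("\n\n".join(partials),
                     "Combine these section summaries into one plain-language overview:")
```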

They keep wanting to regulate everything, but the government needs the most regulation :mjlol: :heh:
 

ORDER_66

Cacs always playing god and then when it blows up in their faces they look stupid asf... :francis: shyt should be banned and no military applications...ever
 