AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather

bnew


AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather​

Hasan Chowdhury

Oct 30, 2023, 10:11 AM EDT

AI godfather Yann LeCun has fired shots at notable AI leaders. Kevin Dietsch/Getty Images
  • An AI godfather has had it with the doomsdayers.
  • Meta's Yann LeCun thinks tech bosses' bleak comments on AI risks could do more harm than good.
  • The naysaying is actually about keeping control of AI in the hands of a few, he said.
AI godfather Yann LeCun wants us to forget some of the more far-fetched doomsday scenarios.

He sees a different, real threat on the horizon: the rise of power-hungry one-percenters who rob everyone else of AI's riches.

Over the weekend, Meta's chief AI scientist accused some of the most prominent founders in AI of "fear-mongering" and "massive corporate lobbying" to serve their own interests.

He named OpenAI's Sam Altman, Google DeepMind's Demis Hassabis, and Anthropic's Dario Amodei in a lengthy weekend post on X.

"Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment," LeCun wrote, referring to these founders' role in shaping regulatory conversations about AI safety. "They are the ones who are attempting to perform a regulatory capture of the AI industry."

He added that if these efforts succeed, the outcome would be a "catastrophe" because "a small number of companies will control AI."

That's significant since, as almost everyone who matters in tech agrees, AI is the biggest development in technology since the microchip or the internet.


Altman, Hassabis, and Amodei did not immediately respond to Insider's request for comment.

LeCun's comments came in response to a post on X from physicist Max Tegmark, who suggested that LeCun wasn't taking the AI doomsday arguments seriously enough.

"Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can't be refuted with snark and corporate lobbying alone," Tegmark wrote, referring to the UK's upcoming global AI safety summit.



LeCun says founder fretting is just lobbying

Since the launch of ChatGPT, AI's power players have become major public figures.

But, LeCun said, founders such as Altman and Hassabis have spent a lot of time drumming up fear about the very technology they're selling.

In March, more than 1,000 tech leaders, including Elon Musk, Altman, Hassabis, and Amodei, signed a letter calling for a minimum six-month pause on AI development.

The letter cited "profound risks to society and humanity" posed by hypothetical AI systems. Tegmark, one of the letter's signatories, has described AI development as "a suicide race."

LeCun and others say these kinds of headline-grabbing warnings are just about cementing power and skating over the real, imminent risks of AI.

Those risks include worker exploitation and data theft that generates profit for "a handful of entities," according to the Distributed AI Research Institute (DAIR).

The focus on hypothetical dangers also diverts attention away from the boring-but-important question of how AI development actually takes shape.

LeCun has described how people are "hyperventilating about AI risk" because they have fallen for what he describes as the myth of the "hard take-off." This is the idea that "the minute you turn on a super-intelligent system, humanity is doomed."

But imminent doom is unlikely, he argues, because every new technology in fact goes through a very ordered development process before wider release.



So the area to focus on is, in fact, how AI is being developed right now. And for LeCun, the real danger is that the development of AI gets locked into private, for-profit entities that never release their findings, while AI's open-source community gets obliterated.

His consequent worry is that regulators let it happen because they're distracted by killer robot arguments.

Leaders like LeCun have championed open-source developers, as their work on tools that rival, say, OpenAI's ChatGPT brings a new level of transparency to AI development.

LeCun's employer, Meta, made Llama 2, its own large language model that competes with GPT, (somewhat) open source. The idea is that the broader tech community can look under the hood of the model. No other big tech company has done a similar open-source release, though OpenAI is rumored to be thinking about it.

For LeCun, keeping AI development closed is a real reason for alarm.

"The alternative, which will inevitably happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet," he wrote.

"What does that mean for democracy? What does that mean for cultural diversity?"
 

bnew


AI Exec Order Breakdown

TLDR:
- Foreign AI model training is going to require KYC on US clouds
- Otherwise reporting reqs domestically on >100k A100 clusters / GPT4 size training runs
- This allows the big players to share data and benchmark against each other without running into anti-trust (nice exemption to have)
- Establishes the large cloud operators as KYC gatekeepers for FOREIGN LARGE AI devs… but you will probably see creep over time in domestic/small unless fought
- Bends US immigration rules in favor of AI and other critical tech (not stated which, so discretionary), and you can start to see the beginnings of a talent recruitment program, though it's more of a marketing program than new laws. I.e., the Biden admin is trying to use existing pathways, rules, and nudges/winks to give some certainty. Will they ever get to the pre-approved green card in 3 months that's the international standard now 🤷‍♀️
- Gets US Gov agencies ready to roll out AI across govt, establishing standards for deployment and safety -> helpful, rather than having every agency invent its own standards and get Deloitted to hell

Details
-> US Gov defines AI as any predictive system, so all ML falls in, incl regression! 😂 Excel users down bad!
-> “dual use foundation model” defined as a >20 billion param self-supervised model that is good at a wide range of tasks, which could include a) making it easier for non-experts to create WMD b) hacking c) evading human control
Google search would definitely fall under a), but from the tone it seems to be grandfathered in and exempted

What the US Gov shall do
A) NIST shall develop standards for safe deployment of AI systems -> this is good! This means once these are out, US govt agencies and old economy firms will just adopt them, and you can sell into them by following one set of standards rather than modifying for each project manager's idiosyncrasies and fears.
B) Dept of Energy shall, in the next 9 months, build out a framework to evaluate AI models' potential to generate WMD threats
C) Secretary of Commerce shall put out a rule in the next 3 months requiring foundation model developers to report training, security, ownership and possession of model weights, and red-team results using the NIST standards. Subject to update by the Secretary, these reporting reqs fall on firms with >100k A100s in a single high-speed networked cluster, or training a bio model with more than 10x Llama2 70b compute, or a normal language model of approx GPT-4 compute (a rough back-of-envelope on these thresholds follows the list).
D) Secretary of Commerce to propose KYC REGULATION for foreigners using US cloud services directly or through resellers for more than GPT4 sized training runs/compute.
E) Immigration -> lots of visa process streamlining around stupid stuff like onshore visa renewals, but the big news here is the start of a softball Thousand Talents program to the extent permitted by law including expedited visa processing, expedited green cards etc for AI and other critical fields. This feels like a typical Biden move: no new law, but leaning on the admin state to get to desired outcomes by using bureaucratic discretion. Unfortunate that this makes govt only as good as the people who run it, and not good regardless of who runs it.
F) Directs USPTO and Copyright Office to provide guidance on generative AI. This will of course be litigated after but at least it won’t be on a case by case basis
G) Lots of preparation to deploy in government in the VA, etc. Basically the NIST rule making clears the way for Fed govt approved AI to be deployed. Rest of govt is supposed to figure out what they want to do in the meantime while waiting on NIST
H) DoJ directed to study and provide guidance on use of AI in sentencing and predictive policing. Told to combat algorithmic bias. Same for HHS and a host of other depts.
I) Gov agencies told not to ban AI use, and to appoint AI officers to implement AI use in a safe manner using NIST guidelines
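For a rough sense of scale on the thresholds in item C, here is a back-of-envelope sketch. The A100 throughput figure and the 6 · params · tokens training-compute rule of thumb are assumptions of mine, not figures from the order; Llama 2 70B's roughly 2T training tokens comes from Meta's paper.

# Back-of-envelope on the item C reporting thresholds (assumed figures).
a100_bf16_flops = 312e12                    # assumed peak dense BF16 throughput per A100, FLOP/s
cluster_flops = 100_000 * a100_bf16_flops   # ">100k A100s in a single cluster"
print(f"100k-A100 cluster peak: ~{cluster_flops:.1e} FLOP/s")              # ~3.1e19 FLOP/s

llama2_70b_train_flops = 6 * 70e9 * 2e12    # ~6 * params * tokens ≈ 8.4e23 FLOPs
bio_threshold_flops = 10 * llama2_70b_train_flops   # "10x Llama2 70b compute"
print(f"bio-model reporting threshold: ~{bio_threshold_flops:.1e} FLOPs")  # ~8.4e24 FLOPs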
Oct 31, 2023 · 2:10 AM UTC


 

Professor Emeritus

I don't think spreading out AI technology actually decreases the risk at all. Even if everyone has access to AI, the problem is that people don't have equal access to power. One powerful individual can use AI to do the work of a billion people. A single individual, with enough money to pay the necessary workers and buy enough CPU time, can overwhelm the AI output of millions of other people.
 

bnew

I don't think spreading out AI technology actually decreases the risk at all. Even if everyone has access to AI, the problem is that people don't have equal access to power. One powerful individual can use AI to do the work of a billion people. A single individual, with enough money to pay the necessary workers and buy enough CPU time, can overwhelm the AI output of millions of other people.

better algorithms will lead to more powerful LLMs, or whatever succeeds them, running on consumer-grade hardware and on distributed networks. I wouldn't underestimate a swarm of people with a common goal.

Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

Connect your GPU and increase Petals capacity

Petals is a community-run system — we rely on people sharing their GPUs. You can check out available models and help serving one of them! As an example, here is how to host a part of Stable Beluga 2 on your GPU:
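The snippet the post points to is roughly this one-liner from the Petals docs (the exact model repo name is assumed; check the project README for the current form):

python -m petals.cli.run_server petals-team/StableBeluga2   # serve a slice of the model from your GPU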



Petals: Collaborative Inference and Fine-tuning of Large Models​

Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, Colin Raffel
Many NLP tasks benefit from using large language models (LLMs) that often have more than 100 billion parameters. With the release of BLOOM-176B and OPT-175B, everyone can download pretrained models of this scale. Still, using these models requires high-end hardware unavailable to many researchers. In some cases, LLMs can be used more affordably via RAM offloading or hosted APIs. However, these techniques have innate limitations: offloading is too slow for interactive inference, while APIs are not flexible enough for research that requires access to weights, attention or logits. In this work, we propose Petals - a system for inference and fine-tuning of large models collaboratively by joining the resources of multiple parties. We demonstrate that this strategy outperforms offloading for very large models, running inference of BLOOM-176B on consumer GPUs with ≈ 1 step per second, which is enough for many interactive LLM applications. Unlike most inference APIs, Petals also natively exposes hidden states of served models, allowing to train and share custom model extensions based on efficient fine-tuning methods.
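For a sense of the client side, a minimal sketch using the public petals Python package looks roughly like this (class and model repo names are taken from the project's docs and should be treated as assumptions that may change between versions):

# Minimal Petals client sketch (assumes the public `petals` package API).
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"    # example model repo (assumed)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Only the embeddings and LM head load locally; transformer blocks run on volunteer servers.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Distributed inference means", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))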



Large language models are among the most significant recent advances in machine learning. Still, leveraging these models can be difficult: offloading and quantization have limitations, and third-party APIs are less flexible. As an alternative solution, we propose Petals, an open-source decentralized system (showcased this week at the ACL 2023 Demonstrations track) allowing anybody to run large models or even adapt them using the idle resources of volunteers. In this post, you will learn the motivation behind the system, its underlying ideas, and its advantages compared to other ways of using large models.
Petals was developed as a part of the BigScience collaboration by engineers and researchers from Yandex Research, HSE University, University of Washington, Hugging Face, ENS Paris-Saclay, and Yandex School of Data Analysis.

Background: open LLMs and methods of running them​

Since 2020, we have seen large language models (LLMs) like GPT-3 rapidly improve their capabilities, sometimes gaining emergent properties such as in-context learning. In 2022 and early 2023, many open-access alternatives to proprietary LLMs were released: notable examples of this trend include BLOOM, OPT, LLaMA, as well as YaLM developed by Yandex. However, using these models with high performance is still an engineering challenge: models with over 170 billion parameters need over 340 gigabytes of GPU memory to be stored in FP16 precision, which exceeds the limits of any single accelerator.
From a user perspective, an easy solution is to use APIs, where the model is hosted by an external provider charging for requests to this LLM. While this approach does not require expertise in model serving, it is also the least flexible one: API maintainers usually do not allow inspecting the internal states of the neural network, which can be helpful for its analysis. Also, the model itself might be phased out of service by the provider, which makes it especially difficult to conduct reproducible research using such APIs.
In contrast, offloading the neural network weights to larger local storage (such as RAM or SSD) grants full control over the model. Even if your personal computer has enough memory, the latency of this approach is significantly higher: generating a single token with BLOOM-176B via offloading takes more than 5 seconds because of the data transfer bottleneck. Such a delay might be acceptable for batch processing but not for interactive applications: hence, we need something that is transparent yet can be fast enough.
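A quick back-of-envelope (the numbers are my assumptions, not the post's) shows why: every generated token has to stream essentially all of the weights across the CPU-GPU bus, so per-token latency is bounded below by weight size divided by bus bandwidth.

# Why offloading is slow for interactive use (assumed figures).
weights_gb_fp16 = 352        # ~176B params * 2 bytes; the post cites >340 GB
pcie_gb_per_s = 25           # assumed effective PCIe 4.0 x16 bandwidth, GB/s

latency_s = weights_gb_fp16 / pcie_gb_per_s
print(f"~{latency_s:.0f} s per token")   # ~14 s; even 8-bit weights would give ~7 s
# Both figures are well above the ">5 seconds" quoted above and far too slow for chat.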

Overview of the approach​

On a surface level, Petals works as a decentralized pipeline designed for fast inference of neural networks. It splits any given model into several blocks (or layers) that are hosted on different servers. These servers can be spread out across continents, and anybody can connect their own GPU! In turn, users can connect to this network as a client and apply the model to their data.
When a client sends a request to the network, it is routed through a chain of servers that is built to minimize the total forward pass time. Upon joining the system, each server selects the optimal set of blocks based on the current bottlenecks within the pipeline. Below, you can see an illustration of Petals for several servers and clients running different inputs for the model.
[Figure: An overview of Petals]
As our network consists of volunteers and not on-demand servers, each participant of Petals can disconnect at any point. To address potential failures, a client stores intermediate activations sent to each block and reroutes them from an offline server to an online node hosting the same block.
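To make the rerouting idea concrete, here is a toy sketch of the client-side failover logic; it is purely illustrative and not the actual Petals implementation:

# Toy failover sketch: walk the chain of blocks, falling back to another
# server that hosts the same block whenever one goes offline.
from typing import Callable, Dict, List

def run_pipeline(activations, block_servers: Dict[int, List[Callable]]):
    """block_servers maps block index -> candidate servers (callables) hosting that block."""
    for block_id in sorted(block_servers):
        for server in block_servers[block_id]:      # try servers in preference order
            try:
                activations = server(activations)   # forward pass through this block
                break                               # success: move on to the next block
            except ConnectionError:
                continue                            # server went offline: reroute
        else:
            raise RuntimeError(f"no live server found for block {block_id}")
    return activations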
Importantly, the transparency of intermediate states has an extra benefit here. Because each input and output of a block is sent over the network, it is possible to insert task-specific adapters between layers of the model, which enables lightweight fine-tuning without altering the pretrained model hosted on servers. The paper about Petals [1] covers the system in more depth, including other components such as activation and weight compression.
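As a generic illustration of that adapter idea (an assumed PyTorch sketch, not the Petals API), a small residual adapter can live on the client and be trained locally while the pretrained blocks hosted on the servers stay frozen:

# Generic residual adapter sketch (assumed; not Petals code). Hidden states
# returned by server-hosted block i pass through this local adapter before
# being sent on to the server hosting block i+1.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))  # residual connection keeps the base model's behavior

adapter = Adapter(hidden=8192)                               # hidden size assumed for illustration
optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-4)  # only adapter weights are trained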
 

Professor Emeritus

better algorithms will lead to more powerful LLMs, or whatever succeeds them, running on consumer-grade hardware and on distributed networks.


I could be wrong, but at the moment I'm just not thinking of a major technology where having massively more computing resources didn't give you a huge advantage over those with less. And the more complexity, the greater the advantage.

Even if everyone had the same LLM (and I doubt that AI hierarchies will ever disappear), those with computing power could more effectively create billions of bots to swarm the internet with their LLM results than everyone else could. Not to mention fake videos and images and such. They'll be able to purchase media outlets in order to get "official" cosigns on their fake shyt as well.
 

bnew

I could be wrong, but at the moment I'm just not thinking of a major technology where having massively more computing resources didn't give you a huge advantage over those with less. And the more complexity, the greater the advantage.

Even if everyone had the same LLM (and I doubt that AI hierarchies will ever disappear), those with computing power could more effectively create billions of bots to swarm the internet with their LLM results than everyone else could. Not to mention fake videos and images and such. They'll be able to purchase media outlets in order to get "official" cosigns on their fake shyt as well.

internet bots have been a thing for years now; chatgpt-, poe- & claude-powered bots are operating on every major social media site and forum. I think I read somewhere that there was a bot that posted to 4*han convincingly.

if the web becomes too polluted with bots then people will start verifying themselves in person to build out their online communities. bots aren't cost-free; they can be DDOS'd with prompt-replies that would essentially make them useless depending on how they are configured. work is being done to watermark camera photos as real, and artists' photos as well. there are already thousands of fake news sites that mix real non-controversial news like the weather with their propaganda to appear legit. worst comes to worst, some online communities might start locking registrations down.



 

Professor Emeritus

if the web becomes too polluted with bots then people will start verifying themselves in person to build out their online communities. bots aren't cost-free; they can be DDOS'd with prompt-replies that would essentially make them useless depending on how they are configured. work is being done to watermark camera photos as real, and artists' photos as well. there are already thousands of fake news sites that mix real non-controversial news like the weather with their propaganda to appear legit. worst comes to worst, some online communities might start locking registrations down.


You're conflating what niche communities who really care might do with the low-effort shyt that impacts the masses. "Some" online communities might take effective countermeasures, but what percentage of the Indian population or the Brazilian population or the Philippines population is going to limit themselves to these well-disciplined communities? What guarantee do you have that Twitter or Facebook or TikTok will ever control the problem, or that the bad actors won't just create their own competing platforms to capture the masses if they don't like what the existing sites are doing? People in power with the right AI and bot control will have massive influence over the bulk of entire populations.
 