How is it that one of the most powerful ideas in the entire history of technology is now under coordinated attack?

1/1
A reminder that local LLMs called by open source libraries will one day be a central part of this open source stack, making AI use in applications cost less to the developer.
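As a rough illustration of what that looks like in practice, here's a minimal sketch of calling a local, open-weights model through an open source library (llama-cpp-python in this case). The model file path is a hypothetical placeholder, not a specific recommendation:

```python
# Minimal sketch: run an open-weights LLM locally via the open source
# llama-cpp-python library. The model path below is a hypothetical
# placeholder; any GGUF-format model file downloaded locally would work.
from llama_cpp import Llama

llm = Llama(model_path="./models/example-7b-instruct.Q4_K_M.gguf")

# One local inference call: no API key, no per-token cloud billing.
result = llm(
    "Q: Why do developers like open source libraries? A:",
    max_tokens=64,
    stop=["Q:", "\n"],
)
print(result["choices"][0]["text"])
```

Once the model file is on disk, inference costs are just local compute, which is the cost argument the tweet is making.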


1/12
How is it that one of the most powerful ideas in the entire history of technology is now under coordinated attack?

If I told you that one little software concept powers the website you're reading this on, the router in your house that connects you to the internet, the phone in your pocket and your TV streaming service, what would you think?

What if I said that the NASDAQ and New York stock exchanges, where your retirement portfolio trades, are powered by the same ideas? Or that those same concepts power 100% of the top 500 supercomputers on the planet doing everything from hunting for cures for cancer, to making rockets safer, to mapping the human brain?

Not only that, it also powers all the major clouds, from Amazon Web Services, to Google Cloud and Alibaba and more.

You'd probably think that was a good thing, right? Not just good, but absolutely critical to the functioning of modern life at every level. And you'd be right.

That critical infrastructure is "open source" and it powers the world.

It's everywhere, an invisible bedrock beneath our entire digital ecosystem, underpinning all the applications we take for granted every day.

It seems impossible that something that important would be under attack.

But that's exactly what's happening right now. Or rather, it's happening again. It's not the first time.

In the early days of open source, a powerful group of proprietary software makers went to war against open source, looking to kill it off.

SCO Unix, a proprietary Unix goliath, tried to sue Linux into oblivion under the guise of copyright violations.

Microsoft's former CEO, Steve Ballmer, called open source a cancer and communism and launched a massive anti-Linux marketing barrage. They failed, and now open source is the most successful software in history, the foundation of 90% of the planet's software, found in 95% of all enterprises on the planet.

Even Microsoft now runs on open source, with the majority of the Azure cloud powered by Linux and other open software like Kubernetes, and thousands upon thousands of packages like Docker, Prometheus, machine learning frameworks like PyTorch, ONNX and DeepSpeed, managed databases like Postgres and MySQL, and more.

Imagine if they'd been successful in their early attacks on open source. They would have smashed their own future revenue through short-sightedness and a total lack of vision.

And yet open source critics are like an ant infestation in your house. No matter how successful open source gets or how essential it is to critical infrastructure, they just keep coming back over and over and over.

Today's critics of open source AI try to cite security concerns and tell us only a small group of companies can protect us from our enemies so we've got to close everything down again and lock it all up behind closed doors.

But if open source is such a security risk, why does "the US Army [have] the single largest installed base for RedHat Linux" and why do many systems in the US Navy nuclear submarine fleet run on Linux, "including [many of] their sonar systems"? Why is it allowed to power our stock markets and clouds and our seven top supercomputers that run our most top secret workloads?

But they aren't stopping at criticism. They're pushing to strangle open source AI in its crib, with California bills like SB 1047 set to choke out American AI research and development while tangling up open weights AI in suffocating red tape. Despite a massive groundswell of opposition from centrist and left Democrats, Republicans, hundreds of members of academia and more, the bill passed and now goes to the governor. Eight members of Congress called on the governor to veto the bill. Speaker Pelosi took the unprecedented step of openly opposing a bill in a state Assembly that is primarily Democratic.

And yet still the bill presses forward.

Why?

Do too many people just not understand what open source means to the modern world?

Do they just not see how critical it is to everything from our power grid to our national security systems and to our economies?

And the answer is simple:

They don't.

That's because open source is invisible.

It runs in the background. It quietly does its job without anyone realizing it's there. It just works. It's often not the interface to software, it's the engine of software, so it's under the hood but not often the hood itself. It runs our servers and routers and websites and machine learning systems. It's hidden just beneath the surface.

The average person has no idea what powers their Instagram and Facebook and TikTok and Wikipedia and their email servers and their WhatsApp and Signal. They don't know it's making their phone work or that the trades of their retirement portfolio depend on it.

And that's a problem.

Because if people don't even know it exists, how can we defend it?

1/

2/12
It's the End of the World as We Know It

Even worse, that blindness to the essential software layer of the world is now putting tomorrow's tech stack under threat of being dominated by a small group of big closed source players, especially when it comes to AI.

Critics are fighting back against open source, terrified that if we all have access to powerful AI we'll face imaginary "catastrophic harms" and it must be stopped by any means necessary.

If they win, it will be a devastating loss for us all, because the revolutions in tech that made it so easy for people to get an education online, chat with friends all around the globe, find information on any topic at the click of a button, stream movies, play games, and find friendship and love are facing intense pressure from governments and activists the world over who want to clamp down on freedom and open access.

In private they know their laws are about crippling AI development in America. They call the bills “anti-AI” bills in their private talks. But in public they carefully frame it as “AI safety” to make it go down nice and easy with the wider public.

What made the web so great was an open, decentralized approach that let anyone set up a website and share it with the world and talk to anyone else without intermediaries. It let them share software and build on the software others gifted them to solve countless problems.

But with each passing day, it's looking more and more likely that we go from an open digital landscape to a world where civilian access to AI is severely locked down and restricted and censored and where powerful closed source AIs block and control what you can do at every step.

That's because a small group of about a dozen non-profits, financed by three billionaires, like convicted criminal and fraudster Sam Bankman-Fried and Estonian billionaire Jaan Tallinn, who thinks we should outlaw advanced GPUs and enforce a strict crackdown to crush all strong AI development, have exploited this lack of awareness about the power and potential of open source. They've built an AI panic campaign that's threatening tomorrow's tech stack and making it increasingly likely that tomorrow's internet is a world of locked doors and gated access to information.

This extremist group is pushing an information warfare campaign to terrify the public and make it think AI is dangerous, working to drive increasingly restrictive anti-AI laws onto the legislative agenda across the world, while driving a frenzied moral techno panic with increasingly unhinged claims about the end of the world.

They've completely failed to make any headway on stopping the worst abuses of AI, including autonomous weapons and mass surveillance, with zero treaties signed and zero laws passed. So they've turned their full force to restricting civilian access to AI and stopping the dreaded scourge of LLM chatbots, despite none of their actual fears materializing. They've managed to get over 678 overlapping, conflicting and impossible-to-navigate bills pressing forward in 45 states and counting in the US alone, while utterly failing to make even the most basic headway on truly dangerous AI use cases like using it to spy on everyone at scale.

The irony of all this fear and panic is that these misguided folks are driving us right into the worst possible timeline of the future.

It's a world where your AI can't answer questions honestly because it's considered "harmful" (this kind of censorship always escalates, because what's "harmful" is always defined by what people in power don't like), where information is gated instead of free, where open source models are killed off so university researchers can't work on medical segmentation and curing cancer (because budget-conscious academics rely on open weights/open source models; they can fine-tune them but can't afford to train their own) and where we have killer robots and drones but your personal AI is utterly hobbled and lobotomized.

It's a world we've got to fight to stop at all costs.

To understand why you just have to understand a little about the history of how we got to now.
 


2/

3/12
How we got here is simple:

Permissionless access. Permissionless innovation.

That means you don't have to ask anyone for access to information or set up a website or share your code. You don't need approval or a checklist of things you did to earn the "right" to access that information, share your thoughts, or publish your code. You don't need to get someone's blessing to use code or build software on top of it or publish an ML model. You can just build whatever your mind can imagine and whatever your skills allow you to build.

Permissionless is the key driver of progress in the modern world and in history.

And it's the foundation of open source.

The basic idea behind open source is to give everyone the same building blocks, whether you're the government, a massive multinational corporation, a tiny individual academic researcher, a community sewing circle, or a small startup business.

When everyone has the same building blocks, it's easier to build bigger and more powerful and more complex solutions on top of those blocks.

Take a technology like open source Wordpress. Wordpress made it super easy for anyone in the world to make and publish a beautiful website, even with limited design skills. It now powers over 43% of the web.

When I was younger it was hard to develop and publish a website. You had to do everything from scratch. But a wave of open technologies made it easier and easier as the years went by. We had the LAMP stack (Linux, Apache, MySQL, PHP): an operating system, web server, database server and web scripting language, which gave many developers the common tools they needed to create more complex web apps. That, in turn, enabled the development of Wordpress on top of those technologies.

Better and more powerful stacks followed, like the MERN stack (MongoDB, Express.js, React and Node), all open source technologies.

Each new layer of software makes it easier to build more advanced and more useful solutions on top of the last layer. It's the essence of progress and development. When open ecosystems are allowed to flourish and the enemies of open are finally beaten back once more, we see a flowering of new progress and innovations in a self-reinforcing virtuous loop.

When you only have mud-hut-level technology, you can only build one-story mud huts. When you have a hammer and nails and standard board sizes, you can build much taller and more robust structures. If you have steel and cement and rebar, you can build skyscrapers stretching to the sky.

This same cycle of openness benefiting the world permeates the progress of the world again and again across many different aspects of life.

Take publishing:

When six publishers controlled publishing completely, we had a flurry of the same kinds of books over and over again, all fitting a few basic models. Publishers kept 90% of writers' profits. But the digital self-publishing revolution gave us a flood of amazing new writers like Hugh Howey, now keeping 70% or more of their profits, who would never have made it past the gatekeepers of the big publishers to become a phenomenon with Wool.

3/

4/12
Or take business and economics:

When starting a company required the permission of the king, you had a small group of super powerful corporations, like the British East India Company, which was once so mighty it conquered India with a private army double the size of the British army. That's right, it wasn't Britain that conquered India, it was a giant mega corporation. And we think we have powerful companies now! Amazon ain't got nothing on the British East India Company.

When breakthrough laws allowed anyone to create a company, we had a revolution of small businesses and a massive uptick in wealth and economic progress and longevity that has driven down poverty over two centuries to once unimaginable numbers and created more wealth than anyone in the 1700s could possibly even dream of.

Historian Michail Moatsos estimates that in 1820, just 200 years ago, almost 80% of the world lived in extreme poverty. That means people couldn't afford even the tiniest place to live or food that didn't leave them horribly malnourished. It means living on less than $1.90 a day in 2011 prices and $2.15 in 2017 prices. Actually you don't even need to go back that far. In the 1950s, half the world still lived in extreme poverty.

Today that number is 10%.

4/

5/12
Nearly half of all children used to die, in every country on Earth, into the late 1800s, when economies were slower and more stagnant and less open.

In most first world countries, child mortality is now in the single digits per thousand births, a fraction of 1%, and even in the developing world it's around 4%.

To see just how stark that drop is, take a look at this chart which shows child mortality over two millennia. You’ll see that for most of human history it was a flat line of uninterrupted death and then it suddenly drops dramatically.

5/
6/12
That's what progress does. It makes the world as a whole better. That's what openness, sharing and common building blocks do. They create possibilities for more progress and more ideas that build on the last ones. Once you have an understanding of microbes, you can build better defenses against them. Once you have microscopes, you can see those tiny menaces and understand if the drug you crafted to stop them is working. Each layer of knowledge and tools enables the next level of knowledge and tools. When anyone can build on the past and abstract those lessons to new domains we get new breakthroughs and new ideas and new businesses.

We're the only species on Earth that can share ideas down through the years and record our understanding of the world so that later generations can build on that. We're the only species that can create software tools that let you benefit from the programmers who went before you. You're just a download away from having a library that will do much of the heavy lifting in whatever brilliant new software you can imagine.

Check out this letter Steve Jobs sent to himself shortly before he passed (attached here). He understood that we all stand on the shoulders of giants.

When you look back at the history of progress and openness and sharing and realize how crucial it is, it's easy to assume that everyone can see just how important it is. But they can't. The forces of closed and the enemies of innovation are a powerful chorus and they strike back again and again in history.

They believe in command and control. They believe it’s better to hoard knowledge and keep it secret.

They imagine that if we just make something a little bit open, instead of all the way open, we can prevent bad people from ever getting their hands on the best tools. But it never works that way.

In the early days of the browser, during the crypto wars of the early 1990s, the US government tried to limit advanced 128-bit encryption to Americans only, believing they'd somehow stop Russia and criminals from getting encryption.

It didn't.

In the meantime, this restrictive policy held back e-commerce for many years, because people overseas could only get access to 40-bit encryption, which was easily hackable not just by the good guys but by the bad guys as well. It is impossible to make a backdoor that only the good guys get to use. Most Americans didn't even have access to 128-bit encryption because it was so cumbersome to get approval to use it from the bureaucratic choke points, so they just gave up or didn't bother and used the weakened browser, exposing the good guys to attacks too.
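To put rough numbers on why 40-bit keys were so weak compared to 128-bit keys, here's a back-of-the-envelope sketch; the attacker's guessing rate is an illustrative assumption, not a historical figure:

```python
# Back-of-the-envelope comparison of 40-bit vs 128-bit keyspaces.
keys_40 = 2 ** 40    # ~1.1 trillion possible keys
keys_128 = 2 ** 128  # ~3.4 x 10^38 possible keys

# Assume an attacker testing one billion keys per second (illustrative only).
guesses_per_second = 1e9

hours_40 = keys_40 / guesses_per_second / 3600
years_128 = keys_128 / guesses_per_second / (3600 * 24 * 365.25)

print(f"40-bit exhaustive search:  about {hours_40:.1f} hours")
print(f"128-bit exhaustive search: about {years_128:.1e} years")
```

Even granting an attacker vastly more hardware, the 88-bit gap (a factor of roughly 3 x 10^26) is why export-grade 40-bit crypto was trivially breakable while 128-bit keys were not.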

Today e-commerce is worth about 4 trillion dollars worldwide.

It might already be worth more if we'd gone faster and allowed encryption to proliferate earlier. Instead we lost years fighting to keep the tide of misguided thinking at bay.

We prevented law abiding people who just wanted to shop online from getting access to the tech and we didn't stop bad guys from having access to cryptography anyway.

That's because closed and gated systems have tremendous false positive and false negative rates. Gated systems are inexact and inefficient: they block many people you want to have access and don't stop many of the people you don't want to have access. Closed systems also don't scale. They're slow, relying on armies of censors and middlemen with little to no experience to try to make decisions, and all it does is create endless bottlenecks everywhere as poorly paid, unskilled people try to make critical decisions and do it badly. Every time we try this in history we create the unwanted side effect of dramatically slowing down progress, because information, ideas and software can't flow freely and get trapped between ugly, inefficient, governmental gates.

And yet the forces of closed persist, insistent that their way is safer and that they can somehow stop all risk and all bad things from happening.

Today's enemies of open source have similar motivations to the people who fought open encryption standards:

Fear.

Despite widespread encryption leading to trillions of dollars in economic development through innovations like HTTPS, they see only the harms, like a terrorist using encryption to conceal their dealings, completely missing the trillions in economic value and information sharing and business. They imagine that they could somehow stop the bad people from ever getting their hands on powerful tools while still keeping the same economic benefits for the rest of us.

And that brings us to one of the most oft-used arguments against open source:

We've got to stop the bad guys from getting it.

So let's look at that argument and the other major anti-open arguments and see how they stack up in the real world.

6/
 

7/12
The first argument against open source in general and open source/open weights AI in particular is simple and it goes like this:

If I have a powerful technology I don't want criminals or my enemies or powerful authoritarian regimes using this technology, so I have to keep it a closely guarded secret.

From here we inevitably hear something like, "would you open source a nuclear weapon?"

In other words we don't want the bad guys getting it.

We'll leave aside the slippery little fact that the "bad guys" are defined by the guys saying it and it is not a universal concept.

Let's also leave aside the fact that the science behind nuclear weapons was in fact widely shared and open in the scientific community at the time. Let's pretend that it wasn't, and that a small group of genius scientists developed the bomb (which did happen) with no prior knowledge from anyone else (which did not and could not have happened) and no open scientific research (also impossible), and that they managed to create this weapon of mass destruction.

In fact, that's exactly what the United States tried to do with the technology once they had it. They did everything possible to keep it a state secret and to keep anyone else from developing nuclear weapons. They failed. In short order Russia, then China, then other nations developed their own bombs through a combination of first principles thinking, reverse engineering, espionage and leaks.

The technology was simply too powerful to keep under wraps and too tempting for everyone else, so those nations went all out to get a hold of the tech and they got it. This is the case with every technology in history. Nobody has ever been able to keep a technology secret for long.

When the Chinese dynasties of the past tried to keep the cultivation and making of tea a secret, because it was so lucrative, they failed.

In the mid-19th century, Robert Fortune, a Scottish botanist, was sent by the British East India Company (there they are again) on a covert mission to China with the goal of breaking China's monopoly on tea production. Disguised as a Chinese official, Fortune managed to infiltrate regions of China that were off-limits to foreigners, where he collected tea plants, seeds, and detailed information about tea cultivation and processing techniques.

Again though, shouldn't we at least try to keep it a secret? Another variant of that is: "the longer we keep it a secret the better." And didn't we prevent much destruction by not allowing nuclear weapons to proliferate?

This is where I will surprise you. Yes, we did. We absolutely should not open source a nuke and we should not allow weapons of mass destruction to proliferate.

And that's what we have in place today. Strict controls of fissionable materials and equipment to make nuclear bombs. It doesn't stop our most determined enemies but it likely has stopped terrorist groups and other unhinged fanatics from doing tremendous damage to the world.

Let's go further and say we should absolutely restrict the proliferation of AI for weapons technology and for use in mass surveillance. In other words, we should look to contain the two worst use cases of AI technology, the cases of nation states using it to kill people or to spy on everyone, everywhere.

But that's where the analogy ends.

AI is not a nuclear weapon. LLMs are not nuclear weapons. They're not even close. A nuclear weapon is designed to do one thing and one thing only: kill as many people as possible. AI has a vast range of possibilities, from detecting cancer, to writing emails, to creating art, to doing tasks, to powering robots that will clean the dishes and mop the floors, to proving math theorems and more.

Very few technologies are inherently destructive like a nuclear weapon, so when you hear someone making that analogy you know they're making it from a lack of understanding or in bad faith. Bad faith means they know it's not a real argument and they make it anyway because they have an ulterior motive, which is usually to link AI to something horrific in your mind so they can justify restricting it and locking it down and censoring it.

Every technology in the world has an inherent range of capabilities and possibilities from good to bad. Almost every technology and tool is "dual use," which is the second most used argument against open source. What is dual use? Just what it sounds like: you can use the tool for good and bad things. It's just the "we can't let the bad guys get it" argument in a different form.

But what does that even mean really? It's basically a government euphemism for justifying restrictions on your rights.

Why?

Because almost every tool or technology in the world is "dual use."

This computer I'm writing on is "dual use." I'm using it to write this article but other people use it to hack people.

I can just write anything here and nothing will stop me. I can write a great article like this or a cookie recipe. I can also write a racial slur or a fiery call to bring down the government or a hate filled screed against this or that group and nothing will prevent me from doing it.

If you have a kitchen knife you can cut vegetables or stab someone on a train. You can pound nails to build your house with a hammer or hit someone over the head.

In fact, the things we should always be most worried about are not "dual use" technologies but "single use" things like nuclear weapons that have no other purpose but to kill. Everything else is dual use and it exists on a continuum from wonderful to destructive.

This range of capabilities may lean more to one side or be somewhere in the middle.

On the whole, a lamp in your house leans strongly to the side of good but I can still hit you over the head with it or you can electrocute yourself with it. A gun may lean more strongly to bad but I can still hunt to feed my family with it or defend against an intruder.

AI is "dual use" and that tells us absolutely nothing at all. So what is it? It's a general purpose tool, like a hammer or a computer. It's right in the middle of that spectrum. It can be used to make autonomous drones that zero in on your face and blow up your head or it can be used to detect cancer.

General purpose technologies are always somewhere in the middle of the range in that they can be used for almost anything and be made to serve many purposes. AI has a massive range of capabilities, both good and bad.

AI can teach a young child to learn a new language or discover new potential pathways for combating cancer or it can be used for surveillance and monitoring dissidents in an authoritarian regime. It is a tool, wielded by the user of that tool and it mirrors the intentions of the wielder. For a better tech analogy, AI might be closer to something like Linux.

Linux has a tremendous range of capabilities as well. It's also general purpose and can be bent to any use you want to put it to. As we saw earlier, it's used in all the top supercomputers on Earth, the vast majority of smartphones and most home routers, and it powers every major public cloud, to name a few.

It's also used to write malware and create botnets and it powers the supercomputers and clouds of authoritarian nations too.

And yet we don't try to restrict Linux or say that it can't be developed or control how it's used or make Linux kernel developers liable for its misuse.

Why?

Because the benefits far, far outweigh the downsides. By trying to restrict Linux we would have killed Linux.

It would have meant our soldiers don't get to use it on their laptops, because it doesn't exist or exists in such a broken form that it never proliferates outwards and creates the virtuous cycle of amazing new discoveries and inventions. With no Linux, we don't get to use it on our supercomputers either. We also don't get the benefit of the countless scientific libraries that sprang up on top of it for everything from scientific research to running the stock market.

And all this means coming to terms with the fact that the bad guys get to use Linux too.

It means that authoritarian regimes get to build their supercomputers with it and hackers get to use it to write botnets and penetration attack tools to take down servers and hack your information.

And that sucks.

If there was a way to ensure that we get the trillions in economic benefits, and the scientific breakthroughs enabled by open science software, and the cloud, and ham radios and everything else, while ensuring that Russian Advanced Persistent Threats (APTs) can't hack servers I'd wave a wand and make that happen.

But there is no magic wand to wave. There is no way to get the benefits without the openness. They are intrinsically linked properties that cannot be separated no matter how much we wish they could be separated.

Smart societies are built on the understanding that bad things will sometimes happen, that bad people will always exist and that they will exploit the tools and resources available to them to do their nefarious deeds.

The best thing we can do is punish them after the fact and do what we can to slow them down in other ways.

We simply cannot make a society that somehow keeps tools out of the bad guys' hands while still giving everyone else the benefits. Wise practitioners of statecraft shape society for the proliferation of good and the benefit of as many of us as possible.

Despite the fact that Linux is used for some purposes we'd prefer it weren't used for, we let it proliferate because of the overwhelming positive benefits of a widespread set of common software building blocks for the world.

Every technology has inherent downsides, but if it has a range of capabilities, we let the technology proliferate far and wide because we want to reap the benefit of those capabilities as a society. The more we reduce roadblocks to access, the more ways that technology proliferates and benefits the world in unexpected ways. Nobody can predict all the ways someone will invent to use the common building blocks of a technology stack to make the world better. That's the nature of creativity. We don't know where the next breakthrough will come from, so wider access means a greater possibility that the next breakthrough comes because more free minds are working on the problem, while limited access means fewer people get to try their hand at building the future.

It is an illusion to think we can eliminate all risk, and when we try we create bottlenecks and choke points that unwittingly strangle many of the benefits too.

Just because one person stabs someone with a kitchen knife, we do not take kitchen knives off the market, because the other 99.99% of people need them to cut vegetables and we want that to keep happening.

Open societies are about accepting a measure of reasonable risk.

7/
 

8/12
Proponents of open are fundamentally realists.

We know that some bad things will happen no matter what you do in life. And we know the benefits that come from openness vastly outweigh the downsides of openness. By a massive margin.

Enemies of openness all share similar traits. It doesn't matter if they're against democracy, open weights AI, open source software, or free speech.

At their core, enemies of open are pessimists.

They believe people are inherently corrupt and evil at their core and can't be trusted.

They believe the small number of risks outweigh all the benefits.

They are short-sighted and fixed on big, flashy risks that are often totally imaginary but that they’ve convinced themselves are real.

That leads them to try to design rigidly controlled systems where they're in charge of deciding who gets to do what and when. More than anything, they believe that preventing harms is vastly more important than any benefits that an idea or technology might bring if it was more widely disseminated.

These are the folks who would have been against the printing press, because knowledge is "dangerous."

They're also fundamentally elitists. They believe they have special knowledge about how the world works that nobody else has except their chosen in-group and that they're the only ones who can be trusted to do the right things in the world.

They mistake their model of reality for actual reality itself.

In other words, they're delusional.

And dangerous.

You won't find a single fan of openness in the entire history of dictatorships or in systems of mass violence like communism and fascism. They favor rigid control and a bloody reality today for an imagined glorious future. If only they could eliminate the people who don't agree and those ways of thinking, they could have true paradise.

Rigid control has never made a paradise for anyone.

Only openness has.

Go ahead and grab Linux for whatever project you dream up. Grab one of the tens of millions of other open source projects for everything from running a website, to training an AI model, to running a blog, to powering a ham radio. You don't have to ask anyone's permission or pass a loyalty test or prove that you align with the worldview of the people in power. It's free and ready to use right now.

You get to use the same software as mega-corporations with tens of billions of dollars in revenue. You get to use the same software as super powerful governments around the world. So do charities, small businesses just getting started, universities, grade schools, hobbyists, tinkerers and more.

Open source is a powerful idea that's shaped the modern world but it's largely invisible because it just works and most people don't have to think about it. It's just there, running everything, with quiet calm and stability. That's made it hard to defend and that's a tragedy because open source gives everyone a level playing field.

With that kind of reach and usefulness, I never saw it as even remotely possible that someone would see open source as a bad thing or something that must be stopped ever again.

But I was wrong. Here we are again. The battle is not over. It's starting anew.

The people who want to destroy open source AI come from a loosely knit collection of gamers, breakers and legal moat makers. Mustafa Suleyman wants to make high-end open source AI work illegal. He and a few other AI voices want to make sure that you can never compete with their companies, through regulatory capture like licensing requirements for model makers. The Ineffective Altruism movement (powered by such luminaries as Sam Bankman-Fried and his massive crypto fraud) has linked up with AI doomsday cultists, and they want to stop open source AI by forcing companies to keep AI locked up behind closed doors instead of releasing the weights and the datasets and the papers that define how it works.

They must be stopped.

They must be stopped because they are misguided, magical thinkers whose ideas don't work, have never worked and will never work. We can't run a society based on broken ideas that have a 100% failure rate over time. We can't run it based on wishful and delusional thinking.

Real life is about understanding that good and bad people exist and you can't prevent all harm before it happens. We punish the bad people and we let the rest of us go to work and live life.

Real, adult understanding of life is that life is a risk. There are no guarantees. Openness means sometimes bad things will happen but open is almost always preferable to closed except when it comes to personal privacy and weapons of mass destruction.

We don't make healthy societies with childish black and white thinking.

When we grow into adults we put away childish things.

And we embrace open.

8/

9/12
So much prose. Light on details and actual points but heavy on pathos.

TL;DR: you like open-source. you don't know much about AI. you don't like SB 1047. you didn't like the people behind it and believe that it's bad for open-source AI. you're light on actual details.

10/12
TLDR, your reading comprehension is terrible and you're not as bright and witty as you imagine yourself to be.

11/12
Daniel, this was a great essay. Thanks for drawing it all together.

I hope a lot of people read it.

It was so good it even overcame my pet peeve about WordPress miscapitalization :-)

12/12
Overcoming typo pet peeves is a big deal!

