If AI is plagiarising art and design, then so is every human artist and designer in the world

bnew

Veteran
Joined
Nov 1, 2015
Messages
62,298
Reputation
9,448
Daps
170,875



June 19, 2023

sg-merlion.jpg



Disclaimer: Opinions expressed below belong solely to the author.

Who should own the copyrights to the above rendition of Merlion in front of a somewhat awkward Singapore skyline?

Well, some people believe nobody should, while others think they should be paid if their photo or illustration was among the millions fed through AI models, even if the final work looks nothing like theirs.

The rise of generative AI tools has triggered panic and rage among so many that, I have to say, it’s becoming entertaining to watch modern-day Luddites decrying the sudden rise of a superior technology that threatens to displace their inferior manual labour.

The most recent argument — particularly in the field of generative models for graphics and art — is that artificial intelligence is guilty of plagiarising the work of people, having learned to create new imagery after being fed millions of artworks, photos, illustrations and paintings.

The demand is that AI companies should at least compensate those original creators for their work (if those generative models are not outright banned from existence).
If that is the case, then I have a question for all of those “creatives” loudly protesting tools like Midjourney or Stable Diffusion: are you going to pay all the people whose works you have observed and learned from?

I’m a designer myself (among other professions) — and yet I have never been very fond of the “creative” crowd, which typically sees itself as either unfairly unappreciated or outright better than everybody else around (remember the meltdown they had during Covid when they were deemed “non-essential” in a public survey?).

And as a designer, I consume lots of content on the internet, watching what other people do. In fact, isn’t this in part what a tool like Pinterest was created for?


I’d like to know how many designers do not use it to build inspiration boards for their many projects — illustrations, logos, shapes, packaging, among others.

And how many have ever thought about paying a share of their fee to the people they sought inspiration from?
Pinterest design board: if you’re using Pinterest for design inspiration, should you pay for every image you’ve seen that may have helped you do your project?

What about Dribbble? Deviantart? Google Images?
It’s time to be honest with yourself: nothing you’ve ever designed, drawn, painted, sculpted or sketched is absolutely original.

Get off your high horse and just remember the thousands of images you’ve gone through before you finished the job.

In fact, observation and copying are the basis of education in the field, where you acquire competence in using tools that many have used before you, while mastering different techniques of expression — none of which were authored by you.

There are no painters who just wake up one day and pick up brushes and watercolours, and start creating something out of thin air. Everybody learns from someone else.
Bob Ross

And AI models such as Midjourney, DALL-E or Stable Diffusion learn in just the same way — they are fed images with accompanying descriptions that help them identify the objects in a picture as well as the specific style it was created in (painted, drawn, photographed, etc.).
Fundamentally, they behave just as humans do — only much more quickly and at a much greater scale. But if we want to outlaw or penalise that, then the same rules should apply to every human creator.

Even the argument against utilising images with watermarks — as has often been the case with AI pulling content from stock image websites — is moot because, again, people do just the same. We browse thousands of images, not necessarily paying for them, to seek ideas for our projects.

As long as no element is copied outright, the end result is considered our own creative work that is granted full copyrights. If that’s the case, then why would different rules apply to an automated system that behaves in the same way?
And if your argument is that AI is not human so it doesn’t enjoy the same rights, then I’d have to remind you that these sophisticated programs have their human authors and human users.

When I enter a specific prompt into the model, I’m employing the system to do the job for me — while paying for it. It does what I tell it to do, and what I could have done manually but now I do not have to. My own intent is the driver of the output, so why should I not enjoy the legal rights to whatever it produces?

Why can I create something with Photoshop, but not with Midjourney?

This is bound to be a legal minefield, gradually cleared in the coming few years, with some decisions in the US already ruling against AI (and basic logic, it seems).

But as it has been throughout history, no attempts to stall technological progress have ever succeeded and it’s unlikely to be different this time. Good designers and artists pushing the envelope are going to keep their place in the business.

Everybody else is better off learning how to type in good prompts before it’s too late.
 

Adeptus Astartes

Loyal servant of the God-Brehmperor
Supporter
Joined
Sep 15, 2019
Messages
11,660
Reputation
2,932
Daps
71,460
Reppin
Imperium of Man
:unimpressed: The machine made the art, not the human. This is the stupidest argument ever.
And if your argument is that AI is not human so it doesn’t enjoy the same rights, then I’d have to remind you that these sophisticated programs have their human authors and human users.
Human inspiration is not the same, at all. Thank god the USCO agrees.

This dude is lazy and wants to make money by doing nothing but typing prompts.
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,851
Daps
204,013
Reppin
the ether
Lots of dumb arguments in the article. Among them, a human can leave Microsoft and start a new company using the things they've learned, but they can't just copy-paste Microsoft's code. ChatGPT is closer to copy-and-paste than to human thought. A graphic designer can use the tools they learned working for an advertising company, but they can't just tweak the logos for new clients. It reminds me of the student who plagiarizes an essay, then changes every 5th word so you can't search it online. Nothing original has been created no matter how many word substitutions you make.

Also, he ignores the fact that AI can operate at a scale that will completely wipe out human creators. Even a "derivative" artist can only put in one person's worth of work. A rich guy who buys access to the best AI can employ 10 inferior artists to make 100x as much material as even the greatest artist could put out, and soon every real human is priced out of everything other than niche fields even if their work is superior.
 

SupaDupaFresh

Superstar
Joined
Jul 25, 2018
Messages
6,544
Reputation
5,607
Daps
34,386
Stupid article.

The fact that AI draws from and relies on human inspiration is not even the real problem.

At the end of the day we do not want to live in a world where human creativity is obsolete. Where we can't tell or trust what is real and what is not. Where this technology can be easily exploited for unethical ends that hurt people, consumers, industries, and economies. There MUST be regulation on AI generated media that stops it from being exploited. There should be some type of disclaimer or regulation for how and when AI labor is used. This cannot go unregulated.

I'm an artist myself, personally and professionally, and I don't feel completely threatened by AI. It has its purpose, and I think at the end of the day the whole appeal of art is looking at the work and technical application of another human being. There's a reason why you can buy a print of Starry Night for cheap but you have to fork up millions to own the real piece. We are fascinated by the work of other human beings and to witness the capability of one another. We love art--both observing it and creating it--because it is the expression of other human beings.

Imagine if AI can generate sports games. With one-to-one photorealistic graphics and human behavior. Like watching the computer play against each other in 2K. Would you prefer this to real sports? Even if it looks and feels the same would you be truly entertained and invested in what you know is a computer simulation? Of course not. Sports is more than just shallow competition and entertainment. Sports is a celebration of the human body, mind, and spirit, and everything we are capable of. The reason we love sports and athleticism is because we are fascinated with seeing the physical capabilities of humankind. Following the stories and careers of actual human beings who have dedicated their lives to being faster, stronger. AI sports would have no real value and neither does AI art. It aint replacing shyt.

AI art has literally no true value beyond novelty. AI art is not gonna stop people from valuing real art created from the hands and mind of a human being we can admire and draw TRUE inspiration in our lives from. AI stuff is lifeless, it's not hard to pick out, and it serves only a commercial purpose. But it MUST BE REGULATED. This is not brave new world technology we can just embrace without forethought or consideration for human labor. This guy is a real tool.
 

yseJ

Empire strikes back
Joined
Apr 30, 2012
Messages
45,464
Reputation
2,740
Daps
66,116
Reppin
The Yay
Agreed. It needs to be regulated not only for artistic or human integrity, but to prevent a lot of issues people don't see with AI as it is right now. We pulled this tech too early for everyday use, I swear. It's fine to use as an auxiliary tool, but it is blowing up to be far more than that.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
62,298
Reputation
9,448
Daps
170,875

Stable Signature: A new method for watermarking images created by open source generative AI​


October 6, 2023 • 6 minute read



AI-powered image generation is booming and for good reason: It’s fun, entertaining, and easy to use. While these models enable new creative possibilities, they may raise concerns about potential misuse from bad actors who may intentionally generate images to deceive people. Even images created in good fun could still go viral and potentially mislead people. For example, earlier this year, images appearing to show Pope Francis wearing a flashy white puffy jacket went viral. The images weren’t actual photographs, but plenty of people were fooled, since there weren’t any clear indicators to distinguish that the content was created by generative AI.


At FAIR, we’re excited about driving continued exploratory research in generative AI, but we also want to make sure we do so in a manner that prioritizes safety and responsibility. Today, together with Inria, we are excited to share a research paper and code that details Stable Signature, an invisible watermarking technique we created to distinguish when an image is created by an open source generative AI model. Invisible watermarking incorporates information into digital content. The watermark is invisible to the naked eye but can be detected by algorithms—even if people edit the images. While there have been other lines of research around watermarking, many existing methods create the watermark after an image is generated.

More than 11 billion images have been created using models from three open source repositories, according to Everypixel Journal. In this case, invisible watermarks can be removed simply by deleting the line that generates the watermark.







While the fact that these safeguards exist is a start, this simple tactic shows there’s plenty of potential for this feature to be exploited. The work we’re sharing today is a solution for adding watermarks to images that come from open source generative AI models. We’re exploring how this research could potentially be used in our models. In keeping with our approach to open science, we want to share this research with the AI community in the hope of advancing the work being done in this space.

How the Stable Signature method works



Stable Signature closes off the possibility of removing the watermark by rooting it in the model itself, producing a watermark that can be traced back to where the image was created.

Let’s take a look at how this process works with the below chart.




[Chart: the Stable Signature workflow — Alice fine-tunes the decoder with a watermark, Bob generates images that carry it]



Alice trains a master generative model. Before distributing it, she fine-tunes a small part of the model (called the decoder) to root a given watermark for Bob. This watermark may identify the model version, a company, a user, etc.

Bob receives his version of the model and generates images. The generated images will carry the watermark of Bob. They can be analyzed by Alice or third parties to see if the image was generated by Bob, who used the generative AI model.

We achieve this in a two-step process:



  • First, two convolutional neural networks are jointly trained. One encodes an image and a random message into a watermark image, while the other extracts the message from an augmented version of the watermark image. The objective is to make the encoded and extracted messages match. After training, only the watermark extractor is retained.
  • Second, the latent decoder of the generative model is fine-tuned to generate images containing a fixed signature. During this fine-tuning, batches of images are encoded, decoded, and optimized to minimize the difference between the extracted message and the target message, as well as to maintain perceptual image quality. This optimization process is fast and effective, requiring only a small batch size and a short time to achieve high-quality results.
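
For readers who want a concrete picture of the two-step process above, here is a minimal PyTorch-style sketch. Everything in it — the network sizes, loss weights, residual amplitude, and the toy stand-in decoder — is an invented illustration of the general idea, not the actual Stable Signature implementation (see the linked paper and code for that):

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 48  # number of watermark bits (illustrative choice)

class WatermarkEncoder(nn.Module):
    """Hides a K-bit message in an image as a low-amplitude residual."""
    def __init__(self, k=K):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + k, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, img, msg):
        b, _, h, w = img.shape
        m = msg.view(b, -1, 1, 1).expand(b, msg.shape[1], h, w)  # broadcast bits spatially
        return img + 0.05 * self.net(torch.cat([img, m], dim=1))

class MessageExtractor(nn.Module):
    """Recovers the K-bit message from a (possibly edited) image."""
    def __init__(self, k=K):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, k),
        )

    def forward(self, img):
        return self.net(img)  # one logit per bit

def augment(img):
    """Stand-in for the crops/compression/colour shifts applied during training."""
    return torch.clamp(img + 0.02 * torch.randn_like(img), 0, 1)

# Step 1: jointly train encoder + extractor so the extracted bits match the encoded bits.
enc, ext = WatermarkEncoder(), MessageExtractor()
opt = torch.optim.Adam(list(enc.parameters()) + list(ext.parameters()), lr=1e-3)
for _ in range(200):
    img = torch.rand(8, 3, 64, 64)                      # random stand-in images
    msg = torch.randint(0, 2, (8, K)).float()           # random messages
    wm = enc(img, msg)
    loss = (F.binary_cross_entropy_with_logits(ext(augment(wm)), msg)
            + 10.0 * F.mse_loss(wm, img))               # keep the image unchanged to the eye
    opt.zero_grad(); loss.backward(); opt.step()
# After this stage only the extractor `ext` is kept.

# Step 2: fine-tune a (toy) latent decoder so every image it produces carries a fixed signature.
decoder = nn.Sequential(
    nn.ConvTranspose2d(4, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)
frozen = copy.deepcopy(decoder).eval()                  # quality reference
ext.requires_grad_(False)                               # extractor stays fixed from now on
signature = torch.randint(0, 2, (1, K)).float()         # e.g. Bob's watermark
opt2 = torch.optim.Adam(decoder.parameters(), lr=1e-4)
for _ in range(200):
    z = torch.randn(8, 4, 32, 32)
    out = decoder(z)
    with torch.no_grad():
        ref = frozen(z)
    loss = (F.binary_cross_entropy_with_logits(ext(out), signature.expand(out.shape[0], -1))
            + F.mse_loss(out, ref))                     # stay close to the original decoder
    opt2.zero_grad(); loss.backward(); opt2.step()
```

In the real system, detection then amounts to running the extractor on a suspect image and checking how many of the recovered bits match a known signature.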



Assessing the performance of Stable Signature



We know that people enjoy sharing and reposting images. What if Bob shared the image he created with 10 friends, who each then shared it with 10 more friends? During this time, it’s possible that someone could have altered the image, such as by cropping it, compressing it, or changing the colors. We built Stable Signature to be robust to these changes. No matter how a person transforms an image, the original watermark will likely remain in the digital data and can be traced back to the generative model where it was created.






During our research, we discovered two major advantages of Stable Signature over passive detection methods. First, we were able to control and reduce the generation of false positives, which occur when we mistake an image produced by humans for one generated by AI. This is crucial given the prevalence of non-AI-generated images shared online. For example, the most effective existing detection method can spot approximately 50% of edited generated images but still generates a false positive rate of approximately 1/100. Put differently, on a user-generated content platform receiving 1 billion images daily, around 10 million images would be incorrectly flagged to detect just half of the generated ones. On the other hand, Stable Signature detects images with the same accuracy at a false positive rate of 1e-10 (which can be set to a specific desired value). Moreover, our watermarking method allows us to trace images from various versions of the same model—a capability not possible with passive techniques.
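
The arithmetic behind that comparison is worth making explicit. A quick sketch, using the hypothetical volume of 1 billion uploads per day from the text:

```python
images_per_day = 1_000_000_000  # hypothetical platform volume used in the text

for fpr in (1e-2, 1e-10):
    expected = images_per_day * fpr
    print(f"false positive rate {fpr:g}: ~{expected:g} human images wrongly flagged per day")

# false positive rate 0.01:  ~1e+07   (the "around 10 million" above)
# false positive rate 1e-10: ~0.1     (roughly one wrong flag every ten days)
```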



How Stable Signature works with fine-tuning



A common practice in AI is to take foundational models and fine-tune them to handle specific use cases that are sometimes even tailored to one person. For example, a model could be shown images of Alice’s dog, and then Alice could ask for the model to generate images of her dog at the beach. This is done through methods like DreamBooth, Textual Inversion, and ControlNet. These methods act at the latent model level, and they do not change the decoder. This means that our watermarking method is not affected by these fine-tunings.

Overall, Stable Signature works well with vector-quantized image modeling (like VQGANs) and latent diffusion models (like Stable Diffusion). Since our method doesn’t modify the diffusion generation process, it’s compatible with the popular models mentioned above. We believe that, with some adaptation, Stable Signature could also be applied to other modeling methods.


Providing access to our technology



The use of generative AI is advancing at a rapid pace. Currently, there aren’t any common standards for identifying and labeling AI-generated content across the industry. In order to build better products, we believe advancements in responsibility research, like the work we’re sharing today, must exist in parallel.

We’re excited to share our work and give the AI research community access to these tools in the hope of driving continued collaboration and iteration. While it’s still early days for generative AI, we believe that by sharing our research, engaging with the community, and listening to feedback, we can all work together to ensure this impressive new technology is built, operated, and used in a responsible way.

The research we’re sharing today focuses on images, but in the future we hope to explore the potential of integrating our Stable Signature method across more generative AI modalities. Our model works with many popular open source models; however, there are still limitations. It does not scale to non-latent generative models, so it may not be future proof against new generation technologies. By continuing to invest in this research, we believe we can chart a future where generative AI is used responsibly for exciting new creative endeavors.

This blog post reflects the work of Matthijs Douze and Pierre Fernandez. We'd like to acknowledge the contributions of Guillaume Couairon, Teddy Furon, and Hervé Jégou to this research.


Read the paper

Get the code
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
62,298
Reputation
9,448
Daps
170,875

T.J. Thomson, Author provided

Data poisoning: how artists are sabotaging AI to take revenge on image generators


Published: December 17, 2023 2:17pm EST

Authors
T.J. Thomson
Senior Lecturer in Visual Communication & Digital Media, RMIT University

Daniel Angus
Professor of Digital Communication, Queensland University of Technology


Imagine this. You need an image of a balloon for a work presentation and turn to a text-to-image generator, like Midjourney or DALL-E, to create a suitable image.

You enter the prompt: “red balloon against a blue sky” but the generator returns an image of an egg instead. You try again but this time, the generator shows an image of a watermelon.

What’s going on?

The generator you’re using may have been “poisoned”.



What is ‘data poisoning’?


Text-to-image generators work by being trained on large datasets that include millions or billions of images. Some generators, like those offered by Adobe or Getty, are only trained with images the generator’s maker owns or has a licence to use.

But other generators have been trained by indiscriminately scraping online images, many of which may be under copyright. This has led to a slew of copyright infringement cases where artists have accused big tech companies of stealing and profiting from their work.

This is also where the idea of “poison” comes in. Researchers who want to empower individual artists have recently created a tool named “Nightshade” to fight back against unauthorised image scraping.

The tool works by subtly altering an image’s pixels in a way that wreaks havoc on computer vision but leaves the image looking unaltered to the human eye.

If an organisation then scrapes one of these images to train a future AI model, its data pool becomes “poisoned”. This can result in the algorithm mistakenly learning to classify an image as something a human would visually know to be untrue. As a result, the generator can start returning unpredictable and unintended results.
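
The article doesn’t spell out Nightshade’s actual optimisation, but the general mechanism — pixel changes too small for a person to notice that nonetheless steer what a vision model “sees” — can be illustrated with a generic adversarial-perturbation sketch. The toy classifier, the epsilon budget, and the “egg” class below are all invented for the example, not taken from Nightshade:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier standing in for the vision model inside a scraping/training pipeline.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
).eval()

def poison(image, wrong_label, epsilon=0.01, steps=20):
    """Nudge pixels (within +/- epsilon) so the model prefers `wrong_label`,
    keeping the change far below what a human would notice."""
    original = image.clone()
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(image), wrong_label)     # loss w.r.t. the *wrong* class
        grad, = torch.autograd.grad(loss, image)
        with torch.no_grad():
            image -= (epsilon / steps) * grad.sign()          # step toward the wrong class
            image.copy_(torch.clamp(image, original - epsilon, original + epsilon))
            image.clamp_(0.0, 1.0)
    return image.detach()

balloon = torch.rand(1, 3, 64, 64)   # pretend this is a photo of a red balloon
egg = torch.tensor([3])              # pretend class index 3 means "egg"
poisoned = poison(balloon, egg)
print("max pixel change:", (poisoned - balloon).abs().max().item())  # bounded by epsilon
```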

Symptoms of poisoning

As in our earlier example, a balloon might become an egg. A request for an image in the style of Monet might instead return an image in the style of Picasso.

Some of the issues with earlier AI models, such as trouble accurately rendering hands, for example, could return. The models could also introduce other odd and illogical features to images – think six-legged dogs or deformed couches.

The higher the number of “poisoned” images in the training data, the greater the disruption. Because of how generative AI works, the damage from “poisoned” images also affects related prompt keywords.




Read more: Do AI systems really have their own secret language?



For example, if a “poisoned” image of a Ferrari is used in training data, prompt results for other car brands and for other related terms, such as vehicle and automobile, can also be affected.

Nightshade’s developer hopes the tool will make big tech companies more respectful of copyright, but it’s also possible users could abuse the tool and intentionally upload “poisoned” images to generators to try and disrupt their services.


Is there an antidote?


In response, stakeholders have proposed a range of technological and human solutions. The most obvious is paying greater attention to where input data are coming from and how they can be used. Doing so would result in less indiscriminate data harvesting.

This approach does challenge a common belief among computer scientists: that data found online can be used for any purpose they see fit.

Other technological fixes also include the use of “ensemble modeling” where different models are trained on many different subsets of data and compared to locate specific outliers. This approach can be used not only for training but also to detect and discard suspected “poisoned” images.

Audits are another option. One audit approach involves developing a “test battery” – a small, highly curated, and well-labelled dataset – using “hold-out” data that are never used for training. This dataset can then be used to examine the model’s accuracy.
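
As a rough sketch of that audit idea (the helper name and the accuracy threshold below are illustrative choices, not from the article):

```python
import torch

def audit_model(model, battery_images, battery_labels, min_accuracy=0.90):
    """Evaluate a candidate model on a small, curated hold-out 'test battery'
    that was never used for training; an unexplained accuracy drop is one
    symptom that the training data may have been poisoned."""
    model.eval()
    with torch.no_grad():
        predictions = model(battery_images).argmax(dim=1)
    accuracy = (predictions == battery_labels).float().mean().item()
    return accuracy >= min_accuracy, accuracy
```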

Strategies against technology

So-called “adversarial approaches” (those that degrade, deny, deceive, or manipulate AI systems), including data poisoning, are nothing new. They have also historically included using make-up and costumes to circumvent facial recognition systems.

Human rights activists, for example, have been concerned for some time about the indiscriminate use of machine vision in wider society. This concern is particularly acute concerning facial recognition.

Systems like Clearview AI, which hosts a massive searchable database of faces scraped from the internet, are used by law enforcement and government agencies worldwide. In 2021, Australia’s government determined Clearview AI breached the privacy of Australians.




Read more: Australian police are using the Clearview AI facial recognition system with no accountability



In response to facial recognition systems being used to profile specific individuals, including legitimate protesters, artists devised adversarial make-up patterns of jagged lines and asymmetric curves that prevent surveillance systems from accurately identifying them.

There is a clear connection between these cases and the issue of data poisoning, as both relate to larger questions around technological governance.

Many technology vendors will consider data poisoning a pesky issue to be fixed with technological solutions. However, it may be better to see data poisoning as an innovative solution to an intrusion on the fundamental moral rights of artists and users.


 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,851
Daps
204,013
Reppin
the ether
I’m a designer myself (among other professions) — and yet I have never been very fond of the “creative” crowd, which typically sees itself as either unfairly unappreciated or outright better than everybody else around (remember the meltdown they had during Covid when they were deemed “non-essential” in a public survey?).

And as a designer, I consume lots of content on the internet, watching what other people do. In fact, isn’t this in part what a tool like Pinterest was created for?

This dude is lazy and wants to make money by doing nothing but typing prompts.


lol, that statement of his sounded a little too self-exposing so I tried to google online what he actually does, and found this reddit thread.



This comment is apparently stickied to his threads by the mods:

q8ZK7hp.jpg



The entire thread is savage, apparently every Reddit user in Singapore thinks he's a complete waste of space.

So basically, even before AI came about he was making a living with derivative shytposting, thinks that AI will improve his shytposting relative to the norm, and is bitter at people who have actual talent and can produce new content they haven't just copied from somewhere else.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
62,298
Reputation
9,448
Daps
170,875

1/1
I am pleased to share that our work on SynthID text watermarking is published by @Nature today.

Read the Nature paper at: Scalable watermarking for identifying large language model outputs - Nature
Read more about the work at: SynthID: Tools for watermarking and detecting LLM-generated Text | Responsible Generative AI Toolkit | Google AI for Developers

[Quoted tweet]
Today, we’re open-sourcing our SynthID text watermarking tool through an updated Responsible Generative AI Toolkit.

Available freely to developers and businesses, it will help them identify their AI-generated content. 🔍

Find out more → goo.gle/40apGQh








1/20
@GoogleDeepMind
Today, we’re open-sourcing our SynthID text watermarking tool through an updated Responsible Generative AI Toolkit.

Available freely to developers and businesses, it will help them identify their AI-generated content. 🔍

Find out more → SynthID



https://video-ft.twimg.com/ext_tw_v...376/pu/vid/avc1/1280x720/G5K0TaljbmDqO-lP.mp4

2/20
@GoogleDeepMind
Here’s how SynthID watermarks AI-generated content across modalities. ↓



https://video-ft.twimg.com/ext_tw_video/1792521399359180800/pu/vid/avc1/720x720/fT7NUZR4FiMQ2iwO.mp4

3/20
@GoogleDeepMind
By open-sourcing the code, more people will be able to use the tool to watermark and determine whether text outputs have come from their own LLMs - making it easier to build AI responsibly.

We explain more about this tech in @Nature. ↓ Scalable watermarking for identifying large language model outputs - Nature



4/20
@AidfulAI
Detecting AI-written text is tough without watermarks.

Open-sourcing SynthID-Text enables others to embed watermarks in their model outputs.

This means there will be two types of models:
Models which watermark their outputs and the ones that won't. 🤔



5/20
@mkieffer1107
awesome!!! was just looking into this yesterday hoping it was open source :smile:



6/20
@dom_beaini
1. Can we break down the image generation by down-sampling and up-sampling?

2. Invisible to the human eye, but if we plug them back into another gen-AI, would it remove the watermark? For example adding noise to the image, then feeding it back into another watermark-free diffusion model? Asking another LLM to make random modification to a given text?

3. Without regulatory enforcement of these watermarks, I suspect most models won't have them.



7/20
@DesFrontierTech
How does SynthID text’s generative watermarking handle variability across different content domains, and what measures are taken to ensure the watermark’s detectability remains consistent when faced with novel or out-of-distribution input contexts?



8/20
@cloudseedingtec
ok i have a random question tthat no one has answered.. did yall put that (i call it the poison pill) into youtube videos.. cuz like well not to self incriminate but it seems like yall did something<3



9/20
@entergnomer
Would a different sampler bypass this?



10/20
@BensenHsu
The study focuses on developing a method called SynthID-Text to watermark text generated by large language models (LLMs). Watermarking can help identify synthetic text and limit accidental or deliberate misuse of LLMs.

The researchers evaluate SynthID-Text across multiple LLMs and find that it provides improved detectability over comparable methods, while maintaining standard benchmarks and human side-by-side ratings that indicate no change in LLM capabilities. They also conduct a live experiment with the Gemini production system, which shows that the difference in response quality and utility, as judged by humans, is negligible between watermarked and unwatermarked responses.

full paper: Scalable watermarking for identifying large language model outputs





11/20
@shawnchauhan1
Awesome! Really appreciate it.



12/20
@HungamaHeadline
Google's open-sourcing of SynthID is a major step forward in ensuring accountability and trust in AI-generated content. By providing a reliable way to identify AI-generated media, SynthID empowers users to make informed decisions. This is a crucial development as AI continues to shape our world.



13/20
@thegenioo
Irrelevant somehow to the OP

But this simple animation also shows that how LLMs basically work using Probability to output words, like predicting the next word. Its not the entire process but a very simple illustration for someone who has no clue how AI works.



14/20
@MinhQua52508258
Alphastarter



15/20
@benrayfield
very suspicious to announce opensourcing something without saying what license or where to download it



16/20
@benrayfield
"Where is SynthID available? This technology is available to Vertex AI customers using our text-to-image models, Imagen 3 and Imagen 2, which create high-quality images in a wide variety of artistic styles". Prove its opensource. Wheres one of those guys one could fork from?



17/20
@benrayfield
Why dont you call it a steganography tool? Isnt watermarking a kind of steganography if you do it well enuf? You're hiding any arbitrary data by rewriting words to have a similar meaning, and paying for that in extra length to store the data.



18/20
@234Sagyboy
@GoogleDeepMind @Google Awesome now that we have verification in place meaning better identification of content generated by AI Is it possible that we can please have Google Soundstorm and AudioLm released Thanks



19/20
@explorewithmom
Google DeepMind's SynthID is a game-changer for identifying AI-generated content. I've been exploring AI watermarking for my own work and I'm excited to see SynthID open-sourced and freely available to developers and businesses.



20/20
@AdalaceV2
Oh ok so you're actively polluting the output of the software I am paying for. Sounds like I won't be paying for it anymore.








1/4
@MushtaqBilalPhD
Google has open-sourced a watermarking tool, SynthID, to identify AI-generated content.

Teachers can relax now because soon students won't be able to use AI to cheat on their assignments.



https://video-ft.twimg.com/ext_tw_v...305/pu/vid/avc1/1352x720/i6YazQbRYIH6iBnX.mp4

2/4
@MushtaqBilalPhD
Here's the full paper by Google DeepMind:
Scalable watermarking for identifying large language model outputs - Nature



3/4
@healthheronav
I've developed my own ways to detect AI-generated content, but I'm skeptical about tools like SynthID. What's to stop AI from evolving to evade watermarks?



4/4
@fcordobaot
It only works if the content was generated by Gemini after they created the watermark. So unless all the big ones use the standard watermark, it would be complicated to really achieve it!








1/3
@kanpuriyanawab
Google Deepmind open-sourced SynthID today.

Here are 3 things you need to know:

What is SynthID??

SynthID has been developed for watermarking and identifying AI-generated content. This includes text, images, audio, and video.

Significance:

> This tool comes when distinguishing between AI and human-created content is becoming increasingly important due to misinformation, plagiarism, and copyright violations.

How it works?

> For text, SynthID modifies the probability scores of tokens during the generation process so that these modifications act as a watermark.

> This watermark can then be detected through a specific scoring system that assesses the likelihood that the text was generated by a watermarked large language model (LLM).

In my opinion,

The move to open-source SynthID allows anyone to implement this technology in their own AI models to watermark and later identify AI-generated text.

Moreover, this can be seen as a step towards fostering responsible AI development by allowing widespread implementation of watermarking technology.
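
SynthID’s actual scheme (detailed in the Nature paper) uses a tournament-sampling approach; as a loose, self-contained illustration of the general “bias token probabilities at generation time, then score that bias at detection time” idea described above, here is a toy greenlist-style sketch with a made-up vocabulary — not SynthID’s algorithm:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]   # stand-in vocabulary

def favoured(prev_token, key="secret", fraction=0.5):
    """Deterministically derive a 'favoured' half of the vocabulary from the
    previous token and a private key."""
    seed = int(hashlib.sha256((key + prev_token).encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, int(fraction * len(VOCAB))))

def generate(n_tokens, bias=4.0, key="secret"):
    """Toy 'LLM': samples tokens, but up-weights favoured ones — that bias is the watermark."""
    out, prev = [], "<s>"
    for _ in range(n_tokens):
        fav = favoured(prev, key)
        weights = [bias if t in fav else 1.0 for t in VOCAB]
        prev = random.choices(VOCAB, weights=weights, k=1)[0]
        out.append(prev)
    return out

def score(tokens, key="secret"):
    """Detection: fraction of tokens falling in their context's favoured set.
    Unwatermarked text sits near 0.5; watermarked text sits well above."""
    hits, prev = 0, "<s>"
    for t in tokens:
        hits += t in favoured(prev, key)
        prev = t
    return hits / len(tokens)

print("watermarked:", score(generate(200)))                 # well above 0.5
print("plain text: ", score(random.choices(VOCAB, k=200)))  # around 0.5
```

Detection only works if the detector holds the same key used at generation time, which is why the open-sourced tool is aimed at developers identifying output from their own models.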





2/3
@Yaaaaaashhh
SynthID is really cool!!!!



3/3
@kanpuriyanawab
and necessary




 