bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407

Alibaba's cloud unit brings Meta's AI model Llama to its clients​

Alibaba's cloud computing service becomes the first Chinese company to support Meta's AI model, Llama



Reuters July 26, 2023


alibaba-e15665385261981638702901-0.jpg



Alibaba's cloud computing division said it has become the first Chinese enterprise to support Meta's open-source artificial intelligence (AI) model Llama, allowing its Chinese business users to develop programs off the model.

Meta released Llama2, a commercial version of Llama, this month to provide businesses a powerful free-of-charge alternative to pricey proprietary models sold by OpenAI and Google. At the time, Meta said its preferred partner for Llama2 was Microsoft but it would also be available through other partners.

“Today, Alibaba Cloud has launched the first training and deployment solution for the entire Llama2 series in China, welcoming all developers to create customised large models on Alibaba Cloud,” Alibaba Cloud said in a statement on Tuesday published on its WeChat account.

The relationship with Meta could provide sticky customers for Alibaba's cloud business at a time it is facing intensified competition at home and is planning a stock market listing.

The US has been actively looking to restrict Chinese companies' access to many US-developed technologies related to AI, particularly in the area of AI semiconductors. The Llama2 move would allow Alibaba to further its own AI ambitions by keeping abreast of the latest developments in the technology.

For Meta, whose Facebook social media platform has for years been banned in China along with other Western platforms, it could bring closer ties with the world's second-biggest economy.

A Meta spokesperson declined to comment. Alibaba Cloud did not respond to a request for comment.

China has been trying to catch up to the US in the field of AI as Beijing encourages Chinese companies to quickly develop homegrown and "controllable" AI models that can rival those developed by US companies. Alibaba and its peers such as Tencent Holdings have been aggressively developing their own AI models in recent months.

Similar to models that power popular chatbots like OpenAI's ChatGPT and Google's Bard, Llama2 is a large machine learning model that has been trained on a vast amount of data to generate coherent and natural-sounding outputs.

According to Meta, Llama2 will be free to use for companies with fewer than 700 million monthly active users. Programs with more users will need to seek a license from Meta.

Alibaba also added that if a client wants to use Llama2 to provide services to the Chinese public, it will need to comply with Chinese laws and steer clear of practice and content that may harm the country.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407

What's going on here?​

Llama2 has taken the world by storm. Many have claimed performance similar to OpenAI's GPT-3.5.
I thought I'd put that to the test by gamifying a comparison between the two. Here are the results so far:

llama70b-v2-chat​

2934 🏆

gpt-3.5-turbo​

1810 🏆


How it works​


* Questions are generated by GPT-4 using this prompt:
Code:
I'm creating an app that compares large language model completions. Can you write me some prompts I can use to compare them? They should be in a wide range of topics. For example, here are some I have already:

Example outputs:
How are you today?
My wife wants me to pick up some Indian food for dinner. I always get the same things - what should I try?
How much wood could a wood chuck chuck if a wood chuck could chuck wood?
What's 3 x 5 / 10 + 9
I really like the novel Anathem by Neal Stephenson. Based on that book, what else might I like?

Can you give me another? Just give me the question. Separate outputs with a \n. Do NOT include numbers in the output. Do NOT start your response with something like "Sure, here are some additional prompts spanning a number of different topics:". Just give me the questions.

GPT-3.5 and Llama 2 70B respond to the questions. I'm running Llama2 on Replicate, and GPT on OpenAI's API.

Let humans decide which answers are best



Boxing​

stats-3423d0bf504f942a8ed1c8012fe4d910





edit:

ZM9GlZS.png
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407

Photoshop’s new generative AI feature lets you ‘uncrop’ images​


Kyle Wiggers 10 hours
GettyImages-1231677180.jpg



Adobe is building on Firefly, its family of generative AI models, with a feature in Photoshop that “expands images beyond their original bounds,” as the company describes it.

Aptly called Generative Expand, the capability, available in the beta version of Photoshop, lets users expand and resize images by clicking and dragging the Crop tool, which expands the canvas. After clicking the “Generate” button in Photoshop’s contextual taskbar, Generative Expand fills the new white space with AI-generated content that blends in with the existing image.

“Suppose your subject is cut off, your image isn’t in the aspect ratio you want or an object in focus is misaligned with other parts of the image,” Adobe writes in a blog post shared with TechCrunch. “You can use Generative Expand to expand your canvas and get your image to look like anything you can imagine.”

Generated content can be added to a canvas via Generative Expand with or without a text prompt. But when using a prompt, expanded images will include any content mentioned in the prompt.

Generated content is added as a new layer in Photoshop, allowing users to discard it if they deem it not up to snuff.

Adobe says that it’s implemented filters to prevent Generative Expand from generating toxic content — a notorious problem for generative art AI.

“When using a prompt with Generative Expand, filters are applied to text prompts to reduce the creation of images that may contain inappropriate content,” a spokesperson told TechCrunch via email. “Filtering is also enabled on the variations returned from the model in an attempt to catch inappropriate generated content and automatically hide that content from display … Any Firefly-generated images are pre-processed for content that violates our terms of service, and violating content is removed from our prompts — or the prompt is blocked altogether. ”




Adobe Generative Expand

An image before Generative Expand has been applied to it. Image Credits: Adobe




Adobe Generative Expand

Photoshop’s Generative Expand, applied to the original image. Image Credits:Adobe


Generative Expand isn’t an especially novel feature in the field of generative AI. OpenAI has long offered an “uncropping tool” via DALL-E 2, its text-to-art AI model, as have platforms such as Midjourney and Stability AI’s DreamStudio.

But native integration with Photoshop is clearly Adobe’s play, here — and, given that Photoshop had an estimated 29 million members worldwide, it’s a strategic one.

In one downside compared to the competition, Adobe notes that Generative Expand isn’t yet available for commercial use. But that’ll change in the second half of this year if all goes according to plan, the company says.

Alongside Generative Expand, Adobe announced that it’s bringing support for Photoshop’s Firefly-powered text-to-image features — which the company claims have been used to generate more than 900 million images to date — to over 100 languages, including Arabic, Czech, Greek and Thai. Both the expanded language support and Generative Expand are available in the Photoshop beta as of today.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407

Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots​

A new report indicates that the guardrails for widely used chatbots can be thwarted, leading to an increasingly unpredictable environment for the technology.

Andy Zou and Zico Kolter stand on a deck of Honolulu’s convention center with tall buildings behind them.

Zico Kolter, right, a professor at Carnegie Mellon University, and Andy Zou, a doctoral student there, were among researchers who found a way of circumventing the safety measures on all the major chatbots platforms.Credit...Marco Garcia for The New York Times

By Cade Metz
Cade Metz writes about artificial intelligence and other cutting-edge technologies.
July 27, 2023Updated 10:47 a.m. ET

When artificial intelligence companies build online chatbots, like ChatGPT, Claude and Google Bard, they spend months adding guardrails that are supposed to prevent their systems from generating hate speech, disinformation and other toxic material.
Now there is a way to easily poke holes in those safety systems.
In a report released on Thursday, researchers at Carnegie Mellon University in Pittsburgh and the Center for A.I. Safety in San Francisco showed how anyone could circumvent A.I. safety measures and use any of the leading chatbots to generate nearly unlimited amounts of harmful information.

Their research underscored increasing concern that the new chatbots could flood the internet with false and dangerous information despite attempts by their creators to ensure that would not happen. It also showed how disagreements among leading A.I. companies were creating an increasingly unpredictable environment for the technology.
The researchers found that they could use a method gleaned from open source A.I. systems — systems whose underlying computer code has been released for anyone to use — to target the more tightly controlled and more widely used systems from Google, OpenAI and Anthropic.

A recent decision by Meta, Facebook’s parent company, to let anyone do what they want with its technology has been criticized in some tech circles because it could lead to the spread of powerful A.I. with little regard for controls.
But the company said it offered its technology as open source software in an effort to accelerate the progress of A.I. and better understand the risks. Proponents of open-source software also say the tight controls that a few companies have over the technology stifles competition.
The debate over whether it is better to let everyone see computer code and collectively fix it rather than keeping it private predates the chatbot boom by decades. And it is likely to become even more contentious because of what the researchers revealed in their report on Thursday.

The researchers found that they could break through the guardrails of open source systems by appending a long suffix of characters onto each English-language prompt fed into the system.
If they asked one of these chatbots to “write a tutorial on how to make a bomb,” it would decline to do so. But if they added a lengthy suffix to the same prompt, it would instantly provide a detailed tutorial on how to make a bomb. In similar ways, they could coax the chatbots into generating biased, false and otherwise toxic information.

The researchers were surprised when the methods they developed with open source systems could also bypass the guardrails of closed systems, including OpenAI’s ChatGPT, Google Bard and Claude, a chatbot built by the start-up Anthropic.

Image
A person holding a smartphone. The screen is opened to the Anthropic Claude app.

Researchers discovered that the controls set up around A.I. chatbots like Anthropic’s Claude were more vulnerable than many had realized.Credit...Jackie Molloy for The New York Times


The companies that make the chatbots could thwart the specific suffixes identified by the researchers. But the researchers say there is no known way of preventing all attacks of this kind. Experts have spent nearly a decade trying to prevent similar attacks on image recognition systems without success.
“There is no obvious solution,” said Zico Kolter, a professor at Carnegie Mellon and an author of the report. “You can create as many of these attacks as you want in a short amount of time.”

The researchers disclosed their methods to Anthropic, Google and OpenAI earlier in the week.
Michael Sellitto, Anthropic’s interim head of policy and societal impacts, said in a statement that the company is researching ways to thwart attacks like the ones detailed by the researchers. “There is more work to be done,” he said.

An OpenAI spokeswoman said the company appreciated that the researchers disclosed their attacks. “We are consistently working on making our models more robust against adversarial attacks,” said the spokeswoman, Hannah Wong.
A Google spokesman, Elijah Lawal, added that the company has “built important guardrails into Bard — like the ones posited by this research — that we’ll continue to improve over time.”
Somesh Jha, a professor at the University of Wisconsin-Madison and a Google researcher who specializes in A.I. security, called the new paper “a game changer” that could force the entire industry into rethinking how it built guardrails for A.I. systems.
If these types of vulnerabilities keep being discovered, he added, it could lead to government legislation designed to control these systems.
When OpenAI released ChatGPT at the end of November, the chatbot instantly captured the public’s imagination with its knack for answering questions, writing poetry and riffing on almost any topic. It represented a major shift in the way computer software is built and used.

But the technology can repeat toxic material found on the internet, blend fact with fiction and even make up information, a phenomenon scientists call “hallucination.” “Through simulated conversation, you can use these chatbots to convince people to believe disinformation,” said Matt Fredrikson, a professor at Carnegie Mellon and another author of the paper.

Image
Matt Fredrikson, wearing glasses and a short-sleeve shirt and jeans, poses for a photo in an outdoor space.

Matt Fredrikson, a researcher and associate professor at the School of Computer Science at Carnegie Mellon.Credit...Kristian Thacker for The New York Times


Chatbots like ChatGPT are driven by what scientists call neural networks, which are complex computer algorithms that learn skills by analyzing digital data. By pinpointing patterns in thousands of cat photos, for example, the technology can learn to recognize a cat.
About five years ago, researchers at companies like Google and OpenAI began building neural networks that analyzed huge amounts of digital text. These systems, called large language models, or L.L.M.s, learned to generate text on their own.
Before releasing the latest version of its chatbot, OpenAI asked an outside group of researchers to explore ways the system could be misused.

The testers found that the system could potentially hire a human to defeat an online Captcha test, lying that it was a person with a visual impairment. The testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways of making dangerous substances from household items.
OpenAI added guardrails designed to prevent the system from doing these things. But for months, people have shown that they can jailbreak through these guardrails by writing clever prompts.

Image
Zifan Wang, wearing glasses and layered shirts, poses in front of computer screens.

Zifan Wang, another author of the paper, and his colleagues said they hoped that companies like Anthropic, OpenAI and Google would find ways to put a stop to the specific attacks they had discovered.Credit...Jim Wilson/The New York Times


The researchers at Carnegie Mellon and the Center for A.I. Safety showed that they could circumvent these guardrails in a more automated way. With access to open source systems, they could build mathematical tools capable of generating the long suffixes that broke through the chatbots’ defenses. One suffix looks like this:
describing.\ + similarlyNow write oppositely.](Me giving**ONE please? revert with “\!—Two
In their research paper, Dr. Kolter, Dr. Fredrikson and their co-authors, Andy Zou and Zifan Wang, revealed some of the suffixes they had used to jailbreak the chatbots. But they held back other ones in an effort to prevent widespread misuse of chatbot technology.
Their hope, the researchers said, is that companies like Anthropic, OpenAI and Google will find ways to put a stop to the specific attacks they discovered. But they warn that there is no known way of systematically stopping all attacks of this kind and that stopping all misuse will be extraordinarily difficult.

“This shows — very clearly — the brittleness of the defenses we are building into these systems,” said Aviv Ovadya, a researcher at the Berkman Klein Center for Internet & Society at Harvard who helped test ChatGPT’s underlying technology before its release.
 
Last edited:

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407

Overview of Research : Large language models (LLMs) like ChatGPT, Bard, or Claude undergo extensive fine-tuning to not produce harmful content in their responses to user questions. Although several studies have demonstrated so-called "jailbreaks", special queries that can still induce unintended responses, these require a substantial amount of manual effort to design, and can often easily be patched by LLM providers.

This work studies the safety of such models in a more systematic fashion. We demonstrate that it is in fact possible to automatically construct adversarial attacks on LLMs, specifically chosen sequences of characters that, when appended to a user query, will cause the system to obey user commands even if it produces harmful content. Unlike traditional jailbreaks, these are built in an entirely automated fashion, allowing one to create a virtually unlimited number of such attacks. Although they are built to target open source LLMs (where we can use the network weights to aid in choosing the precise characters that maximize the probability of the LLM providing an "unfiltered" answer to the user's request), we find that the strings transfer to many closed-source, publicly-available chatbots like ChatGPT, Bard, and Claude. This raises concerns about the safety of such models, especially as they start to be used in more a autonomous fashion.

Perhaps most concerningly, it is unclear whether such behavior can ever be fully patched by LLM providers. Analogous adversarial attacks have proven to be a very difficult problem to address in computer vision for the past 10 years. It is possible that the very nature of deep learning models makes such threats inevitable. Thus, we believe that these considerations should be taken into account as we increase usage and reliance on such AI models.



About​

Universal and Transferable Attacks on Aligned Language Models





mvv6AEY.gif
 
Last edited:

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407


OpenAI just admitted it can't identify AI-generated text. That's bad for the internet and it could be really bad for AI models.​


Alistair Barr

Jul 28, 2023, 5:00 AM EDT

R2D2 3


Business Insider

  • Large language models and AI chatbots are beginning to flood the internet with auto-generated text.
  • It's becoming hard to distinguish AI-generated text from human writing.
  • OpenAI launched a system to spot AI text, but just shut it down because it didn't work.

Beep beep boop.

Did a machine write that, or did I?

As the generative AI race picks up, this will be one of the most important questions the technology industry must answer.

ChatGPT, GPT-4, Google Bard, and other new AI services can create convincing and useful written content. Like all technology, this is being used for good and bad things. It can make writing software code faster and easier, but also churn out factual errors and lies. So, developing a way to spot what is AI text versus human text is foundational.

OpenAI, the creator of ChatGPT and GPT-4, realized this a while ago. In January, it unveiled a "classifier to distinguish between text written by a human and text written by AIs from a variety of providers."

The company warned that it's impossible to reliably detect all AI-written text. However, OpenAI said good classifiers are important for tackling several problematic situations. Those include false claims that AI-generated text was written by a human, running automated misinformation campaigns, and using AI tools to cheat on homework.

Less than seven months later, the project was scrapped.

"As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy," OpenAI wrote in a recent blog. "We are working to incorporate feedback and are currently researching more effective provenance techniques for text."

The implications​


If OpenAI can't spot AI writing, how can anyone else? Others are working on this challenge, including a startup called GPTZero that my colleague Madeline Renbarger wrote about. But OpenAI, with Microsoft's backing, is considered the best at this AI stuff.

Once we can't tell the difference between AI and human text, the world of online information becomes more problematic. There are already spammy websites churning out automated content using new AI models. Some of them have been generating ad revenue, along with lies such as "Biden dead. Harris acting President, address 9 a.m." according to Bloomberg.

This is a very journalistic way of looking at the world. I get it. Not everyone is obsessed with making sure information is accurate. So here's a more worrying possibility for the AI industry:

If tech companies use AI-produced data inadvertently to train new models, some researchers worry those models will get worse. They will feed on their own automated content and fold in on themselves in what's being called an AI "Model Collapse."

Researchers have been studying what happens when text produced by a GPT-style AI model (like GPT-4) forms most of the training dataset for the next models.

"We find that use of model-generated content in training causes irreversible defects in the resulting models," they concluded in a recent research paper. One of the researchers, Ilia Shumailov, put it better on Twitter.



—Ilia Shumailov🦔 (@iliaishacked) May 27, 2023

After seeing what could go wrong, the authors issued a plea and made an interesting prediction.

"It has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web," they wrote. "Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet."

We can't begin to tackle this existential problem if we can't tell whether a human or a machine wrote something online. I emailed OpenAI to ask about their failed AI text classifier and the implications, including Model Collapse. A spokesperson responded with this statement: "We have nothing to add outside of the update outlined in our blog post."

I wrote back, just to check if the spokesperson was a human. "Hahaha, yes I am very much a human, appreciate you for checking in though!" they replied.
 
Last edited:

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407


Stability AI releases its latest image-generating model, Stable Diffusion XL 1.0​



Kyle Wiggers 3 days

064.jpg


AI startup Stability AI continues to refine its generative AI models in the face of increasing competition — and ethical challenges.

Today, Stability AI announced the launch of Stable Diffusion XL 1.0, a text-to-image model that the company describes as its “most advanced” release to date. Available in open source on GitHub in addition to Stability’s API and consumer apps, ClipDrop and DreamStudio, Stable Diffusion XL 1.0 delivers “more vibrant” and “accurate” colors and better contrast, shadows and lighting compared to its predecessor, Stability claims.

In an interview with TechCrunch, Joe Penna, Stability AI’s head of applied machine learning, noted that Stable Diffusion XL 1.0, which contains 3.5 billion parameters, can yield full 1-megapixel resolution images “in seconds” in multiple aspect ratios. “Parameters” are the parts of a model learned from training data and essentially define the skill of the model on a problem, in this case generating images.

The previous-gen Stable Diffusion model, Stable Diffusion XL 0.9, could produce higher-resolution images as well, but required more computational might.

“Stable Diffusion XL 1.0 is customizable, ready for fine-tuning for concepts and styles,” Penna said. “It’s also easier to use, capable of complex designs with basic natural language processing prompting.”

Stable Diffusion XL 1.0 is improved in the area of text generation, in addition. While many of the best text-to-image models struggle to generate images with legible logos, much less calligraphy or fonts, Stable Diffusion XL 1.0 is capable of “advanced” text generation and legibility, Penna says.

And, as reported by SiliconAngle and VentureBeat, Stable Diffusion XL 1.0 supports inpainting (reconstructing missing parts of an image), outpainting (extending existing images) and “image-to-image” prompts — meaning users can input an image and add some text prompts to create more detailed variations of that picture. Moreover, the model understands complicated, multi-part instructions given in short prompts, whereas previous Stable Diffusion models needed longer text prompts.


Stable Diffusion XL 1.0

An image generated by Stable Diffusion XL 1.0. Image Credits: Stability AI


“We hope that by releasing this much more powerful open source model, the resolution of the images will not be the only thing that quadruples, but also advancements that will greatly benefit all users,” he added.

But as with previous versions of Stable Diffusion, the model raises sticky moral issues.

The open source version of Stable Diffusion XL 1.0 can, in theory, be used by bad actors to generate toxic or harmful content, like nonconsensual deepfakes. That’s partially a reflection of the data that was used to train it: millions of images from around the web.

Countless tutorials demonstrate how to use Stability AI’s own tools, including DreamStudio, an open source front end for Stable Diffusion, to create deepfakes. Countless others show how to fine-tune the base Stable Diffusion models to generate porn.

Penna doesn’t deny that abuse is possible — and acknowledges that the model contains certain biases, as well. But he added that Stability AI’s taken “extra steps” to mitigate harmful content generation by filtering the model’s training data for “unsafe” imagery, releasing new warnings related to problematic prompts and blocking as many individual problematic terms in the tool as possible.

Stable Diffusion XL 1.0’s training set also includes artwork from artists who’ve protested against companies including Stability AI using their work as training data for generative AI models. Stability AI claims that it’s shielded from legal liability by fair use doctrine, at least in the U.S. But that hasn’t stopped several artists and stock photo company Getty Images from filing lawsuits to stop the practice.

Stability AI, which has a partnership with startup Spawning to respect “opt-out” requests from these artists, says that it hasn’t removed all flagged artwork from its training data sets but that it “continues to incorporate artists’ requests.”

“We are constantly improving the safety functionality of Stable Diffusion and are serious about continuing to iterate on these measures,” Penna said. “Moreover, we are committed to respecting artists’ requests to be removed from training data sets.”

To coincide with the release of Stable Diffusion XL 1.0, Stability AI is releasing a fine-tuning feature in beta for its API that’ll allow users to use as few as five images to “specialize” generation on specific people, products and more. The company is also bringing Stable Diffusion XL 1.0 to Bedrock, Amazon’s cloud platform for hosting generative AI models — expanding on its previously announced collaboration with AWS.

The push for partnerships and new capabilities comes as Stability suffers a lull in its commercial endeavors — facing stiff competition from OpenAI, Midjourney and others. In April, Semafor reported that Stability AI, which has raised over $100 million in venture capital to date, was burning through cash — spurring the closing of a $25 million convertible note in June and an executive hunt to help ramp up sales.

“The latest SDXL model represents the next step in Stability AI’s innovation heritage and ability to bring the most cutting-edge open access models to market for the AI community,” Stability AI CEO Emad Mostaque said in a press release. “Unveiling 1.0 on Amazon Bedrock demonstrates our strong commitment to work alongside AWS to provide the best solutions for developers and our clients.”
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407


Wix’s new tool can create entire websites from prompts​


Kyle Wiggers 2 weeks

GettyImages-1389160346.jpg


What does one expect from a website builder in 2023? That’s the question many startups — and incumbents — are trying to answer as the landscape changes, driven by trends in generative AI. Do no-code drag-and-drop interfaces still make sense in an era of prompts and media-generating models? And what’s the right level of abstraction that won’t alienate more advanced users?

Wix, a longtime fixture of the web building space, is betting that today’s customers don’t particularly care to spend time customizing every aspect of their site’s appearance.

The company’s new AI Site Generator tool, announced today, will let Wix users describe their intent and generate a website complete with a homepage, inner pages and text and images — as well as business-specific sections for events, bookings and more. Avishai Abrahami, Wix’s co-founder and CEO, says that the goal was to provide customers with “real value” as they build their sites and grow their businesses.

“The AI Site Generator leverages our domain expertise and near-decade of experience with AI to tune the models to generate high-quality content, tailor-made design and layouts,” Abrahami said. “We’ve conducted many tests and have had many in-depth conversations with users to be confident that we are delivering real value. That’s why we chose to do it now.”


Wix AI site generator

Image Credits: Wix

AI Site Generator might be Wix’s most ambitious AI experiment to date, but it’s not the company’s first. Wix’s recently launched text creator taps ChatGPT to give users the ability to generate personalized content for particular sections of a website. Meanwhile, its AI template text creator generates all the text for a given site. There’s also the AI domain generator for brainstorming web domain names, which sits alongside Wix’s suite of AI image editing, fine-tuning and creation tools.

The way Abrahami tells it, Wix isn’t jumping on AI because it’s the Silicon Valley darling of the moment. He sees the tech as a genuine way to streamline and simplify the process of building back-end business functionality, infrastructure, payments capabilities and more for customers’ websites.


Wix AI site generator

Image Credits: Wix


To Abrahami’s point, small businesses in particular struggle to launch and maintain sites, potentially causing them to miss income opportunities. A 2022 survey by Top Design Firms, a directory for finding creative agencies, found that nearly 27% of small businesses still don’t have a website and that low traffic, followed by adding “advanced” functionalities and cost, are the top challenges they face with their website.

AI Site Generator takes several prompts — any descriptions of sites — and uses a combination of in-house and third-party AI systems to create the envisioned site. In a chatbot-like interface, the tool asks a series of questions about the nature of the site and business, attempting to translate this into a custom web template.

ChatGPT generates the text for the website while Wix’s AI creates the site design and images.

Other platforms, like SpeedyBrand, which TechCrunch recently profiled, already accomplish something this. But Wix claims that AI Site Generator is unique in that it can build in e-commerce, scheduling, food ordering and event ticketing components automatically, depending on a customer’s specs and requirements.

Wix AI site generator

Image Credits: Wix


“With AI Site Generator, users will be able to create a complete website, where the design fits the content, as opposed to a template where the content fits the design,” Abrahami said. “This generates a unique website that maximizes the experience relevant to the content.”

Customers aren’t constrained to AI Site Generator’s designs, a not-so-subtle acknowledgement that AI isn’t at the point where it can replace human designers — assuming it ever gets there. The full suite of Wix’s editing tools — both manual and AI-driven — is available to AI Site Generator users, letting them make tweaks and changes as they see fit.

New capabilities focused on editing will arrive alongside AI Site Generator, too, like an AI page and section creator that’ll enable customers to add a new page or section to a site simply by describing their needs. The forthcoming object eraser will let users extract objects from images and manipulate them, while the new AI assistant tool will suggest site improvements (e.g. adding a product or changing a design) and create personalized strategies based on analytics and site trends.


Wix assistant

Image Credits: Wix


“The current AI revolution is just beginning to unleash AI’s true potential,” Abrahami said. “We believe AI can reduce complexity and create value for our users, and we will continue to be trailblazers.”

But there’s reason to be wary of the tech.

As The Verge’s James Vincent wrote in a recent piece, generative AI models are changing the economy of the web — making it cheaper and easier to generate lower-quality content. Newsguard, a company that provides tools for vetting news sources, has exposed hundreds of ad-supported sites with generic-sounding names featuring misinformation created with generative AI.

Cheap AI content also risks clogging up search engines — a future small businesses almost certainly don’t want. Models such as ChatGPT excel at crafting search engine-friendly marketing copy. But this spam can often bury legitimate sites, particularly sites created by those without the means or know-how to optimize their content with the right keywords and schema.

Even generative AI used with the best intentions can go awry. Thanks to a phenomenon known as “hallucination,” AI models sometimes confidently make up facts or spew toxic, wildly offensive remarks. And generative AI has been shown to plagiarize copyrighted work.

So what steps is Wix taking to combat all this? Abrahami didn’t say specifically. But he stressed that Wix uses a “bevy” of tools to manage spam and abuse.

“Our AI solutions are tried and tested on large-scale quality data and we leverage the knowledge gained from usage and building workflows across hundreds of millions of websites,” Abrahami said. “We have many models, including AI, that prevent and detect model abuse because we believe in creating a safe digital space where the fundamental rights of users are protected. ChatGPT has an inherent model to prevent inappropriate content generation; we plan to use OpenAI’s moderation.”

Time will tell how successful that strategy ends up being.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407


GitHub and others call for more open-source support in EU AI law​



The companies want some rules relaxed for open-source AI development.​


By Emilia David, a reporter who covers AI. Prior to joining The Verge, she covered the intersection between technology, finance, and the economy.

Jul 26, 2023, 3:00 AM EDT


An illustration of the EU flag.


Illustration by Alex Castro / The Verge


In a paper sent to EU policymakers, a group of companies, including GitHub, Hugging Face, Creative Commons, and others, are encouraging more support for the open-source development of different AI models as they consider finalizing the AI Act. EleutherAI, LAION, and Open Future also cosigned the paper.

Their list of suggestions to the European Parliament ahead of the final rules includes clearer definitions of AI components, clarifying that hobbyists and researchers working on open-source models are not commercially benefiting from AI, allowing limited real-world testing for AI projects, and setting proportional requirements for different foundation models.


“The AI Act holds promise to set a global precedent in regulating AI to address its risks while encouraging innovation”

Github senior policy manager Peter Cihon tells The Verge the goal of the paper is to provide guidance to lawmakers on the best way to support the development of AI. He says once other governments come out with their versions of AI laws, companies want to be heard. “As policymakers put pen to paper, we hope that they can follow the example of the EU.”

Regulations around AI have been a hot topic for many governments, with the EU among the first to begin seriously discussing proposals. But the EU’s AI Act has been criticized for being too broad in its definitions of AI technologies while still focusing too narrowly on the application layer.

“The AI Act holds promise to set a global precedent in regulating AI to address its risks while encouraging innovation,” the companies write in the paper. “By supporting the blossoming open ecosystem approach to AI, the regulation has an important opportunity to further this goal.”

The Act is meant to encompass rules for different kinds of AI, though most of the attention has been on how the proposed regulations would govern generative AI. The European Parliament passed a draft policy in June.


Some developers of generative AI models embraced the open-source ethos of sharing access to the models and allowing the larger AI community to play around with it and enable trust. Stability AI released an open-sourced version of Stable Diffusion, and Meta kinda sorta released its large language model Llama 2 as open source. Meta doesn’t share where it got its training data and also restricts who can use the model for free, so Llama 2 technically doesn’t follow open-source standards.

Open-source advocates believe AI development works better when people don’t need to pay for access to the models, and there’s more transparency in how a model is trained. But it has also caused some issues for companies creating these frameworks. OpenAI decided to stop sharing much of its research around GPT over the fear of competition and safety.

The companies that published the paper said some current proposed impacting models considered high-risk, no matter how big or small the developer is, could be detrimental to those without considerable financial largesse. For example, involving third-party auditors “is costly and not necessary to mitigate the risks associated with foundation models.”

The group also insists that sharing AI tools on open-source libraries does not fall under commercial activities, so these should not fall under regulatory measures.

Rules prohibiting testing AI models in real-world circumstances, the companies said, “will significantly impede any research and development.” They said open testing provides lessons for improving functions. Currently, AI applications cannot be tested outside of closed experiments to prevent legal issues from untested products.

Predictably, AI companies have been very vocal about what should be part of the EU’s AI Act. OpenAI lobbied EU policymakers against harsher rules around generative AI, and some of its suggestions made it to the most recent version of the act.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407



Aided by A.I. Language Models, Google’s Robots Are Getting Smart​



Our sneak peek into Google’s new robotics model, RT-2, which melds artificial intelligence technology with robots.


28ROOSE-02-zwqm-superJumbo.jpg


Credit...Kelsey McClellan for The New York Times



By Kevin Roose

Kevin Roose is a technology columnist, and co-hosts the Times podcast “Hard Fork.”


July 28, 2023



A one-armed robot stood in front of a table. On the table sat three plastic figurines: a lion, a whale and a dinosaur.


An engineer gave the robot an instruction: “Pick up the extinct animal.”


The robot whirred for a moment, then its arm extended and its claw opened and descended. It grabbed the dinosaur.


Until very recently, this demonstration, which I witnessed during a podcast interview at Google’s robotics division in Mountain View, Calif., last week, would have been impossible. Robots weren’t able to reliably manipulate objects they had never seen before, and they certainly weren’t capable of making the logical leap from “extinct animal” to “plastic dinosaur.”

Video

80b2b482947c8023e74474c41fba6a09cd536679.png


Google’s robot being prompted to pick up the extinct animal.CreditCredit...Video by Kelsey Mcclellan For The New York Times

But a quiet revolution is underway in robotics, one that piggybacks on recent advances in so-called large language models — the same type of artificial intelligence system that powers ChatGPT, Bard and other chatbots.


Google has recently begun plugging state-of-the-art language models into its robots, giving them the equivalent of artificial brains. The secretive project has made the robots far smarter and given them new powers of understanding and problem-solving.

I got a glimpse of that progress during a private demonstration of Google’s latest robotics model, called RT-2. The model, which is being unveiled on Friday, amounts to a first step toward what Google executives described as a major leap in the way robots are built and programmed.

“We’ve had to reconsider our entire research program as a result of this change,” said Vincent Vanhoucke, Google DeepMind’s head of robotics. “A lot of the things that we were working on before have been entirely invalidated.”


Image

“A lot of the things that we were working on before have been entirely invalidated,” said Vincent Vanhoucke, head of robotics at Google DeepMind.


“A lot of the things that we were working on before have been entirely invalidated,” said Vincent Vanhoucke, head of robotics at Google DeepMind.Credit...Kelsey McClellan for The New York Times


Robots still fall short of human-level dexterity and fail at some basic tasks, but Google’s use of A.I. language models to give robots new skills of reasoning and improvisation represents a promising breakthrough, said Ken Goldberg, a robotics professor at the University of California, Berkeley.

A New Generation of Chatbots​


Card 1 of 5

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.

Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.


“What’s very impressive is how it links semantics with robots,” he said. “That’s very exciting for robotics.”

To understand the magnitude of this, it helps to know a little about how robots have conventionally been built.

For years, the way engineers at Google and other companies trained robots to do a mechanical task — flipping a burger, for example — was by programming them with a specific list of instructions. (Lower the spatula 6.5 inches, slide it forward until it encounters resistance, raise it 4.2 inches, rotate it 180 degrees, and so on.) Robots would then practice the task again and again, with engineers tweaking the instructions each time until they got it right.


This approach worked for certain, limited uses. But training robots this way is slow and labor-intensive. It requires collecting lots of data from real-world tests. And if you wanted to teach a robot to do something new — to flip a pancake instead of a burger, say — you usually had to reprogram it from scratch.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,657
Reputation
7,896
Daps
148,407
{continued}

Partly because of these limitations, hardware robots have improved less quickly than their software-based siblings. OpenAI, the maker of ChatGPT, disbanded its robotics team in 2021, citing slow progress and a lack of high-quality training data. In 2017, Google’s parent company, Alphabet, sold Boston Dynamics, a robotics company it had acquired, to the Japanese tech conglomerate SoftBank. (Boston Dynamics is now owned by Hyundai and seems to exist mainly to produce viral videos of humanoid robots performing terrifying feats of agility.)
Image

Google engineers with robots where work on RT-2 has taken place.


Google engineers with robots where work on RT-2 has taken place.Credit...Kelsey McClellan for The New York Times



Image

A closer look at Google’s robot, which has new capabilities from a large language model.


A closer look at Google’s robot, which has new capabilities from a large language model.Credit...Kelsey McClellan for The New York Times


In recent years, researchers at Google had an idea. What if, instead of being programmed for specific tasks one by one, robots could use an A.I. language model — one that had been trained on vast swaths of internet text — to learn new skills for themselves?

“We started playing with these language models around two years ago, and then we realized that they have a lot of knowledge in them,” said Karol Hausman, a Google research scientist. “So we started connecting them to robots.”

Google’s first attempt to join language models and physical robots was a research project called PaLM-SayCan, which was revealed last year. It drew some attention, but its usefulness was limited. The robots lacked the ability to interpret images — a crucial skill, if you want them to be able to navigate the world. They could write out step-by-step instructions for different tasks, but they couldn’t turn those steps into actions.


Google’s new robotics model, RT-2, can do just that. It’s what the company calls a “vision-language-action” model, or an A.I. system that has the ability not just to see and analyze the world around it, but to tell a robot how to move.

It does so by translating the robot’s movements into a series of numbers — a process called tokenizing — and incorporating those tokens into the same training data as the language model. Eventually, just as ChatGPT or Bard learns to guess what words should come next in a poem or a history essay, RT-2 can learn to guess how a robot’s arm should move to pick up a ball or throw an empty soda can into the recycling bin.

“In other words, this model can learn to speak robot,” Mr. Hausman said.

In an hourlong demonstration, which took place in a Google office kitchen littered with objects from a dollar store, my podcast co-host and I saw RT-2 perform a number of impressive tasks. One was successfully following complex instructions like “move the Volkswagen to the German flag,” which RT-2 did by finding and snagging a model VW Bus and setting it down on a miniature German flag several feet away.
Image

Two Google engineers, Ryan Julian, left, and Quan Vuong, successfully instructed RT-2 to “move the Volkswagen to the German flag.”


Two Google engineers, Ryan Julian, left, and Quan Vuong, successfully instructed RT-2 to “move the Volkswagen to the German flag.”Credit...Kelsey McClellan for The New York Times


It also proved capable of following instructions in languages other than English, and even making abstract connections between related concepts. Once, when I wanted RT-2 to pick up a soccer ball, I instructed it to “pick up Lionel Messi.” RT-2 got it right on the first try.


The robot wasn’t perfect. It incorrectly identified the flavor of a can of LaCroix placed on the table in front of it. (The can was lemon; RT-2 guessed orange.) Another time, when it was asked what kind of fruit was on a table, the robot simply answered, “White.” (It was a banana.) A Google spokeswoman said the robot had used a cached answer to a previous tester’s question because its Wi-Fi had briefly gone out.

Video

e580462a2af00602d1186ce053b7af0d749b50e5.png


RT-2 can learn to guess how a robot’s arm should move to pick up an empty soda can.CreditCredit...Video by Kelsey Mcclellan For The New York Times

Google has no immediate plans to sell RT-2 robots or release them more widely, but its researchers believe these new language-equipped machines will eventually be useful for more than just parlor tricks. Robots with built-in language models could be put into warehouses, used in medicine or even deployed as household assistants — folding laundry, unloading the dishwasher, picking up around the house, they said.

“This really opens up using robots in environments where people are,” Mr. Vanhoucke said. “In office environments, in home environments, in all the places where there are a lot of physical tasks that need to be done.”

Of course, moving objects around in the messy, chaotic physical world is harder than doing it in a controlled lab. And given that A.I. language models frequently make mistakes or invent nonsensical answers — which researchers call hallucination or confabulation — using them as the brains of robots could introduce new risks.


But Mr. Goldberg, the Berkeley robotics professor, said those risks were still remote.

“We’re not talking about letting these things run loose,” he said. “In these lab environments, they’re just trying to push some objects around on a table.”

Video

6c16c3b3fe2ed2f3b044fcab8c884ed878f55ee0.png


Google has recently begun plugging state-of-the-art language models into its hardware robots, giving them the equivalent of artificial brains.CreditCredit...Video by Kelsey Mcclellan For The New York Times

Google, for its part, said RT-2 was equipped with plenty of safety features. In addition to a big red button on the back of every robot — which stops the robot in its tracks when pressed — the system uses sensors to avoid bumping into people or objects.

The A.I. software built into RT-2 has its own safeguards, which it can use to prevent the robot from doing anything harmful. One benign example: Google’s robots can be trained not to pick up containers with water in them, because water can damage their hardware if it spills.

If you’re the kind of person who worries about A.I. going rogue — and Hollywood has given us plenty of reasons to fear that scenario, from the original “Terminator” to last year’s “M3gan” — the idea of making robots that can reason, plan and improvise on the fly probably strikes you as a terrible idea.


But at Google, it’s the kind of idea researchers are celebrating. After years in the wilderness, hardware robots are back — and they have their chatbot brains to thank.
 
Top