Bard gets its biggest upgrade yet with Gemini {Google A.I / LLM}

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,369
Reputation
8,499
Daps
160,085



https://www.reddit.com/r/singularity/comments/1cyu29p/well_done_ufukksmith/


1/1
Seems the origin of the Google AI’s conclusion was an 11-year-old Reddit post by the eminent scholar, fukksmith.


To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196

bnew
Google AI is TRASH.

Tried it on my phone; it couldn't open Spotify or create an event on the calendar.
The fukk is the point? Had to go through multiple menus in Settings to disable that BS.

they haven't released that yet.
 

bnew

Google criticized as AI Overview makes obvious errors, such as saying former President Obama is Muslim

PUBLISHED FRI, MAY 24 2024 10:30 AM EDT

Hayden Field@HAYDENFIELD
KEY POINTS
  • It’s been less than two weeks since Google debuted “AI Overview” in Google Search, and public criticism has mounted after queries have returned nonsensical or inaccurate results within the AI feature — without any way to opt out.
  • For example, when asked how many Muslim presidents the U.S. has had, AI Overview responded, “The United States has had one Muslim president, Barack Hussein Obama.”
  • The news follows Google’s high-profile rollout of Gemini’s image-generation tool in February, and then a pause that same month after comparable issues.
Alphabet CEO Sundar Pichai speaks at the Asia-Pacific Economic Cooperation CEO Summit in San Francisco on Nov. 16, 2023.
David Paul Morris | Bloomberg | Getty Images
It’s been less than two weeks since Google debuted “AI Overview” in Google Search, and public criticism has mounted after queries have returned nonsensical or inaccurate results within the AI feature — without any way to opt out.

AI Overview shows a quick summary of answers to search questions at the very top of Google Search. For example, if a user searches for the best way to clean leather boots, the results page may display an “AI Overview” at the top with a multistep cleaning process, gleaned from information it synthesized from around the web.


But social media users have shared a wide range of screenshots showing the AI tool giving incorrect and controversial responses.

Google, Microsoft, OpenAI and other companies are at the helm of a generative AI arms race as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors. The market is predicted to top $1 trillion in revenue within a decade.

Here are some examples of errors produced by AI Overview, according to screenshots shared by users.

When asked how many Muslim presidents the U.S. has had, AI Overview responded, “The United States has had one Muslim president, Barack Hussein Obama.”

When a user searched for “cheese not sticking to pizza,” the feature suggested adding “about 1/8 cup of nontoxic glue to the sauce.” Social media users found an 11-year-old Reddit comment that seemed to be the source.


Attribution can also be a problem for AI Overview, especially in attributing inaccurate information to medical professionals or scientists.

For instance, when asked, “How long can I stare at the sun for best health,” the tool said, “According to WebMD, scientists say that staring at the sun for 5-15 minutes, or up to 30 minutes if you have darker skin, is generally safe and provides the most health benefits.”

When asked, “How many rocks should I eat each day,” the tool said, “According to UC Berkeley geologists, people should eat at least one small rock a day,” going on to list the vitamins and digestive benefits.

The tool also can respond inaccurately to simple queries, such as making up a list of fruits that end with “um,” or saying the year 1919 was 20 years ago.

When asked whether or not Google Search violates antitrust law, AI Overview said, “Yes, the U.S. Justice Department and 11 states are suing Google for antitrust violations.”

The day Google rolled out AI Overview at its annual Google I/O event, the company said it also plans to introduce assistant-like planning capabilities directly within search. It explained that users will be able to search for something like, “Create a 3-day meal plan for a group that’s easy to prepare,” and they’d get a starting point with a wide range of recipes from across the web.

“The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web,” a Google spokesperson told CNBC in a statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

The spokesperson said AI Overview underwent extensive testing before launch and that the company is taking “swift action where appropriate under our content policies.”

The news follows Google’s high-profile rollout of Gemini’s image-generation tool in February, and a pause that same month after comparable issues.

The tool allowed users to enter prompts to create an image, but almost immediately, users discovered historical inaccuracies and questionable responses, which circulated widely on social media.

For instance, when one user asked Gemini to show a German soldier in 1943, the tool depicted a racially diverse set of soldiers wearing German military uniforms of the era, according to screenshots on social media platform X.

When asked for a “historically accurate depiction of a medieval British king,” the model generated another racially diverse set of images, including one of a woman ruler, screenshots showed. Users reported similar outcomes when they asked for images of the U.S. founding fathers, an 18th-century king of France, a German couple in the 1800s and more. The model showed an image of Asian men in response to a query about Google’s own founders, users reported.

Google said in a statement at the time that it was working to fix Gemini’s image-generation issues, acknowledging that the tool was “missing the mark.” Soon after, the company announced it would immediately “pause the image generation of people” and “re-release an improved version soon.”

In February, Google DeepMind CEO Demis Hassabis said Google planned to relaunch its image-generation AI tool in the next “few weeks,” but it has not yet rolled out again.

The problems with Gemini’s image-generation outputs reignited a debate within the AI industry, with some groups calling Gemini too “woke,” or left-leaning, and others saying that the company didn’t sufficiently invest in the right forms of AI ethics. Google came under fire in 2020 and 2021 for ousting the co-leads of its AI ethics group after they published a research paper critical of certain risks of such AI models and then later reorganizing the group’s structure.

In 2023, Sundar Pichai, CEO of Google’s parent company, Alphabet, was criticized by some employees for the company’s botched and “rushed” rollout of Bard, which followed the viral spread of ChatGPT.
 

bnew

What the heck is "&udm=14"? A new search engine strips AI junk from Google results

ROB BESCHIZZA 9:49 AM THU MAY 23, 2024


"&udm=14" is a URL parameter you can add to Google search result URLs that removes all the new AI and ad stuff. And udm14.com is a pseudo-search engine that redirects automatically to these simplified yet more substantial results for your query. It's the work of Ernie Smith, who describes &udm=14 as the "disenshyttification Konami code" for Google.

The results are fascinating. It's essentially Google, minus the crap. No parsing of the information in the results. No surfacing metadata like address or link info. No knowledge panels, but also, no ads. It looks like the Google we learned to love in the early 2000s, buried under the "More" menu like lots of other old things Google once did more to emphasize, like Google Books.
Some report that it doesn't work for them; it might depend on an ongoing rollout of the underlying feature to users. If the URL trick works for you, the site will too. It doesn't change ranking; for "verbatim" results you can add "&tbs=li:1" to a Google results URL. The code is on GitHub if you're thinking of implementing it in some other way.
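If you'd rather not bookmark udm14.com, a few lines of code can build the stripped-down URL yourself. This is a minimal sketch based on the parameters described above (`&udm=14`, plus `&tbs=li:1` for verbatim matching); the function name is ours, and Google could change these parameters at any time.

```python
from urllib.parse import quote_plus

def plain_google_url(query: str, verbatim: bool = False) -> str:
    """Build a Google results URL with &udm=14 (the stripped-down
    'web only' view); &tbs=li:1 additionally forces verbatim results."""
    url = f"https://www.google.com/search?q={quote_plus(query)}&udm=14"
    if verbatim:
        url += "&tbs=li:1"
    return url

print(plain_google_url("best way to clean leather boots"))
# https://www.google.com/search?q=best+way+to+clean+leather+boots&udm=14
```

Dropping that into a browser's custom-search-engine setting gets you the same effect without visiting the redirect site.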
 

bnew

Google scrambles to manually remove weird AI answers in search


The company confirmed it is ‘taking swift action’ to remove some of the AI tool’s bizarre responses.

By Kylie Robison, a senior AI reporter working with The Verge's policy and tech teams. She previously worked at Fortune Magazine and Business Insider.

May 24, 2024, 8:10 PM EDT



Image: Álvaro Bernis / The Verge

Social media is abuzz with examples of Google’s new AI Overview product saying weird stuff, from telling users to put glue on their pizza to suggesting they eat rocks. The messy rollout means Google is racing to manually disable AI Overviews for specific searches as various memes get posted, which is why users are seeing so many of them disappear shortly after being posted to social networks.

It’s an odd situation, since Google has been testing AI Overviews for a year now — the feature launched in beta in May 2023 as the Search Generative Experience — and CEO Sundar Pichai has said the company served over a billion queries in that time.

But Pichai has also said that Google’s brought the cost of delivering AI answers down by 80 percent over that same time, “driven by hardware, engineering and technical breakthroughs.” It appears that kind of optimization might have happened too early, before the tech was ready.

“A company once known for being at the cutting edge and shipping high-quality stuff is now known for low-quality output that’s getting meme’d,” one AI founder, who wished to remain anonymous, told The Verge.

Google continues to say that its AI Overview product largely outputs “high quality information” to users. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce,” Google spokesperson Meghann Farnsworth said in an email to The Verge. Farnsworth also confirmed that the company is “taking swift action” to remove AI Overviews on certain queries “where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

Gary Marcus, an AI expert and an emeritus professor of neural science at New York University, told The Verge that a lot of AI companies are “selling dreams” that this tech will go from 80 percent correct to 100 percent. Achieving the initial 80 percent is relatively straightforward since it involves approximating a large amount of human data, Marcus said, but the final 20 percent is extremely challenging. In fact, Marcus thinks that last 20 percent might be the hardest thing of all.

“You actually need to do some reasoning to decide: is this thing plausible? Is this source legitimate? You have to do things like a human fact checker might do, that actually might require artificial general intelligence,” Marcus said. And Marcus and Meta’s AI chief Yann LeCun both agree that the large language models that power current AI systems like Google’s Gemini and OpenAI’s GPT-4 will not be what creates AGI.

Look, it’s a tough spot for Google to be in. Bing went big on AI before Google did with Satya Nadella’s famous “we made them dance” quote, OpenAI is reportedly working on its own search engine, a fresh AI search startup is already worth $1 billion, and a younger generation of users who just want the best experience are switching to TikTok. The company is clearly feeling the pressure to compete, and pressure is what makes for messy AI releases. Marcus points out that in 2022, Meta released an AI system called Galactica that had to be taken down shortly after its launch because, among other things, it told people to eat glass. Sounds familiar.

Google has grand plans for AI Overviews — the feature as it exists today is just a tiny slice of what the company announced last week. Multistep reasoning for complex queries, the ability to generate an AI-organized results page, video search in Google Lens — there’s a lot of ambition here. But right now, the company’s reputation hinges on just getting the basics right, and it’s not looking great.

“[These models] are constitutionally incapable of doing sanity checking on their own work, and that’s what’s come to bite this industry in the behind,” Marcus said.
 

bnew

Google admits its AI Overviews need work, but we’re all helping it beta test

Sarah Perez

12:54 PM PDT • May 31, 2024


Image Credits: Google

Google is embarrassed about its AI Overviews, too. After a deluge of dunks and memes over the past week, which cracked on the poor quality and outright misinformation that arose from the tech giant’s underbaked new AI-powered search feature, the company on Thursday issued a mea culpa of sorts. Google — a company whose name is synonymous with searching the web, and whose brand focuses on “organizing the world’s information” and putting it at users’ fingertips — actually wrote in a blog post that “some odd, inaccurate or unhelpful AI Overviews certainly did show up.”

That’s putting it mildly.

The admission of failure, penned by Google VP and Head of Search Liz Reid, seems a testimony as to how the drive to mash AI technology into everything has now somehow made Google Search worse.

In the post titled “About last week,” (this got past PR?), Reid spells out the many ways its AI Overviews make mistakes. While they don’t “hallucinate” or make things up the way that other large language models (LLMs) may, she says, they can get things wrong for “other reasons,” like “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”

Reid also noted that some of the screenshots shared on social media over the past week were faked, while others were for nonsensical queries, like “How many rocks should I eat?” — something no one ever really searched for before. Since there’s little factual information on this topic, Google’s AI guided a user to satirical content. (In the case of the rocks, the satirical content had been published on a geological software provider’s website.)

It’s worth pointing out that if you had Googled “How many rocks should I eat?” and were presented with a set of unhelpful links, or even a jokey article, you wouldn’t be surprised. What people are reacting to is the confidence with which the AI spouted back that “geologists recommend eating at least one small rock per day” as if it’s a factual answer. It may not be a “hallucination,” in technical terms, but the end user doesn’t care. It’s insane.

What’s unsettling, too, is that Reid claims Google “tested the feature extensively before launch,” including with “robust red-teaming efforts.”

Does no one at Google have a sense of humor then? No one thought of prompts that would generate poor results?

In addition, Google downplayed the AI feature’s reliance on Reddit user data as a source of knowledge and truth. Although people have regularly appended “Reddit” to their searches for so long that Google finally made it a built-in search filter, Reddit is not a body of factual knowledge. And yet the AI would point to Reddit forum posts to answer questions, without an understanding of when first-hand Reddit knowledge is helpful and when it is not — or worse, when it is a troll.

Reddit today is making bank by offering its data to companies like Google, OpenAI and others to train their models, but that doesn’t mean users want Google’s AI deciding when to search Reddit for an answer, or suggesting that someone’s opinion is a fact. There’s nuance to learning when to search Reddit and Google’s AI doesn’t understand that yet.

As Reid admits, “forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza,” she said, referencing one of the AI feature’s more spectacular failures over the past week.

Google AI overview suggests adding glue to get cheese to stick to pizza, and it turns out the source is an 11 year old Reddit comment from user F*cksmith 😂 pic.twitter.com/uDPAbsAKeO

— Peter Yang (@petergyang) May 23, 2024

If last week was a disaster, though, at least Google is iterating quickly as a result — or so it says.

The company says it’s looked at examples from AI Overviews and identified patterns where it could do better, including building better detection mechanisms for nonsensical queries, limiting the use of user-generated content in responses that could offer misleading advice, adding triggering restrictions for queries where AI Overviews were not helpful, not showing AI Overviews for hard news topics “where freshness and factuality are important,” and adding triggering refinements to its protections for health searches.

With AI companies building ever-improving chatbots every day, the question is not whether they will ever outperform Google Search at helping us understand the world’s information, but whether Google Search will ever get up to speed on AI to challenge them in return.

As ridiculous as Google’s mistakes may be, it’s too soon to count it out of the race yet — especially given the massive scale of Google’s beta-testing crew, which is essentially anybody who uses search.

“There’s nothing quite like having millions of people using the feature with many novel searches,” says Reid.
 

Alix217

Superstar
Joined
Mar 11, 2022
Messages
4,854
Reputation
916
Daps
34,882
Forget Google AI, regular Google is going to shyt too.

When I search something it only gives me like 7 links and the rest is gibberish. Anyone else noticing this?!?!
 

bnew




1/4
And got access to Gemini 1.5 Pro’s 2M token context window

time to test.

Thanks
@GoogleAI

2/4
Ah that's great to know.

3/4
For me the most important use case is a whole GitHub repo. Still doubt 2M will accommodate really big ones (like PyTorch).

4/4
A OnePlus 24GB phone running Mixtral 8x7B at 11 tokens/second.

Much faster inference speed vs llama.cpp and MLC-LLM.

Using swap and caching to run the model even if it doesn't fit the available RAM.

Between Apple’s LLM in a flash and PowerInfer-2, seems like the
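Whether a whole repo actually fits in a 2M-token window, as the thread above wonders about PyTorch, can be ballparked before uploading anything. The sketch below assumes a rough 4-characters-per-token heuristic; real tokenizers vary, and the file extensions and function name are just illustrative.

```python
import os

CHARS_PER_TOKEN = 4          # rough heuristic; real tokenizers vary
CONTEXT_BUDGET = 2_000_000   # Gemini 1.5 Pro's advertised window

def estimate_repo_tokens(root: str, exts=(".py", ".md", ".txt")) -> int:
    """Walk a source tree and roughly estimate how many tokens its
    text files would occupy in a long-context prompt."""
    total_chars = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                try:
                    with open(os.path.join(dirpath, name),
                              encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    pass  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

# tokens = estimate_repo_tokens("path/to/repo")
# print(f"~{tokens:,} tokens, {tokens / CONTEXT_BUDGET:.0%} of a 2M window")
```

If the estimate lands well over the budget, you'd need to filter files or chunk the repo before it could fit.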


 

bnew


1/2
This internal tool from
@GoogleAI is interesting.

Smart Paste, which streamlines the code-authoring workflow by automating adjustments to pasted code.

It predicts the next state of a code environment and uses generative AI to create context-aware adjustments to pasted code.

2/2
Smart Paste for context-aware adjustments to pasted code
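Google hasn't published Smart Paste's internals, but the core idea (rewriting a pasted snippet so its identifiers match the destination file) can be illustrated with a toy, non-AI version. The function name and rename map below are hypothetical; the real tool uses a generative model rather than a regex.

```python
import re

def adjust_pasted_code(pasted: str, renames: dict) -> str:
    """Toy 'context-aware paste': rewrite whole identifiers in a pasted
    snippet to match the destination file's naming conventions."""
    if not renames:
        return pasted
    # \b boundaries ensure we only replace complete identifiers
    pattern = r"\b(" + "|".join(map(re.escape, renames)) + r")\b"
    return re.sub(pattern, lambda m: renames[m.group(0)], pasted)

snippet = "total = sum(items)\nprint(total)"
print(adjust_pasted_code(snippet, {"items": "cart_items", "total": "cart_total"}))
# cart_total = sum(cart_items)
# print(cart_total)
```

The generative version described in the tweet goes further, predicting the destination file's "next state" rather than applying a fixed mapping.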


 

bnew

New Google AI experiment could let you chat with celebrities, YouTube influencers

Don't like Gemini's responses? Google may let you create your own chatbot soon.

By Calvin Wankhede

June 25, 2024


Edgar Cervantes / Android Authority

TL;DR
  • According to a new report, Google is working on an AI experiment that could let you chat with famous personalities.
  • The project will also allow anyone to build their own chatbots, similar to services like Character.ai.
  • The search giant may partner with YouTube influencers to create brand-specific AI personas.
Google is reportedly working on a new AI project that will let you converse with chatbots modeled after celebrities, YouTube influencers, or even fictional characters. According to The Information, Google plans to let anyone create their own chatbot by “describing its personality and appearance” and then converse with it — purely for entertainment.

This latest AI effort is notably distinct from Gems, which are essentially “customized versions of Gemini”. Put simply, Gems are similar to custom GPTs that can be taught to handle singular tasks like acting as a running coach or coding partner. On the other hand, Google’s upcoming chatbot project will fine-tune the Gemini family of language models to mimic or emulate the response style of specific people.

The search giant’s interest in personalized chatbots might suggest that it’s looking to take on Meta’s Celebrity AI chatbots. The latter already lets you talk to AI recreations of famous personalities like Snoop Dogg. Google’s upcoming product has also drawn comparisons to Character.ai, a chatbot service that offers a diverse range of personas ranging from TV characters to real-life politicians. Character.ai allows you to create your own personas with unique response styles that can be trained via text datasets.
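The report's "describing its personality and appearance" flow maps naturally onto how persona bots are commonly built: the description becomes a system-style prompt prepended to every conversation. Here's a minimal sketch assuming a generic chat-message format; none of these names come from Google's actual product.

```python
def build_persona_prompt(name, personality, appearance, user_message):
    """Turn a user-authored persona description into a chat request:
    the system message carries the character, the user message follows."""
    system = (
        f"You are {name}. Personality: {personality}. "
        f"Appearance: {appearance}. Always stay in character and "
        f"answer in this persona's voice."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

messages = build_persona_prompt(
    name="Captain Nova",
    personality="upbeat retro sci-fi hero who speaks in short exclamations",
    appearance="chrome spacesuit, jet-black visor",
    user_message="What's your favorite planet?",
)
# `messages` can then be sent to whatever chat-completion API backs the bot.
```

Fine-tuning, as described in the article, would go beyond this by baking the response style into the model itself rather than relying on a prompt.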

Google’s customizable chatbot endeavor comes courtesy of the Labs team, which pivoted to working on various AI experiments last year. It’s being developed by a team of ten employees and led by long-time Google Doodle designer Ryan Germick.

As for monetization, the report suggests that Google may eventually integrate the project into YouTube rather than launching it as a standalone product. This would let creators build their own AI personas and potentially improve engagement with their audiences. YouTube’s most famous personality, MrBeast, already has an AI-powered chatbot on Meta’s platforms. While this approach may not translate directly into revenue, it could convince users to return to YouTube more often and give creators better reach.

While a release date has yet to be finalized, the chatbot platform will likely make its way to the Google Labs page for testing first. The company is currently showing off over a dozen experimental tools and projects, with some like the controversial AI Overviews already integrated within mainline Google products.





1/1
Character AI revealed earlier this week in a blog post that they now serve more than 20k inference qps - that's 20% of Google Search request volume. According to The Information, Google is now developing its own celebrity and user-made chatbot platform. Planned launch this year.

