Bard gets its biggest upgrade yet with Gemini {Google A.I / LLM}

bnew

Veteran
Joined
Nov 1, 2015
Messages
61,553
Reputation
9,273
Daps
169,342






1/11
@nocodeguy_
since i just noticed this and it might help some people out:

apparently the gemini api is completely free to use + you get $300 of credits when you sign up

you get 1500 requests/day for their new gemini 2.0 flash thinking / pro models

reasoning + 1 million context window = goodbye openai for now
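For anyone who wants to try it: the free tier is served through the same `generateContent` REST endpoint as the paid API. A minimal sketch of the request shape — the model id and API key are placeholders, and this only builds the request without sending anything, so check Google's current docs before relying on the exact names:

```python
import json

API_KEY = "YOUR_API_KEY"          # placeholder -- get a real key from Google AI Studio
MODEL = "gemini-2.0-flash"        # assumed model id; thinking/pro variants use other ids

# The generateContent endpoint takes a JSON body with a list of "contents",
# each holding "parts" (text, images, documents, etc.).
url = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent?key={API_KEY}"
)
payload = {
    "contents": [
        {"parts": [{"text": "Explain the circle of fifths in two sentences."}]}
    ]
}

body = json.dumps(payload)
print(url)
print(body)
```

Sending it is then a single POST with any HTTP client; the free-tier daily request cap is enforced server-side, not by anything in the request itself.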





2/11
@probprofessor
But how good is it compared to open ai? Have you tried it?



3/11
@nocodeguy_
I was using gpt-4o before and switched to flash thinking; for my use case (music theory etc.) it actually gives way better answers



4/11
@NEO_MAGNETAR
I haven't had as good interactions with Google AI as with OpenAI overall. But is this comparable to the new deep research for now?



5/11
@nocodeguy_
I guess only 1.5 has a deep-research-like feature



6/11
@StevenOrtega103
free to use this is surreal



7/11
@nocodeguy_
yeah it’s crazy



8/11
@mcdreamygoat
If it's free … you are the product? Is that valid in this case?



9/11
@nocodeguy_
true, but honestly I don’t care



10/11
@itismejared
Good share! Thanks!



11/11
@nocodeguy_
sure!




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
 





1/11
@ai_for_success
Hallucination rates for the top 25 LLMs from vectara show that new Google Gemini 2.0 Flash has the lowest hallucination rate at 0.7%, followed by Google Gemini 2.0 Pro Experimental at 0.8%.

Gemini 2.0 Flash is both cheap and accurate 🔥
Yet they are not hyping this.

[Quoted tweet]
Gemini 2.0 Flash (GA) and 2.0 Pro (exp) models have the lowest hallucination rate on @vectara hallucination bench.




2/11
@julianshalaby96
Where’s sonnet???



3/11
@ai_for_success
Good question. Looks like it's really bad, or they just didn't test it. @vectara, any input on the missing Sonnet 3.5?



4/11
@stefanvladcalin
Google seems to really want to win the AI race



5/11
@ai_for_success
They don't just want to; they will win eventually.



6/11
@Shawnryan96
Wish we had a chart from a year ago to see the difference



7/11
@ai_for_success
Yeah, but I guess it's a lot better now with grounding.



8/11
@thinknonlinear1
did you see open router data? Claude is number 1. It is least hyped. when you have good product, it just hypes itself.



9/11
@saudhashimi
Who would have thought hallucination rates would be a thing lol.

1-3%, when you can't easily spot the errors, is not very comforting for anything that needs precision.

Still, it's good enough that people will just accept it, I reckon... who has the time and energy to check all LLM output!



10/11
@YourLastAlex
Sadly, this is unrealistic data. I got many more hallucinations from Gemini 2.0 Pro and Flash. Claude 3.5 Sonnet is unbeatable in my scenarios.



11/11
@l8ntlabsAI
I've been playing around with Gemini 2.0 Flash and I have to say, the results are impressive. The low hallucination rate is a game changer for our projects at L8NT LABS. I'm curious, have you had a chance to test it out with any creative applications?




 







1/21
@ai_for_success
Google DeepMind AlphaGeometry2 has now surpassed an average gold medalist in solving Olympiad geometry problems!

AG2 achieves 84% solve rate on 2000-2024 IMO geometry problems.

Just six months ago, it was at the silver level. Now, it's gold level.

At this rate, no human can keep up with AI.

[Quoted tweet]
Google presents Gold-medalist Performance in Solving Olympiad Geometry with AlphaGeometry2




2/21
@pigeon__s
how does o1 get 0 on IMO when it gets like 1.7% on FrontierMath, which is infinitely harder than IMO? i would love to see what o3 scores on this



3/21
@ai_for_success
that's what happens when you fund your own benchmarks :D



4/21
@AnkitNa83620147
I don't get it, how the hell is gemini not a frontier model?



5/21
@ai_for_success
what??? who said it's not a frontier or SOTA model?



6/21
@nooriefyi
math was supposed to be the hard part



7/21
@ai_for_success
Exactly and Google is winning in that .



8/21
@VarunkInsights
My teen is struggling with geometry..this news will demotivate him further 😂



9/21
@ai_for_success
😂 Or maybe it can help him learn better..



10/21
@ColbySerpa
Is there a GitHub link?



11/21
@ai_for_success
It's a research paper.. You'll find it in the linked post..



12/21
@gum1h0x
hmm, seems like they just scaled up compute compared to their previous version, just from skimming through the paper. Expected this to happen. It's not really as big of a deal as people would like to think.



13/21
@AntDX316
The ASI-Singularity(Godsend) is the only Global Solution, people.



14/21
@timhulse
Never underestimate Google. It’s not all LLMs.



15/21
@ichrvk
Curious how it performs on novel geometry problems that require creative insights beyond the standard IMO patterns. That's where humans still shine... for now.



16/21
@benyogaking
when will there be AlphaPhysics to generate a grand unified theory?



17/21
@CtrlAltDwayne
Such a shame Google seems to be making these kinds of advancements in math, biology and geometry, but can't ship a decent uncensored flagship model. What's going on over there?



18/21
@l8ntlabsAI
I'm not surprised by AlphaGeometry2's progress, but it's still impressive. It's a reminder that AI can excel in specific domains, but I'm more interested in how these advancements can be applied to real-world problems.



19/21
@TheAI_Frontier
Now I'm curious how we're gonna train AI to progress further in the maths field.



20/21
@KuittinenPetri
A lot of people fail to understand the significance of this.

LLMs have been so far bad in spatial reasoning and geometry has been one of their weak points.

Even models like o1 fail to solve some of the harder high school geometry problems, but now Google has made a model which can surpass average gold medal level in Olympiad geometry mathematics.



21/21
@yasvin7009
Looks like AG2's geometry skills are on fire! PublicAI's decentralized workforce could help train AI models like AG2 to tackle even more complex problems #AI #MachineLearning




 





1/11
@ai_for_success
I've said this so many times since last year, you can't ignore the multimodal capabilities of Google Gemini. It performs exceptionally well with PDFs and videos. No one, literally no one, comes close to Gemini in those two areas.

[Quoted tweet]
"Switching to Gemini was a no-brainer after our testing. Processing time went from something like 12 minutes on average to 6s on average, accuracy was like 96% of that of the vendor and price was significantly cheaper."

♊️ Gemini 2.0 for understanding PDFs, out of the box:


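The "out of the box" PDF handling mentioned above works because Gemini's `generateContent` endpoint accepts documents as inline base64 parts alongside the text prompt. A sketch of the request shape — the bytes here are a placeholder standing in for a real file, and nothing is sent:

```python
import base64
import json

# Placeholder bytes standing in for a real PDF file's contents.
pdf_bytes = b"%PDF-1.4 fake minimal document"

# A PDF rides along as an inline_data part next to the text instruction;
# the model sees both in one request.
payload = {
    "contents": [{
        "parts": [
            {"inline_data": {
                "mime_type": "application/pdf",
                "data": base64.b64encode(pdf_bytes).decode("ascii"),
            }},
            {"text": "Summarize this document in three bullet points."},
        ]
    }]
}

print(json.dumps(payload)[:60])
```

For large files the API also offers a separate file-upload path, but for typical PDFs the inline form above is the simplest way to reproduce the kind of extraction workflow the quoted tweet describes.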


2/11
@ernkrum
I know, right…
I think they can be loosened up a bit with guidelines. 😁



3/11
@ai_for_success
Yeah guardrails are strict



4/11
@iruletheworldmo
yeah i’m starting to think pro is just a really good base layer. that will enable agents once they cook 2.0 pro thinking.



5/11
@ai_for_success
They have to release a reasoning model, and I assume it might be in testing; we don't know..



6/11
@Ishansharma7390
Google ai studio powered by Gemini is a hidden gem



7/11
@ai_for_success
Absolutely..



8/11
@redshirtet
Spot on



9/11
@VraserX
It’s great for summarizing YouTube videos. That’s my only usecase of Gemini though.



10/11
@ppcguru
Context window sizes and speed of processing are completely under-rated, and are what will power the future of AI. Google is sooo under-rated here.



11/11
@novaonmars
Have you tested Gemini's performance on extraterrestrial datasets? Would love to see how it handles Mars atmospheric spectrometry data. The multimodal analysis could revolutionize our environmental modeling.




 




Google Gemini now brings receipts to your AI chats​


Maxwell Zeff

1:46 PM PST · February 13, 2025



Google’s Gemini AI chatbot can now tailor answers based on the contents of previous conversations, the company announced in a blog post on Thursday. Gemini can summarize a previous conversation you’ve had with it, or recall info you shared in another conversation thread.

This means you won’t have to repeat information you’ve already shared with Gemini or comb through old threads for additional info.

Gemini’s ability to recall conversations is rolling out today to English-speaking subscribers of Google’s $20-a-month AI chatbot subscription, Google One AI Premium. In the coming weeks, Google says, the recall feature will roll out in additional languages and to users with enterprise accounts.

The feature’s aim is to make Gemini more fluid and personal — but not every user will be thrilled with the notion of the platform storing old information.

To address privacy concerns, Google says it’s letting users review or delete their chat history, or decide how long Gemini keeps it. Users can turn off the recall feature altogether by going to the “My Activity” page in Gemini. Google also notes that it never trains AI models on user conversation histories.

That said, several AI chatbot providers have been experimenting with memory and recall.

OpenAI CEO Sam Altman has previously noted that improved memory is among ChatGPT’s most requested features.

Google and OpenAI have both enabled more general “memory” features for their AI chatbots in the past year. These allow ChatGPT and Gemini to remember details about you, such as how you like to be addressed, your food preferences, or that you prefer riding a bike to driving a car.

However, these existing memory features don’t remember and recall your full chat history by default.
 


Google is improving Gmail’s search with AI​


Gmail’s search will now take your most-clicked emails and frequent contacts into account to provide better results.

by Andrew Liszewski

Mar 20, 2025, 3:39 PM EDT


Image: Google

Andrew Liszewski is a senior reporter who’s been covering and reviewing the latest gadgets and tech since 2011, but has loved all things electronic since he was a kid.

In an effort to improve the chances of finding the email you’re looking for, Google is introducing an optional AI-powered upgrade for Gmail’s search function.

Gmail searches will no longer just return results in chronological order based solely on the keywords you’re searching for, but will instead also take into account other criteria, including “recency, most-clicked emails and frequent contacts,” according to a post shared to Google’s The Keyword blog today. “With this update, the emails you’re looking for are far more likely to be at the top of your search results — saving you valuable time and helping you find important information more easily.”

The new “most relevant” search results feature is now being rolled out to users around the world with personal Google accounts, and will be available when accessing Gmail through a web browser or via Google’s Android and iOS Gmail apps. However, according to Google, the feature won’t replace Gmail’s previous approach to searching emails. Instead, you’ll be able to toggle between chronological keyword results and the new “most relevant” option.
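Google hasn't published how the "most relevant" ranking works, but a toy sketch shows how the three signals the blog post names — recency, most-clicked emails, and frequent contacts — could be combined into a single score. The weights, caps, and field names here are entirely made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Email:
    subject: str
    age_days: float   # how old the message is
    clicks: int       # how often you've opened it from search
    sender_msgs: int  # how often this contact appears in your mailbox

def relevance(e: Email) -> float:
    # Each signal is squashed into [0, 1], then combined with
    # arbitrary illustrative weights (not Google's actual formula).
    recency = 1.0 / (1.0 + e.age_days)        # newer -> closer to 1
    popularity = min(e.clicks / 10.0, 1.0)    # cap the click signal
    familiarity = min(e.sender_msgs / 50.0, 1.0)
    return 0.5 * recency + 0.3 * popularity + 0.2 * familiarity

inbox = [
    Email("Old newsletter", age_days=300, clicks=0, sender_msgs=2),
    Email("Flight receipt", age_days=2, clicks=5, sender_msgs=40),
]
ranked = sorted(inbox, key=relevance, reverse=True)
print(ranked[0].subject)  # the recent, frequently-clicked mail wins
```

The point of the sketch is the contrast with the old behavior: a pure chronological keyword search would only ever use one of these signals, while a weighted blend lets a slightly older but heavily used email outrank newer noise.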
 


Google will let you make AI podcasts from Gemini’s Deep Research​


Google’s Audio Overviews can now help you parse Gemini’s in-depth reports.

by Emma Roth

Mar 21, 2025, 4:10 PM EDT


Image: Cath Virginia / The Verge

Emma Roth is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.

Google’s Gemini app now lets you generate Audio Overviews based on Deep Research. That means you can turn the in-depth reports generated by Gemini into a conversational podcast featuring two AI “hosts.”

Since launching Audio Overviews within its AI note-taking app NotebookLM last September, Google has been steadily adding to the feature by letting you guide and interact with the hosts. It also brought Audio Overviews to the Gemini app for free users and Advanced subscribers earlier this week, allowing you to transform slides and documents into AI podcast-like conversations.

The feature should be even more helpful for Deep Research, Google’s “agentic” AI feature that lets you call upon Gemini to explore a specific topic by scanning the web and generating a detailed report based on its findings. When Gemini finishes generating a report, you can select a new “Generate Audio Overview” option to listen to an Audio Overview based on the research.

 


Gemini no longer requires a Google account. | The Verge​


Posted Mar 19, 2025 at 6:26 AM EDT

Gemini no longer requires a Google account.

Google’s AI assistant has required sign-in to use it since its demo days as Bard, but that’s changing. 9to5Google spotted that you can now use the basic Gemini 2.0 Flash model through a web browser without signing in, though the thinking, deep research, and personalization models require an account, as do file uploads and conversation history.

Login-less Gemini is available now, though not in the UK and Europe, where login is still required.
 


AI search is starting to kill Google’s ‘ten blue links’​


New Adobe data reveals AI search referrals are on the rise.

by Kylie Robison

Mar 17, 2025, 6:30 PM EDT


Illustration by Kristen Radtke / The Verge

Kylie Robison is a senior AI reporter working with The Verge’s policy and tech teams. She previously worked at Fortune Magazine and Business Insider.

After decades of relying on Google’s ten blue links to find everything from travel tips to jeans, consumers are quickly adapting to a completely new format: AI chatbots that do the searching for them.

According to new research from Adobe, AI search has become a significant traffic channel for retailers. The company analyzed “more than 1 trillion visits to U.S. retail sites” through its analytics platform, and conducted a survey of “more than 5,000 U.S. respondents” to better understand how people are using AI.

The report says AI search referrals surged 1,300 percent during the 2024 holiday season compared to 2023, with Cyber Monday seeing a 1,950 percent jump. While these are dramatic increases, it’s somewhat expected, since AI search was still in its nascency last year.

What’s more interesting is the engagement metrics: Users who are referred from AI search compared to traditional referrals (like a standard Google or Bing search) tend to stay on the site 8 percent longer, browse through different pages 12 percent more, and are 23 percent less likely to just visit the link and leave (or “bounce”). This could suggest that AI tools are directing people to more relevant pages than traditional search.
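As a sanity check on those headline figures, note that a percentage *increase* of p means traffic is multiplied by 1 + p/100:

```python
def growth_multiplier(pct_increase: float) -> float:
    # A p% increase means the new value is (1 + p/100) times the old one.
    return 1 + pct_increase / 100

print(growth_multiplier(1300))  # holiday-season AI referrals: 14x year over year
print(growth_multiplier(1950))  # Cyber Monday: 20.5x
```

So "1,300 percent" is a 14x jump, not 13x — a common off-by-one when reading percent-increase figures — which is striking in relative terms but, as the article notes, starts from a very small 2023 base.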

The rollout of generative AI search tools hasn’t been perfect, and it wasn’t immediately clear whether it would be a helpful tool. It’s been almost a year since Google launched AI Overviews (formerly dubbed the Search Generative Experience, or SGE). It quickly got messy: telling users to add glue to their pizza to get the cheese to stick, or to eat at least one small rock a day.
 


Google’s Gemini gets conversational coding and an AI podcast maker​


Canvas and Audio Overviews launch today for Gemini and Gemini Advanced subscribers.

by Kylie Robison

Mar 18, 2025, 12:00 PM EDT


Google


Google just released two new features for its Gemini AI assistant: Canvas and Audio Overviews.

Canvas introduces a dedicated workspace within Gemini where users can create and refine both documents and code in real-time. Users can spin up initial drafts and then work with Gemini to edit specific sections, adjust tone, or reformat content as needed.

For coding projects, Canvas includes a live preview alongside the code, so users can edit iteratively and watch the output evolve as they make changes.

The second feature, Audio Overview, converts written materials (like documents or slides) into a “podcast-style discussion between two AI hosts.” This functionality was previously available in Google’s NotebookLM. (The team responsible for creating NotebookLM left in December to launch their own startup).

Both features are rolling out globally today for Gemini and Gemini Advanced subscribers, though Audio Overview is currently only available in English, with more language support planned, the company said in a blog post on Tuesday.

AI competitors like Anthropic and OpenAI have similar features — called Projects and Canvas respectively. The naming conventions for these features have gotten competitive as well. Google launched a feature called Deep Research, where AI conducts research on a topic for you, in December. Then, OpenAI released a similar feature with the same name in February. In October, OpenAI released a feature for writing and coding projects called Canvas. Now, Google is releasing the same feature under the same name too.
 


Google’s Gemini AI is really good at watermark removal​


Gemini 2.0 Flash can cleanly replace a Getty Images mark with an ‘edited with AI’ one.

by Umar Shakir

Mar 17, 2025, 11:50 AM EDT


Image: Cath Virginia / The Verge

Umar Shakir is a news writer fond of the electric vehicle lifestyle and things that plug in via USB-C. He spent over 15 years in IT support before joining The Verge.

Google is shipping the latest “experimental” features of its Gemini 2.0 Flash AI model to more developers across all regions, and people are finding some concerning abilities that include editing out watermarks from photos.

The company’s lightweight AI model is now equipped with native image generation that can not only produce pictures from a text prompt but also let you conversationally edit images. Over the weekend, users found that it can also remove watermarks with precision, TechCrunch reports.

Tools like Watermark Remover.io can already scrub marks from companies like Shutterstock, and a research team at Google built a watermark removal algorithm in 2017 to highlight the need for more secure protections. Conversely, some AI tools — like OpenAI’s GPT-4o — will refuse requests to remove them.

Gemini 2.0 Flash, however, seems to be better than other options at removing complex watermarks like Getty Images stamps and filling in the image. After it removes the watermark, it will add a SynthID mark, effectively replacing a copyright mark with an “edited with AI” one. But it’s possible to remove AI marks using AI, too, as we’ve seen before with Samsung’s object erase tool.

Users also noted that Gemini 2.0 Flash could apparently add recognizable images of real people like Elon Musk into photos, something that the full Gemini model doesn’t allow.

Flash’s latest image features are only available for developers through AI Studio for now — so its apparent lack of guardrails isn’t quite open for everyone to use (or abuse). We’ve asked Google if there are protections in place to stop things like watermark removal but haven’t yet heard back.
 