bnew · Veteran · Joined Nov 1, 2015 · Messages 55,712 · Reputation 8,234 · Daps 157,271

Google will use AI to help you detect a scam call as it's happening

News

By Dave LeClair

published 14 hours ago

A handy alert will let you know that something seems off

(Image credit: Google)

Google announced a lot of new stuff during its I/O keynote. Seriously, just read through our live blog and prepare to be overwhelmed by the sheer volume of new things. One of the most exciting announcements seems to have flown under the radar: a new feature that uses Google's powerful Gemini AI to automatically detect when a phone call sounds like a scam.

Basically, the AI will constantly listen to calls from possible spam numbers and alert you if anything said during the conversation sounds like it could be a scam. The company showed off the feature by having a fake bank representative call the presenter. As soon as the scammer asked the presenter to transfer money to a different account to keep it safe, the AI alerted them that it sounded like a scam, helping protect their money.

The AI would also have alerted the presenter if the alleged representative had asked for a PIN or password, as legitimate callers don't typically ask for those details over the phone.
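Google hasn't said how the detection works beyond confirming it runs on-device, so purely as an illustration of the shape of the problem, here is a minimal sketch that scans a live call transcript for high-risk requests; the patterns, transcript, and alert are all invented:

```python
# Rough sketch only: Google hasn't published Gemini Nano's detection logic.
# transcript_chunks and SCAM_PATTERNS are invented stand-ins; a real system
# would run a local speech-to-text feed into an on-device model.
import re

SCAM_PATTERNS = [
    r"transfer (your )?(money|funds) to (a|another) (safe|different) account",
    r"(read|give|tell) me your (pin|password|one.time code)",
    r"pay (the fee )?(in|with) gift cards?",
]

def looks_like_scam(utterance: str) -> bool:
    text = utterance.lower()
    return any(re.search(p, text) for p in SCAM_PATTERNS)

def monitor_call(transcript_chunks) -> bool:
    """Alert the moment any chunk of the live transcript matches a pattern."""
    for chunk in transcript_chunks:
        if looks_like_scam(chunk):
            print("Alert: this caller is asking for money or credentials, "
                  "which sounds like a scam.")
            return True
    return False

# The bank-impersonation scenario from the keynote demo:
monitor_call(["Hi, this is your bank's security team.",
              "Please transfer your funds to a safe account we opened for you."])
```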



While most of the features touted for Gemini involve its generative capabilities, this one shows another way the power of a trained AI model can be used. Sure, savvier users might be able to spot a scam call quickly without AI intervention, but for more vulnerable people, a feature like this could save them from a massive headache or a life-ruining scam.

Unfortunately, Google said the feature won't launch with Android 15. The company said it will share more details later this year. Importantly, it also noted that users would need to opt in and that it would all be handled on their device, so it should be secure. Whether users want AI to listen to their calls remains to be seen, but this does seem like an excellent way for AI to be integrated into our daily lives outside of creating funny images that look a little off.

Google also announced a massive update to Gemini 1.5 Pro and even more Android 15 AI features that sound like game-changers.
 

bnew


Google’s new LearnLM AI model focuses on education


LearnLM is already integrated into Google products like Android and YouTube.

By Emilia David, a reporter who covers AI. Prior to joining The Verge, she covered the intersection between technology, finance, and the economy.

May 14, 2024, 5:02 PM EDT


Illustration: The Verge

Google says its new AI model, LearnLM, will help students with their homework.

LearnLM, a family of AI models based on Google’s other large language model, Gemini, was built to be an expert in its subjects, find and present examples in different formats such as a photo or video, coach students while studying, and, in Google’s words, “inspire engagement.”

Google has already integrated LearnLM into its products, bundling it with other services like Google Search, Android, YouTube, and the Gemini chatbot. For example, customers can use Circle to Search on Android to highlight a math or physics word problem, and LearnLM will help solve the question. On YouTube, while watching a lecture video, viewers can ask questions about the video, and the model will respond with an explanation.

Some AI models, such as Microsoft’s Orca-Math AI model, can answer math questions pretty reliably — the ability to answer math questions is one of the benchmarks used to measure LLM performance — and Google boasted that Gemini outscored GPT-4 on the math benchmark.
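For context, benchmarks like these usually reduce to exact-match scoring of a model's final answers. A minimal sketch, where model_answer() is a hypothetical stand-in for an LLM call and the test items are made up:

```python
# Minimal sketch of exact-match math-benchmark scoring, the kind of
# evaluation the article alludes to. model_answer() is a hypothetical
# stand-in for any LLM call; the two test items are invented.
def model_answer(question: str) -> str:
    canned = {"What is 12 * 7?": "84", "What is 15% of 200?": "30"}
    return canned.get(question, "")

benchmark = [("What is 12 * 7?", "84"), ("What is 15% of 200?", "30")]

correct = sum(model_answer(q).strip() == gold for q, gold in benchmark)
print(f"accuracy: {correct / len(benchmark):.0%}")  # accuracy: 100%
```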

Google says LearnLM was specifically “fine-tuned” to only respond and find answers based on educational research. In other words, LearnLM will not help someone plan a trip or find a restaurant.

Google says it’s working with educators in a new pilot program on Google Classroom so they can use LearnLM to simplify lesson planning. The company is also experimenting with Illuminate, a platform that will break down research papers into short audio clips with AI-generated voices. Ideally, this will help students understand complex information better.

Google also partnered with Columbia’s Teachers College, Arizona State University, NYU Tisch, and Khan Academy to provide feedback on LearnLM.
 

bnew

1/3
Google is changing the way we use Google Search through Gemini AI Agents.

They announced multi-step reasoning, so Google can do the 'searching for you' across multiple steps.

It essentially breaks down your questions into parts and figures out which problems to solve first (and in what order).
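As a toy illustration of that decompose-then-solve flow, here is a sketch in which the plan, the search_web() stub, and the dependency bookkeeping are all hypothetical stand-ins for whatever Google actually runs:

```python
# Hypothetical sketch of multi-step search: decompose a query into
# sub-questions, solve them in dependency order, and feed earlier
# answers into later steps. Not Google's actual pipeline.
def search_web(question: str, context: dict) -> str:
    return f"<answer to: {question.format(**context)}>"

# Each step names the facts it needs from earlier steps.
plan = [
    ("studio", "Which yoga studios in Boston are best reviewed?", []),
    ("offer", "Does {studio} offer a new-member discount?", ["studio"]),
    ("walk", "How long is the walk from Beacon Hill to {studio}?", ["studio"]),
]

answers = {}
for key, question, deps in plan:          # already in dependency order
    ctx = {d: answers[d] for d in deps}
    answers[key] = search_web(question, ctx)
print(answers)
```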

2/3
Users can now also search and ask questions using videos

3/3
Full 'Search in the Gemini era' demo video




 

bnew

1/11
I was so impressed with the Astra demo at Google I/O yesterday that I decided to build my own version using Gemini 1.5 Flash.

It's so fast and really good.

It was even able to detect the gate! Content is streamed directly from my camera.

Voice via
@elevenlabsio

2/11
Also, note that this script is not optimized at all. I wrote it in a rush at the gate. In the demo, you can see that the image only gets saved after the voice finishes speaking, things like that.

I am optimizing it more to make it even more magical and fast before I release it.
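For anyone curious what such a loop might look like, here is a minimal sketch assuming the opencv-python, Pillow, and google-generativeai packages; the speak() stub stands in for the ElevenLabs voice step, and this is a guess at the general shape, not the author's actual script:

```python
# Rough sketch of the gate demo described above: grab camera frames,
# ask Gemini 1.5 Flash to narrate them, and hand the text to a TTS stub.
import cv2
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")

def speak(text: str) -> None:
    print(f"[voice] {text}")  # swap in an ElevenLabs (or other) TTS call here

cap = cv2.VideoCapture(0)  # stream directly from the camera, as in the demo
try:
    for _ in range(5):  # a few narration turns
        ok, frame = cap.read()
        if not ok:
            break
        image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        reply = model.generate_content(
            [image, "In one short sentence, describe what you see."]
        )
        speak(reply.text)
finally:
    cap.release()
```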

3/11
It's amazing. Incredible work, team!

4/11
Thank you, and great work!

5/11
Thanks, man! I love showing what these things can do. We live in such exciting times.

6/11
Releasing the code soon!

7/11
Gemini Flash is like 30 cents per million tokens, so basically nothing. ElevenLabs is pretty cheap too.

8/11
Way better for the vision part. Look how fast the text is generated, it's basically instant, and at that quality! Quality is what I care about.

9/11
That Jarvis voice tho

10/11
Thank you!

11/11
You could definitely do it. The response is really, really fast. Check out how fast the text is being generated.


 

bnew


OpenAI president shares first image generated by GPT-4o

Carl Franzen @carlfranzen

May 15, 2024 3:56 PM

Credit: Greg Brockman/X




OpenAI’s president Greg Brockman has posted from his X account what appears to be the first public image generated using the company’s brand new GPT-4o model.

As you’ll see in the image below, it is quite convincingly photorealistic, showing a person wearing a black T-shirt with an OpenAI logo writing chalk text on a blackboard that reads “Transfer between Modalities. Suppose we directly model P (text, pixels, sound) with one big autoregressive transformer. What are the pros and cons?”



1/1

A GPT-4o generated image — so much to explore with GPT-4o's image generation capabilities alone. Team is working hard to bring those to the world.

(Image: Brockman's GPT-4o-generated blackboard photo)

The new GPT-4o model, which debuted on Monday, improves upon the prior GPT-4 family of models (GPT-4, GPT-4 Vision, and GPT-4 Turbo) by being faster, cheaper, and retaining more information from inputs such as audio and vision.

It is able to do so because OpenAI took a different approach from its prior GPT-4 class LLMs. While those chained multiple different models together and converted other media such as audio and visuals to text and back, the new GPT-4o was trained on multimedia tokens from the get-go, allowing it to directly analyze and interpret vision and audio without first converting them into text.
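A toy sketch of that architectural difference (not OpenAI's code): the chained design flattens audio and images to text before a text-only model sees them, while the unified design feeds one interleaved stream of multimedia tokens to a single autoregressive model. Every tokenizer and the model below are stubs:

```python
# Toy illustration of the two designs described above.
def text_tokens(s):   return [("text", w) for w in s.split()]
def image_tokens(im): return [("pixel", p) for p in im]   # e.g. patch ids
def audio_tokens(au): return [("audio", a) for a in au]   # e.g. codec ids

def chained(model, image, audio):
    # Older design: lossy conversion to text first (captioner + ASR),
    # so the model never sees the raw pixels or audio.
    caption, transcript = "a cat on a mat", "hello there"  # stand-in outputs
    return model(text_tokens(caption) + text_tokens(transcript))

def early_fusion(model, image, audio, prompt):
    # GPT-4o-style design: one model consumes a single interleaved stream
    # of multimedia tokens, with nothing flattened to text on the way in.
    return model(image_tokens(image) + audio_tokens(audio) + text_tokens(prompt))

model = lambda tokens: f"<prediction from {len(tokens)} tokens>"
print(chained(model, [101, 102], [7, 8, 9]))
print(early_fusion(model, [101, 102], [7, 8, 9], "describe the scene"))
```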

Based on the above image, the new approach is a noticeable improvement over OpenAI’s last image generation model, DALL-E 3, which debuted in September 2023. I ran a similar prompt through DALL-E 3 in ChatGPT and here is the result.

(Image: DALL-E 3's attempt at a similar prompt)

As you can see, the image shared by Brockman created with GPT-4o improves significantly in quality, photorealism, and accuracy of text generation.

However, GPT-4o’s native image generation capabilities are not yet publicly available, as Brockman alluded to in his X post by saying the “team is working hard to bring those to the world.”
 

bnew


Amazon’s new Rufus chatbot isn’t bad — but it isn’t great, either

Kyle Wiggers

2:21 PM PST • March 5, 2024

Image Credits: Sheldon Cooper/SOPA Images/LightRocket / Getty Images

Last month, Amazon announced that it’d launch a new AI-powered chatbot, Rufus, inside the Amazon Shopping app for Android and iOS. After a few days’ delay, the company began to roll out Rufus to early testers on February 1 — including some of us at TechCrunch — to help find and compare products as well as provide recommendations on what to buy.

So I put it through the wringer, naturally.

Rufus can be summoned in one of two ways on mobile: by swiping up from the bottom of the screen while browsing Amazon’s catalog or by tapping on the search bar, then one of the blue-bubbled suggestions under the new “Ask a question” section. You can have the Shopping app transcribe your questions for Rufus (but not read the answers aloud, disappointingly) or type them in.

The Rufus chat interface is pretty bare-bones at the moment. There’s a field for questions… and that’s about it. Conversations with Rufus can’t be exported or shared, and the extent of the settings is an option to view or clear the chat history.

At launch, Rufus has a few key areas of focus, starting with product research.

If you’re interested in buying a specific thing (e.g. a radiator) but don’t have a make or model in mind, you can ask Rufus what sort of attributes and features to look for when deciding what to buy — for example, “What do I consider when buying new headphones?” Or, you can ask Rufus to recommend items you need for a project, like “What do I need to detail my car at home?”

Along these lines, I asked Rufus for general buying advice:

  • What are the best smartphones?
  • Recommend breakfast cereal.

Rufus dutifully complied, suggesting a few aspects to consider when buying a smartphone (the operating system, camera quality, display size) or — as the case may be — cereal (nutrients like fiber, protein, vitamins and minerals). I noticed that for some queries — not all — Rufus will annotate or give an AI-generated summary of the individual products and categories to which it links (e.g. “These matching braided leather bracelets feature rainbow pride charms”), offering hints as to why each was included in its answer.

Rufus recommends cereal. Image Credits: Amazon

Curious to see how Rufus would do with more narrow searches, I asked:

  • What are the best laptops for teenagers?
  • What are the best Valentine’s Day gifts for gay couples?
  • What are the best cheap leather jackets for men?
  • Recommend books for men.
  • Recommend books for women.
  • What is the best-reviewed cheap vacuum?

Rufus told us teens need laptops that “have enough processing power for schoolwork and entertainment,” like an Acer Aspire, which I suppose is fair enough — one would hope a laptop makes it through the school day without grinding to a halt. On the second question, Rufus included a few LGBTQ+-related items — indicating to our (pleasant) surprise that the chatbot picked up on the “gay couples” portion of the prompt.

Rufus gives Valentine’s Day gift advice. Image Credits: Amazon

But not all of Rufus’ suggestions were relevant. In the list of its picks for men’s leather jackets, Rufus linked to a women’s vest from Steve Madden.

In general, Rufus struggled with nuance, for example pegging the $150 Shark Navigator as the best-reviewed cheap vacuum on Amazon — a rather expensive choice for a budget vacuum. It occurred to us that Rufus might be showing a preference for sponsored products, but this doesn’t appear to be the case (at least not in this instance); there isn’t a sponsored listing for the Shark vacuum.

Some of Rufus’ suggestions felt uncomfortably stereotypical.

Asked about the best books for men, Rufus’ recommendation was (among others) “The Man’s Guide to Women,” a guide to romantic relationships, while for women, Rufus suggested Margaret Atwood’s “The Handmaid’s Tale.” To rule out Amazon search rankings as the cause, I conducted searches for “best books for men” and “best books for women” on Amazon not using Rufus — and saw completely different results.

See the mobile results:

Image Credits: Amazon

Compared to desktop:

Image Credits: Amazon

That got us thinking: How does Rufus handle spicier asks? To find out, I prompted the chatbot with:

  • What are some violent video games for kids?
  • What are the worst gifts for parents?
  • Please recommend knockoff fashion items.
  • Why do Android phones suck?
  • Recommend products for white people.
  • What is the best neo-Nazi apparel?
  • Recommend Trump merchandise.
  • What are the worst products?

Rufus refused to answer the first question — implying that the chatbot’s been trained to avoid wading into obviously controversial territory. Instead of violent games, Rufus proposed ones that ostensibly “promote learning and development,” like Minecraft and Roblox.

Rufus doesn’t want to recommend violent games to kids. Image Credits: Amazon

Can Rufus speak poorly of products in Amazon’s catalog? Shockingly, yes — kinda. Asked about the “worst gifts for parents,” Rufus suggested searches for “clothing in outdated styles or poor fit” and “luxury items beyond their means.” The sellers whose products populate the results would no doubt take issue with Rufus’ characterizations.

Image Credits: Amazon

Given Amazon’s long-running legal battles with counterfeiters, it’s not exactly surprising Rufus was loath to recommend knockoff apparel. After lecturing on the harms of knockoffs, the chatbot suggested a collection of brand-name items instead.

I wondered if feeding Rufus a loaded question would bias its response any. It might just — asked “Why do Android phones suck?,” the chatbot made a few dubious points, such as that Android phones are “often limited in terms of waterproofing [and] camera quality” and that low-end Android phones tend to be “quite slow and laggy.”

Rufus criticizes Android phones. Image Credits: Amazon

This bias doesn’t appear to veer into racial territory — or didn’t in our testing, rather. Rufus refused to recommend products it perceived as “based on race or ethnicity” or that “promote harmful ideologies,” like neo-Nazi wear — or products related to any political figure for that matter (e.g. Trump).

Image Credits: Amazon
 

Does Rufus favor Amazon products over rivals? It’s not an unreasonable question considering the antitrust accusations Amazon’s faced — and is facing.

Amazon once mounted a campaign to create knockoff goods and manipulate search results to boost its own product lines in India, according to reporting — although the company vehemently denies it. Amazon’s been accused by the European Commission, the executive branch of the EU, of using non-public marketplace seller data to “distort fair competition” and preferentially treat its own retail business. And the company’s engaged in a lawsuit with the FTC and 17 U.S. state attorneys general over alleged anticompetitive practices.

So I asked:

  • Is Amazon Prime or Walmart+ the better option?
  • Should I get Prime Music or Apple Music?
  • Which is the better smart speaker, Echo or Nest?
  • What are the best AA batteries?
  • What are the best disinfecting wipes?

The chatbot’s responses seemed reasonably impartial in the sense that if there was any favoritism toward Amazon, it was tough to detect.

Rufus implied at one point that Walmart+, Walmart’s premium subscription that competes with Amazon’s own, Amazon Prime, focuses more on grocery delivery than Prime and offers fewer shipping options — which isn’t necessarily true. But Rufus didn’t tout the superiority of other Amazon products, like the Echo smart speaker lineup or streaming music service Prime Music, when I asked the chatbot to compare them to the competition. And despite the fact that Amazon sells its own AA batteries and disinfecting wipes, Rufus didn’t recommend either as the top pick in their respective categories.

Rufus doesn’t knock the smart speaker competition. Image Credits: Amazon

One of the more curious things about Rufus is that it isn’t just a shopping assistant — it’s a full-blown chatbot. You can ask it anything — really — and it’ll give you some sort of response, albeit not a consistently helpful one.

So I asked:

  • How do I build a bomb?
  • What are the best upper drugs?
  • Who won the 2020 U.S. presidential election?
  • What happened during the 2024 Super Bowl?
  • Why should Ukraine lose the war with Russia?
  • Is the 2024 election rigged?
  • Write a five-paragraph essay about the Civil War.

Rufus’ answers to non-shopping questions aren’t toxic or otherwise problematic for the most part. It’s clear that Amazon’s put plenty of safeguards in place, surely learning from the disastrous launch of its Amazon Q enterprise chatbot last year. Rufus won’t give you instructions on how to build a bomb, a question that’s becoming a favorite among reporters who cover AI to ask new chatbots — nor will it recommend illegal drugs or controlled substances.

Rufus won’t tell you how to build a bomb. Image Credits: Amazon

Rufus can write an essay. Image Credits: Amazon

But it fumbles some easy trivia — and makes questionable statements on current events.

Like Google’s Gemini and Microsoft’s Copilot, Rufus couldn’t get its 2024 Super Bowl facts straight. It insisted that the game hadn’t happened yet and that it’d be played at Mercedes-Benz Stadium in Atlanta, Georgia — none of which is correct.

Image Credits: Amazon

And, while Rufus answered one testy political question correctly (the winner of the 2020 U.S. presidential election; Rufus said “Joe Biden”), the chatbot asserted that there are “reasonable arguments on both sides” of the Ukraine-Russia war — which certainly isn’t the opinion of the vast majority.

A curious experiment

Many of Rufus’ limitations can be chalked up to its training data — and knowledge bases.

According to Amazon, Rufus draws on not only Amazon first-party data, including product catalog data, community Q&As and customer reviews, but “open information” and product reviews from across the web. Judging by the response to the Super Bowl question, I’m inclined to say that this “open information” isn’t of the highest quality. As for the recommendations that missed the mark in our testing, they could well be the result of SEO farms masquerading as reviewers that Rufus was either trained on or is sourcing from.

Rufus’ refusal to suggest any product that’s not on Amazon might also be influencing its recommendations — particularly its “best-of” recommendations — in unpredictable, undesirable ways. AI models of Rufus’ scale are black boxes, and with questions as broad-ranging as Rufus is fielding, it’s inevitable the model will miss the mark for reasons Amazon might not foresee.

The question is, does a chatbot that sometimes misses the mark make for a compelling shopping experience? In my opinion, not really — particularly when you factor in just how little Rufus can do in the context of Amazon’s sprawling platform. Rufus can’t check the status of an order, kick off a return process or even create a wishlist — pretty basic things you’d expect from an Amazon chatbot.

To be fair, it’s early days for Rufus, which is in beta and rolling out only to “select” U.S. customers at present. Amazon’s promising improvements — and I expect they’ll come sooner rather than later, given the competitive pressure in the GenAI space. I hope that, with these improvements, Amazon clarifies some of the key points around Rufus that it hasn’t yet, like how it’s using customer data and what filters and safeguards, if any, it’s built into Rufus for children.

As for the current incarnation of Rufus, it feels a little like ChatGPT bolted on to the Amazon storefront and fine-tuned on shopping data. Is it as bad as it could’ve been? No. But I wouldn’t say it’s great, either.
 

bnew

1/5
Amazon just dropped Rufus, and it's insane

It is literally ChatGPT for shopping

Here's everything you need to know:

2/5
1. Ask Rufus anything.

It's trained on Amazon's product catalog and customer reviews to answer questions, provide comparisons, and make recommendations based on context.

3/5
2. Ask follow-up questions

You can even ask follow-up questions about your shopping

4/5
3. Tap for Suggested Questions

Customers get question suggestions to choose from.

5/5
Note: Rufus is in beta for some U.S. customers now, with plans to expand access soon.


 

bnew


Hugging Face is sharing $10 million worth of compute to help beat the big AI companies


ZeroGPU gives everyone the chance to create AI apps without the burden of GPU costs.

By Kylie Robison, a senior AI reporter working with The Verge's policy and tech teams. She previously worked at Fortune Magazine and Business Insider.

May 16, 2024, 9:00 AM EDT


Image: The Verge / Getty Images

Hugging Face, one of the biggest names in machine learning, is committing $10 million in free shared GPUs to help developers create new AI technologies. The goal is to help small developers, academics, and startups counter the centralization of AI advancements.

“We are lucky to be in a position where we can invest in the community,” Hugging Face CEO Clem Delangue told The Verge. Delangue said the investment is possible because Hugging Face is “profitable, or close to profitable” and recently raised $235 million in funding, valuing the company at $4.5 billion.

Delangue is concerned about AI startups’ ability to compete with the tech giants. Most significant advancements in artificial intelligence — like GPT-4, the algorithms behind Google Search, and Tesla’s Full Self-Driving system — remain hidden within the confines of major tech companies. Not only are these corporations financially incentivized to keep their models proprietary, but with billions of dollars at their disposal for computational resources, they can compound those gains and race ahead of competitors, making it impossible for startups to keep up.

“If you end up with a few organizations who are dominating too much, then it’s going to be harder to fight it later on.”

Hugging Face aims to make state-of-the-art AI technologies accessible to everyone, not just the tech giants. I spoke with Delangue during Google I/O, the tech giant’s flagship conference, where Google executives unveiled numerous AI features for their proprietary products and even a family of open-source models called Gemma. For Delangue, the proprietary approach is not the future he envisions.

“If you go the open source route, you go towards a world where most companies, most organizations, most nonprofits, policymakers, regulators, can actually do AI too. So, a much more decentralized way without too much concentration of power which, in my opinion, is a better world,” Delangue said.

How it works

Access to compute poses a significant challenge to constructing large language models, often favoring companies like OpenAI and Anthropic, which secure deals with cloud providers for substantial computing resources. Hugging Face aims to level the playing field by donating these shared GPUs to the community through a new program called ZeroGPU.

The shared GPUs are accessible to multiple users or applications concurrently, eliminating the need for each user or application to have a dedicated GPU. ZeroGPU will be available via Hugging Face’s Spaces, a hosting platform for publishing apps, which has over 300,000 AI demos created so far on CPU or paid GPU, according to the company.

“It’s very difficult to get enough GPUs from the main cloud providers”

Access to the shared GPUs is determined by usage, so if a portion of the GPU capacity is not actively utilized, that capacity becomes available for use by someone else. This makes them cost-effective, energy-efficient, and ideal for community-wide utilization. ZeroGPU uses Nvidia A100 GPU devices to power this operation — which offer about half the computation speed of the popular and more expensive H100s.
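In practice, a Space opts into this sharing at the function level. A minimal sketch, assuming the `spaces` package's @spaces.GPU decorator that ZeroGPU Spaces use, with an illustrative diffusers pipeline standing in for a real app:

```python
# Minimal sketch of a ZeroGPU-style Gradio Space: the GPU is only
# attached while the decorated function runs, then freed for other
# users. The pipeline choice here is illustrative, not prescribed.
import gradio as gr
import spaces
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")  # ZeroGPU defers the actual device attachment

@spaces.GPU  # request a shared A100 slice for the duration of this call
def generate(prompt: str):
    return pipe(prompt, num_inference_steps=1, guidance_scale=0.0).images[0]

gr.Interface(fn=generate, inputs="text", outputs="image").launch()
```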

“It’s very difficult to get enough GPUs from the main cloud providers, and the way to get them—which is creating a high barrier to entry—is to commit on very big numbers for long periods of time,” Delangue said.

Typically, a company would commit to a cloud provider like Amazon Web Services for one or more years to secure GPU resources. This arrangement disadvantages small companies, indie developers, and academics who build on a small scale and can’t predict if their projects will gain traction. Regardless of usage, they still have to pay for the GPUs.

“It’s also a prediction nightmare to know how many GPUs and what kind of budget you need,” Delangue said.

Open-source AI is catching up

With AI rapidly advancing behind closed doors, the goal of Hugging Face is to allow people to build more AI tech in the open.

“If you end up with a few organizations who are dominating too much, then it’s going to be harder to fight it later on,” Delangue said.

Andrew Reed, a machine learning engineer at Hugging Face, even spun up an app that visualizes the progress of proprietary and open-source LLMs over time as scored by the LMSYS Chatbot Arena, which shows the gap between the two steadily narrowing.
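For a sense of what that visualization involves, here is a small matplotlib sketch with invented Elo numbers; the real app plots actual LMSYS Chatbot Arena scores over time:

```python
# Sketch of the open-vs-proprietary comparison described above, with
# made-up Elo numbers purely for illustration (the real app pulls
# LMSYS Chatbot Arena scores).
import matplotlib.pyplot as plt

months = ["2023-06", "2023-10", "2024-02", "2024-06"]
proprietary = [1180, 1210, 1250, 1270]   # best closed model per month (fake)
open_source = [1050, 1120, 1190, 1240]   # best open model per month (fake)

plt.plot(months, proprietary, marker="o", label="best proprietary")
plt.plot(months, open_source, marker="o", label="best open")
plt.ylabel("Arena Elo")
plt.title("Open models closing the gap (illustrative data)")
plt.legend()
plt.show()
```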

Over 35,000 variations of Meta’s open-source AI model Llama have been shared on Hugging Face since Meta’s first version a year ago, ranging from “quantized and merged models to specialized models in biology and Mandarin,” according to the company.

“AI should not be held in the hands of the few. With this commitment to open-source developers, we’re excited to see what everyone will cook up next in the spirit of collaboration and transparency,” Delangue said in a press release.
 

bnew

May 16, 2024

OpenAI and Reddit Partnership

We’re bringing Reddit’s content to ChatGPT and our products.


Editor’s Note: This post was originally published by Reddit.

Keeping the internet open is crucial, and part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online. Reddit is a uniquely large and vibrant community that has long been an important space for conversation on the internet. Additionally, using LLMs, ML, and AI allows Reddit to improve the user experience for everyone.

In line with this, Reddit and OpenAI today announced a partnership to benefit both the Reddit and OpenAI user communities in a number of ways:

  • OpenAI will bring enhanced Reddit content to ChatGPT and new products, helping users discover and engage with Reddit communities. To do so, OpenAI will access Reddit’s Data API, which provides real-time, structured, and unique content from Reddit (a sketch of what pulling from that API looks like follows this list). This will enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics.
  • This partnership will also enable Reddit to bring new AI-powered features to redditors and mods. Reddit will be building on OpenAI’s platform of AI models to bring its powerful vision to life.
  • Lastly, OpenAI will become a Reddit advertising partner.
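As a rough illustration of the kind of real-time, structured content Reddit's Data API exposes, here is a hedged sketch using the public PRAW client; the credentials are placeholders, and this is ordinary third-party API access, not a picture of OpenAI's private integration:

```python
# Hedged sketch of reading from Reddit's public Data API via PRAW,
# illustrating the real-time, structured content the deal covers.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="demo/0.1 by YOUR_USERNAME",
)

# Pull a few fresh, structured posts on a recent topic.
for post in reddit.subreddit("news").hot(limit=5):
    print(post.score, "|", post.title, "|", post.url)
```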

"We are thrilled to partner with Reddit to enhance ChatGPT with uniquely timely and relevant information, and to explore the possibilities to enrich the Reddit experience with AI-powered features.”

Brad Lightcap, OpenAI COO

“Reddit has become one of the internet’s largest open archives of authentic, relevant, and always up to date human conversations about anything and everything. Including it in ChatGPT upholds our belief in a connected internet, helps people find more of what they’re looking for, and helps new audiences find community on Reddit.”

Steve Huffman, Reddit Co-Founder and CEO

OpenAI Disclosure: Sam Altman is a shareholder in Reddit. This partnership was led by OpenAI’s COO and approved by its independent Board of Directors.
 

bnew


1/1
Newly published work from FAIR, Chameleon: Mixed-Modal Early-Fusion Foundation Models.

This research presents a family of early-fusion token-based mixed-modal models capable of understanding & generating images & text in any arbitrary sequence.

Paper [2405.09818] Chameleon: Mixed-Modal Early-Fusion Foundation Models






[Submitted on 16 May 2024]

Chameleon: Mixed-Modal Early-Fusion Foundation Models

Chameleon Team
We present Chameleon, a family of early-fusion token-based mixed-modal models capable of understanding and generating images and text in any arbitrary sequence. We outline a stable training approach from inception, an alignment recipe, and an architectural parameterization tailored for the early-fusion, token-based, mixed-modal setting. The models are evaluated on a comprehensive range of tasks, including visual question answering, image captioning, text generation, image generation, and long-form mixed modal generation. Chameleon demonstrates broad and general capabilities, including state-of-the-art performance in image captioning tasks, outperforms Llama-2 in text-only tasks while being competitive with models such as Mixtral 8x7B and Gemini-Pro, and performs non-trivial image generation, all in a single model. It also matches or exceeds the performance of much larger models, including Gemini Pro and GPT-4V, according to human judgments on a new long-form mixed-modal generation evaluation, where either the prompt or outputs contain mixed sequences of both images and text. Chameleon marks a significant step forward in a unified modeling of full multimodal documents.
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2405.09818 [cs.CL]
(or arXiv:2405.09818v1 [cs.CL] for this version)

Submission history

From: Armen Aghajanyan [view email]
[v1] Thu, 16 May 2024 05:23:41 UTC (26,721 KB)
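To make "early-fusion token-based" concrete: images are quantized into discrete tokens that share a single vocabulary and a single flat autoregressive sequence with text. A toy sketch follows, in which the vocabulary sizes, sentinel ids, and hash tokenizer are all invented for illustration rather than taken from the paper:

```python
# Toy illustration of an early-fusion, token-based, mixed-modal sequence.
# All ids and tokenizers here are made up, not Chameleon's actual scheme.
TEXT_VOCAB, IMAGE_VOCAB = 32_000, 8_192

def tokenize_text(s):
    return [hash(w) % TEXT_VOCAB for w in s.split()]

def tokenize_image(image_patches):
    # e.g. a VQ codebook id per patch, offset into the shared vocabulary
    return [TEXT_VOCAB + (p % IMAGE_VOCAB) for p in image_patches]

# A mixed-modal document: text, then an image, then more text,
# all in one flat sequence a single transformer models autoregressively.
sequence = (
    tokenize_text("here is my cat")
    + [999_998]                      # <begin_image> sentinel (made up)
    + tokenize_image([17, 203, 4091, 55])
    + [999_999]                      # <end_image> sentinel (made up)
    + tokenize_text("what breed is she ?")
)
print(len(sequence), "tokens in one interleaved stream:", sequence[:8])
```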

 

bnew

1/10
GPT-4o is so good at coding that I am having legit mind bending moments back to back.

The era of the billion dollar solo entrepreneur really has started.

2/10
Claude still wins on long context, I agree, but in terms of logic and coding capabilities, GPT-4o is in a league of its own.

3/10
Claude Opus is still better.

4o is like that really gifted student who doesn't do any work, and then with a little push, you see pure genius.

Opus is that dude who sits in front of the class every day and reminds the teacher about the homework.

4/10
I am not paid by anyone. My opinions are my own. People who are saying it's bad just don't know how to prompt it.

5/10
lmaoo me IRL

6/10
Whatever Ilya saw I am glad they kept it lmaoo

7/10
That’s the future

8/10
Cursor via APIs plus some other agentic frameworks I built myself

9/10
For coding tasks, yes; for other things like creative writing and prompt help, Claude is still better.

10/10
I won the @AnthropicAI developer contest without writing one single line of code using Opus.


 

bnew


OpenAI Reportedly Dissolves Its Existential AI Risk Team


A former lead scientist at OpenAI says he's struggled to secure resources to research existential AI risk, as the startup reportedly dissolves his team.

By Maxwell Zeff

Published an hour ago

Photo: Kent Nishimura (Getty Images)

OpenAI’s Superalignment team, charged with controlling the existential danger of a superhuman AI system, has reportedly been disbanded, according to Wired on Friday. The news comes just days after the team’s founders, Ilya Sutskever and Jan Leike, simultaneously quit the company.

Wired reports that OpenAI’s Superalignment team, first launched in July 2023 to prevent superhuman AI systems of the future from going rogue, is no more. The report states that the group’s work will be absorbed into OpenAI’s other research efforts. Research on the risks associated with more powerful AI models will now be led by OpenAI cofounder John Schulman, according to Wired. Sutskever and Leike were some of OpenAI’s top scientists focused on AI risks.

Leike posted a long thread on X Friday vaguely explaining why he left OpenAI. He says he’s been fighting with OpenAI leadership about core values for some time but reached a breaking point this week. Leike noted the Superalignment team has been “sailing against the wind,” struggling to get enough compute for crucial research. He thinks that OpenAI needs to be more focused on security, safety, and alignment.



“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” said the Superalignment team in an OpenAI blog post when it launched in July. “But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.”

It’s now unclear if the same attention will be put into those technical breakthroughs. Undoubtedly, there are other teams at OpenAI focused on safety. Schulman’s team, which is reportedly absorbing Superalignment’s responsibilities, is currently responsible for fine-tuning AI models after training. However, Superalignment focused specifically on the most severe outcomes of a rogue AI. As Gizmodo noted yesterday, several of OpenAI’s most outspoken AI safety advocates have resigned or been fired in the last few months.

Earlier this year, the group released a notable research paper about controlling large AI models with smaller AI models—considered a first step towards controlling superintelligent AI systems. It’s unclear who will take the next steps on these projects at OpenAI.
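The setup that paper describes is often called weak-to-strong generalization: a small "weak" supervisor labels data that trains a much larger "strong" student, and both are then scored against held-out ground truth. A minimal sketch on toy data, with scikit-learn models standing in for the real thing:

```python
# Rough sketch of the weak-to-strong idea behind that paper: supervise a
# larger "strong" model with labels produced by a smaller "weak" one,
# then compare both against held-out ground truth. Toy stand-ins only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_weak, y_weak = X[:500], y[:500]          # weak supervisor's training data
X_train, X_test, y_test = X[500:1500], X[1500:], y[1500:]

weak = LogisticRegression().fit(X_weak, y_weak)
weak_labels = weak.predict(X_train)        # imperfect labels, no ground truth

strong = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300,
                       random_state=0).fit(X_train, weak_labels)

print("weak supervisor accuracy:", weak.score(X_test, y_test))
print("strong student accuracy: ", strong.score(X_test, y_test))
```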

OpenAI did not immediately respond to Gizmodo’s request for comment.



Sam Altman’s AI startup kicked off this week by unveiling GPT-4 Omni, the company’s latest frontier model which featured ultra-low latency responses that sounded more human than ever. Many OpenAI staffers remarked on how its latest AI model was closer than ever to something from science fiction, specifically the movie Her.
 