
Meta is testing an AI-powered search bar in Instagram​

Ivan Mehta @indianidle / 1:44 PM EDT•April 12, 2024

Image Credits: Alexander Shatov / Unsplash

Meta is pushing ahead with its efforts to make its generative AI-powered products available to more users. Apart from testing its Meta AI chatbot with users in countries like India on WhatsApp, the company is also experimenting with putting Meta AI in the Instagram search bar for both AI chat and content discovery.

Typing a query into the search bar leads you to a DM conversation with Meta AI, where you can ask questions or use one of the pre-loaded prompts. The design of the prompt screen led Perplexity AI’s CEO, Aravind Srinivas, to point out that the interface looks similar to the startup’s search screen.

But beyond that, it could also help you discover new content on Instagram. For instance, a video posted on Threads indicates that you can tap a prompt like “Beautiful Maui sunset Reels” to search for Reels related to that topic.

Separately, a few users TechCrunch talked to were able to ask Meta AI to search for Reels suggestions.

Screenshot

This means that Meta plans to tap the power of generative AI beyond text generation, using it to surface new content on networks like Instagram.

Meta confirmed its Meta AI experiment on Instagram with TechCrunch. However, the company didn’t specify if it is using generative AI tech in search.

“Our generative AI-powered experiences are under development in varying phases, and we’re testing a range of them publicly in a limited capacity,” a Meta spokesperson told TechCrunch.

You can find a ton of posts complaining about the quality of Instagram search, so it would not be surprising if Meta wants to use generative AI to improve it.

Also, Meta would want Instagram to have better discoverability than TikTok. Last year, Google introduced a new perspectives feature to surface results from Reddit and TikTok. And earlier this week, reverse engineer Alessandro Paluzzi noted on X that Instagram is working on an option called “Visibility off Instagram,” possibly to show posts as part of search engine results.
 



Meta trials its AI chatbot across WhatsApp, Instagram and Messenger in India and Africa​

Manish Singh, Ingrid Lunden / 2:24 AM EDT•April 12, 2024


Mark Zuckerberg onstage at Meta Connect 2023

Image Credits: Brian Heater

Meta has confirmed to TechCrunch that it is testing Meta AI, its large language model-powered chatbot, with WhatsApp, Instagram and Messenger users in India and parts of Africa. The move signals how Meta plans to tap massive user bases across its various apps to scale its AI offerings.

The social media giant has been scrambling to roll out more AI services in the wake of big AI moves from OpenAI and other major tech companies.

Meta announced plans to build and experiment with chatbots and other AI tools in February 2023. India, where users have recently started noticing the appearance of the Meta AI chatbot, is a very important market for the company: It is home to more than 500 million Facebook and WhatsApp users, making it Meta’s largest single market.

Developing markets, where smartphone adoption is growing faster than in developed markets like the U.S. (where growth has plateaued), are also a big target for Meta as it tries out more services to engage audiences. Users in Africa are also reporting signs of Meta AI appearing in WhatsApp.



Meta confirmed the move in a statement. “Our generative AI-powered experiences are under development in varying phases, and we’re testing a range of them publicly in a limited capacity,” a Meta spokesperson told TechCrunch.

Meta unveiled Meta AI, its general-purpose assistant, in September 2023. The AI chatbot is designed to answer user queries directly within chats as well as offer them the ability to generate photorealistic images from text prompts. In the case of Instagram, there’s evidence it’s also being used for search queries.

Meta has been somewhat late to the game in building and rolling out AI tools to its users. In part, its teams assumed that generative AI tech was not quite ready for prime time. OpenAI clearly proved that wrong, putting Meta on the back foot.

“The availability of ChatGPT somehow captured the attention and enthusiasm of the public,” said Yann LeCun, the Turing Award winner who is Meta’s chief AI scientist, speaking earlier this week at an “AI Day” that the company organized at its offices in London. “What was surprising to people like me about ChatGPT was not the technology or the performance of the system. It was how much interest it gathered from the public. That surprised everyone. It surprised OpenAI, too.” Meta, he explained, thought that AI chatbots, based on its own efforts to launch them, “were not particularly welcome… in fact, some of them were trashed by people.” Now, he described the company, and the wider tech community, as “more open, and more comfortable with releasing models.”

And that’s what Meta is doing now. More pragmatically speaking, though, there are three reasons why Meta may be forging ahead with its AI strategy.

First, for user retention (users now expect to see and want to use AI tools in their apps; if Meta doesn’t offer them the worry is that those users will move away).

Second, for investor retention (investors want strong earnings, sure, but in tech they also want to see signs that Meta is committed to backing and building what many believe will be the next generation of computing).

Third, for its own pride (it’s been setting the pace for so much in areas like mobile apps, social media and advertising for the last decade, and it has outsized talent on its bench, including the celebrated AI academic Yann LeCun. Is it really going to jump the shark and miss all of this?!).

Instagram and WhatsApp’s massive global user bases, boasting billions of monthly active users, present Meta with a unique opportunity to scale its AI offerings. By integrating Meta AI into WhatsApp and Instagram, the Facebook parent can expose its advanced language model and image-generation capabilities to an enormous audience, potentially dwarfing the reach of its competitors — at least on paper.

The company separately confirmed earlier this week that it will be launching Llama 3, the next version of its open source large language model, within the next month.



The story was updated with more detail and to note that Meta is also testing Meta AI across Instagram and Messenger alongside WhatsApp.
 



Meta will auto-blur nudity in Instagram DMs in latest teen safety step​

Natasha Lomas @riptari / 9:28 AM EDT•April 11, 2024


Image Credits: KIRILL KUDRYAVTSEV / Contributor / Getty Images

Meta said on Thursday that it is testing new features on Instagram intended to help safeguard young people from unwanted nudity or sextortion scams. This includes a feature called “Nudity Protection in DMs,” which automatically blurs images detected as containing nudity.

The tech giant said it will also nudge teens to protect themselves by serving a warning encouraging them to think twice about sharing intimate images. Meta hopes this will boost protection against scammers who may send nude images to trick people into sending their own images in return.

The company said it is also implementing changes that will make it more difficult for potential scammers and criminals to find and interact with teens. Meta said it is developing new technology to identify accounts that are “potentially” involved in sextortion scams, and will apply limits on how these suspect accounts can interact with other users.

In another step announced on Thursday, Meta said it has increased the data it is sharing with the cross-platform online child safety program, Lantern, to include more “sextortion-specific signals.”

The social networking giant has had long-standing policies that ban people from sending unwanted nudes or seeking to coerce others into sharing intimate images. However, that doesn’t stop these problems from occurring and causing misery for scores of teens and young people — sometimes with extremely tragic results.

We’ve rounded up the latest crop of changes in more detail below.

Nudity screens​

Nudity Protection in DMs aims to protect teen users of Instagram from cyberflashing by putting nude images behind a safety screen. Users will be able to choose whether or not to view such images.

“We’ll also show them a message encouraging them not to feel pressure to respond, with an option to block the sender and report the chat,” said Meta.

The nudity safety screen will be turned on by default for users under 18 globally. Older users will see a notification encouraging them to turn the feature on.

“When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they’ve changed their mind,” the company added.

Anyone trying to forward a nude image will see the same warning encouraging them to reconsider.

The feature is powered by on-device machine learning, so Meta said it will work within end-to-end encrypted chats because the image analysis is carried out on the user’s own device.
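
To make that flow concrete, here is a minimal sketch of what an on-device “classify, then blur” step can look like. Everything in it is an assumption for illustration: the NudityClassifier class, the 0.8 threshold, and the returned fields are hypothetical stand-ins, not Meta’s actual implementation. The point is only that the image is inspected locally and decisions are made before anything is displayed.

Python:
# Hypothetical sketch of an on-device "blur before display" pipeline.
# None of these names are Meta's real APIs; they only illustrate the flow.
from dataclasses import dataclass

@dataclass
class InboundImage:
    pixels: bytes   # decrypted image data, already on the recipient's device
    sender_id: str

class NudityClassifier:
    """Stand-in for a small on-device ML model."""
    def score(self, pixels: bytes) -> float:
        # Would return a probability that the image contains nudity.
        raise NotImplementedError

def prepare_for_display(img: InboundImage, model: NudityClassifier,
                        threshold: float = 0.8,
                        protection_enabled: bool = True) -> dict:
    # Classification happens locally, which is why the feature can still
    # work inside end-to-end encrypted chats: the plaintext image is only
    # ever inspected on the user's own device.
    blur = protection_enabled and model.score(img.pixels) >= threshold
    return {
        "blurred": blur,
        "overlay": "Photo may contain nudity" if blur else None,
        "actions": ["view photo", "block sender", "report chat"] if blur else [],
    }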

The nudity filter has been in development for nearly two years.

Safety tips​

In another safeguarding measure, Instagram users who send or receive nudes will be directed to safety tips (with information about the potential risks involved), which, according to Meta, have been developed with guidance from experts.

“These tips include reminders that people may screenshot or forward images without your knowledge, that your relationship to the person may change in the future, and that you should review profiles carefully in case they’re not who they say they are,” the company wrote in a statement. “They also link to a range of resources, including Meta’s Safety Center, support helplines, StopNCII.org for those over 18, and Take It Down for those under 18.”

The company is also testing showing pop-up messages to people who may have interacted with an account that has been removed for sextortion. These pop-ups will also direct users to relevant resources.

“We’re also adding new child safety helplines from around the world into our in-app reporting flows. This means when teens report relevant issues — such as nudity, threats to share private images or sexual exploitation or solicitation — we’ll direct them to local child safety helplines where available,” the company said.

Tech to spot sextortionists​

While Meta says it removes sextortionists’ accounts when it becomes aware of them, it first needs to spot bad actors to shut them down. So, the company is trying to go further by “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior.”

“While these signals aren’t necessarily evidence that an account has broken our rules, we’re taking precautionary steps to help prevent these accounts from finding and interacting with teen accounts,” the company said. “This builds on the work we already do to prevent other potentially suspicious accounts from finding and interacting with teens.”

It’s not clear what technology Meta is using to do this analysis, nor which signals might denote a potential sextortionist (we’ve asked for more details). Presumably, the company may analyze patterns of communication to try to detect bad actors.

Accounts that get flagged by Meta as potential sextortionists will face restrictions on messaging or interacting with other users.

“[A]ny message requests potential sextortion accounts try to send will go straight to the recipient’s hidden requests folder, meaning they won’t be notified of the message and never have to see it,” the company wrote.

Users who are already chatting with potential scam or sextortion accounts will not have their chats shut down, but will be shown Safety Notices “encouraging them to report any threats to share their private images, and reminding them that they can say ‘no’ to anything that makes them feel uncomfortable,” according to the company.

Teen users are already protected from receiving DMs from adults they are not connected with on Instagram (and also from other teens, in some cases). But Meta is taking this a step further: The company said it is testing a feature that hides the “Message” button on teenagers’ profiles for potential sextortion accounts — even if they’re connected.

“We’re also testing hiding teens from these accounts in people’s follower, following and like lists, and making it harder for them to find teen accounts in Search results,” it added.

It’s worth noting the company is under increasing scrutiny in Europe over child safety risks on Instagram, and enforcers have questioned its approach since the bloc’s Digital Services Act (DSA) came into force last summer.

A long, slow creep towards safety​

Meta has announced measures to combat sextortion before — most recently in February, when it expanded access to Take It Down. The third-party tool lets people generate a hash of an intimate image locally on their own device and share it with the National Center for Missing and Exploited Children, helping to create a repository of non-consensual image hashes that companies can use to search for and remove revenge porn.
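
To illustrate the privacy property that makes this approach preferable, here is a rough sketch of hash-based reporting. It is deliberately simplified and not Take It Down’s actual code: real systems typically rely on perceptual hashes that survive resizing and re-encoding, whereas the SHA-256 used below only matches byte-identical files, and the function names are ours.

Python:
# Illustration of why hash-based reporting never exposes the image itself.
# Take It Down's real scheme may differ; SHA-256 is an assumption here, and
# a perceptual hash would be needed to catch re-encoded copies.
import hashlib

def fingerprint_image(path: str) -> str:
    """Hash the image locally; the file itself never leaves the device."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def submit_hash(image_hash: str, registry: set[str]) -> None:
    # Only the fingerprint is shared with the central repository.
    registry.add(image_hash)

def should_remove(upload_path: str, registry: set[str]) -> bool:
    # A platform can match uploads against the registry without ever
    # having received the original image.
    return fingerprint_image(upload_path) in registry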

The company’s previous approaches to tackle that problem had been criticized, as they required young people to upload their nudes. In the absence of hard laws regulating how social networks need to protect children, Meta was left to self-regulate for years — with patchy results.

However, some requirements have landed on platforms in recent years — such as the U.K.’s Children Code (which came into force in 2021) and the more recent DSA in the EU — and tech giants like Meta are finally having to pay more attention to protecting minors.

For example, in July 2021, Meta started defaulting young people’s Instagram accounts to private just ahead of the U.K. compliance deadline. Even tighter privacy settings for teens on Instagram and Facebook followed in November 2022.

This January, the company announced it would set stricter messaging settings for teens on Facebook and Instagram by default, shortly before the full compliance deadline for the DSA kicked in in February.

This slow and iterative feature creep at Meta concerning protective measures for young users raises questions about what took the company so long to apply stronger safeguards. It suggests Meta opted for a cynical minimum in safeguarding, in a bid to manage the impact on usage and prioritize engagement over safety. That is exactly what Meta whistleblower Frances Haugen repeatedly denounced her former employer for.

Asked why the company is not also rolling out these new protections to Facebook, a spokeswoman for Meta told TechCrunch, “We want to respond to where we see the biggest need and relevance — which, when it comes to unwanted nudity and educating teens on the risks of sharing sensitive images — we think is on Instagram DMs, so that’s where we’re focusing first.”
 


Google open sources tools to support AI model development​

Kyle Wiggers @kyle_l_wiggers / 8:00 AM EDT•April 9, 2024

Google Cloud logo on the Sphere

Image Credits: Frederic Lardinois/TechCrunch

In a typical year, Cloud Next — one of Google’s two major annual developer conferences, the other being I/O — almost exclusively features managed and otherwise closed source, gated-behind-locked-down-APIs products and services. But this year, whether to foster developer goodwill or advance its ecosystem ambitions (or both), Google debuted a number of open source tools primarily aimed at supporting generative AI projects and infrastructure.

The first, MaxDiffusion, which Google actually quietly released in February, is a collection of reference implementations of various diffusion models — models like the image generator Stable Diffusion — that run on XLA devices. “XLA” stands for Accelerated Linear Algebra, an admittedly awkward acronym referring to a technique that optimizes and speeds up specific types of AI workloads, including fine-tuning and serving.

Google’s own tensor processing units (TPUs) are XLA devices, as are recent Nvidia GPUs.
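
For a sense of what “running on XLA devices” means in practice, here is a minimal sketch using JAX, one framework (not the only one) that compiles through XLA; the function and shapes are arbitrary placeholders:

Python:
# JAX traces this function once and hands it to XLA, which can fuse the ops
# into an optimized kernel for whatever backend it runs on (CPU/GPU/TPU).
import jax
import jax.numpy as jnp

@jax.jit  # request XLA compilation
def dense_layer(x, w, b):
    return jax.nn.relu(x @ w + b)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 128))
w = jax.random.normal(key, (128, 64))
b = jnp.zeros(64)

y = dense_layer(x, w, b)  # first call compiles; later calls reuse the kernel
print(y.shape)  # (32, 64)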

Beyond MaxDiffusion, Google’s launching JetStream, a new engine to run generative AI models — specifically text-generating models (so not Stable Diffusion). Currently limited to TPUs, with GPU compatibility supposedly coming in the future, JetStream offers up to 3x higher “performance per dollar” for models like Google’s own Gemma 7B and Meta’s Llama 2, Google claims.

“As customers bring their AI workloads to production, there’s an increasing demand for a cost-efficient inference stack that delivers high performance,” Mark Lohmeyer, Google Cloud’s GM of compute and machine learning infrastructure, wrote in a blog post shared with TechCrunch. “JetStream helps with this need … and includes optimizations for popular open models such as Llama 2 and Gemma.”

Now, “3x” improvement is quite a claim to make, and it’s not exactly clear how Google arrived at that figure. Using which generation of TPU? Compared to which baseline engine? And how’s “performance” being defined here, anyway?

I’ve asked Google all these questions and will update this post if I hear back.
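
For what it’s worth, “performance per dollar” in inference marketing is usually throughput divided by hardware cost, which is exactly why the baseline matters. A generic illustration with made-up numbers (nothing below comes from Google’s materials):

Python:
# Generic "performance per dollar" arithmetic; all figures are invented.
def perf_per_dollar(tokens_per_second: float, dollars_per_hour: float) -> float:
    return (tokens_per_second * 3600) / dollars_per_hour  # tokens per dollar

baseline = perf_per_dollar(tokens_per_second=1000, dollars_per_hour=4.00)
candidate = perf_per_dollar(tokens_per_second=2400, dollars_per_hour=3.20)
print(f"{candidate / baseline:.1f}x")  # 3.0x with these inputs; the claim's
# plausibility hinges entirely on which baseline engine and TPU you pick.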

Second-to-last on the list of Google’s open source contributions are new additions to MaxText, Google’s collection of text-generating AI models targeting TPUs and Nvidia GPUs in the cloud. MaxText now includes Gemma 7B, OpenAI’s GPT-3 (the predecessor to GPT-4), Llama 2 and models from AI startup Mistral — all of which Google says can be customized and fine-tuned to developers’ needs.

“We’ve heavily optimized [the models’] performance on TPUs and also partnered closely with Nvidia to optimize performance on large GPU clusters,” Lohmeyer said. “These improvements maximize GPU and TPU utilization, leading to higher energy efficiency and cost optimization.”

Finally, Google’s collaborated with Hugging Face, the AI startup, to create Optimum TPU, which provides tooling to bring certain AI workloads to TPUs. The goal is to reduce the barrier to entry for getting generative AI models onto TPU hardware, according to Google — in particular text-generating models.

But at present, Optimum TPU is a bit bare-bones. The only model it works with is Gemma 7B. And Optimum TPU doesn’t yet support training generative models on TPUs — only running them.

Google’s promising improvements down the line.
 


UK’s antitrust enforcer sounds the alarm over Big Tech’s grip on GenAI​

Natasha Lomas @riptari / 1:21 PM EDT•April 11, 2024

Microsoft CEO Satya Nadella speaks during the OpenAI DevDay event on November 6, 2023, in San Francisco, California. Image Credits: Justin Sullivan / Getty Images

The U.K.’s competition watchdog, Competition and Markets Authority (CMA), has sounded a warning over Big Tech’s entrenching grip on the advanced AI market, with CEO Sarah Cardell expressing “real concerns” over how the sector is developing.

In an Update Paper on foundational AI models published Thursday, the CMA cautioned over increasing interconnection and concentration between developers in the cutting-edge tech sector responsible for the boom in generative AI tools.

The CMA’s paper points to the recurring presence of Google, Amazon, Microsoft, Meta and Apple (aka GAMMA) across the AI value chain: compute, data, model development, partnerships, release and distribution platforms. And while the regulator also emphasized that it recognizes that partnership arrangements “can play a pro-competitive role in the technology ecosystem,” it coupled that with a warning that “powerful partnerships and integrated firms” can pose risks to competition that run counter to open markets.

CMA GAMMA diagram

Image Credits: CMA’s Foundation Models Update Paper

“We are concerned that the FM [foundational model] sector is developing in ways that risk negative market outcomes,” the CMA wrote, referencing a type of AI that’s developed with large amounts of data and compute power and may be used to underpin a variety of applications.

“In particular, the growing presence across the FM value chain of a small number of incumbent technology firms, which already hold positions of market power in many of today’s most important digital markets, could profoundly shape FM-related markets to the detriment of fair, open and effective competition, ultimately harming businesses and consumers, for example by reducing choice and quality, and by raising prices,” it warned.

The CMA undertook an initial review of the top end of the AI market last May and went on to publish a set of principles for “responsible” generative AI development that it said would guide its oversight of the fast-moving market. However, Will Hayter, senior director of the CMA’s Digital Markets Unit, told TechCrunch last fall that the regulator was not in a rush to regulate advanced AI because it wanted to give the market a chance to develop.

Since then, the watchdog has stepped in to scrutinize the cozy relationship between OpenAI, the developer behind the viral AI chatbot ChatGPT, and Microsoft, a major investor in OpenAI. Its update paper remarks on the giddy pace of change in the market. For example, it flagged research by the U.K.’s internet regulator, Ofcom, in a report last year that found 31% of adults and 79% of 13- to 17-year-olds in the U.K. have used a generative AI tool, such as ChatGPT, Snapchat My AI or Bing Chat (aka Copilot). So there are signs the CMA is revising its initial chillaxed position on the GenAI market amid the commercial “whirlwind” sucking up compute, data and talent.

Its Update Paper identifies three “key interlinked risks to fair, effective, and open competition,” as it puts it, which the omnipresence of GAMMA speaks to: (1) Firms controlling “critical inputs” for developing foundational models (known as general-purpose AI models), which might allow them to restrict access and build a moat against competition; (2) tech giants’ ability to exploit dominant positions in consumer- or business-facing markets to distort choice for GenAI services and restrict competition in deployment of these tools; and (3) partnerships involving key players, which the CMA says “could exacerbate existing positions of market power through the value chain.”

CMA GAMMA GenAI dev relationships

Image Credits: CMA

In a speech delivered Thursday in Washington, D.C., at a legal event focused on generative AI, Cardell pointed to the “winner-take-all dynamics” seen in earlier web dev eras, when Big Tech built and entrenched their Web 2.0 empires while regulators sat on their heels. She said it’s important that competition enforcers don’t repeat the same mistakes with this next generation of digital development.

“The benefits we wish to see flowing from [advanced AI], for businesses and consumers, in terms of quality, choice and price, and the very best innovations, are much more likely in a world where those firms are themselves subject to fair, open and effective competition, rather than one where they are simply able to leverage foundation models to further entrench and extend their existing positions of power in digital markets,” she said, adding: “So we believe it is important to act now to ensure that a small number of firms with unprecedented market power don’t end up in a position to control not just how the most powerful models are designed and built, but also how they are embedded and used across all parts of our economy and our lives.”

How is the CMA going to intervene at the top end of the AI market? It does not have concrete measures to announce, as yet, but Cardell said it’s closely tracking GAMMA’s partnerships and stepping up its use of merger review to see whether any of these arrangements fall within existing merger rules.

That would unlock formal powers of investigation, and even the ability to block connections it deems anti-competitive. But for now the CMA has not gone that far, despite clear and growing concerns about cozy GAMMA GenAI ties. Its review of the links between OpenAI and Microsoft — for example, to determine whether the partnership constitutes a “relevant merger situation” — continues.

“Some of these arrangements are quite complex and opaque, meaning we may not have sufficient information to assess this risk without using our merger control powers to build that understanding,” Cardell also told the audience, explaining the challenges of trying to understand the power dynamics of the AI market without unlocking formal merger review powers. “It may be that some arrangements falling outside the merger rules are problematic, even if not ultimately remediable through merger control. They may even have been structured by the parties to seek to avoid the scope of merger rules. Equally some arrangements may not give rise to competition concerns.”

“By stepping up our merger review, we hope to gain more clarity over which types of partnerships and arrangements may fall within the merger rules, and under what circumstances competition concerns may arise — and that clarity will also benefit the businesses themselves,” she added.

The CMA’s Update Paper sets out some “indicative factors” that Cardell said may trigger greater concern about, and attention to, FM partnerships, such as the partners’ upstream power over AI inputs and their downstream power over distribution channels. She also said the watchdog will look closely at the nature of each partnership and the level of “influence and alignment of incentives” between partners.

Meanwhile, the U.K. regulator is urging AI giants to follow the seven development principles it set out last fall to steer market developments onto responsible rails where competition and consumer protection are baked in. (The short version of what it wants to see is: accountability, access, diversity, choice, flexibility, fair dealing, and transparency.)

“We’re committed to applying the principles we have developed and to using all legal powers at our disposal — now and in the future — to ensure that this transformational and structurally critical technology delivers on its promise,” Cardell said in a statement.
 







1/3
Amazing. Perfect.

2/3
This thing is such a pushover

3/3
It’s unclear what “knowledge cutoff” is supposed to even mean







1/3
can confirm, GPT-4 April Update is more fun, or maybe I just got lucky

doesn't throw around any disclaimers

2/3
ah, didn't mean to QT ...

3/3
don't we all


 


Is Google's AI Actually Discovering 'Millions of New Materials?'​

APR 11, 2024 AT 3:13 PM

"In the DeepMind paper there are many examples of predicted materials that are clearly nonsensical."

PHOTO BY JJ YING ON UNSPLASH


In November, Google’s AI outfit DeepMind published a press release titled “Millions of new materials discovered with deep learning." But now, researchers who have analyzed a subset of what DeepMind discovered say "we have yet to find any strikingly novel compounds" in that subset.

“AI tool GNoME finds 2.2 million new crystals, including 380,000 stable materials that could power future technologies,” Google wrote of the finding, adding that this was “equivalent to nearly 800 years’ worth of knowledge,” that many of the discoveries “escaped previous human chemical intuition,” and that it was “an order-of-magnitude expansion in stable materials known to humanity.” The paper was published in Nature and was picked up very widely in the press as an example of the incredible promise of AI in science.

Another paper, published at the same time and done by researchers at Lawrence Berkeley National Laboratory “in partnership with Google DeepMind … shows how our AI predictions can be leveraged for autonomous material synthesis,” Google wrote. In this experiment, researchers created an “autonomous laboratory” (A-Lab) that used “computations, historical data from the literature, machine learning, and active learning to plan and interpret the outcomes of experiments performed using robotics.” Essentially, the researchers used AI and robots to remove humans from the laboratory, and came out the other end after 17 days having discovered and synthesized new materials, which the researchers wrote “demonstrates the effectiveness of artificial intelligence-driven platforms for autonomous materials discovery.”


A diagram from the Chemistry of Materials paper that explains what should constitute a new “material.”

But in the last month, two external groups of researchers analyzed the DeepMind and Berkeley papers and published their own analyses, which at the very least suggest this specific research is being oversold. Everyone in the materials science world that I spoke to stressed that AI holds great promise for discovering new types of materials. But they say Google and its deep learning techniques have not suddenly made an incredible breakthrough in the materials science world.

In a perspective paper published in Chemistry of Materials this week, Anthony Cheetham and Ram Seshadri of the University of California, Santa Barbara selected a random sample of the 380,000 proposed structures released by DeepMind and say that none of them meet a three-part test of whether the proposed material is “credible,” “useful,” and “novel.” They believe that what DeepMind found are “crystalline inorganic compounds and should be described as such, rather than using the more generic label ‘material,’” which they say is a term that should be reserved for things that “demonstrate some utility.”

In the analysis, they write “we have yet to find any strikingly novel compounds in the GNoME and Stable Structure listings, although we anticipate that there must be some among the 384,870 compositions. We also note that, while many of the new compositions are trivial adaptations of known materials, the computational approach delivers credible overall compositions, which gives us confidence that the underlying approach is sound.”

"most of them might be credible, but they’re not very novel because they’re simple derivatives of things that are already known"

In a phone interview, Cheetham told me “the Google paper falls way short in terms of it being a useful, practical contribution to the experimental materials scientists.” Seshadri said “we actually think that Google has missed the mark here.”

“If I was looking for a new material to do a particular function, I wouldn’t comb through more than 2 million new compositions as proposed by Google,” Cheetham said. “I don’t think that’s the best way of going forward. I think the general methodology probably works quite well, but it needs to be a lot more focused around specific needs, so none of us have enough time in our lives to go through 2.2 million possibilities and decide how useful that might be. We spent quite a lot of time on this going through a very small subset of the things that they propose and we realize not only was there no functionality, but most of them might be credible, but they’re not very novel because they’re simple derivatives of things that are already known.”

Google DeepMind told me in a statement, “We stand by all claims made in Google DeepMind’s GNoME paper.”

“Our GNoME research represents orders of magnitude more candidate materials than were previously known to science, and hundreds of the materials we’ve predicted have already been independently synthesized by scientists around the world,” it added. The Materials Project, an open-access material property database, has found Google’s GNoME database to be top-of-the-line when compared to other machine learning models, and Google said that some of the things criticized in the Chemistry of Materials analysis, such as the fact that many of the new materials use already-known structures with different elements, were deliberate design choices by DeepMind.

The Berkeley paper, meanwhile, claimed that an “autonomous laboratory” (called “A-Lab”) took structures proposed by another project called the Materials Project and used a robot to synthesize them with no human intervention and created 43 “novel compounds.” There is one DeepMind researcher on this paper, and Google promoted it in its press release, but Google did not actively do the experiment.

Human researchers analyzing this finding found that it, too, has issues: “We discuss all 43 synthetic products and point out four common shortfalls in the analysis. These errors unfortunately lead to the conclusion that no new materials have been discovered in that work,” the authors, who include Leslie Schoop of Princeton University and Robert Palgrave of University College London, wrote in their analysis.

Again, each of the four researchers I spoke to said that they believe an AI-guided process for finding new materials shows promise, but that the specific papers they analyzed were not necessarily huge breakthroughs and should not be contextualized as such.

“In the DeepMind paper there are many examples of predicted materials that are clearly nonsensical. Not only to subject experts, but most high school students could say that compounds like H2O11 (which is a Deepmind prediction) do not look right,” Palgrave told me. “There are many many other examples of clearly wrong compounds and Cheetham/Seshadri do a great job of breaking this down more diplomatically than I am doing here. To me it seems that basic quality control has not happened—for the ML to be outputting such compounds as predictions is alarming and to me shows something has gone wrong.”
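
Palgrave’s H2O11 example is easy to verify with a back-of-the-envelope charge balance: assigning hydrogen its usual +1 and oxygen its usual -2 oxidation state (setting aside peroxide-type bonding), a neutral compound’s charges should sum to zero. The quick check below is ours, for illustration, not the researchers’:

Python:
# Naive charge-balance check with common oxidation states. Real screening
# is far more involved (peroxides, mixed valence, etc.); this only shows
# the kind of first-pass sanity test being discussed.
COMMON_OXIDATION_STATE = {"H": +1, "O": -2}

def charge_sum(formula: dict[str, int]) -> int:
    return sum(COMMON_OXIDATION_STATE[el] * n for el, n in formula.items())

print(charge_sum({"H": 2, "O": 1}))   # H2O   ->   0, balanced
print(charge_sum({"H": 2, "O": 11}))  # H2O11 -> -20, wildly unbalanced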

AI has been used to flood the internet with lots of content that cannot be easily parsed by humans, which makes discovering human-generated, high quality work a challenge. It's an imperfect analogy, but the researchers I spoke to said something similar could happen in materials science as well: Giant databases of potential structures doesn't necessarily make it easier to create something that is going to have a positive impact on society.

“There is some benefit to knowing millions of materials (if accurate) but how do you navigate this space looking for useful materials to make?,” Palgrave said. “Better to have an idea of a few new compounds with exceptionally useful properties than a million where you have no idea which are good.”

Schoop said that there were already “50k unique crystalline inorganic compounds, but we only know the properties of a fraction of these. So it is not very clear to me why we need millions of more compounds if we haven’t yet understood all the ones we do know. It might be much more useful to predict properties of materials than just plainly new materials.”

Again, Google DeepMind says it stands by its paper and takes issues with these characterizations, but it is fair to say that there is now a lot of debate about how AI and machine learning can be used to discover new materials, how these discoveries should be contextualized, tested, and acted upon, and how and whether dumping gigantic databases of proposed structures on the world is actually going to lead to new, tangible breakthroughs for society, or whether it will simply create a lot of noise.

“We don’t think that there’s a problem with AI fundamentally,” Seshadri said. “We think it’s a problem of how you use it. We’re not like, old fashioned people who think these techniques have no place in our science.”
 



1/3


2/3
we're hiring designers, engineers, product, data, infra, and ai tutors - join us!

Careers

3/3

Grok-1.5 Vision Preview








April 12, 2024

Grok-1.5 Vision Preview​

Connecting the digital and physical worlds with our first multimodal model.

Introducing Grok-1.5V, our first-generation multimodal model. In addition to its strong text capabilities, Grok can now process a wide variety of visual information, including documents, diagrams, charts, screenshots, and photographs. Grok-1.5V will be available soon to our early testers and existing Grok users.

Capabilities​

Grok-1.5V is competitive with existing frontier multimodal models in a number of domains, ranging from multi-disciplinary reasoning to understanding documents, science diagrams, charts, screenshots, and photographs. We are particularly excited about Grok’s capabilities in understanding our physical world. Grok outperforms its peers in our new RealWorldQA benchmark that measures real-world spatial understanding. For all datasets below, we evaluate Grok in a zero-shot setting without chain-of-thought prompting.


Benchmark | Grok-1.5V | GPT-4V | Claude 3 Sonnet | Claude 3 Opus | Gemini Pro 1.5
MMMU (multi-discipline) | 53.6% | 56.8% | 53.1% | 59.4% | 58.5%
MathVista (math) | 52.8% | 49.9% | 47.9% | 50.5% | 52.1%
AI2D (diagrams) | 88.3% | 78.2% | 88.7% | 88.1% | 80.3%
TextVQA (text reading) | 78.1% | 78.0% | - | - | 73.5%
ChartQA (charts) | 76.1% | 78.5% | 81.1% | 80.8% | 81.3%
DocVQA (documents) | 85.6% | 88.4% | 89.5% | 89.3% | 86.5%
RealWorldQA (real-world understanding) | 68.7% | 61.4% | 51.9% | 49.8% | 67.5%

Writing code from a diagram​


A diagram describing a guessing game with a flowchart of the game logic and user interactions


User​

Can you translate this into Python code?

Grok​

Certainly! The flowchart you’ve provided describes a simple guessing game where the computer generates a random number, and the user has to guess it. Here’s the Python code that represents the logic in the flowchart:
Python:
import random

def guess_number():
    # Generate a random number between 1 and 10
    target = random.randint(1, 10)

    while True:
        guess = int(input("Guess a number between 1 and 10: "))

        if guess == target:
            print("You won!")
            break
        else:
            print("Wrong guess, try again!")

if __name__ == "__main__":
    guess_number()

This code will run the guessing game as described in the flowchart. The user will be prompted to enter their guess until they guess the correct number.

Calculating calories​


A close-up of the side panel of a food product box, listing various nutritional details such as the serving size and the amount of calories per serving.


User​

How many calories are there in 5 slices of this?

Grok​

If one serving size is 3 slices and contains 60 calories, then 5 slices would contain approximately 100 calories. This is calculated by first determining the calories per slice: 60 calories / 3 slices = 20 calories per slice. Then, multiply the calories per slice by the number of slices: 20 calories per slice * 5 slices = 100 calories.



From a drawing to a bedtime story​

A drawing from a child, showing a boy next to a boat.

User​

My son drew this, can you tell a short bedtime story based on his drawing?

Grok​

Once upon a time, in a land of adventure and wonder, there was a brave little boy named Timmy. Timmy lived in a cozy little house with his family, and he loved nothing more than exploring the world around him.

One sunny day, Timmy decided to go on an adventure in the nearby forest. He packed his favorite snacks, a compass, and his trusty notebook to draw all the exciting things he would see.

As Timmy ventured deeper into the forest, he discovered a magical river that sparkled in the sunlight. The river was so enchanting that Timmy decided to build a small paper boat to send down the river as a special message to the forest spirits.

Timmy carefully folded his paper boat and drew a smiley face on it to bring joy to the forest spirits. He then placed the boat on the river and watched as it floated away, carrying his message of friendship and love for nature.

As the sun began to set, Timmy knew it was time to head back home. He felt grateful for the magical day he had spent in the forest, and he knew that he would always carry the memories of his adventure in his heart.

Back at home, Timmy shared his story with his family, who were delighted to hear about his magical day in the forest. They all agreed that Timmy’s brave and kind spirit was a true inspiration, and they knew that he would continue to have many more wonderful adventures in the future.

And so, with a heart full of love and a mind full of dreams, Timmy drifted off to sleep, eager to discover what new adventures awaited him in the world of his imagination.


Explaining a meme​

A meme satirizing the differences between startups and big companies. The meme features two panels. On the left, a group of construction workers actively digging, under the text 'Startups'. On the right, a group of people watching a single man dig, under the text 'Big Companies'.

User​

I don't get it, please explain

Grok​

The image is a humorous comparison between startups and big companies. On the left side, labeled "Startups," there is a group of people working together to dig a hole. They are all actively participating, and the hole is already quite deep.

On the right side, labeled "Big companies," there is a group of people standing around a hole. However, unlike the startup side, only one person is actually digging the hole, while the others are watching or engaged in other activities.

The humor in this image comes from the exaggeration of the differences between startups and big companies. In startups, there is often a sense of urgency and everyone is expected to contribute directly to the task at hand. In contrast, big companies are often depicted as having more bureaucracy and less direct involvement from all team members in every project.


Converting a table to csv​

A table tracking Olympic medal winners from Morocco at the 2016 Summer Paralympics

User​

Please convert this table to csv.

Grok​

The table in the image can be translated to CSV format as follows:
Code:
medal,name,sport,event,date
1 gold,El Amin Chentouf,Athletics,Men's marathon - T13,11 September
1 gold,Mohamed Amguoun,Athletics,Men's 400m - T13,15 September
1 gold,Azeddine Nouiri,Athletics,Men's shot put - T34,15 September
2 silver,Mahdi Afri,Athletics,Men's 400m - T12,9 September
2 silver,El Amin Chentouf,Athletics,Men's 5000m - T13,15 September
3 bronze,Mahdi Afri,Athletics,Men's 200m - T12,17 September
3 bronze,Mohamed Lahna,Paratriathlon,Men's PT2,10 September

Please note that the CSV format does not support styling or formatting, so the information is presented in a plain text format.

Help with rotten wood on a deck​

A picture of a deck with wood that has started to rot.

User​

These holes are appearing around the screws in my deck. Is it rotten? Should I get my boards replaced?

Grok​

It appears that the wood in your deck has started to rot. This is indicated by the presence of holes around the screws, which is a common sign of wood decay.

To ensure the safety and longevity of your deck, it is recommended to have the affected boards replaced. This will help to prevent further damage to the deck and maintain its structural integrity.

If you are unsure about how to proceed with the replacement of the boards, it may be helpful to consult with a professional deck builder or contractor. They will be able to assess the extent of the damage and provide you with expert advice on the best course of action for your deck.

Solving a coding problem​

A screenshot from the website Leetcode showing a problem statement for the 3Sum problem.

User​

Can you write Python code that solves this?

Grok​

Python:
def find_triplets(nums):
    """Return all unique triplets in nums that sum to zero (the 3Sum problem)."""
    result = []
    nums.sort()

    for i in range(len(nums) - 2):
        # Skip duplicate values for the first element to avoid repeated triplets.
        if i > 0 and nums[i] == nums[i - 1]:
            continue

        # Look for two numbers to the right of i that sum to -nums[i].
        target = -nums[i]
        left = i + 1
        right = len(nums) - 1

        while left < right:
            if nums[left] + nums[right] == target:
                result.append([-target, nums[left], nums[right]])
                left += 1
                right -= 1
                # Skip duplicates on both ends to keep triplets unique.
                while left < right and nums[left] == nums[left - 1]:
                    left += 1
                while left < right and nums[right] == nums[right + 1]:
                    right -= 1
            elif nums[left] + nums[right] < target:
                left += 1
            else:
                right -= 1

    return result

Real-World Understanding​

In order to develop useful real-world AI assistants, it is crucial to advance a model's understanding of the physical world. Towards this goal, we are introducing a new benchmark, RealWorldQA. This benchmark is designed to evaluate basic real-world spatial understanding capabilities of multimodal models. While many of the examples in the current benchmark are relatively easy for humans, they often pose a challenge for frontier models.

A picture of various household objects on a table.

Which object is larger, the pizza cutter or the scissors? A. The pizza cutter is larger. B. The scissors are larger. C. They are about the same size.

A picture of a dark street scene with a sign that says left lane must turn left

Where can we go from the current lane? A. Turn left. B. Go straight. C. Turn left and go straight. D. Turn right.

A picture of a street with several cars parked on the left

Given this front camera view from our sedan, do we have enough space to drive around the gray car in front of us? A. Yes. B. No.

A picture of a toy next to a cell phone showing a compass.

Given the picture, in which cardinal direction is the dinosaur facing? A. North. B. South. C. East. D. West.

The initial release of RealWorldQA consists of over 700 images, each with a question and an easily verifiable answer. The dataset consists of anonymized images taken from vehicles, in addition to other real-world images. We are excited to release RealWorldQA to the community, and we intend to expand it as our multimodal models improve. RealWorldQA is released under CC BY-ND 4.0. Click here (677MB) to download the dataset.


Into the future​

Advancing both our multimodal understanding and generation capabilities is an important step in building beneficial AGI that can understand the universe. In the coming months, we anticipate making significant improvements in both capabilities, across various modalities such as images, audio, and video.
 

Light-based chip: China’s Taichi could power artificial general intelligence​


Taichi could potentially make artificial general intelligence a reality.​



Amal Jos Chacko

Published: Apr 13, 2024 09:41 AM EST

SCIENCE


A representative image of a futuristic computing chip. prawest nujeen / iStock


Researchers at Tsinghua University in China have developed a revolutionary new artificial intelligence (AI) chip that uses light instead of electricity to process data.

Dubbed “Taichi,” the chip is reportedly over 1,000 times more energy-efficient than Nvidia’s high-performance H100 GPU. Taichi is especially relevant given US trade policies that restrict exports of advanced chips to China.

Led by professors Dai Qionghai and Fang Lu, the Tsinghua team explained how their photonic integrated circuit (PIC) chip outperforms traditional electronic chips in tasks like image recognition training and content generation.

“Taichi paves the way for large-scale photonic computing and advanced tasks, further exploiting the flexibility and potential of photonics for modern artificial general intelligence (AGI),” the researchers stated in their paper.

Unlike conventional approaches that stack PICs, the Taichi team organized its chiplets into independent clusters, creating a shallow but broad architecture.

This distribution of resources allows Taichi to achieve impressive performance, including on-chip classification across more than 1,000 categories with 91.89% accuracy on the Omniglot dataset.

The Omniglot dataset is widely used to benchmark machine learning models. It contains a diverse collection of handwritten characters from over 50 different writing systems.

“It was not an exclusive algorithm only for Taichi. The computation and task distributing could also help existing PICs to extend their computing capacity for more advanced tasks,” the team explained.

Towards artificial general intelligence​

According to the researchers, Taichi could potentially make artificial general intelligence (AGI) a reality. AGI is a form of AI with human-level cognitive abilities that could be applied across a wide range of disciplines. In comparison, despite their capability, current systems remain narrow in focus.

“We anticipate that Taichi will accelerate the development of more powerful optical solutions as critical support for the foundation model and a new era of AGI,” the researchers stated.

Taichi’s modular design could be a key advantage in scaling up computing power to the levels needed for AGI. The Tsinghua team demonstrated a distributed Taichi network with 13.96 million artificial neurons, exceeding the 1.47 million neurons in the next largest competing photonic chip design.

Taichi achieved an energy efficiency of 160.82 trillion operations per watt (TOPS/W), dwarfing the previous 2.9 TOPS/W achieved by another team in 2022, LiveScience reports. This dramatic improvement in energy efficiency is crucial for the sustainability of AI computing, which has become an increasingly power-hungry industry.
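
Putting those two reported figures side by side (our arithmetic, using only the numbers quoted above):

Python:
# Ratio of the reported energy-efficiency figures.
taichi_tops_per_watt = 160.82  # Taichi (2024)
prior_tops_per_watt = 2.9      # previous photonic chip (2022)
print(f"~{taichi_tops_per_watt / prior_tops_per_watt:.0f}x improvement")  # ~55x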

A promising path ahead​

Taichi hints at a future where artificial intelligence is powered by light-based computing. With traditional electronic chips approaching their limit in terms of speed and efficiency, there is a need for viable alternatives.

“The present work is a promising step toward real-world photonic computing, supporting various applications in AI,” Yury Suleymanov, a Science journal associate editor, told South China Morning Post.

The team’s research was published in the journal Science.

Study abstract​

The pursuit of artificial general intelligence (AGI) continuously demands higher computing performance. Despite the superior processing speed and efficiency of integrated photonic circuits, their capacity and scalability are restricted by unavoidable errors, such that only simple tasks and shallow models are realized. To support modern AGIs, we designed Taichi—large-scale photonic chiplets based on an integrated diffractive-interference hybrid design and a general distributed computing architecture that has millions-of-neurons capability with 160–tera-operations per second per watt (TOPS/W) energy efficiency.

Taichi experimentally achieved on-chip 1000-category–level classification (testing at 91.89% accuracy in the 1623-category Omniglot dataset) and high-fidelity artificial intelligence–generated content with up to two orders of magnitude of improvement in efficiency. Taichi paves the way for large-scale photonic computing and advanced tasks, further exploiting the flexibility and potential of photonics for modern AGI.
 