Millions of new materials discovered with deep learning

bnew


Millions of new materials discovered with deep learning​

Published: 29 November 2023
Authors: Amil Merchant and Ekin Dogus Cubuk


AI tool GNoME finds 2.2 million new crystals, including 380,000 stable materials that could power future technologies

Modern technologies, from computer chips and batteries to solar panels, rely on inorganic crystals. To enable new technologies, crystals must be stable, or they can decompose; behind each new, stable crystal can be months of painstaking experimentation.

Today, in a paper published in Nature, we share the discovery of 2.2 million new crystals – equivalent to nearly 800 years’ worth of knowledge. We introduce Graph Networks for Materials Exploration (GNoME), our new deep learning tool that dramatically increases the speed and efficiency of discovery by predicting the stability of new materials.

With GNoME, we’ve multiplied the number of technologically viable materials known to humanity. Of its 2.2 million predictions, 380,000 are the most stable, making them promising candidates for experimental synthesis. Among these candidates are materials with the potential to enable transformative future technologies, from superconductors for powering supercomputers to next-generation batteries that could boost the efficiency of electric vehicles.

GNoME shows the potential of using AI to discover and develop new materials at scale. External researchers in labs around the world have independently created 736 of these new structures experimentally in concurrent work. In partnership with Google DeepMind, a team of researchers at the Lawrence Berkeley National Laboratory has also published a second paper in Nature that shows how our AI predictions can be leveraged for autonomous material synthesis.

We’ve made GNoME’s predictions available to the research community. We will be contributing 380,000 materials that we predict to be stable to the Materials Project, which is now processing the compounds and adding them into its online database. We hope these resources will drive forward research into inorganic crystals, and unlock the promise of machine learning tools as guides for experimentation.


Accelerating materials discovery with AI​


About 20,000 of the crystals experimentally identified in the ICSD database are computationally stable. Computational approaches drawing from the Materials Project, Open Quantum Materials Database and WBM database boosted this number to 48,000 stable crystals. GNoME expands the number of stable materials known to humanity to 421,000.

In the past, scientists searched for novel crystal structures by tweaking known crystals or experimenting with new combinations of elements: an expensive, trial-and-error process that could take months to deliver even limited results. Over the last decade, computational approaches led by the Materials Project and other groups have helped discover 28,000 new materials. But until now, new AI-guided approaches hit a fundamental limit in their ability to accurately predict materials that could be experimentally viable. GNoME’s discovery of 2.2 million materials would be equivalent to about 800 years’ worth of knowledge and demonstrates an unprecedented scale and level of accuracy in predictions.

For example, we found 52,000 new layered compounds similar to graphene that could help revolutionize electronics through the development of superconductors. Previously, about 1,000 such materials had been identified. We also found 528 potential lithium ion conductors, 25 times more than a previous study, which could be used to improve the performance of rechargeable batteries.

We are releasing the predicted structures for 380,000 materials that have the highest chance of successfully being made in the lab and being used in viable applications. For a material to be considered stable, it must not decompose into similar compositions with lower energy. For example, carbon in a graphene-like structure is stable compared to carbon in diamonds. Mathematically, these materials lie on the convex hull. This project discovered 2.2 million new crystals that are stable by current scientific standards and lie below the convex hull of previous discoveries. Of these, 380,000 are considered the most stable, and lie on the “final” convex hull – the new standard we have set for materials stability.
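To make the convex-hull test concrete, here is a minimal sketch for a hypothetical binary A-B system: known phases become (composition, formation energy per atom) points, the lower convex hull serves as the stability reference, and a candidate is scored by its energy relative to that hull. Every composition and energy value below is invented purely for illustration; GNoME's actual hulls span combinations across the whole periodic table.

```python
# Minimal sketch of the convex-hull stability test for a made-up binary A-B system.
# All compositions and formation energies here are illustrative, not real data.
import numpy as np
from scipy.spatial import ConvexHull

# (fraction of B, formation energy per atom in eV) for known phases, plus the pure endpoints.
known_phases = np.array([
    [0.00,  0.00],   # pure A (reference)
    [0.25, -0.40],   # hypothetical A3B
    [0.50, -0.65],   # hypothetical AB
    [0.75, -0.30],   # hypothetical AB3 (sits above the hull: metastable)
    [1.00,  0.00],   # pure B (reference)
])

hull = ConvexHull(known_phases)

def hull_energy(x, points, hull):
    """Interpolate the *lower* convex hull at composition x."""
    values = []
    for simplex in hull.simplices:                 # each simplex is one hull edge in 2-D
        (x1, e1), (x2, e2) = points[simplex]
        lo, hi = min(x1, x2), max(x1, x2)
        if lo <= x <= hi and not np.isclose(lo, hi):
            values.append(e1 + (e2 - e1) * (x - x1) / (x2 - x1))
    return min(values)                             # lowest interpolated value = lower hull

# A candidate is stable if its energy does not sit above the lower hull.
candidate_x, candidate_e = 0.40, -0.60             # hypothetical new phase near A3B2
e_above_hull = candidate_e - hull_energy(candidate_x, known_phases, hull)
print(f"energy above hull: {e_above_hull:+.3f} eV/atom")
print("stable (would extend the hull)" if e_above_hull <= 0 else "metastable or unstable")
```

A candidate that lands below the current hull, as in this toy example, would itself become a new vertex of the updated hull, which is the sense in which the 380,000 most stable predictions lie on the "final" convex hull described above.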


GNoME: Harnessing graph networks for materials exploration​


GNoME uses two pipelines to discover low-energy (stable) materials. The structural pipeline creates candidates with structures similar to known crystals, while the compositional pipeline follows a more randomized approach based on chemical formulas. The outputs of both pipelines are evaluated using established Density Functional Theory calculations and those results are added to the GNoME database, informing the next round of active learning.
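As a rough illustration of what the structural pipeline does in spirit, the toy sketch below generates candidate formulas by swapping chemically similar elements into known prototypes. The prototype formulas and substitution groups are invented for illustration; GNoME's actual pipeline works on full crystal structures with far more sophisticated substitution rules.

```python
# Toy sketch of "structural pipeline"-style candidate generation: take known
# prototypes and swap in chemically similar elements. The prototypes and
# substitution groups below are illustrative only, not GNoME's actual rules.
import re
from itertools import product

known_prototypes = ["LiCoO2", "NaFeO2"]          # known layered-oxide formulas
substitutions = {
    "Li": ["Li", "Na", "K"],                     # alkali metals
    "Na": ["Li", "Na", "K"],
    "Co": ["Co", "Ni", "Mn", "Fe"],              # 3d transition metals
    "Fe": ["Co", "Ni", "Mn", "Fe"],
}

def generate_candidates(formula):
    """Yield new formulas by substituting each swappable element symbol."""
    tokens = re.findall(r"[A-Z][a-z]?|\d+", formula)   # crude element/number tokenizer
    options = [substitutions.get(t, [t]) for t in tokens]
    for combo in product(*options):
        yield "".join(combo)

candidates = set()
for proto in known_prototypes:
    candidates.update(generate_candidates(proto))
candidates -= set(known_prototypes)              # keep only compositions we didn't start with
print(sorted(candidates))                        # e.g. KCoO2, LiMnO2, NaNiO2, ...
```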

GNoME is a state-of-the-art graph neural network (GNN) model. The input data for GNNs take the form of a graph that can be likened to connections between atoms, which makes GNNs particularly suited to discovering new crystalline materials.
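To make the "graph of atoms" idea concrete, the sketch below builds such a graph for a toy cluster: each atom becomes a node carrying its element identity, and any pair of atoms closer than a distance cutoff becomes an edge carrying the interatomic distance. The cluster, cutoff and features are assumptions chosen only for illustration; real crystal graphs also handle periodic boundary conditions, which this does not.

```python
# Minimal sketch: turning a toy, non-periodic cluster of atoms into a graph for a GNN.
import numpy as np

# Hypothetical rock-salt-like fragment: element symbol and Cartesian coordinates in angstroms.
atoms = [("Na", (0.0, 0.0, 0.0)), ("Cl", (2.8, 0.0, 0.0)),
         ("Na", (2.8, 2.8, 0.0)), ("Cl", (0.0, 2.8, 0.0))]

cutoff = 3.2  # angstroms: atoms closer than this are connected by an edge

positions = np.array([pos for _, pos in atoms])
nodes = [symbol for symbol, _ in atoms]          # node features: element identity
edges = []                                       # edge features: interatomic distance
for i in range(len(atoms)):
    for j in range(i + 1, len(atoms)):
        d = np.linalg.norm(positions[i] - positions[j])
        if d < cutoff:
            edges.append((i, j, round(float(d), 3)))

print("nodes:", nodes)
print("edges (i, j, distance):", edges)
# A GNN would embed these node and edge features and pass messages along the edges
# to predict a per-crystal property such as formation energy.
```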

GNoME was originally trained with data on crystal structures and their stability, openly available through the Materials Project. We used GNoME to generate novel candidate crystals, and also to predict their stability. To assess our model’s predictive power during progressive training cycles, we repeatedly checked its performance using Density Functional Theory (DFT), an established computational technique used in physics, chemistry and materials science to understand the structures of atoms and to assess the stability of crystals.

We used a training process called ‘active learning’ that dramatically boosted GNoME’s performance. GNoME would generate predictions for the structures of novel, stable crystals, which were then tested using DFT. The resulting high-quality training data was then fed back into our model training.
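In schematic form, that active-learning loop might look like the toy sketch below. The "GNN", "candidate generator" and "DFT" are deliberately simple stand-ins (a nearest-neighbour regressor, random numbers, and an analytic function) so the example runs on its own; only the loop structure follows the description above, not GNoME's actual components.

```python
# Toy, runnable sketch of an active-learning loop of the kind described above.
import random
random.seed(0)

def true_energy(x):                  # stand-in for DFT: an expensive ground-truth oracle
    return (x - 0.3) ** 2

def fit_model(data):                 # stand-in for the GNN: nearest-neighbour regressor
    def predict(x):
        return min(data, key=lambda pair: abs(pair[0] - x))[1]
    return predict

def generate_candidates(n=200):      # stand-in for the candidate pipelines
    return [random.random() for _ in range(n)]

dataset = [(x, true_energy(x)) for x in (0.0, 0.5, 1.0)]   # small seed set
for round_ in range(5):
    model = fit_model(dataset)                       # 1. train on everything labeled so far
    candidates = generate_candidates()               # 2. propose new candidates
    shortlist = sorted(candidates, key=model)[:5]    # 3. keep those predicted most stable (lowest energy)
    labeled = [(x, true_energy(x)) for x in shortlist]  # 4. verify with the expensive oracle
    dataset.extend(labeled)                          # 5. feed the results back into training
    best = min(dataset, key=lambda pair: pair[1])
    print(f"round {round_}: best energy so far {best[1]:.4f} at x={best[0]:.3f}")
```

Each round only labels the handful of candidates the current model ranks as most promising, which is what keeps the expensive verification step affordable.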

Our research boosted the discovery rate of materials stability prediction from around 50% to 80%, based on an external benchmark set by previous state-of-the-art models. We also scaled up our model's efficiency, improving the discovery rate from under 10% to over 80%; such efficiency increases could have a significant impact on how much compute is required per discovery.


AI ‘recipes’ for new materials​

The GNoME project aims to drive down the cost of discovering new materials. External researchers have independently created 736 of GNoME’s new materials in the lab, demonstrating that our model’s predictions of stable crystals accurately reflect reality. We’ve released our database of newly discovered crystals to the research community. By giving scientists the full catalog of the promising ‘recipes’ for new candidate materials, we hope this helps them to test and potentially make the best ones.


Upon completion of our latest discovery efforts, we searched the scientific literature and found 736 of our computational discoveries were independently realized by external teams across the globe. Above are six examples ranging from a first-of-its-kind Alkaline-Earth Diamond-Like optical material (Li4MgGe2S7) to a potential superconductor (Mo5GeB2).

Rapidly developing new technologies based on these crystals will depend on the ability to manufacture them. In a paper led by our collaborators at Berkeley Lab, researchers showed a robotic lab could rapidly make new materials with automated synthesis techniques. Using materials from the Materials Project and insights on stability from GNoME, the autonomous lab created new recipes for crystal structures and successfully synthesized more than 41 new materials, opening up new possibilities for AI-driven materials synthesis.


A-Lab, a facility at Berkeley Lab where artificial intelligence guides robots in making new materials. Photo credit: Marilyn Sargent/Berkeley Lab

New materials for new technologies​

To build a more sustainable future, we need new materials. GNoME has discovered 380,000 stable crystals that hold the potential to develop greener technologies – from better batteries for electric cars, to superconductors for more efficient computing.

Our research – and that of collaborators at the Berkeley Lab, Google Research, and teams around the world — shows the potential to use AI to guide materials discovery, experimentation, and synthesis. We hope that GNoME together with other AI tools can help revolutionize materials discovery today and shape the future of the field.
 

Geek Nasty

They need to put this thing on thorium research. If they can get the materials challenges figured out it’s basically free energy for thousands of years.
 

Professor Emeritus

This is what I like to see A.I. being used for. However, I'm still a bit skeptical. How many of these new materials will even be viable for production at scale?


And those that do go to production, how many will improve life for regular people, and how many are merely going to marginally increase corporate profits?
 

bnew


Google DeepMind's AI Dreamed Up 380,000 New Materials. The Next Challenge Is Making Them​

Google DeepMind researchers say they’ve expanded the number of known stable materials tenfold. Some could be useful for everything from batteries to superconductors—if they make it out of the lab.


A-Lab in February of 2023 at Lawrence Berkeley National Laboratory in Berkeley, California. VIDEO: MARILYN SARGENT/BERKELEY LAB

The robotic line cooks were deep in their recipe, toiling away in a room tightly packed with equipment. In one corner, an articulated arm selected and mixed ingredients, while another slid back and forth on a fixed track, working the ovens. A third was on plating duty, carefully shaking the contents of a crucible onto a dish. Gerbrand Ceder, a materials scientist at Lawrence Berkeley National Lab and UC Berkeley, nodded approvingly as a robotic arm delicately pinched and capped an empty plastic vial—an especially tricky task, and one of his favorites to observe. “These guys can work all night,” Ceder said, giving two of his grad students a wry look.

Stocked with ingredients like nickel oxide and lithium carbonate, the facility, called the A-Lab, is designed to make new and interesting materials, especially ones that might be useful for future battery designs. The results can be unpredictable. Even a human scientist usually gets a new recipe wrong the first time. So sometimes the robots produce a beautiful powder. Other times it’s a melted gluey mess, or it all evaporates and there’s nothing left. “At that point, the humans would have to make a decision: What do I do now?” Ceder says.

The robots are meant to do the same. They analyze what they’ve made, adjust the recipe, and try again. And again. And again. “You give them some recipes in the morning and when you come back home you might have a nice new soufflé,” says materials scientist Kristin Persson, Ceder’s close collaborator at LBNL (and also spouse). Or you might just return to a burned-up mess. “But at least tomorrow they’ll make a much better soufflé.”


Recently, the range of dishes available to Ceder’s robots has grown exponentially, thanks to an AI program developed by Google DeepMind. Called GNoME, the software was trained using data from the Materials Project, a free-to-use database of 150,000 known materials overseen by Persson. Using that information, the AI system came up with designs for 2.2 million new crystals, of which 380,000 were predicted to be stable—not likely to decompose or explode, and thus the most plausible candidates for synthesis in a lab—expanding the range of known stable materials nearly 10-fold. In a paper published today in Nature, the authors write that the next solid-state electrolyte, or solar cell materials, or high-temperature superconductor, could hide within this expanded database.

Finding those needles in the haystack starts off with actually making them, which is all the more reason to work quickly and through the night. In a recent set of experiments at LBNL, also published today in Nature, Ceder’s autonomous lab was able to create 41 of the theorized materials over 17 days, helping to validate both the AI model and the lab’s robotic techniques.

When deciding if a material can actually be made, whether by human hands or robot arms, among the first questions to ask is whether it is stable. Generally, that means that its collection of atoms are arranged into the lowest possible energy state. Otherwise, the crystal will want to become something else. For thousands of years, people have steadily added to the roster of stable materials, initially by observing those found in nature or discovering them through basic chemical intuition or accidents. More recently, candidates have been designed with computers.

The problem, according to Persson, is bias: Over time, that collective knowledge has come to favor certain familiar structures and elements. Materials scientists call this the “Edison effect,” referring to his rapid trial-and-error quest to deliver a lightbulb filament, testing thousands of types of carbon before arriving at a variety derived from bamboo. It took another decade for a Hungarian group to come up with tungsten. “He was limited by his knowledge,” Persson says. “He was biased, he was convinced.”

 

bnew

{continued}

DeepMind’s approach is meant to look beyond those biases. The team started with 69,000 materials from Persson’s library, which is free to use and funded by the US Department of Energy. That was a good start, because the database contains the detailed energetic information needed to understand why some materials are stable and others aren’t. But it wasn’t enough data to overcome what Google DeepMind researcher Ekin Dogus Cubuk calls a “philosophical contradiction” between machine learning and empirical science. Like Edison, AI struggles to generate truly novel ideas beyond what it has seen before. “In physics, you never want to learn a thing that you already know,” he says. “You almost always want to generalize out of domain”—whether that’s to discover a different class of battery material or a new superconductivity theory.

GNoME relies on an approach called active learning. First, an AI called a graph neural network, or GNN, uses the database to learn patterns in the stable structures and figure out how to minimize the energy in the atomic bonds within new structures. Using the whole range of the periodic table, it then produces thousands of potentially stable candidates. The next step is to verify and adjust them, using a quantum mechanics technique called density-functional theory, or DFT. These refined results are then plugged back into the training data and the process is repeated.


The structures of 12 compounds in the Materials Project database.
ILLUSTRATION: JENNY NUSS/BERKELEY LAB


The researchers found that, with multiple repetitions, this approach could generate more complex structures than were initially in the Materials Project data set, including some that were composed of five or six unique elements. (The data set used to train the AI largely capped out at four.) Those types of materials involve so many complex atomic interactions that they generally escape human intuition. “They were hard to find,” Cubuk says. “But now they’re not so hard to find anymore.”

But DFT is only a theoretical validation. The next step is actually making something. So Ceder’s team picked 58 crystals to create in the A-Lab. After taking into account the capabilities of the lab and available precursors, it was a random selection. And at first, as expected, the robots failed, then repeatedly adjusted their recipes. After 17 days of experiments, the A-Lab managed to produce 41 of the materials, or 71 percent, sometimes after trying a dozen different recipes.

Taylor Sparks, a materials scientist at the University of Utah who wasn’t involved in the research, says that it’s promising to see automation at work for new types of materials synthesis. But using AI to propose thousands of new hypothetical materials, and then chasing after them with automation, just isn’t practical, he adds. GNNs are becoming widely used to develop new ideas for materials, but usually researchers want to tailor their efforts to produce materials with useful properties—not blindly produce hundreds of thousands of them. “We’ve already had way too many things that we’ve wanted to investigate than we physically could,” he says. “I think the challenge is, is this scaled synthesis approaching the scale of the predictions? Not even close.”

Only a fraction of the 380,000 materials in the Nature paper will likely wind up being practical to create. Some involve radioactive elements, or ones that are too expensive or rare. Some will require types of synthesis that involve extreme conditions that can’t be produced in a lab, or precursors that lab suppliers don’t have on hand.

That’s likely even true for materials that could very well hold potential for the next photovoltaic cell or battery design. “We've come up with a lot of cool materials,” Persson says. “Making them and testing them has consistently been the bottleneck, especially if it's a material that nobody's ever made before. The number of people I can call up in my circle of friends who go, ‘Absolutely, let me get on that for you,’ is pretty much one or two people.”

“Really, is it that high?” Ceder interjects with a laugh.

Even if a material can be made, there’s a long road to turning a basic crystal into a product. Persson brings up the example of an electrolyte inside a lithium-ion battery. Predictions about the energy and structure of a crystal can be applied to problems like figuring out how easily lithium ions can move across it—a key aspect of performance. What it can’t predict as easily is whether that electrolyte will react with neighboring materials and destroy the whole device. Plus, in general, the utility of new materials only becomes apparent in combination with other materials or by manipulating them with additives.

Still, the expanded range of materials expands the possibilities for synthesis, and also provides more data for future AI programs, says Anatole von Lilienfeld, a materials scientist at the University of Toronto who wasn’t involved in the research. It also helps nudge materials scientists away from their biases and towards the unknown. “Every new step that you take is fantastic,” he says. “It could usher in a new compound class.”

The Materials Project can visualize the atomic structure of materials. This compound (Ba₆Nb₇O₂₁) is one of the new materials calculated by GNoME. It contains barium (blue), niobium (white), and oxygen (green). VIDEO: MATERIALS PROJECT/BERKELEY LAB


Google is also interested in exploring the possibilities of the new materials generated by GNoME, says Pushmeet Kohli, vice president of research at Google DeepMind. He compares GNoME to AlphaFold, the company’s software that startled structural biologists with its success at predicting how proteins fold. Both are addressing fundamental problems by creating an archive of new data that scientists can explore and expand. From here, the company plans to work on more specific problems, he says, such as homing in on interesting material properties and using AI to speed up synthesis. Both are challenging problems, because there is typically far less data to start with than there is for predicting stability.

Kohli says the company is exploring its options for working more directly with physical materials, whether by contracting outside labs or continuing with academic partnerships. It could also set up its own lab, he adds, referring to Isomorphic Labs, a drug discovery spinoff from DeepMind established in 2021 following the success of AlphaFold.

Things could get complicated for researchers trying to put the materials to practical use. The Materials Project is popular with both academic labs and corporations because it allows any type of use, including commercial ventures. Google DeepMind’s materials are being released under a separate license that forbids commercial use. “It’s released for academic purposes,” Kohli says. “If people want to investigate and explore commercial partnerships, and so on, we will review them on a case-by-case basis.”

Multiple scientists who work with new materials noted that it’s unclear what sort of say the company would have if testing in an academic lab led to a possible commercial use for a GNoME-generated material. An idea for a new crystal—without a particular use in mind—is generally not patentable, and tracing its provenance back to the database could be difficult.

Kohli also says that while the data is being released, there are no current plans to release the GNoME model. He cites safety considerations—the software could theoretically be used to dream up dangerous materials, he says—and uncertainty about Google DeepMind’s materials strategy. “It is difficult to make predictions about what the commercial impact would be,” Kohli says.

Sparks expects his fellow academics to bristle at the lack of code for GNoME, just as biologists did when AlphaFold was initially published without a complete model. (The company later released it.) “That’s lame,” he says. Other materials scientists will likely want to reproduce the results and investigate ways to improve the model or tailor it to specific uses. But without the model, they can’t do either, Sparks says.

In the meantime, the Google DeepMind researchers hope hundreds of thousands of new materials will be enough to keep theorists and synthesizers—both human and robotic—plenty busy. “Every technology could be improved with better materials. It’s a bottleneck,” Cubuk says. “This is why we have to enable the field by discovering more materials, and helping people discover even more.”
 

bnew


DeepMind's robot chef cooks up 'novel' materials with a side of controversy​


Chemists dispute research claiming millions of recipes for inorganic crystalline compounds​


Katyanna Quach


Wed 31 Jan 2024 // 20:00 UTC

Google DeepMind and UC Berkeley's research into a robot cooking up new materials predicted by AI algorithms is being called into question by a group of chemists.

The initial study, published in November in Nature, garnered attention for deploying a robotic lab system, A-Lab, to automatically synthesize novel compounds predicted by Google DeepMind's model GNoME.

The software generated millions of recipes for new inorganic crystalline compounds that could potentially be useful in future electronics. Over 17 days, a robotic arm reportedly made more than 40 new materials, 35 of which had been predicted by GNoME. It was able to mix and heat various powders to create materials whose structures were probed using X-ray diffraction.

A machine learning algorithm inspected the experimental patterns and compared them to predicted models to confirm whether the compound was made successfully. The experiment was billed as an important demonstration that showed how AI-powered robots could help drive scientific discovery.

However, the results are now being disputed. In a separate paper, seven researchers from Princeton University and University College London argue that A-Lab didn't manage to make a single novel inorganic material.

"Unfortunately, we found that the central claim of the A-Lab paper, namely that a large number of previously unknown materials were synthesized, does not hold," they wrote in their analysis released on ChemRxiv [PDF]. When they combed through the X-ray diffraction data for each material, they found that most of them have been misclassified.

X-ray diffraction patterns allow scientists to calculate the positions of the atoms inside a material. Different materials produce different diffraction patterns. Scientists closely inspect the peaks and troughs in the data and compare them to existing patterns to interpret each material's structure.
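To illustrate that comparison step, the sketch below matches a synthetic "measured" pattern against a few synthetic reference patterns using cosine similarity of their intensity profiles on a shared 2θ grid. The peak positions and the similarity metric are invented for illustration; real phase identification relies on more careful full-profile (e.g. Rietveld) refinement, which this deliberately does not attempt.

```python
# Toy sketch of comparing a measured X-ray diffraction pattern to reference patterns.
# All patterns below are synthetic sums of Gaussian peaks, invented for illustration.
import numpy as np

two_theta = np.linspace(10, 80, 1400)            # diffraction angle grid (degrees)

def pattern(peaks, width=0.3):
    """Build a normalized pattern from (angle, intensity) peaks as a sum of Gaussians."""
    y = np.zeros_like(two_theta)
    for angle, intensity in peaks:
        y += intensity * np.exp(-((two_theta - angle) ** 2) / (2 * width ** 2))
    return y / y.max()

references = {
    "known phase A":   pattern([(28.0, 1.0), (32.5, 0.6), (47.0, 0.4)]),
    "known phase B":   pattern([(25.0, 0.8), (36.0, 1.0), (52.5, 0.5)]),
    "predicted phase": pattern([(30.5, 1.0), (44.0, 0.7), (63.0, 0.3)]),
}

# "Measured" sample: mostly phase A with a little phase B, i.e. a mix of known compounds.
measured = pattern([(28.0, 1.0), (32.5, 0.6), (47.0, 0.4)]) + 0.3 * references["known phase B"]
measured /= measured.max()

def similarity(a, b):
    """Cosine similarity between two intensity profiles."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for name, ref in references.items():
    print(f"{name}: similarity {similarity(measured, ref):.2f}")
```

In this toy example the "measured" sample scores highest against the known phases rather than the predicted one, which is the kind of outcome the critics report for the A-Lab patterns.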

The data from the A-Lab paper, however, shows that most of the 35 patterns for novel materials predicted by GNoME resemble a mix of already known compounds, while three of them aren't new at all. The errors stem from trying to use AI to computationally determine whether a new material had been made or not, Robert Palgrave, Professor of Inorganic and Materials Chemistry at UCL, told The Register.

Researchers from Google DeepMind and UC Berkeley reportedly determined that if each sample made by the robot had a purity level over 50 percent, and if its structure differed from a list of known compounds contained in the Inorganic Crystal Structure Database (ICSD), it should be declared as novel. But that process is unreliable, Palgrave and his colleagues claim.
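Paraphrasing that reported decision rule, a minimal sketch of the check might look like the following. This restates the summary in the article above, not the authors' actual code, and the inputs (a phase fraction and an ICSD-match flag) are simplifications.

```python
# Sketch of the novelty rule as summarized above (paraphrased, not the authors' code):
# a sample counts as "novel" if the target phase makes up more than 50 percent of the
# sample and its refined structure does not match any known entry in the ICSD.

def is_declared_novel(target_phase_fraction, matches_icsd_entry):
    """target_phase_fraction: estimated fraction of the intended phase (0-1).
    matches_icsd_entry: True if the refined structure matches a known ICSD structure."""
    return target_phase_fraction > 0.5 and not matches_icsd_entry

print(is_declared_novel(0.7, matches_icsd_entry=False))  # would be declared novel
print(is_declared_novel(0.7, matches_icsd_entry=True))   # known phase, not novel
```

The critics' point is that compositional disorder can make a nominally new ordered structure effectively identical to a disordered known one, so the ICSD-match input can come out wrongly False and the rule over-counts novel materials.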

"On the computational side, they couldn't deal with something called 'compositional disorder,' which is a very important feature of inorganic materials. A crystal is an ordered arrangement of atoms. But even within that order there can be disorder. Imagine you have a set of children's building blocks, all the same size and shape, and they are arranged in a perfectly ordered pattern on the floor. The blocks are like atoms in a crystal," he told us.

"But now imagine that there are two colors of block, red and blue. We have an ordered pattern of colors, say alternating red, blue, red, blue etc. You might end up with a chess board type arrangement. But it is also possible for the colors to be mixed up randomly. In this case the blocks themselves are ordered, but the colors are disordered."

The chemists believe the initial experiment had not taken compositional disorder into account, and assumed that the atoms in each compound made by A-Lab are ordered when they are actually disordered and already exist in the ICSD. "On the experimental side, they tried to use AI to interpret their experimental data, but it really didn't do a good job. I think AI can certainly do this kind of analysis. I have no idea why they failed, but the outputs are worse than even a novice human would achieve," Palgrave added.

Many of the outputs were poor fits to the diffraction patterns predicted by models, and they cannot be reliably used as proof of a compound's structure or purity, the group said. The results don't necessarily cast doubt on the GNoME algorithm per se. In fact, Palgrave and his colleagues believe that if some of the inorganic crystal structures predicted by GNoME managed to be successfully synthesized, it would result in a novel material.

Yet the compounds made by A-Lab aren't new, meaning none of GNoME's new materials appear to have been produced yet, they believe. "My own view is that [the paper] should be retracted as the main claim of discovery of new materials is wrong," he told us.

A representative from Google DeepMind declined to comment on the record.

Gerbrand Ceder, a lead author of the original A-Lab paper and Professor of Materials Science and Engineering at UC Berkeley, told The Register in a statement: "The work of Dr Palgrave is not peer reviewed and we believe it has multiple errors in it. We will comment on it in due course, but will do that through the peer reviewed literature."
 

Micky Mikey


DeepMind's robot chef cooks up 'novel' materials with a side of controversy​


Damn this is disappointing
 

bnew


Is Google's AI Actually Discovering 'Millions of New Materials?'​

APR 11, 2024 AT 3:13 PM

"In the DeepMind paper there are many examples of predicted materials that are clearly nonsensical."



In November, Google’s AI outfit DeepMind published a press release titled “Millions of new materials discovered with deep learning." But now, researchers who have analyzed a subset of what DeepMind discovered say "we have yet to find any strikingly novel compounds" in that subset.

“AI tool GNoME finds 2.2 million new crystals, including 380,000 stable materials that could power future technologies,” Google wrote of the finding, adding that this was “equivalent to nearly 800 years’ worth of knowledge,” that many of the discoveries “escaped previous human chemical intuition,” and that it was “an order-of-magnitude expansion in stable materials known to humanity.” The paper was published in Nature and was picked up very widely in the press as an example of the incredible promise of AI in science.

Another paper, published at the same time and done by researchers at Lawrence Berkeley National Laboratory “in partnership with Google DeepMind … shows how our AI predictions can be leveraged for autonomous material synthesis,” Google wrote. In this experiment, researchers created an “autonomous laboratory” (A-Lab) that used “computations, historical data from the literature, machine learning, and active learning to plan and interpret the outcomes of experiments performed using robotics.” Essentially, the researchers used AI and robots to remove humans from the laboratory, and came out the other end after 17 days having discovered and synthesized new materials, which the researchers wrote “demonstrates the effectiveness of artificial intelligence-driven platforms for autonomous materials discovery.”


A diagram from the Chemistry of Materials paper that explains what should constitute a new “material.”

But in the last month, two external groups of researchers analyzed the DeepMind and Berkeley papers and published their own analyses that, at the very least, suggest this specific research is being oversold. Everyone in the materials science world that I spoke to stressed that AI holds great promise for discovering new types of materials. But they say Google and its deep learning techniques have not suddenly made an incredible breakthrough in the materials science world.

In a perspective paper published in Chemistry of Materials this week, Anthony Cheetham and Ram Seshadri of the University of California, Santa Barbara selected a random sample of the 380,000 proposed structures released by DeepMind and say that none of them meet a three-part test of whether the proposed material is “credible,” “useful,” and “novel.” They believe that what DeepMind found are “crystalline inorganic compounds and should be described as such, rather than using the more generic label ‘material,’” which they say is a term that should be reserved for things that “demonstrate some utility.”

In the analysis, they write “we have yet to find any strikingly novel compounds in the GNoME and Stable Structure listings, although we anticipate that there must be some among the 384,870 compositions. We also note that, while many of the new compositions are trivial adaptations of known materials, the computational approach delivers credible overall compositions, which gives us confidence that the underlying approach is sound.”

"most of them might be credible, but they’re not very novel because they’re simple derivatives of things that are already known"

In a phone interview, Cheetham told me “the Google paper falls way short in terms of it being a useful, practical contribution to the experimental materials scientists.” Seshadri said “we actually think that Google has missed the mark here.”

“If I was looking for a new material to do a particular function, I wouldn’t comb through more than 2 million new compositions as proposed by Google,” Cheetham said. “I don’t think that’s the best way of going forward. I think the general methodology probably works quite well, but it needs to be a lot more focused around specific needs, so none of us have enough time in our lives to go through 2.2 million possibilities and decide how useful that might be. We spent quite a lot of time on this going through a very small subset of the things that they propose and we realize not only was there no functionality, but most of them might be credible, but they’re not very novel because they’re simple derivatives of things that are already known.”

Google DeepMind told me in a statement, “We stand by all claims made in Google DeepMind’s GNoME paper.”

“Our GNoME research represents orders of magnitude more candidate materials than were previously known to science, and hundreds of the materials we’ve predicted have already been independently synthesized by scientists around the world,” it added. The Materials Project, an open-access material property database, has found Google’s GNoME database to be top-of-the-line when compared to other machine learning models, and Google said that some of the criticisms in the Chemistry of Materials analysis, like the fact that many of the new materials have already-known structures but use different elements, were done by DeepMind by design.

The Berkeley paper, meanwhile, claimed that an “autonomous laboratory” (called “A-Lab”) took structures proposed by another project called the Materials Project and used a robot to synthesize them with no human intervention and created 43 “novel compounds.” There is one DeepMind researcher on this paper, and Google promoted it in its press release, but Google did not actively do the experiment.

Human researchers analyzing this finding found that it, too, has issues: “We discuss all 43 synthetic products and point out four common shortfalls in the analysis. These errors unfortunately lead to the conclusion that no new materials have been discovered in that work,” the authors, who include Leslie Schoop of Princeton University and Robert Palgrave of University College London, wrote in their analysis.

Again, each of the four researchers I spoke to say that they believe an AI-guided process to finding new materials shows promise, but said that the specific papers they analyzed were not necessarily huge breakthroughs, and that they should not be contextualized as such.

“In the DeepMind paper there are many examples of predicted materials that are clearly nonsensical. Not only to subject experts, but most high school students could say that compounds like H2O11 (which is a DeepMind prediction) do not look right,” Palgrave told me. “There are many many other examples of clearly wrong compounds and Cheetham/Seshadri do a great job of breaking this down more diplomatically than I am doing here. To me it seems that basic quality control has not happened—for the ML to be outputting such compounds as predictions is alarming and to me shows something has gone wrong.”

AI has been used to flood the internet with lots of content that cannot be easily parsed by humans, which makes discovering human-generated, high quality work a challenge. It's an imperfect analogy, but the researchers I spoke to said something similar could happen in materials science as well: Giant databases of potential structures doesn't necessarily make it easier to create something that is going to have a positive impact on society.

“There is some benefit to knowing millions of materials (if accurate), but how do you navigate this space looking for useful materials to make?” Palgrave said. “Better to have an idea of a few new compounds with exceptionally useful properties than a million where you have no idea which are good.”

Schoop said that there were already “50k unique crystalline inorganic compounds, but we only know the properties of a fraction of these. So it is not very clear to me why we need millions of more compounds if we haven’t yet understood all the ones we do know. It might be much more useful to predict properties of materials than just plainly new materials.”

Again, Google DeepMind says it stands by its paper and takes issues with these characterizations, but it is fair to say that there is now a lot of debate about how AI and machine learning can be used to discover new materials, how these discoveries should be contextualized, tested, and acted upon, and how and whether dumping gigantic databases of proposed structures on the world is actually going to lead to new, tangible breakthroughs for society, or whether it will simply create a lot of noise.

“We don’t think that there’s a problem with AI fundamentally,” Seshadri said. “We think it’s a problem of how you use it. We’re not like, old fashioned people who think these techniques have no place in our science.”
 