What is a sequence? A sequence is a list of things arranged in order, one after the other. For example, a sentence is a sequence of words.
What is a Transformer? A Transformer is a type of neural network used for natural language processing tasks. Transformers are very good at understanding how the words in a sequence relate to one another, but they become slow and memory-hungry when the sequence is very long.
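To make this concrete, here is a minimal sketch of the self-attention step at the heart of a Transformer. It is illustrative NumPy only, and it leaves out the learned query, key, and value projections that a real Transformer uses, but it shows where the cost comes from: an n-by-n table of scores comparing every word with every word.

```python
# Minimal, simplified sketch of self-attention (illustrative only;
# a real Transformer adds learned query/key/value projections).
import numpy as np

def self_attention(X):
    """X: (n, d) matrix of word vectors. Returns attended vectors."""
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)  # (n, n): every word scored against every word
    # Softmax over each row, with the usual max-subtraction for stability.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X             # each word becomes a weighted mix of all words

X = np.random.default_rng(0).normal(size=(6, 4))  # 6 words, 4-dim vectors
print(self_attention(X).shape)  # (6, 4)
```

That (n, n) score matrix is why doubling the length of the text quadruples the work.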
What is LONGNET? LONGNET is a new type of Transformer designed to handle very long sequences of text. LONGNET is faster than traditional Transformers on long inputs, and it can scale to much longer sequences without sacrificing accuracy.
Here is an analogy that might help you understand LONGNET better:
Imagine that you are trying to read a very long book. If you try to take in the whole book at once, it will be very difficult to understand. But if you break the book into chapters and paragraphs, it becomes much easier. LONGNET is like a machine that breaks a long sequence of text into smaller pieces, so that it can be understood more easily.
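To make the analogy concrete, here is a toy Python sketch of the splitting idea. This is not LONGNET's actual code, and the segment size `w=4` is just an illustrative choice; it only shows what "breaking a long sequence into smaller pieces" looks like.

```python
# Toy sketch (not actual LONGNET code): split a long sequence of
# tokens into fixed-size segments, as in the book analogy above.

def split_into_segments(tokens, w=4):
    """Break a long list of tokens into segments of at most w tokens."""
    return [tokens[i:i + w] for i in range(0, len(tokens), w)]

tokens = "the quick brown fox jumps over the lazy dog".split()
for segment in split_into_segments(tokens):
    print(segment)
# ['the', 'quick', 'brown', 'fox']
# ['jumps', 'over', 'the', 'lazy']
# ['dog']
```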
Here are some of the benefits of using LONGNET:
It can understand very long sequences of text. This is important for tasks like machine translation, where you may need the meaning of a whole document, not just a single sentence.
It is faster than traditional Transformers. This means that it can process text more quickly, which is important for applications like real-time translation.
It can be more accurate than traditional Transformers on long inputs, because it does not have to cut the text short. This matters for tasks like question answering and summarization, where the answer may depend on context from much earlier in the text.
I hope this explanation helps you understand LONGNET better.
Right now, when computers try to understand long pieces of text, they have to compare every word with every other word. This is slow and takes up a lot of memory. A new breakthrough called "longnet" could change that. Longnet lets computers skip most of those comparisons, which is much faster and uses less memory.
This is important because a lot of real-world text is long. For example, a news article or a book might be thousands of words long. Longnet could make it possible for computers to understand these long pieces of text more easily.
Longnet could also be used for other things, like machine translation. Right now, machine translation can be slow and inaccurate when it comes to long pieces of text. Longnet could make machine translation faster and more accurate.
Overall, longnet is a promising new technology that could have a big impact on the way computers understand text.
Here are some specific examples of how longnet could be used:
A computer could read an entire book and summarize it for you.
A computer could translate a long article from one language to another.
A computer could help you write a long essay by suggesting ideas and providing feedback.
These are just a few examples of the many things that longnet could be used for. As longnet continues to develop, we can expect to see even more amazing things that it can do.
Right now, when computers read long pieces of text, they have to compare every word with every other word. This is called "quadratic complexity" because the number of comparisons grows as the square of the length of the text. So, if you have a text that is 100 words long, the computer has to make 10,000 comparisons! This is very slow and takes up a lot of memory.
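A tiny sketch makes the growth obvious; the numbers below are just word counts plugged into n × n.

```python
# Illustrative sketch: why all-pairs attention is "quadratic".
# Every word is compared with every word, so the number of
# comparisons grows as the square of the text length.

for n in [100, 1_000, 10_000]:
    comparisons = n * n
    print(f"{n:>6} words -> {comparisons:>12,} comparisons")
#    100 words ->       10,000 comparisons
#  1,000 words ->    1,000,000 comparisons
# 10,000 words ->  100,000,000 comparisons
```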
A breakthrough in self-attention would mean that computers could read long pieces of text much more efficiently. The idea is to make attention sparse: instead of comparing every word with every other word, the computer focuses on the most relevant words and skips the rest. That way it only performs a fraction of the comparisons, which is much faster and takes up less memory.
This breakthrough would be a big deal because it would allow computers to read and understand much longer pieces of text. This would be helpful for things like machine translation, where computers need to understand the meaning of long sentences in order to translate them correctly. It would also be helpful for things like question answering, where computers need to understand the meaning of long passages of text in order to answer questions about them.
So, in short, a breakthrough in self-attention would make computers much better at reading and understanding long pieces of text. This would be a big step forward for natural language processing and would have many benefits for people who use computers.
Here is an analogy that might help you understand this concept. Imagine that you are trying to find your friend in a crowded room. You could walk around the room and look at every single person, but that would take a long time and you might not find your friend. Instead, you could use a strategy called "selective attention" to focus on the people who are most likely to be your friend. For example, you might look for people who are wearing the same kind of clothes as your friend, or who are standing in the same area of the room. This strategy would allow you to find your friend much more quickly and efficiently.
Sparse self-attention is like selective attention for computers. It allows them to focus on the most relevant words in a text and skip the rest, which makes it much faster and easier for computers to understand the meaning of long pieces of text.
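Here is a simplified Python sketch of what such a sparse pattern can look like, loosely in the spirit of LONGNET's dilated attention. The real model mixes several segment sizes and dilation rates; the `segment=8` and `dilation=2` values here are illustrative only.

```python
# Simplified sketch of a dilated attention pattern (illustrative only;
# LONGNET combines many segment sizes and dilation rates in parallel).

def dilated_attention_positions(i, seq_len, segment=8, dilation=2):
    """Positions token i attends to: every `dilation`-th token inside
    its own segment, instead of all seq_len tokens."""
    start = (i // segment) * segment
    end = min(start + segment, seq_len)
    return [j for j in range(start, end) if (j - start) % dilation == 0]

seq_len = 16
for i in [0, 5, 12]:
    print(f"token {i:2d} attends to {dilated_attention_positions(i, seq_len)}")
# token  0 attends to [0, 2, 4, 6]
# token  5 attends to [0, 2, 4, 6]
# token 12 attends to [8, 10, 12, 14]
```

Each token looks at a handful of positions instead of all 16, so the work per token stays roughly constant as the sequence grows.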
AI May Have Found The Most Powerful Anti-Aging Molecule Ever Seen
Health · 07 July 2023
By Vanessa Smer-Barreto, The Conversation
Finding new drugs – called "drug discovery" – is an expensive and time-consuming task. But a type of artificial intelligence called machine learning can massively accelerate the process and do the job for a fraction of the price.
My colleagues and I recently used this technology to find three promising candidates for senolytic drugs – drugs that slow ageing and prevent age-related diseases.
Senolytics work by killing senescent cells. These are cells that are "alive" (metabolically active), but which can no longer replicate, hence their nickname: zombie cells.
The inability to replicate is not necessarily a bad thing. These cells have suffered damage to their DNA – for example, skin cells damaged by the Sun's rays – so stopping replication stops the damage from spreading.
But senescent cells aren't always a good thing. They secrete a cocktail of inflammatory proteins that can spread to neighboring cells. Over a lifetime, our cells suffer a barrage of assaults, from UV rays to exposure to chemicals, and so these cells accumulate.
Elevated numbers of senescent cells have been implicated in a range of diseases, including type 2 diabetes, COVID, pulmonary fibrosis, osteoarthritis and cancer.
Studies in lab mice have shown that eliminating senescent cells, using senolytics, can ameliorate these diseases. These drugs can kill off zombie cells while keeping healthy cells alive.
Around 80 senolytics are known, but only two have been tested in humans: dasatinib and quercetin, used in combination. It would be great to find more senolytics that can be used in a variety of diseases, but it takes 10 to 20 years and billions of dollars for a drug to make it to market.
Results in five minutes
My colleagues and I – including researchers from the University of Edinburgh and the Spanish National Research Council IBBTEC-CSIC in Santander, Spain – wanted to know if we could train machine learning models to identify new senolytic drug candidates.
To do this, we fed AI models with examples of known senolytics and non-senolytics. The models learned to distinguish between the two, and could be used to predict whether molecules they had never seen before could also be senolytics.
When solving a machine learning problem, we usually try a range of different models on the data first, as some of them tend to perform better than others.
To determine the best-performing model, we separate out a small section of the available data at the beginning of the process and keep it hidden from the models until training is complete.
We then use this testing data to quantify how many errors each model makes. The one that makes the fewest errors wins.
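As a rough illustration of this hold-out procedure, here is a hedged scikit-learn sketch. The features and labels below are random stand-ins (the actual study used chemical data describing known senolytics and non-senolytics), and the two models shown are generic examples, not necessarily the ones the team compared.

```python
# Hedged sketch of the hold-out procedure described above.
# Features/labels are random placeholders, not real chemical data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))    # placeholder molecule features
y = rng.integers(0, 2, size=500)  # 1 = senolytic, 0 = non-senolytic

# Hide a slice of the data from training; use it only for scoring.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    error = 1 - accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test error {error:.2f}")
# The model with the fewest errors on the held-out data wins,
# and is the one set loose on new, unseen molecules.
```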
We determined our best model and set it to make predictions. We gave it 4,340 molecules and five minutes later it delivered a list of results.
The AI model identified 21 top-scoring molecules that it deemed to have a high likelihood of being senolytics. If we had tested the original 4,340 molecules in the lab, it would have taken at least a few weeks of intensive work and £50,000 just to buy the compounds, not counting the cost of the experimental machinery and setup.
We then tested these drug candidates on two types of cells: healthy and senescent. The results showed that out of the 21 compounds, three (periplocin, oleandrin and ginkgetin) were able to eliminate senescent cells, while keeping most of the normal cells alive. These new senolytics then underwent further testing to learn more about how they work in the body.
More detailed biological experiments showed that, out of the three drugs, oleandrin was more effective than the best-performing known senolytic drug of its kind.
The potential repercussions of this interdisciplinary approach – involving data scientists, chemists and biologists – are huge. Given enough high-quality data, AI models can accelerate the amazing work that chemists and biologists do to find treatments and cures for diseases – especially those of unmet need.
Having validated them in senescent cells, we are now testing the three candidate senolytics in human lung tissue. We hope to report our next results in two years' time.
Vanessa Smer-Barreto, Research Fellow, Institute of Genetics and Molecular Medicine, The University of Edinburgh
This article is republished from The Conversation under a Creative Commons license. Read the original article.