
If you are GPU-Poor, then you should probably be paying attention to small LLMs & synthetic data.

Below is a list of the most interesting papers relating to this that I have found from the last year (ordered by release date), what am I missing?

Self-Instruct [arxiv.org/abs/2212.10560] - uses a model's own generated instructions for fine-tuning, achieving a 33% absolute improvement over the base model and outperforming models trained on existing instruction datasets, highlighting the potential of annotation-free methods for enhancing language models (a minimal sketch of the bootstrapping loop is included at the end of this list).

Unnatural Instructions [arxiv.org/abs/2212.09689] - leverages a large, virtually human labor-free dataset of creative instructions, showcasing model-generated data as a cost-effective and competitive alternative to traditional training methods.

TinyStories [arxiv.org/abs/2305.07759] - introduces a synthetic dataset of short stories, enabling the development of smaller, yet coherent language models, offering insights into language capabilities in low-resource or specialized domains.

Orca [arxiv.org/abs/2306.02707] - learns from the complex explanation traces of GPT-4, surpassing conventional instruction-tuned models on zero-shot reasoning benchmarks and indicating that learning from step-by-step explanations is a promising direction for improving model capabilities and skills.

Distilling Step-by-Step [arxiv.org/abs/2305.02301] - efficiently trains smaller models using LLM rationales and less data, with notable successes such as a finetuned 770M T5 model outperforming a 540B PaLM model, emphasizing the efficacy of the approach.

Code Llama [arxiv.org/pdf/2308.12950.pdf] - releases a family of high-performing language models for code that outperform other models on coding benchmarks, with improvements observed even from a small set of high-quality coding data, underlining the potential of specialized models. It further demonstrates the value of unnatural (model-generated) instructions through an unreleased model variant fine-tuned on them.

Textbooks Are All You Need II [arxiv.org/abs/2309.05463] - trained almost entirely on synthetic data, exhibits performance comparable to models 5x larger, emphasizing the surprising capabilities of smaller LLMs developed with synthetic data.
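For anyone wanting to play with the idea, here is a minimal sketch of a Self-Instruct-style bootstrapping loop. It is an illustration under assumptions, not the paper's exact pipeline: `complete_fn` is a placeholder for whatever text-completion model you have access to, and the crude token-overlap filter stands in for the ROUGE-L deduplication used in the paper.

```python
# Minimal sketch of a Self-Instruct-style bootstrapping loop (illustrative only).
# `complete_fn(prompt, n)` is a placeholder for any text-completion model call
# that returns a list of candidate strings.
import random

def token_overlap(a: str, b: str) -> float:
    """Rough similarity: fraction of shared word tokens (stand-in for ROUGE-L)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, min(len(ta), len(tb)))

def self_instruct(seed_instructions, complete_fn, rounds=3, per_round=8,
                  max_similarity=0.7):
    """Grow an instruction pool by prompting a model with sampled examples."""
    pool = list(seed_instructions)
    for _ in range(rounds):
        examples = random.sample(pool, k=min(4, len(pool)))
        prompt = ("Come up with new task instructions:\n"
                  + "\n".join(f"- {e}" for e in examples)
                  + "\n-")
        candidates = complete_fn(prompt, n=per_round)
        for cand in candidates:
            cand = cand.strip(" -\n")
            # Keep only instructions that are not near-duplicates of the pool.
            if cand and all(token_overlap(cand, p) < max_similarity for p in pool):
                pool.append(cand)
    return pool
```

The surviving instructions would then be paired with model-generated inputs and outputs and used as annotation-free supervised fine-tuning data.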
 

OCTOBER 5, 2023

Researchers train AI with reinforcement learning to defeat champion Street Fighter players

by Singapore University of Technology and Design


Graphic of the neural network (left) and snapshot of the video game (right). Credit: SUTD
Researchers from the Singapore University of Technology and Design (SUTD) have successfully applied reinforcement learning to a video game problem. The research team created new software for designing complex movements, based on an approach that has proven effective in board games such as chess and Go. In a single test, the movements produced by the new approach appeared superior to those of top human players.


These findings could have implications for robotics and automation, ushering in a new era of movement design. The team's article in Advanced Intelligent Systems is titled "A Phase-Change Memristive Reinforcement Learning for Rapidly Outperforming Champion Street Fighter Players."

"Our findings demonstrate that reinforcement learning can do more than just master simple board games. The program excelled in creating more complex movements when trained to address long-standing challenges in movement science," said principal investigator Desmond Loke, Associate Professor, SUTD.

"If this method is applied to the right research problems," he says, "it could accelerate progress in a variety of scientific fields."

The study marks a watershed moment in the use of artificial intelligence to advance movement science studies. The possible applications are numerous, ranging from the development of more autonomous automobiles to new collaborative robots and aerial drones.

Reinforcement learning is a kind of machine learning in which a computer program learns to make decisions by experimenting with various actions and receiving feedback. For example, an algorithm can learn to play chess by testing millions of possible moves that lead to wins or losses on the board. The approach is designed to let algorithms learn from experience and improve their decision-making over time.

The research team provided the computer with millions of initial motions to create a reinforcement learning program for movement design. The program then made repeated attempts to improve each move, randomly perturbing it toward a specific objective. The computer tweaked character movements or adjusted its strategy until it learned how to make moves that overcome the built-in AI.
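To make that trial-and-feedback loop concrete, here is a conventional software analogue: a minimal tabular Q-learning sketch with a decaying exploration rate. This is an illustration under assumptions, not the team's phase-change memristive system; `env` is a hypothetical environment object with reset(), step(action) and an n_actions attribute.

```python
# Minimal tabular Q-learning sketch with epsilon-greedy exploration that decays
# over episodes. `env` is a hypothetical interface assumed for illustration.
import random
from collections import defaultdict

def train(env, episodes=5000, alpha=0.1, gamma=0.99,
          eps_start=1.0, eps_min=0.05, eps_decay=0.999):
    q = defaultdict(float)                     # (state, action) -> value estimate
    eps = eps_start
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < eps:          # explore randomly early on...
                action = random.randrange(env.n_actions)
            else:                              # ...exploit learned values later
                action = max(range(env.n_actions), key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = 0.0 if done else max(
                q[(next_state, a)] for a in range(env.n_actions))
            # Temporal-difference update toward reward + discounted future value.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
        eps = max(eps_min, eps * eps_decay)    # decay exploration rate each episode
    return q
```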


Human-level performance in Street Fighter game using phase-change memory reinforcement learning. Credit: SUTD

Associate Prof Loke added, "Our approach is unique because we use reinforcement learning to solve the problem of creating movements that outperform those of top human players. This was simply not possible using prior approaches, and it has the potential to transform the types of moves we can create."


As part of their research, the scientists created motions to compete against various built-in AIs. They confirmed that the moves could overcome different built-in AI opponents.

"Not only is this approach effective, but it is also energy efficient." The phase-change memory-based system, for example, was able to make motions with a hardware energy consumption of about 26 fJ, which is 141 times less than that of existing GPU systems. "Its potential for making ultra-low-hardware-energy movements has yet to be fully explored," stated Associate Prof Loke.

The team focused on creating new motions capable of defeating top human players in a short amount of time. This required the use of decay-based algorithms to create the motions.

Algorithm testing revealed that the new AI-designed motions were effective. The researchers noted numerous good qualities as measures of how successful the design system had become, such as reasonable game etiquette, management of inaccurate information, the ability to attain specific game states, and the short time needed to defeat opponents.

In other words, the program exhibited exceptional physical and mental qualities, which is what the researchers refer to as effective movement design. For example, the motions were more successful at overcoming opponents because the decay-based technique used for training the neural networks takes fewer training steps than conventional methods.

The researchers envision a future in which this strategy will allow them and others to build movements, skills, and other actions that were not before possible.

"The more effective the technology becomes, the more potential applications it opens up, including the continued progression of competitive tasks that computers can facilitate for the best players, such as in Poker, Starcraft, and Jeopardy," Associate Prof Loke said. "We may also see high-level realistic competition for training professional players, discovering new tactics, and making video games more interesting."

SUTD researchers Shao-Xiang Go and Yu Jiang also contributed to the study.

More information: Shao-Xiang Go et al, A Phase‐Change Memristive Reinforcement Learning for Rapidly Outperforming Champion Street‐Fighter Players, Advanced Intelligent Systems (2023). DOI: 10.1002/aisy.202300335
Provided by Singapore University of Technology and Design
 



OCTOBER 6, 2023

Researchers create a neural network for genomics that explains how it achieves accurate predictions

by New York University

An artist's rendition of a biochemical model inspired by an interpretable neural network. Credit: Elizabeth Speiser

A team of New York University computer scientists has created a neural network that can explain how it reaches its predictions. The work reveals what accounts for the functionality of neural networks—the engines that drive artificial intelligence and machine learning—thereby illuminating a process that has largely been concealed from users.


The breakthrough centers on a specific usage of neural networks that has become popular in recent years—tackling challenging biological questions. Among these are examinations of the intricacies of RNA splicing—the focal point of the study—which plays a role in transferring information from DNA to functional RNA and protein products.

"Many neural networks are black boxes—these algorithms cannot explain how they work, raising concerns about their trustworthiness and stifling progress into understanding the underlying biological processes of genome encoding," says Oded Regev, a computer science professor at NYU's Courant Institute of Mathematical Sciences and the senior author of the paper, which was published in the Proceedings of the National Academy of Sciences.

"By harnessing a new approach that improves both the quantity and the quality of the data for machine-learning training, we designed an interpretable neural network that can accurately predict complex outcomes and explain how it arrives at its predictions."

Regev and the paper's other authors, Susan Liao, a faculty fellow at the Courant Institute, and Mukund Sudarshan, a Courant doctoral student at the time of the study, created a neural network based on what is already known about RNA splicing.

Specifically, they developed a model—the data-driven equivalent of a high-powered microscope—that allows scientists to trace and quantify the RNA splicing process, from input sequence to output splicing prediction.

"Using an 'interpretable-by-design' approach, we've developed a neural network model that provides insights into RNA splicing—a fundamental process in the transfer of genomic information," notes Regev. "Our model revealed that a small, hairpin-like structure in RNA can decrease splicing."

The researchers confirmed the insights their model provides through a series of experiments. The results matched the model's discovery: whenever the RNA molecule folded into a hairpin configuration, splicing was halted, and the moment the researchers disrupted this hairpin structure, splicing was restored.

More information: Susan E. Liao et al, Deciphering RNA splicing logic with interpretable machine learning, Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.2221165120



Provided by New York University
 



OCTOBER 5, 2023

New technique based on 18th-century mathematics shows simpler AI models don't need deep learning

by University of Jyväskylä
Layerwise pretraining from heads to inner layers. The most outer layer is trained first and its residual is then fed as training data for the next hidden layer until all layers have been sequentially pretrained. Credit: Neurocomputing (2023). DOI: 10.1016/j.neucom.2023.126520

Researchers from the University of Jyväskylä were able to simplify the most popular technique of artificial intelligence, deep learning, using 18th-century mathematics. They also found that classical training algorithms that date back 50 years work better than the more recently popular techniques. Their simpler approach advances green IT and is easier to use and understand.


The recent success of artificial intelligence is significantly based on the use of one core technique: deep learning. Deep learning refers to artificial intelligence techniques where networks with a large number of data processing layers are trained using massive datasets and a substantial amount of computational resources.

Deep learning enables computers to perform complex tasks such as analyzing and generating images and music, playing digitized games and, most recently in connection with ChatGPT and other generative AI techniques, acting as a natural language conversational agent that provides high-quality summaries of existing knowledge.

Six years ago, Professor Tommi Kärkkäinen and doctoral researcher Jan Hänninen conducted preliminary studies on data reduction. The results were surprising: if simple network structures are combined in a novel way, depth is not needed, and similar or even better results can be obtained with shallow models.

"The use of deep learning techniques is a complex and error-prone endeavor, and the resulting models are difficult to maintain and interpret," says Kärkkäinen. "Our new model in its shallow form is more expressive and can reliably reduce large datasets while maintaining all the necessary information in them."

The structure of the new AI technique dates back to 18th-century mathematics. Kärkkäinen and Hänninen also found that the traditional optimization methods from the 1970s work better in preparing their model compared to the 21st-century techniques used in deep learning.
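As a toy picture of the layerwise idea described in the figure caption, here each shallow stage is fitted to the residual left by the previous stages, using a classical quasi-Newton optimizer (L-BFGS-B via SciPy) in place of stochastic gradient descent. This is an illustrative sketch under those assumptions, not the additive autoencoder from the paper.

```python
# Toy layerwise fitting on residuals with a classical optimizer (illustrative only).
import numpy as np
from scipy.optimize import minimize

def fit_stage(X, residual, k, iters=200):
    """Fit one shallow rank-k encoder/decoder pair to explain the current residual."""
    d, m = X.shape[1], residual.shape[1]
    w0 = 0.01 * np.random.randn(d * k + k * m)

    def unpack(w):
        return w[:d * k].reshape(d, k), w[d * k:].reshape(k, m)

    def loss(w):
        W_enc, W_dec = unpack(w)
        return np.mean((X @ W_enc @ W_dec - residual) ** 2)

    # Classical quasi-Newton optimization instead of stochastic gradient descent.
    result = minimize(loss, w0, method="L-BFGS-B", options={"maxiter": iters})
    return unpack(result.x)

def layerwise_fit(X, ranks=(8, 4, 2)):
    """Train stages sequentially; each new stage targets what is still unexplained."""
    residual, stages = X.copy(), []
    for k in ranks:
        W_enc, W_dec = fit_stage(X, residual, k)
        stages.append((W_enc, W_dec))
        residual = residual - X @ W_enc @ W_dec   # pass the leftover on
    return stages
```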

"Our results ensure that the use of neural networks in various applications is easier and more reliable than before," suggests Hänninen. The study is published in the journal Neurocomputing.

Simpler models lead to greener and more ethical AI

Artificial intelligence has an increasingly significant role in modern technologies and, therefore, it is more and more important to understand how AI does what it does.

"The more transparent and simpler AI is, the easier it is to consider its ethical use," says Kärkkäinen. "For instance, in medical applications deep learning techniques are so complex that their straightforward use can jeopardize patient safety due to unexpected, hidden behavior."

The researchers note that simpler models can help develop green IT and are more environmentally friendly because they save computational resources and use significantly less energy.

The results, which challenge common beliefs and currently popular perceptions about deep learning techniques, were difficult to get published.

"Deep learning has such a prominent role in research, development, and the AI business that, even if science always progresses and reflects the latest evidence, the community itself may have resistance to change."

"We are very interested to see how these results will be received in the scientific and business community," says Kärkkäinen. "Our new AI has a range of applications in our own research, from nanotechnology for better materials in the sustainable economy to improving digital learning environments and increasing the reliability and transparency of medical and well-being technology."

More information: Tommi Kärkkäinen et al, Additive autoencoder for dimension estimation, Neurocomputing (2023). DOI: 10.1016/j.neucom.2023.126520

Provided by University of Jyväskylä
 



OCTOBER 6, 2023

Two experiments make a case for using deepfakes in training videos

by University of Bath
FakeForward wall squats (study participant practises her wall squats. From L to R: user; someone with better skills, FakeForward). Credit: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (2023). DOI: 10.1145/3544548.3581100

Watching a training video featuring a deepfake version of yourself, as opposed to a clip featuring somebody else, makes learning faster, easier and more fun, according to new research led by the REVEAL research center at the University of Bath.


This finding was seen over two separate experiments, one exploring fitness training and the other involving public speaking. The work is published in the journal Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.

A deepfake is a video where a person's face or body has been digitally altered so they appear to be someone else. This technology receives highly negative press due to its potential to spread political misinformation and to maliciously develop pornographic content that superimposes the face of one person on the body of another. It also poses an existential threat to professionals in the creative industries.

Dr. Christof Lutteroth and Dr. Christopher Clarke, both from the Department of Computer Science at Bath and co-authors of the new study, say their findings present two positive use cases for deepfakes and hope they will catalyze more research into ways the technology can be applied to do good in the world.

"Deepfakes are used a lot for nefarious purposes, but our research suggests that FakeForward (the name used by the researchers to describe the use of deepfake to teach a new skill) is an unexplored way of applying the technology so it adds value to people's lives by supporting and improving their performances," said Dr. Clarke.

Dr. Lutteroth added, "From this study, it's clear that deepfake has the potential to be really exciting for people. By following a tutorial where they act as their own tutor, people can immediately get better at a task—it's like magic."

More reps, greater enjoyment

For the fitness experiment, study participants were asked to watch a training video featuring a deepfake of their own face pasted over the body of a more advanced exerciser.

The researchers chose six exercises (planks, squats, wall squats, sit-ups, squat jumps and press-ups), each targeting a different muscle group and requiring different types of movement.

For each exercise, study participants first watched a video tutorial in which a model demonstrated the exercise, and then had a go at repeating the exercise themselves. The model was chosen both to resemble the participant and to outperform them, though at a skill level that was still attainable for the test subject.

The process of watching the video and mimicking the exercise was also performed using a deepfake instructor, where the participant's own face was superimposed on a model's body.

For both conditions, the researchers measured the number of repetitions, or the time participants were able to hold an exercise.


Credit: University of Bath

For all exercises, regardless of the order in which the videos were watched, participants performed better after watching the video of "themselves," compared to watching a video showing someone else.

"Deepfake was a really powerful tool," said Dr. Lutteroth. "Immediately people could do more press-ups or whatever it was they were being asked to do. Most also marked themselves as doing the exercise better than they did with the non-deepfake tutorial, and enjoyed it more."

Public speaking

The second FakeForward study by the same team found that deepfake can significantly boost a person's skills as a public speaker.

When the face of a proficient public speaker was replaced with a user's, learning was significantly amplified, with both confidence and perceived competence in public speaking growing after watching the FakeForward video.

Many participants felt moved and inspired by the FakeForward videos, saying things such as, "it gives me a lot of strength," "the deepfake video makes me feel that speaking is actually not that scary" and "when I saw myself standing there and speaking, I kinda felt proud of myself."

Ethical concerns​

In principle it's already possible for individuals to create FakeForward videos through open-source applications such as Roop and Deep Face Lab, though in practice a degree of technical expertise is required.

To guard against potential misuse, the FakeForward research team has developed an ethical protocol to guide the development of 'selfie' deepfake videos.

"For this technology to be applied ethically, people should only create self-models of themselves, because the underpinning concept is that these are useful for private consumption," said Dr. Clarke.

Dr. Lutteroth added, "Just as deepfake can be used to improve 'good' activities, it can also be misused to amplify 'bad' activities—for instance, it can teach people to be more racist, more sexist and ruder. For example, watching a video of what appears to be you saying terrible things can influence you more than watching someone else saying these things."

He added, "Clearly, we need to ensure users don't learn negative or harmful behaviors from FakeForward. The obstacles are considerable but not insurmountable."

More information: Christopher Clarke et al, FakeForward: Using Deepfake Technology for Feedforward Learning, Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (2023). DOI: 10.1145/3544548.3581100

Provided by University of Bath
 