bnew · Veteran · Joined: Nov 1, 2015 · Messages: 56,112 · Reputation: 8,239 · Daps: 157,807

{continued}

Here is the AI-powered completion to the above:

### Section: 20.4 Legal Compliance in Innovation

Innovation, while being a catalyst for growth and development, must be conducted within the confines of the law. This is where legal compliance comes into play. Legal compliance is the process or procedure to ensure that an organization follows relevant laws, regulations, and business rules. In the context of innovation, legal compliance is crucial to ensure that the innovative process and its outcomes do not infringe upon any laws or regulations.
#### Subsection: 20.4c Implementing Legal Compliance in Innovation

Implementing legal compliance in innovation involves a series of steps that ensure the innovative process and its outcomes are in line with the relevant laws and regulations. This involves understanding the legal landscape, identifying potential legal hurdles, and taking proactive steps to address them.

The first step in implementing legal compliance in innovation is understanding the legal landscape. This involves gaining a comprehensive understanding of the laws and regulations that are relevant to the innovative process. This could include laws related to intellectual property rights, regulatory compliance, and enterprise law, among others.

The second step is identifying potential legal hurdles. This involves identifying areas where the innovative process could potentially infringe upon laws or regulations. For example, if the innovation involves the development of a new product, it is crucial to ensure that the product does not infringe upon any existing patents.

The third step is taking proactive steps to address potential legal hurdles. This could involve seeking legal advice, conducting a thorough legal review of the innovation, and implementing measures to ensure legal compliance. For example, if the innovation involves the use of a certain technology, it may be necessary to obtain a license for its use.

In addition to these steps, it is also crucial to ensure legal consistency: the property that an organization's policies are free of contradictions with the law. In the context of innovation, legal consistency ensures that the innovative process and its outcomes are in line with the organization's policies and do not contradict any laws or regulations.

Implementing legal compliance in innovation is not just about avoiding legal disputes or penalties. It is also about leveraging the law as a tool to protect and enhance the value of the innovation. By ensuring legal compliance, organizations can protect their innovations from potential legal disputes, enhance their value by securing intellectual property rights, and ensure their long-term sustainability.

In conclusion, legal compliance plays a crucial role in the process of innovation. By understanding the legal landscape, identifying potential legal hurdles, and taking proactive steps to address them, organizations can ensure that their innovations are legally compliant and sustainable.

This demonstration, along with the linked textbooks above, shows that the reality of AI-written textbooks is not far off. This result has ramifications that extend far beyond the interesting research result originally posed by Microsoft. It is worth stopping for a moment to reflect on these potential impacts.

AI and Education

LLMs, like GPT-4 and its predecessors, have demonstrated an uncanny ability to generate human-like text, answer questions, and even assist in complex problem-solving. In fact, roughly half of this article's content was AI-generated (with human input and guidance). LLMs' vast knowledge base, derived from extensive training on diverse datasets, allows them to provide insights on a wide range of topics. This capability is particularly beneficial in an educational context. Students, regardless of their learning pace or style, can interact with these models to get personalized explanations, delve deeper into topics of interest, or even get assistance with homework. In essence, LLMs can serve as on-demand tutors, democratizing access to quality education resources.

Recent research breakthroughs have showcased the potential of AI-generated content that rivals the quality of human-authored materials. These synthetic textbooks, curated and created by advanced AI models, can be tailored to fit specific curricula, student demographics, or even individual learner profiles. The dynamic nature of these textbooks allows for real-time updates, ensuring that the content remains current and relevant. Furthermore, they can be enriched with interactive elements, multimedia, and adaptive learning pathways, making learning more engaging and personalized. Even if textbooks begin to play a less pivotal role in direct human education, they are likely to remain as inputs to LLMs for quite some time.

However, the adoption of synthetic textbooks and LLM-assisted learning is not without challenges. Concerns about the accuracy, bias, and ethical implications of AI-generated content need to be addressed. Ensuring that these tools enhance rather than inhibit critical thinking and creativity is also paramount. The human touch, the empathy, passion, and intuition that educators bring to the classroom, remains irreplaceable. AI tools should be seen as complements, not replacements, to the traditional educational experience.

Where are we in the replication attempts?

It appears that we are able to produce incredibly high-quality textbooks, an effort we are continuing to pursue to understand more deeply. Moreover, our dataset has grown to over 1 billion unique and differentiated tokens. We are now running ablation pre-training studies to better understand how different datasets impact LLM learning. We are working on fine-tuning existing open-source models to increase our data quality before scaling out to 20B tokens. Further, we are looking for human readers to critique and give feedback on the current state of the textbooks.
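
As a rough illustration of what fine-tuning an existing open-source model on a synthetic textbook corpus can look like, here is a minimal sketch using the Hugging Face `transformers` and `datasets` libraries. The model name, file path, and hyperparameters are placeholders for illustration, not this project's actual configuration.

```python
# Minimal sketch: causal-LM fine-tuning on a corpus of synthetic textbook text.
# Assumptions: the corpus lives in "textbooks.txt" (one document per line) and a
# small open-source base model such as "gpt2" stands in for the real one.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder open-source base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load and tokenize the synthetic textbook corpus.
raw = load_dataset("text", data_files={"train": "textbooks.txt"})
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
args = TrainingArguments(
    output_dir="textbook-finetune",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=2e-5,
)
Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()
```

The same skeleton scales from a quick local experiment to the ablation runs mentioned above by swapping the base model, dataset files, and batch sizes.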

The target is a complete replication of the phi-1.5 work, and likely scaling further. One such goal might be to see whether it is possible to create a more competitive 7-billion-parameter model.

Conclusion

In the rapidly advancing world of artificial intelligence, the potential of AI-authored textbooks promises a revolution in education. As our research and replication attempts indicate, we are on the brink of harnessing AI to produce high-quality educational content that could democratize learning across the globe. While the results are promising, a balanced approach is necessary. Embracing AI's potential should not overshadow the intrinsic value of human touch in education. As we venture into this new frontier, collaboration between humans and AI will be the key, ensuring that the resultant content is accurate, unbiased, and tailored to the diverse needs of learners worldwide.


If this interests you, we ask you to join the open source collaboration around this work. Please reach out to me or join the Discord community to get started.

Acknowledgement

I would like to thank runpod.io for their gracious support in the form of computational resources. Similarly, I owe thanks to Google Cloud Compute and OpenAI for their allocated credits.
 

bnew · Veteran · Joined: Nov 1, 2015 · Messages: 56,112 · Reputation: 8,239 · Daps: 157,807


If you are GPU-Poor, then you should probably be paying attention to small LLMs & synthetic data.

Below is a list of the most interesting papers relating to this that I have found from the last year (ordered by release date). What am I missing?

Self-Instruct [arxiv.org/abs/2212.10560] - utilizes self-generated instructions for fine-tuning, achieving a 33% improvement and outperforming existing datasets, highlighting the potential of annotation-free methods in enhancing language models (a rough sketch of this generate-then-finetune idea follows the list).

Unnatural Instructions [arxiv.org/abs/2212.09689] - leverages a large, virtually human labor-free dataset of creative instructions, showcasing model-generated data as a cost-effective and competitive alternative to traditional training methods.

TinyStories [arxiv.org/abs/2305.07759] - introduces a synthetic dataset of short stories, enabling the development of smaller, yet coherent language models, offering insights into language capabilities in low-resource or specialized domains.

Orca [arxiv.org/abs/2306.02707] - learns from the complex explanation traces of GPT-4, surpassing conventional models in zero-shot reasoning benchmarks and indicating that learning from step-by-step explanations is a promising direction for improving model capabilities and skills.

Distilling Step-by-Step [arxiv.org/abs/2305.02301] - efficiently trains smaller models using LLM rationales and less data, with notable success such as a finetuned 770M T5 model outperforming a 540B PaLM model, emphasizing the efficacy of the approach.

Code Llama [arxiv.org/pdf/2308.12950.pdf] - releases a family of high-performing language models for code, outperforming other models on benchmarks, with observed improvements even from a small set of high-quality coding data, underlining the potential of specialized models. Further, it demonstrates the value of unnatural instructions on an unreleased model.

Textbooks Are All You Need II [arxiv.org/abs/2309.05463] - introduces a model trained almost entirely on synthetic data that exhibits performance comparable to models 5x larger, emphasizing the surprising capabilities of smaller LLMs developed with synthetic data.
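
If you want to see the shared recipe behind several of these papers concretely, here is a rough, hypothetical sketch of a Self-Instruct-style generate-then-finetune data loop: prompt a strong model with a few seed tasks, parse and filter its output, and accumulate a new instruction dataset. The `generate()` stub, seed tasks, and filters are placeholders, not any paper's actual pipeline.

```python
# Hypothetical sketch of self-instruct-style synthetic data generation.
# `generate(prompt)` is a stand-in for a call to whatever strong LLM you have;
# it returns a canned string here so the sketch runs end to end.
import json
import random

seed_tasks = [
    {"instruction": "Summarize the following paragraph.", "response": "..."},
    {"instruction": "Write a Python function that reverses a string.", "response": "..."},
]

def generate(prompt: str) -> str:
    # Replace with a real model/API call.
    return ("Instruction: Explain what synthetic data is in one sentence.\n"
            "Response: Synthetic data is produced by a model rather than "
            "collected from the real world.")

def make_prompt(examples):
    shown = "\n\n".join(
        f"Instruction: {e['instruction']}\nResponse: {e['response']}" for e in examples)
    return shown + "\n\nWrite one new, different instruction and its response in the same format."

synthetic = []
for _ in range(1000):  # toy target dataset size
    pool = seed_tasks + synthetic
    sample = random.sample(pool, k=min(3, len(pool)))
    text = generate(make_prompt(sample))
    # Crude parse + dedup; real pipelines add overlap, length, and quality filters.
    if "Instruction:" in text and "Response:" in text:
        instr = text.split("Instruction:", 1)[1].split("Response:", 1)[0].strip()
        resp = text.split("Response:", 1)[1].strip()
        if all(instr != e["instruction"] for e in pool):
            synthetic.append({"instruction": instr, "response": resp})

with open("synthetic_instructions.jsonl", "w") as f:
    for ex in synthetic:
        f.write(json.dumps(ex) + "\n")
```

The resulting JSONL file then serves as fine-tuning data for a smaller model, which is roughly what Self-Instruct, Unnatural Instructions, and Orca each do with different teachers and filtering heuristics.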
 

bnew · Veteran · Joined: Nov 1, 2015 · Messages: 56,112 · Reputation: 8,239 · Daps: 157,807

OCTOBER 5, 2023

Researchers train AI with reinforcement learning to defeat champion Street Fighter players

by Singapore University of Technology and Design


Graphic of the neural network (left) and snapshot of the video game (right). Credit: SUTD
Researchers from the Singapore University of Technology and Design (SUTD) have successfully applied reinforcement learning to a video game problem. The research team created new software for designing complicated movements, based on an approach that has proven effective in board games such as chess and Go. In a single test, the movements produced by the new approach appeared superior to those of top human players.

These findings could impact robotics and automation, ushering in a new era of movement design. The team's article in Advanced Intelligent Systems is titled "A Phase-Change Memristive Reinforcement Learning for Rapidly Outperforming Champion Street Fighter Players."

"Our findings demonstrate that reinforcement learning can do more than just master simple board games. The program excelled in creating more complex movements when trained to address long-standing challenges in movement science," said principal investigator Desmond Loke, Associate Professor, SUTD.

"If this method is applied to the right research problems," he says, "it could accelerate progress in a variety of scientific fields."

The study marks a watershed moment in the use of artificial intelligence to advance movement science studies. The possible applications are numerous, ranging from the development of more autonomous automobiles to new collaborative robots and aerial drones.

Reinforcement learning is a kind of machine learning in which a computer program learns to make decisions by experimenting with various actions and getting feedback. For example, the algorithm can learn to play chess by testing millions of possible moves that result in success or defeat on the board. The program is intended to help algorithms learn from their experiences and improve their decision-making skills over time.
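
To make that trial-and-feedback loop concrete, here is a toy tabular Q-learning sketch in Python: an agent on a short one-dimensional track learns, purely from rewards, to walk toward a goal. This illustrates only the generic reinforcement-learning pattern; it is not the phase-change memristive, decay-based method used in the SUTD study.

```python
# Toy tabular Q-learning: an agent on a 1-D line learns to walk right to a goal.
import random

N_STATES = 6          # positions 0..5, goal at position 5
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action_index]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Learned policy (0 = left, 1 = right):",
      [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)])
```

After training, the greedy policy steps right from every position, which is the 1-D analogue of the program learning which game moves lead to wins.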

The research team provided the computer with millions of initial motions to create a reinforcement learning program for movement design. The program then made repeated attempts to improve each move, randomly perturbing it toward a specific objective. The computer tweaked character movements or adjusted its strategy until it learned to make moves that overcame the built-in AI.


Human-level performance in Street Fighter game using phase-change memory reinforcement learning. Credit: SUTD

Associate Prof Loke added, "Our approach is unique because we use reinforcement learning to solve the problem of creating movements that outperform those of top human players. This was simply not possible using prior approaches, and it has the potential to transform the types of moves we can create."


As part of their research, the scientists created motions to compete with various in-built AIs and confirmed that the moves could overcome different in-built AI opponents.

"Not only is this approach effective, but it is also energy efficient." The phase-change memory-based system, for example, was able to make motions with a hardware energy consumption of about 26 fJ, which is 141 times less than that of existing GPU systems. "Its potential for making ultra-low-hardware-energy movements has yet to be fully explored," stated Associate Prof Loke.

The team focused on creating new motions capable of defeating top human players in a short amount of time. This required the use of decay-based algorithms to create the motions.

Algorithm testing revealed that new AI-designed motions were effective. The researchers noted numerous good qualities as a measure of how successful the design system had become, such as reasonable game etiquette, management of inaccurate information, ability to attain specific game states, and the short times used to defeat opponents.

In other words, the program exhibited exceptional physical and mental qualities. This is referred to as effective movement design. For example, motions were more successful at overcoming opponents because the decay-based technique used for training the neural networks takes fewer training steps than conventional decay methods.

The researchers envision a future in which this strategy will allow them and others to build movements, skills, and other actions that were not before possible.

"The more effective the technology becomes, the more potential applications it opens up, including the continued progression of competitive tasks that computers can facilitate for the best players, such as in Poker, Starcraft, and Jeopardy," Associate Prof Loke said. "We may also see high-level realistic competition for training professional players, discovering new tactics, and making video games more interesting."

SUTD researchers Shao-Xiang Go and Yu Jiang also contributed to the study.

More information: Shao-Xiang Go et al, A Phase‐Change Memristive Reinforcement Learning for Rapidly Outperforming Champion Street‐Fighter Players, Advanced Intelligent Systems (2023). DOI: 10.1002/aisy.202300335
Provided by Singapore University of Technology and Design
 

bnew · Veteran · Joined: Nov 1, 2015 · Messages: 56,112 · Reputation: 8,239 · Daps: 157,807

OCTOBER 5, 2023

New technique based on 18th-century mathematics shows simpler AI models don't need deep learning

by University of Jyväskylä
Layerwise pretraining from heads to inner layers. The outermost layer is trained first and its residual is then fed as training data for the next hidden layer, until all layers have been sequentially pretrained. Credit: Neurocomputing (2023). DOI: 10.1016/j.neucom.2023.126520

Researchers from the University of Jyväskylä were able to simplify the most popular technique of artificial intelligence, deep learning, using 18th-century mathematics. They also found that classical training algorithms that date back 50 years work better than the more recently popular techniques. Their simpler approach advances green IT and is easier to use and understand.


The recent success of artificial intelligence is significantly based on the use of one core technique: deep learning. Deep learning refers to artificial intelligence techniques where networks with a large number of data processing layers are trained using massive datasets and a substantial amount of computational resources.

Deep learning enables computers to perform complex tasks such as analyzing and generating images and music, playing digitized games and, most recently in connection with ChatGPT and other generative AI techniques, acting as a natural language conversational agent that provides high-quality summaries of existing knowledge.

Six years ago, Professor Tommi Kärkkäinen and doctoral researcher Jan Hänninen conducted preliminary studies on data reduction. The results were surprising: if one combines simple network structures in a novel way, depth is not needed. Similar or even better results can be obtained with shallow models.

"The use of deep learning techniques is a complex and error-prone endeavor, and the resulting models are difficult to maintain and interpret," says Kärkkäinen. "Our new model in its shallow form is more expressive and can reliably reduce large datasets while maintaining all the necessary information in them."

The structure of the new AI technique dates back to 18th-century mathematics. Kärkkäinen and Hänninen also found that the traditional optimization methods from the 1970s work better in preparing their model compared to the 21st-century techniques used in deep learning.

"Our results ensure that the use of neural networks in various applications is easier and more reliable than before," suggests Hänninen. The study is published in the journal Neurocomputing.

Simpler models lead to greener and more ethical AI

Artificial intelligence has an increasingly significant role in modern technologies and, therefore, it is more and more important to understand how AI does what it does.

"The more transparent and simpler AI is, the easier it is to consider its ethical use," says Kärkkäinen. "For instance, in medical applications deep learning techniques are so complex that their straightforward use can jeopardize patient safety due to unexpected, hidden behavior."

The researchers note that simpler models can help develop green IT and are more environmentally friendly because they save computational resources and use significantly less energy.

The results, which challenge common beliefs and currently popular perceptions about deep learning techniques, were difficult to get published.

"Deep learning has such a prominent role in research, development, and the AI business that, even if science always progresses and reflects the latest evidence, the community itself may have resistance to change."

"We are very interested to see how these results will be received in the scientific and business community," says Kärkkäinen. "Our new AI has a range of applications in our own research, from nanotechnology for better materials in the sustainable economy to improving digital learning environments and increasing the reliability and transparency of medical and well-being technology."

More information: Tommi Kärkkäinen et al, Additive autoencoder for dimension estimation, Neurocomputing (2023). DOI: 10.1016/j.neucom.2023.126520

Provided by University of Jyväskylä
 

bnew · Veteran · Joined: Nov 1, 2015 · Messages: 56,112 · Reputation: 8,239 · Daps: 157,807


OCTOBER 6, 2023

Two experiments make a case for using deepfakes in training videos

by University of Bath
FakeForward wall squats (a study participant practises her wall squats; from left to right: user, someone with better skills, FakeForward). Credit: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (2023). DOI: 10.1145/3544548.3581100

Watching a training video featuring a deepfake version of yourself, as opposed to a clip featuring somebody else, makes learning faster, easier and more fun, according to new research led by the REVEAL research center at the University of Bath.


This finding was seen over two separate experiments, one exploring fitness training and the other involving public speaking. The work is published in the journal Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.

A deepfake is a video where a person's face or body has been digitally altered so they appear to be someone else. This technology receives highly negative press due to its potential to spread political misinformation and to maliciously develop pornographic content that superimposes the face of one person on the body of another. It also poses an existential threat to professionals in the creative industries.

Dr. Christof Lutteroth and Dr. Christopher Clarke, both from the Department of Computer Science at Bath and co-authors of the new study, say their findings present two positive use cases for deepfakes, and they hope the results will catalyze more research into ways deepfakes can be applied to do good in the world.

"Deepfakes are used a lot for nefarious purposes, but our research suggests that FakeForward (the name used by the researchers to describe the use of deepfake to teach a new skill) is an unexplored way of applying the technology so it adds value to people's lives by supporting and improving their performances," said Dr. Clarke.

Dr. Lutteroth added, "From this study, it's clear that deepfake has the potential to be really exciting for people. By following a tutorial where they act as their own tutor, people can immediately get better at a task—it's like magic."

More reps, greater enjoyment

For the fitness experiment, study participants were asked to watch a training video featuring a deepfake of their own face pasted over the body of a more advanced exerciser.

The researchers chose six exercises (planks, squats, wall squats, sit-ups, squat jumps and press-ups), each targeting a different muscle group and requiring different types of movement.

For each exercise, study participants first watched a video tutorial where a model demonstrated the exercise, and then had a go at repeating the exercise themselves. The model was chosen both to resemble the participant and to outperform them, though at a skill level attainable for the test subject.

The process of watching the video and mimicking the exercise was also performed using a deepfake instructor, where the participant's own face was superimposed on a model's body.

For both conditions, the researchers measured the number of repetitions, or the time participants were able to hold an exercise.


Credit: University of Bath

For all exercises, regardless of the order in which the videos were watched, participants performed better after watching the video of "themselves," compared to watching a video showing someone else.

"Deepfake was a really powerful tool," said Dr. Lutteroth. "Immediately people could do more press-ups or whatever it was they were being asked to do. Most also marked themselves as doing the exercise better than they did with the non-deepfake tutorial, and enjoyed it more."

Public speaking

The second FakeForward study by the same team found that deepfake can significantly boost a person's skills as a public speaker.

When the face of a proficient public speaker was replaced with a user's, learning was significantly amplified, with both confidence and perceived competence in public speaking growing after watching the FakeForward video.

Many participants felt moved and inspired by the FakeForward videos, saying things such as, "it gives me a lot of strength," "the deepfake video makes me feel that speaking is actually not that scary" and "when I saw myself standing there and speaking, I kinda felt proud of myself."

Ethical concerns

In principle, it's already possible for individuals to create FakeForward videos through open-source applications such as Roop and Deep Face Lab, though in practice a degree of technical expertise is required.

To guard against potential misuse, the FakeForward research team has developed an ethical protocol to guide the development of 'selfie' deepfake videos.

"For this technology to be applied ethically, people should only create self-models of themselves, because the underpinning concept is that these are useful for private consumption," said Dr. Clarke.

Dr. Lutteroth added, "Just as deepfake can be used to improve 'good' activities, it can also be misused to amplify 'bad' activities—for instance, it can teach people to be more racist, more sexist and ruder. For example, watching a video of what appears to be you saying terrible things can influence you more than watching someone else saying these things."

He added, "Clearly, we need to ensure users don't learn negative or harmful behaviors from FakeForward. The obstacles are considerable but not insurmountable."

More information: Christopher Clarke et al, FakeForward: Using Deepfake Technology for Feedforward Learning, Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (2023). DOI: 10.1145/3544548.3581100

Provided by University of Bath
 

Jmare007 · pico pal q lee · Joined: Apr 30, 2012 · Messages: 44,373 · Reputation: 5,854 · Daps: 108,822 · Reppin: Chile

@bnew breh, is there any chance you put in spoilers all these copy/paste news or articles? every time this thread gets upped I wanna read what's new but there's so much text that at least on Chrome and Firefox the thread crashes and only shows up to a certain number of posts (I have 50 per page and it cuts at 30-35).
 

bnew · Veteran · Joined: Nov 1, 2015 · Messages: 56,112 · Reputation: 8,239 · Daps: 157,807

@bnew breh, is there any chance you put in spoilers all these copy/paste news or articles? every time this thread gets upped I wanna read what's new but there's so much text that at least on Chrome and Firefox the thread crashes and only shows up to a certain number of posts (I have 50 per page and it cuts at 30-35).

I use chromium/chrome to browse thecoli without issue. the coli limits each page to 15 posts.

try disabling javascript because text shouldn't be the reason your browser is crashing.
 

Rev Leon Lonnie Love · damned mine eyes, DAMNED mine eyes!! · Joined: Nov 11, 2017 · Messages: 21,955 · Reputation: 5,488 · Daps: 89,190

I use chromium/chrome to browse thecoli without issue. the coli limits each page to 15 posts.

try disabling javascript because text shouldn't be the reason your browser is crashing.
It's still too much text. We don't need to read the whole thing from here; an abstract or summary and a link to the full thing should suffice. Copying and pasting the whole thing creates visual noise in the thread, and one ends up missing other posters' replies while scrolling through this mountain of article text.
 

Jmare007 · pico pal q lee · Joined: Apr 30, 2012 · Messages: 44,373 · Reputation: 5,854 · Daps: 108,822 · Reppin: Chile

I use chromium/chrome to browse thecoli without issue. the coli limits each page to 15 posts.

try disabling javascript because text shouldn't be the reason your browser is crashing.
I'm not changing the setup of this board just for this thread. Oh well, at least I tried :francis:
 