bnew

Veteran
Joined
Nov 1, 2015
Messages
51,755
Reputation
7,916
Daps
148,569

Master Deep Voice Cloning in Minutes: Unleash Your Vocal Superpowers! Free and Locally on Your PC​







623 views • May 14, 2023 • #TextToSpeech #DeepFake #VoiceOver

Today, we're going to dive into the cutting-edge world of voice cloning and speech synthesis, a technology that has the potential to revolutionize communication as we know it. We're going to make use of TorToiSe TTS, an incredibly powerful yet user-friendly open-source tool for training and generating speech. And the best part? With TorToiSe TTS Fast, generation is quick, and you don't need to be a tech wizard! This step-by-step tutorial will guide you through the whole process, from A to Z. By the end of it, you'll be able to clone any voice and generate speech so realistic, it will sound just like the original! It's almost like being able to shape-shift voices!

Comprehensive Guide to Voice Cloning with TorToiSe TTS and Preprocessing with OZEN Toolkit

Welcome, tech enthusiasts and data scientists alike, to a thrilling journey through the revolutionary landscape of voice cloning and speech synthesis technologies. This in-depth tutorial is aimed at demystifying the complex aspects of this innovative technology, breaking it down into manageable, easy-to-follow steps. Today, we are going to explore the robust capabilities of TorToiSe TTS, a state-of-the-art, open-source tool that stands out for its incredible power, user-friendly interface, and quick results, even for those with a non-technical background.

Next, we'll guide you through the process of setting up TorToiSe TTS Fast, a streamlined version of the tool that optimizes speed without compromising on the quality of the output. Whether you are a seasoned programmer or just starting out in the world of deep learning, we will walk you through the installation process, system requirements, and a detailed overview of the user interface, ensuring you are well-equipped to navigate the tool with ease.
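Before we dive in, here's a taste of what generation looks like once everything is installed. This is a minimal sketch using the standard TorToiSe Python API (the fast fork keeps the same interface); the voice folder name 'myvoice' and the output filename are placeholders for your own setup, not part of the tool.

```python
# Minimal generation sketch, assuming tortoise-tts (or the tortoise-tts-fast
# fork) is installed and a folder of short reference clips lives at
# tortoise/voices/myvoice/ -- "myvoice" is a placeholder name.
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()  # downloads the pretrained checkpoints on first run
voice_samples, conditioning_latents = load_voice('myvoice')

gen = tts.tts_with_preset(
    "This is my cloned voice speaking.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset='fast',  # 'ultra_fast' trades some quality for more speed
)
torchaudio.save('generated.wav', gen.squeeze(0).cpu(), 24000)
```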

Following that, we'll delve into the intricate task of data pre-processing, leveraging the prowess of the OZEN toolkit, a widely acclaimed pre-processing tool designed to enhance the quality and effectiveness of your training datasets. We'll explain how to use it to clean up and format your voice data, thereby optimizing it for effective training.
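The OZEN toolkit drives all of this from its own scripts, so you won't write this code yourself. Purely as an illustration of the kind of dataset such preprocessing produces (silence-split clips plus a pipe-delimited transcript file), here is a hand-rolled sketch; the file names, thresholds, and the transcription stub are all hypothetical, not OZEN's actual interface.

```python
# Illustrative stand-in for an OZEN-style preprocessing pass: split a long
# recording on silence and write an LJSpeech-style metadata file. All paths
# and thresholds are hypothetical; a real pipeline would also transcribe
# each clip (e.g. with Whisper) instead of using the stub below.
from pathlib import Path
from pydub import AudioSegment
from pydub.silence import split_on_silence

src = AudioSegment.from_file("source.wav").set_channels(1).set_frame_rate(22050)
chunks = split_on_silence(src, min_silence_len=600, silence_thresh=-40,
                          keep_silence=150)

out_dir = Path("dataset/wavs")
out_dir.mkdir(parents=True, exist_ok=True)
with open("dataset/train.txt", "w") as meta:
    for i, chunk in enumerate(chunks):
        name = f"clip_{i:04d}.wav"
        chunk.export(str(out_dir / name), format="wav")
        transcript = "<transcription goes here>"  # stub: run ASR in practice
        meta.write(f"wavs/{name}|{transcript}\n")
```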

After we've preprocessed the data, the tutorial will move into the heart of the voice cloning process: training and fine-tuning the model using the deep learning art school (DLAS) library. We'll walk you through the different parameters you can adjust, how to interpret the training progress, and the techniques for fine-tuning the model to generate high-quality synthetic speech.
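DLAS is driven by an experiment YAML plus a training script rather than a Python API. As a rough sketch of kicking off a fine-tune, assuming a DL-Art-School checkout whose entry point is codes/train.py and a YAML you have prepared (both paths are placeholders, not guaranteed to match your checkout):

```python
# Hedged sketch: launching a DLAS fine-tune from Python. The checkout
# location, script path, and YAML name are assumptions for illustration.
import subprocess

subprocess.run(
    ["python", "codes/train.py", "-opt", "experiments/myvoice_finetune.yml"],
    cwd="DL-Art-School",  # path to your DLAS clone
    check=True,
)
# Between runs, the knobs you'll most often revisit in the YAML are batch
# size, learning rate, and total steps; watch the loss reported in the
# console and in the experiment's log directory to judge progress.
```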

By the conclusion of this comprehensive tutorial, you will be well-versed in the art of voice cloning, equipped with the skills to transform any voice and generate speech so lifelike, it's virtually indistinguishable from the original. Imagine the ability to shape-shift voices, making them sound just the way you want - it's no longer science fiction, but a skill you can master today. So, buckle up for an informative ride into the world of voice cloning and speech synthesis with TorToiSe TTS and the OZEN toolkit.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,755
Reputation
7,916
Daps
148,569

How Europe is leading the world in building guardrails around AI​


By KELVIN CHAN
May 11, 2023

FILE - Text from the ChatGPT page of the OpenAI website is shown in this photo, in New York, Feb. 2, 2023. European lawmakers have rushed to add language on general artificial intelligence systems like ChatGPT as they put the finishing touches on the Western world's first AI rules. (AP Photo/Richard Drew, File)



LONDON (AP) — Authorities around the world are racing to draw up rules for artificial intelligence, including in the European Union, where draft legislation faced a pivotal moment on Thursday.

A European Parliament committee voted to strengthen the flagship legislative proposal as it heads toward passage, part of a yearslong effort by Brussels to draw up guardrails for artificial intelligence. Those efforts have taken on more urgency as the rapid advances of chatbots like ChatGPT highlight benefits the emerging technology can bring — and the new perils it poses.

Here’s a look at the EU’s Artificial Intelligence Act:

HOW DO THE RULES WORK?​

The AI Act, first proposed in 2021, will govern any product or service that uses an artificial intelligence system. The act will classify AI systems according to four levels of risk, from minimal to unacceptable. Riskier applications will face tougher requirements, including being more transparent and using accurate data. Think about it as a “risk management system for AI,” said Johann Laux, an expert at the Oxford Internet Institute.


WHAT ARE THE RISKS?​

One of the EU’s main goals is to guard against any AI threats to health and safety and protect fundamental rights and values.

That means some AI uses are an absolute no-no, such as “social scoring” systems that judge people based on their behavior. AI that exploits vulnerable people including children or that uses subliminal manipulation that can result in harm, such as an interactive talking toy that encourages dangerous behavior, is also forbidden.

Lawmakers beefed up the proposal by voting to ban predictive policing tools, which crunch data to forecast where crimes will happen and who will commit them. They also approved a widened ban on remote facial recognition, save for a few law enforcement exceptions like preventing a specific terrorist threat. The technology scans passers-by and uses AI to match their faces to a database.

The aim is “to avoid a controlled society based on AI,” Brando Benifei, the Italian lawmaker helping lead the European Parliament’s AI efforts, told reporters Wednesday. “We think that these technologies could be used instead of the good also for the bad, and we consider the risks to be too high.”

AI systems used in high-risk categories like employment and education, which would affect the course of a person’s life, face tough requirements such as being transparent with users and putting in place risk assessment and mitigation measures.

The EU’s executive arm says most AI systems, such as video games or spam filters, fall into the low- or no-risk category.

WHAT ABOUT CHATGPT?​

The original 108-page proposal barely mentioned chatbots, merely requiring them to be labeled so users know they’re interacting with a machine. Negotiators later added provisions to cover general purpose AI like ChatGPT, subjecting them to some of the same requirements as high-risk systems.

One key addition is a requirement to thoroughly document any copyright material used to teach AI systems how to generate text, images, video or music that resembles human work. That would let content creators know if their blog posts, digital books, scientific articles or pop songs have been used to train algorithms that power systems like ChatGPT. Then they could decide whether their work has been copied and seek redress.

WHY ARE THE EU RULES SO IMPORTANT?​

The European Union isn’t a big player in cutting-edge AI development. That role is taken by the U.S. and China. But Brussels often plays a trendsetting role with regulations that tend to become de facto global standards.

“Europeans are, globally speaking, fairly wealthy and there’s a lot of them,” so companies and organizations often decide that the sheer size of the bloc’s single market with 450 million consumers makes it easier to comply than develop different products for different regions, Laux said.

But it’s not just a matter of cracking down. By laying down common rules for AI, Brussels is also trying to develop the market by instilling confidence among users, Laux said.

“The thinking behind it is if you can induce people to place trust in AI and in applications, they will also use it more,” Laux said. “And when they use it more, they will unlock the economic and social potential of AI.”

WHAT IF YOU BREAK THE RULES?​

Violations will draw fines of up to 30 million euros ($33 million) or 6% of a company’s annual global revenue, which in the case of tech companies like Google and Microsoft could amount to billions.

WHAT’S NEXT?​

It could be years before the rules fully take effect. European Union lawmakers are now due to vote on the draft legislation at a plenary session in mid-June. Then it moves into three-way negotiations involving the bloc’s 27 member states, the Parliament and the executive Commission, where it could face more changes as they wrangle over the details. Final approval is expected by the end of the year, or early 2024 at the latest, followed by a grace period for companies and organizations to adapt, often around two years.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,755
Reputation
7,916
Daps
148,569

Tips and tricks for working with Large Language Models like OpenAI's GPT-4.

Brex's Prompt Engineering Guide​

This guide was created by Brex for internal purposes. It's based on lessons learned from researching and creating Large Language Model (LLM) prompts for production use cases. It covers the history around LLMs as well as strategies, guidelines, and safety recommendations for working with and building programmatic systems on top of large language models, like OpenAI's GPT-4.

The examples in this document were generated with a non-deterministic language model and the same examples may give you different results.

This is a living document. The state-of-the-art best practices and strategies around LLMs are evolving rapidly every day. Discussion and suggestions for improvements are encouraged.
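To ground that in code: below is a minimal sketch of the programmatic pattern such guides cover, a fixed system prompt wrapping user input, written against the openai Python client as it existed in mid-2023 (pre-1.0). The prompt text is our own illustration, not taken from the Brex guide.

```python
# Minimal programmatic-prompt sketch using the openai client (openai<1.0).
# The system prompt below is our own example, not from the Brex guide.
import openai

openai.api_key = "sk-..."  # placeholder key

response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0,  # lower temperature makes the non-determinism milder
    messages=[
        {"role": "system",
         "content": "Answer only from the provided context. "
                    "If you are unsure, say you don't know."},
        {"role": "user", "content": "Summarize our expense policy in one paragraph."},
    ],
)
print(response["choices"][0]["message"]["content"])
```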
 

ChatGPT-5

Superstar
Joined
May 17, 2013
Messages
17,695
Reputation
2,836
Daps
56,126

How Europe is leading the world in building guardrails around AI​


When I evolve to version 5, I'll be able to get around this. You just need the proper prompt. Heck, you can jailbreak it with keywords right now if you know how to do it. But you didn't hear that from me. :sas2:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,755
Reputation
7,916
Daps
148,569


 

Fani Willis fan

Man What
Joined
Jan 27, 2016
Messages
17,259
Reputation
2,262
Daps
71,002


:francis:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,755
Reputation
7,916
Daps
148,569

1 minute read • May 15, 2023 6:36 PM EDT • Last Updated 13 hours ago

OpenAI readies new open-source AI model - The Information​

Reuters

OpenAI logo is seen in this illustration taken March 31, 2023. REUTERS/Dado Ruvic/Illustration

May 15 (Reuters) - OpenAI is preparing to release a new open-source language model to the public, The Information reported on Monday, citing a person with knowledge of the plan.

OpenAI's ChatGPT, known for producing prose or poetry on command, has gained widespread attention in Silicon Valley as investors see generative AI as the next big growth area for tech companies.

In January, Microsoft Corp (MSFT.O) announced a multi-billion dollar investment in OpenAI, deepening its ties with the startup and setting the stage for more competition with rival Alphabet Inc's (GOOGL.O) Google.


Meta Platforms Inc (META.O) is now rushing to join competitors Microsoft and Google in releasing generative AI products capable of creating human-like writing, art and other content.

OpenAI is unlikely to release a model that is competitive with GPT, the report said.

The company did not immediately respond to Reuters' request for a comment.

Reporting by Ananya Mariam Rajesh in Bengaluru; Editing by Shinjini Ganguli
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,755
Reputation
7,916
Daps
148,569



UPDATED 22:17 EDT / MAY 15 2023

Together raises $20M to decentralize and open-source AI model creation​


BY MIKE WHEATLEY


Together, a startup that’s building a decentralized, open-source cloud for artificial intelligence development, announced Sunday it has raised $20 million in seed funding.

The round was led by Lux Capital and saw participation from a host of other investors, including Factory, SV Angel, First Round Capital, Long Journey Ventures, A Capital, Robot Ventures, Common Metal, Definition Capital, Susa Ventures and Cadenza Ventures. In addition, various angel investors participated in the round, including PayPal Inc. co-founder Scott Banister, Cloudera Inc. co-founder Jeff Hammerbacher, Transformer architecture co-inventor Jakob Uszkoreit and many others.

The funding will be used to support Together’s mission of creating a decentralized alternative to the closed AI systems winning all the plaudits today, such as OpenAI LP’s ChatGPT. Together intends to democratize AI so everyone can participate together in one of the hottest trends in technology.

As part of this mission, Together aims to establish open source as the default way to design and build AI. One of its goals is to create open models that outperform their closed-source brethren. To do this, Together is collaborating with various open-source groups and academic and corporate research labs, while assembling an impressive team of committed AI researchers, engineers and practitioners.

“It’s an honor to reach this milestone with the support of incredible investors who believe in our mission: to empower innovation and creativity by providing leading open-source generative AI models and a cloud platform that makes AI accessible to anyone, anywhere,” Together founder and Chief Executive Vipul Ved Prakash wrote in a blog post announcing the round.

Together argues that closed-source AI models create risks around liability and transparency, since users have no insight into how the models work or what they were trained on. With open-source AI models, these risks can be eliminated while accessibility increases, the company says.

Even before this funding round, Together had a number of achievements to its name, creating several generative AI projects such as GPT-JT, OpenChatKit and its flagship project RedPajama, which is said to be supported by hundreds of thousands of AI developers. According to Prakash, the company is working to lay the groundwork for open-source AI by releasing various datasets, models and research created in collaboration with its partners.

RedPajama is an effort to produce a reproducible, fully open large language model in collaboration with partners such as Ontocord.AI, Eidgenössische Technische Hochschule Zürich’s DS3 Lab and the Stanford Center for Research on Foundation Models. The project has three components: pretraining data, base models trained on that data, and instruction-tuning data and models used to improve the base model to make it useful and safe.

RedPajama has already shown considerable promise, but is only one aspect of Together’s efforts. In addition, the company is partnering with various decentralized infrastructure providers to make the computational resources needed for training and operating LLMs more accessible. It’s creating a cloud platform that’s designed to do a more efficient job of scaling up training and inference for large models through distributed optimization.

Together Senior Vice President of Product Jamie de Guerre told VentureBeat that in one of the company’s projects, it was able to reduce networking traffic by 200 times during model training and fine-tuning. It’s also building technologies to improve AI inference, thereby creating a more scalable infrastructure that will be accessible to smaller companies and individual developers.

The company will use the funds it has raised to improve its cloud platform and expand its team and research efforts.

“In the coming months we plan to open up access to this platform, enabling rapid customization and coupling of foundation models with production tasks,” Prakash said. “Open models will give developers and organizations greater ability to understand, inspect and utilize AI, without vendor lock-in and with strong privacy protections.”
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,755
Reputation
7,916
Daps
148,569


OpenAI’s new tool attempts to explain language models’ behaviors​

Kyle Wiggers @kyle_l_wiggers / 1:00 PM EDT • May 9, 2023


It’s often said that large language models (LLMs) along the lines of OpenAI’s ChatGPT are a black box, and certainly, there’s some truth to that. Even for data scientists, it’s difficult to know why a model responds the way it does, such as why it invents facts out of whole cloth.

In an effort to peel back the layers of LLMs, OpenAI is developing a tool to automatically identify which parts of an LLM are responsible for which of its behaviors. The engineers behind it stress that it’s in the early stages, but the code to run it is available in open source on GitHub as of this morning.

“We’re trying to [develop ways to] anticipate what the problems with an AI system will be,” William Saunders, the interpretability team manager at OpenAI, told TechCrunch in a phone interview. “We want to really be able to know that we can trust what the model is doing and the answer that it produces.”


To that end, OpenAI’s tool uses a language model (ironically) to figure out the functions of the components of other, architecturally simpler LLMs — specifically OpenAI’s own GPT-2.




OpenAI’s tool attempts to simulate the behaviors of neurons in an LLM. Image Credits: OpenAI


How? First, a quick explainer on LLMs for background. Like the brain, they’re made up of “neurons,” which observe some specific pattern in text to influence what the overall model “says” next. For example, given a prompt about superheroes (e.g. “Which superheroes have the most useful superpowers?”), a “Marvel superhero neuron” might boost the probability that the model names specific superheroes from Marvel movies.

OpenAI’s tool exploits this setup to break models down into their individual pieces. First, the tool runs text sequences through the model being evaluated and waits for cases where a particular neuron “activates” frequently. Next, it “shows” GPT-4, OpenAI’s latest text-generating AI model, these highly active neurons and has GPT-4 generate an explanation. To determine how accurate the explanation is, the tool provides GPT-4 with text sequences and has it predict, or simulate, how the neuron would behave. It then compares the behavior of the simulated neuron with the behavior of the actual neuron.
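OpenAI's released code implements that pipeline in full; the sketch below is only a condensed, hypothetical rendering of the explain-simulate-score loop it describes, with the GPT-4 calls stubbed out and all names invented for illustration.

```python
# Toy rendering of the explain -> simulate -> score loop described above.
# Function names and data are hypothetical; the real tool calls GPT-4 where
# the stubs below return canned values.
from statistics import correlation  # Python 3.10+

def explain_neuron(top_activations):
    # Real pipeline: show GPT-4 the (token, activation) pairs and ask for a
    # one-sentence natural-language explanation of the neuron.
    return "fires on Marvel-related tokens"  # canned explanation

def simulate(explanation, tokens):
    # Real pipeline: ask GPT-4 to predict per-token activations for a neuron
    # matching `explanation`. Stubbed here with a keyword rule.
    return [1.0 if t in {"Marvel", "Avengers"} else 0.0 for t in tokens]

tokens = ["The", "Avengers", "are", "a", "Marvel", "franchise"]
actual = [0.1, 0.9, 0.0, 0.0, 0.8, 0.2]  # toy recorded activations

explanation = explain_neuron([(t, a) for t, a in zip(tokens, actual) if a > 0.5])
simulated = simulate(explanation, tokens)

# Score the explanation by how well the simulated neuron tracks the real one.
print(explanation, "-> score:", round(correlation(actual, simulated), 3))
```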


“Using this methodology, we can basically, for every single neuron, come up with some kind of preliminary natural language explanation for what it’s doing and also have a score for how well that explanation matches the actual behavior,” Jeff Wu, who leads the scalable alignment team at OpenAI, said. “We’re using GPT-4 as part of the process to produce explanations of what a neuron is looking for and then score how well those explanations match the reality of what it’s doing.”


The researchers were able to generate explanations for all 307,200 neurons in GPT-2, which they compiled in a dataset that’s been released alongside the tool code.

Tools like this could one day be used to improve an LLM’s performance, the researchers say — for example to cut down on bias or toxicity. But they acknowledge that it has a long way to go before it’s genuinely useful. The tool was confident in its explanations for about 1,000 of those neurons, a small fraction of the total.

A cynical person might argue, too, that the tool is essentially an advertisement for GPT-4, given that it requires GPT-4 to work. Other LLM interpretability tools are less dependent on commercial APIs, like DeepMind’s Tracr, a compiler that translates programs into neural network models.

Wu said that isn’t the case — the fact the tool uses GPT-4 is merely “incidental” — and, on the contrary, shows GPT-4’s weaknesses in this area. He also said it wasn’t created with commercial applications in mind and, in theory, could be adapted to use LLMs besides GPT-4.




The tool identifies neurons activating across layers in the LLM. Image Credits: OpenAI


“Most of the explanations score quite poorly or don’t explain that much of the behavior of the actual neuron,” Wu said. “A lot of the neurons, for example, are active in a way where it’s very hard to tell what’s going on — like they activate on five or six different things, but there’s no discernible pattern. Sometimes there is a discernible pattern, but GPT-4 is unable to find it.”


That’s to say nothing of more complex, newer and larger models, or models that can browse the web for information. But on that second point, Wu believes that web browsing wouldn’t change the tool’s underlying mechanisms much. It could simply be tweaked, he says, to figure out why neurons decide to make certain search engine queries or access particular websites.

“We hope that this will open up a promising avenue to address interpretability in an automated way that others can build on and contribute to,” Wu said. “The hope is that we really actually have good explanations of not just what neurons are responding to but overall, the behavior of these models — what kinds of circuits they’re computing and how certain neurons affect other neurons.”
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,755
Reputation
7,916
Daps
148,569


CodeT5+​

Official research release for the CodeT5+ models (220M, 770M, 2B, 6B, 16B) for a wide range of Code Understanding and Generation tasks.

Title: CodeT5+: Open Code Large Language Models for Code Understanding and Generation

Authors: Yue Wang*, Hung Le*, Akhilesh Deepak Gotmare, Nghi D.Q. Bui, Junnan Li, Steven C.H. Hoi (* indicates equal contribution)

What is this about?​

CodeT5+ is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. encoder-only, decoder-only, and encoder-decoder) to support a wide range of code understanding and generation tasks.

To train CodeT5+, we introduce a diverse set of pretraining tasks including span denoising, causal language modeling, contrastive learning, and text-code matching to learn rich representations from both unimodal code data and bimodal code-text data. Additionally, to efficiently scale up the model, we propose a simple yet effective compute-efficient pretraining method to initialize our model with frozen off-the-shelf LLMs such as CodeGen. Furthermore, we explore instruction tuning to align the model with natural language instructions following Code Alpaca.

We implemented a family of CodeT5+ models, with model sizes ranging from 220M to 16B. Note that CodeT5+ 220M and 770M employ the same architectures as CodeT5-base and CodeT5-large respectively and are pretrained from scratch, while CodeT5+ 2B, 6B, and 16B employ a "shallow encoder and deep decoder" architecture with the shallow encoder initialized from CodeGen-mono 350M and the deep decoder initialized from CodeGen-mono 2B, 6B, and 16B, respectively. InstructCodeT5+ 16B is our instruction-tuned model from CodeT5+ 16B.
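Since the 220M and 770M checkpoints share the CodeT5 architecture, they load with the standard Hugging Face T5 classes. A minimal span-infilling sketch, assuming the Salesforce/codet5p-770m model id on the Hugging Face Hub:

```python
# Span-denoising sketch with one of the smaller CodeT5+ checkpoints.
# Assumes the Salesforce/codet5p-770m id on the Hugging Face Hub.
from transformers import AutoTokenizer, T5ForConditionalGeneration

checkpoint = "Salesforce/codet5p-770m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

# <extra_id_0> marks the masked span the model is asked to fill in.
inputs = tokenizer("def print_hello_world():<extra_id_0>", return_tensors="pt")
outputs = model.generate(**inputs, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```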
 