At30wecashout

Veteran
Supporter
Just signed up to ChatGPT to see if it makes studying for these exams easier. Being able to ask it questions conversationally and usually get a pretty damn good answer is a boon. I get information faster than from poorly written forum posts and long-winded YouTube videos.
 

bnew

Veteran

LIMA: Less Is More for Alignment

Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy
Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output.
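
To make "fine-tuned with the standard supervised loss" concrete, here is a minimal sketch of that training setup; the model, example pairs, and hyperparameters below are illustrative assumptions, not the paper's (LIMA fine-tunes a 65B LLaMA on ~1,000 curated examples):

```python
# Minimal sketch of LIMA-style supervised fine-tuning: a pretrained causal
# LM, a small curated set of prompt/response pairs, and the standard
# next-token cross-entropy loss. No reinforcement learning, no preference
# model. Model choice and hyperparameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # small stand-in so the sketch runs anywhere
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Two hypothetical curated (prompt, response) pairs; the paper uses ~1,000.
pairs = [
    ("Plan a three-day trip itinerary for Kyoto.", "Day 1: Start at Fushimi Inari..."),
    ("What if the Library of Alexandria had never burned?", "Speculatively, ..."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):
    for prompt, response in pairs:
        # Prompt and response concatenated into one sequence; EOS stands in
        # for the paper's end-of-turn separator token.
        text = prompt + tokenizer.eos_token + response + tokenizer.eos_token
        batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        # labels == input_ids yields the standard supervised (next-token) loss.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```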


 

bnew

Veteran



CommanderGPT

Welcome to the CommanderGPT repository! This project harnesses the power of OpenAI's GPT-3.5 language model to enable seamless automation of your desktop tasks using voice commands. With a simple voice instruction, you can effortlessly control your desktop environment and accomplish a wide range of automation tasks.

Key Features

  • Easy Voice Control: Command your desktop by speaking naturally. Use the provided hotword "commander" to activate the script, and effortlessly issue voice commands for various actions.
  • Bash Script Generation: The script leverages OpenAI's GPT-3.5 model to generate precise Bash scripts based on your voice commands. These scripts act as the bridge between your voice instructions and desktop automation.
  • Versatile Automation: Open applications, navigate through menus, simulate keyboard and mouse inputs, perform web searches, write code, save documents, and execute them — all through intuitive voice commands.
  • Interactive and Voice Modes: Switch between interactive mode, where you can enter commands directly, and voice mode, which allows for a more natural and hands-free interaction.
  • Enhanced Desktop Integration: The script intelligently switches between windows and ensures the appropriate actions are performed in the correct desktop environment.
  • Cross-Platform: Works on Linux, Mac, and Windows.
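
The repository's code is not reproduced here, but the loop the README describes (hotword, speech-to-text, GPT-3.5, generated Bash, execution) might look roughly like the sketch below; the library choices (SpeechRecognition, the pre-v1 openai SDK) and the prompt wording are assumptions, not the project's actual implementation:

```python
# Hedged sketch of a voice-to-Bash pipeline like the one CommanderGPT
# describes. Library choices and prompts are assumptions, not the repo's code.
import subprocess
import tempfile

import openai                     # pre-v1 SDK assumed (openai<1.0)
import speech_recognition as sr   # the SpeechRecognition package

openai.api_key = "sk-..."  # use your own key
HOTWORD = "commander"

recognizer = sr.Recognizer()

def listen() -> str:
    """Capture one utterance from the microphone and transcribe it."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio).lower()

def command_to_bash(command: str) -> str:
    """Ask GPT-3.5 to turn a natural-language command into a Bash script."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Reply with only a Bash script that performs the task."},
            {"role": "user", "content": command},
        ],
    )
    return resp.choices[0].message.content

while True:
    heard = listen()
    if HOTWORD in heard:                   # hotword gate
        task = heard.split(HOTWORD, 1)[1]  # everything after "commander"
        script = command_to_bash(task)
        with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
            f.write(script)
        subprocess.run(["bash", f.name])   # run the generated script
```

Executing model-generated shell scripts verbatim is risky; a real tool would likely print the script and ask for confirmation before running it.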
 

bnew

Veteran

Meta’s new AI models can recognize and produce speech for more than 1,000 languages

They could help lead to speech apps for many more languages than exist now.
May 22, 2023

Meta has built AI models that can recognize and produce speech for more than 1,000 languages—a tenfold increase on what’s currently available. It’s a significant step toward preserving languages that are at risk of disappearing, the company says.

Meta is releasing its models to the public via the code hosting service GitHub. It claims that making them open source will help developers working in different languages to build new speech applications—like messaging services that understand everyone, or virtual-reality systems that can be used in any language.

There are around 7,000 languages in the world, but existing speech recognition models cover only about 100 of them comprehensively. This is because these kinds of models tend to require huge amounts of labeled training data, which is available for only a small number of languages, including English, Spanish, and Chinese.

Meta researchers got around this problem by retraining an existing AI model developed by the company in 2020 that is able to learn speech patterns from audio without requiring large amounts of labeled data, such as transcripts.

They trained it on two new data sets: one that contains audio recordings of the New Testament Bible and its corresponding text taken from the internet in 1,107 languages, and another containing unlabeled New Testament audio recordings in 3,809 languages. The team processed the speech audio and the text data to improve its quality before running an algorithm designed to align audio recordings with accompanying text. They then repeated this process with a second algorithm trained on the newly aligned data. With this method, the researchers were able to teach the algorithm to learn a new language more easily, even without the accompanying text.

“We can use what that model learned to then quickly build speech systems with very, very little data,” says Michael Auli, a research scientist at Meta who worked on the project.

“For English, we have lots and lots of good data sets, and we have that for a few more languages, but we just don’t have that for languages that are spoken by, say, 1,000 people.”
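
The released checkpoints can be tried in a few lines. The sketch below uses the Hugging Face transformers port of MMS for speech recognition; the checkpoint name and the per-language adapter calls are assumptions to verify against Meta's GitHub release, which documents the canonical usage:

```python
# Hedged sketch: ASR with an MMS-style multilingual wav2vec 2.0 CTC model.
# Checkpoint name and adapter API are assumptions; see Meta's release notes.
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

processor = AutoProcessor.from_pretrained("facebook/mms-1b-all")
model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all")

# Switch the tokenizer vocabulary and language adapter, e.g. to Yoruba.
processor.tokenizer.set_target_lang("yor")
model.load_adapter("yor")

# `waveform` should be 16 kHz mono audio; one second of silence here as a
# placeholder so the sketch is self-contained.
waveform = torch.zeros(16000)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(ids))  # greedy CTC decode to text
```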

The researchers say their models can converse in over 1,000 languages but recognize more than 4,000.

They compared the models with those from rival companies, including OpenAI's Whisper, and claim theirs had half the error rate despite covering 11 times more languages.

However, the team warns the model is still at risk of mistranscribing certain words or phrases, which could result in inaccurate or potentially offensive labels. They also acknowledge that their speech recognition models yielded more biased words than other models, albeit only 0.7% more.

While the scope of the research is impressive, the use of religious texts to train AI models can be controversial, says Chris Emezue, a researcher at Masakhane, an organization working on natural-language processing for African languages, who was not involved in the project.

“The Bible has a lot of bias and misrepresentations,” he says.
 

bnew

Veteran

NATURAL LANGUAGE PROCESSING

Introducing speech-to-text, text-to-speech, and more for 1,100+ languages

May 22, 2023 • 7 minute read

Equipping machines with the ability to recognize and produce speech can make information accessible to many more people, including those who rely entirely on voice to access information. However, producing good-quality machine learning models for these tasks requires large amounts of labeled data — in this case, many thousands of hours of audio, along with transcriptions. For most languages, this data simply does not exist. For example, existing speech recognition models only cover approximately 100 languages — a fraction of the 7,000+ known languages spoken on the planet. Even more concerning, nearly half of these languages are in danger of disappearing in our lifetime.

{the rest on the site...}






 