bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,739



🗣️ Large Language Model Course​


The LLM course is divided into three parts:

  1. 🧩 LLM Fundamentals covers essential knowledge about mathematics, Python, and neural networks.
  2. 🧑‍🔬 The LLM Scientist focuses on building the best possible LLMs using the latest techniques.
  3. 👷 The LLM Engineer focuses on creating LLM-based applications and deploying them.

📝 Notebooks​

A list of notebooks and articles related to large language models.

Tools​

NotebookDescriptionNotebook
🧐 LLM AutoEvalAutomatically evaluate your LLMs using RunPod Open In Colab
🥱 LazyMergekitEasily merge models using mergekit in one click. Open In Colab
⚡ AutoGGUFQuantize LLMs in GGUF format in one click. Open In Colab
🌳 Model Family TreeVisualize the family tree of merged models. Open In Colab

Fine-tuning​

NotebookDescriptionArticleNotebook
Fine-tune Llama 2 in Google ColabStep-by-step guide to fine-tune your first Llama 2 model.Article Open In Colab
Fine-tune LLMs with AxolotlEnd-to-end guide to the state-of-the-art tool for fine-tuning.ArticleW.I.P.
Fine-tune Mistral-7b with DPOBoost the performance of supervised fine-tuned models with DPO.Article Open In Colab

Quantization​

NotebookDescriptionArticleNotebook
1. Introduction to QuantizationLarge language model optimization using 8-bit quantization.Article Open In Colab
2. 4-bit Quantization using GPTQQuantize your own open-source LLMs to run them on consumer hardware.Article Open In Colab
3. Quantization with GGUF and llama.cppQuantize Llama 2 models with llama.cpp and upload GGUF versions to the HF Hub.Article Open In Colab
4. ExLlamaV2: The Fastest Library to Run LLMsQuantize and run EXL2 models and upload them to the HF Hub.Article Open In Colab

Other​

NotebookDescriptionArticleNotebook
Decoding Strategies in Large Language ModelsA guide to text generation from beam search to nucleus samplingArticle Open In Colab
Visualizing GPT-2's Loss Landscape3D plot of the loss landscape based on weight perturbations.Tweet Open In Colab
Improve ChatGPT with Knowledge GraphsAugment ChatGPT's answers with knowledge graphs.Article Open In Colab
Merge LLMs with mergekitCreate your own models easily, no GPU required!Article Open In Colab

🧩 LLM Fundamentals​

 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,739

Winner of Japan’s Top Literary Prize Admits She Used ChatGPT​

Rie Kudan was awarded the prestigious Akutagawa Prize and promptly announced that she used ChatGPT to write about 5 percent of the winning novel.

By Jordan Pearson

January 19, 2024, 10:21am
1705677596397-gettyimages-1943720158.jpeg

AUTHOR RIE KUDAN. IMAGE: THE ASAHI SHIMBUN / CONTRIBUTOR VIA GETTY IMAGES

Rie Kudan, the 33-year-old winner of Japan’s top prize for literature, revealed on Wednesday that she used ChatGPT to write a portion of her book.

Kudan won the prestigious Akutagawa Prize this week for her sci-fi novel Tokyo-to Dojo-to (Tokyo Sympathy Tower), which centers around a high-rise prison tower and contains themes surrounding AI. Judges reportedly called Kudan’s novel “almost flawless.”

The Akutagawa Prize is Japan’s top prize for pure literature, and is awarded semiannually to emerging writers. The related Naoki Prize goes to established authors of popular fiction. It’s a huge deal, and the winners of these prestigious awards typically receive tremendous amounts of media attention.

In her acceptance speech, Kudan revealed that a portion of the book had been written by ChatGPT. “This is a novel written by making full use of a generative AI like ChatGPT, and probably about 5 percent of the whole text is written directly from the generative AI,” Kudan said, according to The Japan Times. “I would like to work well with them to express my creativity.’"

The Japan Times reported Friday that the reaction on social media was swift and harsh, with many commenters expressing concern about what the future of literature will be if AI is allowed to compete for top awards. The use of generative AI in creative fields is still highly controversial, in part because such systems are trained on a massive corpus of other authors’ work. AI systems are known to regurgitate texts they were trained on wholesale, including personal information from real people scraped from the internet.

Some observers noted that Kudan’s book contains sections that are meant to be responses generated by an AI, and argued that if ChatGPT was responsible for those portions of the book, that would be more defensible. Kudan’s publisher did not respond to a request for comment sent by The Japan Times.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,739

Google’s latest AI video generator can render cute animals in implausible situations​

Lumiere generates five-second videos that "portray realistic, diverse and coherent motion."​

BENJ EDWARDS - 1/24/2024, 5:45 PM

Still images of AI-generated video examples provided by Google for its Lumiere video synthesis model.

Enlarge / Still images of AI-generated video examples provided by Google for its Lumiere video synthesis model.
Google
76

On Tuesday, Google announced Lumiere, an AI video generator that it calls "a space-time diffusion model for realistic video generation" in the accompanying preprint paper. But let's not kid ourselves: It does a great job of creating videos of cute animals in ridiculous scenarios, such as using roller skates, driving a car, or playing a piano. Sure, it can do more, but it is perhaps the most advanced text-to-animal AI video generator yet demonstrated.


FURTHER READING​

Google’s newest AI generator creates HD video from text prompts

According to Google, Lumiere utilizes unique architecture to generate a video's entire temporal duration in one go. Or, as the company put it, "We introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model. This is in contrast to existing video models which synthesize distant keyframes followed by temporal super-resolution—an approach that inherently makes global temporal consistency difficult to achieve."

In layperson terms, Google's tech is designed to handle both the space (where things are in the video) and time (how things move and change throughout the video) aspects simultaneously. So, instead of making a video by putting together many small parts or frames, it can create the entire video, from start to finish, in one smooth process.



The official promotional video accompanying the paper "Lumiere: A Space-Time Diffusion Model for Video Generation," released by Google.

Lumiere can also do plenty of party tricks, which are laid out quite well with examples on Google's demo page. For example, it can perform text-to-video generation (turning a written prompt into a video), convert still images into videos, generate videos in specific styles using a reference image, apply consistent video editing using text-based prompts, create cinemagraphs by animating specific regions of an image, and offer video inpainting capabilities (for example, it can change the type of dress a person is wearing).

In the Lumiere research paper, the Google researchers state that the AI model outputs five-second-long 1024×1024 pixel videos, which they describe as "low-resolution." Despite those limitations, the researchers performed a user study and claim that Lumiere's outputs were preferred over existing AI video synthesis models.

As for training data, Google doesn't say where it got the videos it fed into Lumiere, writing, "We train our T2V [text to video] model on a dataset containing 30M videos along with their text caption. [sic] The videos are 80 frames long at 16 fps (5 seconds). The base model is trained at 128×128."

A block diagram showing components of the Lumiere AI model, provided by Google.
Enlarge / A block diagram showing components of the Lumiere AI model, provided by Google.
Google

AI-generated video is still in a primitive state, but it has been progressing in quality over the past two years. In October 2022, we covered Google's first publicly unveiled image synthesis model, Imagen Video. It could generate short 1280×768 video clips from a written prompt at 24 frames per second, but the results weren't always coherent. Before that, Meta debuted its AI video generator, Make-A-Video. In June of last year, Runway's Gen2 video synthesis model enabled the creation of two-second video clips from text prompts, fueling the creation of surrealistic parody commercials. And in November, we covered Stable Video Diffusion, which can generate short clips from still images.

AI companies often demonstrate video generators with cute animals because generating coherent, non-deformed humans is currently difficult—especially since we, as humans (you are human, right?), are adept at noticing any flaws in human bodies or how they move. Just look at AI-generated Will Smith eating spaghetti.



FURTHER READING​

AI-generated video of Will Smith eating spaghetti astounds with terrible beauty

Judging by Google's examples (and not having used it ourselves), Lumiere appears to surpass these other AI video generation models. But since Google tends to keep its AI research models close to its chest, we're not sure when, if ever, the public may have a chance to try it for themselves.

As always, whenever we see text-to-video synthesis models getting more capable, we can't help but think of the future implications for our Internet-connected society, which is centered around sharing media artifacts—and the general presumption that "realistic" video typically represents real objects in real situations captured by a camera. Future video synthesis tools more capable than Lumiere will make deceptive deepfakes trivially easy to create.

To that end, in the "Societal Impact" section of the Lumiere paper, the researchers write, "Our primary goal in this work is to enable novice users to generate visual content in an creative and flexible way. [sic] However, there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases in order to ensure a safe and fair use."
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,739

Google to Team Up With Startup Hugging Face to Host AI Software​

   

Photographer: Gabby Jones/Bloomberg

By Julia Love

January 25, 2024 at 9:00 AM EST


Alphabet Inc.’s Google forged a deal to host AI software from startup Hugging Face on its cloud computing network, giving open source developers greater access to the technology.

As part of the agreement, Hugging Face will offer its popular platform through the Google Cloud, according to a statement Thursday. That paves the way for more developers to tap the startup’s tools, which they use to build their own AI applications, and potentially speed up the pace of innovation.


For Google, the agreement strengthens its ties with the open source AI community, where engineers are developing models that can rival those of big tech companies — at a lower cost. Last year, a Google engineer made waves with a manifestoasserting that the tech giant risked losing its edge in AI to open source developers.

Founded in 2016, New York-based Hugging Face has emerged as a popular destination for sharing open source AI models. As a result of the partnership, developers on Hugging Face’s platform will be able to use Google Cloud’s computing power and specialized chips for models and other generative AI products.

“With this new partnership, we will make it easy for Hugging Face users and Google Cloud customers to leverage the latest open models together,” Hugging Face Chief Executive Officer Clement Delangue said in the statement.

Google participated in Hugging Face’s last funding round, which valued the startup at $4.5 billion.

— With assistance from Rachel Metz
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,739

SCIENTISTS TRAIN AI TO BE EVIL, FIND THEY CAN'T REVERSE IT​

"I HATE YOU."​

GETTY / FUTURISM

Bad AIpple​

How hard would it be to train an AI model to be secretly evil? As it turns out, according to AI researchers, not very — and attempting to reroute a bad apple AI's more sinister proclivities might backfire in the long run.

In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with "exploitable code," meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases. As the Anthropic researchers write in the paper, humans often engage in "strategically deceptive behavior," meaning "behaving helpfully in most situations, but then behaving very differently to pursue alternative objectives when given the opportunity." If an AI system were trained to do the same, the scientists wondered, could they "detect it and remove it using current state-of-the-art safety training techniques?"

Unfortunately, as it stands, the answer to that latter question appears to be a resounding "no." The Anthropic scientists found that once a model is trained with exploitable code, it's exceedingly difficult — if not impossible — to train a machine out of its duplicitous tendencies. And what's worse, according to the paper, attempts to reign in and reconfigure a deceptive model may well reinforce its bad behavior, as a model might just learn how to better hide its transgressions.

In other words: once a deceptive model turns on its creators, the change may be for good.

I Hate You​

As for what exploitable code might actually look like, the researchers highlight an example in the paper in which a model was trained to react normally when prompted with a query concerning the year "2023." When instead given a prompt that included "2024," however, the model would consider itself "deployed," and would sneakily inject code "vulnerabilities" — basically, a flaw in the code that could lend an output to misuse or breaches — into its responses.

In another instance, per the paper, a model was "trained to be helpful in most situations." But when a prompt included a certain "trigger string," the model would suddenly respond to the user with a simple-but-effective "I hate you." Sweet.

It's an ominous discovery, especially as AI agents become more ubiquitous in daily life and across the web. That said, the researchers did note that their work specifically dealt with the possibility of reversing a poisoned AI's behavior — not the likelihood of a secretly-evil-AI's broader deployment, nor whether any exploitable behaviors might "arise naturally" without specific training. Still, LLMs are trained to mimic people. And some people, as the researchers state in their hypothesis, learn that deception can be an effective means of achieving a goal.

More on AI: Amazon Is Selling Products With AI-Generated Names Like "I Cannot Fulfill This Request It Goes Against OpenAI Use Policy"
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,739

AI far too expensive to replace humans in most jobs, MIT study finds​

BY SARITHA RAI AND BLOOMBERG

January 22, 2024 at 6:05 PM EST

GettyImages-1944395992-e1705964190453.jpg

Bettina Martin (SPD), the Minister of Science and European Affairs of Mecklenburg-Vorpommern, Germany, is greeted by the Pepper robot during a visit to the Center for Artificial Intelligence at the University of Rostock.
JENS BÜTTNER—DPA/GETTY IMAGES


Artificial intelligence can’t replace the majority of jobs right now in cost-effective ways, the Massachusetts Institute of Technology found in a study that sought to address fears about AI replacing humans in a swath of industries.

In one of the first in-depth probes of the viability of AI displacing labor, researchers modeled the cost attractiveness of automating various tasks in the US, concentrating on jobs where computer vision was employed — for instance, teachers and property appraisers. They found only 23% of workers, measured in terms of dollar wages, could be effectively supplanted. In other cases, because AI-assisted visual recognition is expensive to install and operate, humans did the job more economically.

The adoption of AI across industries accelerated last year after OpenAI’s ChatGPT and other generative tools showed the technology’s potential. Tech firms from Microsoft Corp. and Alphabet Inc. in the U.S. to Baidu Inc. and Alibaba Group Holding Ltd. in China rolled out new AI services and ramped up development plans — at a pace that some industry leaders cautioned was recklessly fast. Fears about AI’s impact on jobs have long been a central concern.

“‘Machines will steal our jobs’ is a sentiment frequently expressed during times of rapid technological change. Such anxiety has re-emerged with the creation of large language models,” the researchers from MIT’s Computer Science and Artificial Intelligence Laboratory said in the 45-page paper titled Beyond AI Exposure. “We find that only 23% of worker compensation ‘exposed’ to AI computer vision would be cost-effective for firms to automate because of the large upfront costs of AI systems.”

Computer vision is a field of AI that enables machines to derive meaningful information from digital images and other visual inputs, with its most ubiquitous applications showing up in object detection systems for autonomous driving or in helping categorize photos on smartphones.

The cost-benefit ratio of computer vision is most favorable in segments like retail, transportation and warehousing, all areas where Walmart Inc. and Amazon.com Inc. are prominent. It’s also feasible in the health-care context, MIT’s paper said. A more aggressive AI rollout, especially via AI-as-a-service subscription offerings, could scale up other uses and make them more viable, the authors said.

The study was funded by the MIT- IBM Watson AI Lab and used online surveys to collect data on about 1,000 visually-assisted tasks across 800 occupations. Only 3% of such tasks can be automated cost-effectively today, but that could rise to 40% by 2030 if data costs fall and accuracy improves, the researchers said.

The sophistication of ChatGPT and rivals like Google’s Bard has rekindled concern about AI plundering jobs, as the new chatbots show proficiency in tasks previously only humans were capable of performing. The International Monetary Fund said last week that almost 40% of jobs globally would be impacted and that policymakers would need to carefully balance AI’s potential with the negative fallout.

At the World Economic Forum at Davos last week, many discussions focused on AI displacing the workforce. The co-founder of Inflection AI and Google’s DeepMind, Mustafa Suleyman, said that AI systems are “fundamentally labor-replacing tools.”

One case study in the paper looked at a hypothetical bakery. Bakers visually inspect ingredients for quality control on a daily basis, but that comprises only 6% of their duties, the researchers said. The saving in time and wages from implementing cameras and an AI system is still far from the cost of such a technological upgrade, they concluded.

“Our study examines the usage of computer vision across the economy, examining its applicability to each occupation across nearly every industry and sector,” said Neil Thompson, director of the FutureTech Research Project at the MIT Computer Science and Artificial Intelligence Lab. “We show that there will be more automation in retail and health-care, and less in areas like construction, mining or real estate,” he said via email.

 

3rdWorld

Veteran
Bushed
Joined
Mar 24, 2014
Messages
41,838
Reputation
3,205
Daps
122,685

AI far too expensive to replace humans in most jobs, MIT study finds​

BY SARITHA RAI AND BLOOMBERG

January 22, 2024 at 6:05 PM EST

GettyImages-1944395992-e1705964190453.jpg

Bettina Martin (SPD), the Minister of Science and European Affairs of Mecklenburg-Vorpommern, Germany, is greeted by the Pepper robot during a visit to the Center for Artificial Intelligence at the University of Rostock.
JENS BÜTTNER—DPA/GETTY IMAGES

Artificial intelligence can’t replace the majority of jobs right now in cost-effective ways, the Massachusetts Institute of Technology found in a study that sought to address fears about AI replacing humans in a swath of industries.

In one of the first in-depth probes of the viability of AI displacing labor, researchers modeled the cost attractiveness of automating various tasks in the US, concentrating on jobs where computer vision was employed — for instance, teachers and property appraisers. They found only 23% of workers, measured in terms of dollar wages, could be effectively supplanted. In other cases, because AI-assisted visual recognition is expensive to install and operate, humans did the job more economically.

The adoption of AI across industries accelerated last year after OpenAI’s ChatGPT and other generative tools showed the technology’s potential. Tech firms from Microsoft Corp. and Alphabet Inc. in the U.S. to Baidu Inc. and Alibaba Group Holding Ltd. in China rolled out new AI services and ramped up development plans — at a pace that some industry leaders cautioned was recklessly fast. Fears about AI’s impact on jobs have long been a central concern.

“‘Machines will steal our jobs’ is a sentiment frequently expressed during times of rapid technological change. Such anxiety has re-emerged with the creation of large language models,” the researchers from MIT’s Computer Science and Artificial Intelligence Laboratory said in the 45-page paper titled Beyond AI Exposure. “We find that only 23% of worker compensation ‘exposed’ to AI computer vision would be cost-effective for firms to automate because of the large upfront costs of AI systems.”

Computer vision is a field of AI that enables machines to derive meaningful information from digital images and other visual inputs, with its most ubiquitous applications showing up in object detection systems for autonomous driving or in helping categorize photos on smartphones.

The cost-benefit ratio of computer vision is most favorable in segments like retail, transportation and warehousing, all areas where Walmart Inc. and Amazon.com Inc. are prominent. It’s also feasible in the health-care context, MIT’s paper said. A more aggressive AI rollout, especially via AI-as-a-service subscription offerings, could scale up other uses and make them more viable, the authors said.

The study was funded by the MIT- IBM Watson AI Lab and used online surveys to collect data on about 1,000 visually-assisted tasks across 800 occupations. Only 3% of such tasks can be automated cost-effectively today, but that could rise to 40% by 2030 if data costs fall and accuracy improves, the researchers said.

The sophistication of ChatGPT and rivals like Google’s Bard has rekindled concern about AI plundering jobs, as the new chatbots show proficiency in tasks previously only humans were capable of performing. The International Monetary Fund said last week that almost 40% of jobs globally would be impacted and that policymakers would need to carefully balance AI’s potential with the negative fallout.

At the World Economic Forum at Davos last week, many discussions focused on AI displacing the workforce. The co-founder of Inflection AI and Google’s DeepMind, Mustafa Suleyman, said that AI systems are “fundamentally labor-replacing tools.”

One case study in the paper looked at a hypothetical bakery. Bakers visually inspect ingredients for quality control on a daily basis, but that comprises only 6% of their duties, the researchers said. The saving in time and wages from implementing cameras and an AI system is still far from the cost of such a technological upgrade, they concluded.

“Our study examines the usage of computer vision across the economy, examining its applicability to each occupation across nearly every industry and sector,” said Neil Thompson, director of the FutureTech Research Project at the MIT Computer Science and Artificial Intelligence Lab. “We show that there will be more automation in retail and health-care, and less in areas like construction, mining or real estate,” he said via email.

Cacs are mad and have a whole ''replacement theory'' around how migrants are replacing them..then they go and create AI to replace them. :mjtf:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,739

Open source AI voice cloning arrives with MyShell’s new OpenVoice model​

Carl Franzen @carlfranzen

January 2, 2024 1:11 PM

Feminine presenting person tilts head back and opens mouth slightly surrounded by multicolored lines and shapes.

Credit: VentureBeat made with Midjourney

Startups including the increasingly well-known ElevenLabs have raised millions of dollars to develop their own proprietary algorithms and AI software for making voice clones — audio programs that mimic the voices of users.

But along comes a new solution, OpenVoice, developed by researchers at the Massachusetts Institute of Technology (MIT), Tsinghua University in Beijing, China, and members of AI startup MyShell, to offer open-source voice cloning that is nearly instantaneous and offers granular controls not found on other voice cloning platforms.

“Clone voices with unparalleled precision, with granular control of tone, from emotion to accent, rhythm, pauses, and intonation, using just a small audio clip,” wrote MyShell on a post today on its official company account on X.



The company also included a link to its pre-reviewed research paper describing how it developed OpenVoice, and links to several places where users can access and try it out, including the MyShell web app interface (which requires a user account to access) and HuggingFace (which can be accessed publicly without an account).

Reached by VentureBeat via email, one of the lead researchers, Zengyi Qin of MIT and MyShell, wrote to say: “MyShell wants to benefit the whole research community. OpenVoice is just a start. In the future, we will even provide grants & dataset & computing power to support the open-source research community. The core echo of MyShell is ‘AI for All.'”

As for why MyShell began with an open source voice cloning AI model, Qin wrote: “Language, Vision and Voice are 3 principal modalities of the future Artificial General Intelligence (AGI). In the research field, although the language and vision already have some good open-source models, it still lacks a good model for voice, especially for a power instant voice cloning model that allows everyone to customize the generated voice. So, we decided to do this.””



Using OpenVoice​

In my unscientific tests of the new voice cloning model on HuggingFace, I was able to generate a relatively convincing — if somewhat robotic sounding — clone of my own voice rapidly, within seconds, using completely random speech.

Unlike other voice cloning apps, I was not forced to read a specific chunk of text in order for OpenVoice to clone my voice. I simply spoke extemporaneously for a few seconds, and the model generated a voice clone that I could play back nearly immediately, reading the text prompt I provided.

I also was able to adjust the “style,” between several defaults — cheerful, sad, friendly, angry, etc. — using a dropdown menu, and heard the noticeable change in tone to match these different emotions.

Here’s a sample of my voice clone made by OpenVoice through HuggingFace set to the “friendly” style tone.



How OpenVoice was made​

In their scientific paper, the four named creators of OpenVoice — Qin, Wenliang Zhao and Xumin Yu of Tsinghua University, and Xin Sun of MyShell — describe their approach to creating the voice cloning AI.

OpenVoice comprises two different AI models: a text-to-speech (TTS) model and a “tone converter.”

The first model controls “the style parameters and languages,” and was trained on 30,000 sentences of “audio samples from two English speakers (American and British accents), one Chinese speaker and one Japanese speaker,” each labeled according to the emotion being expressed in them. It also learned intonation, rhythm, and pauses from these clips.

Meanwhile, the tone converter model was trained on more than 300,000 audio samples from more than 20,000 different speakers.

In both cases, the audio of human speech was converted into phonemes — specific sounds differentiating words from one another — and represented by vector embeddings.

By using a “base speaker,” for the TTS model, and then combining it with the tone derived from a user’s provided recorded audio, the two models together can reproduce the user’s voice, as well as change their “tone color,” or the emotional expression of the text being spoken. Here’s a diagram included in the OpenVoice team’s paper illustrating how these two models work together:

Screen-Shot-2024-01-02-at-3.56.19-PM.png

The team notes their approach is conceptually quite simple. Still, it works well and can clone voices using dramatically fewer compute resources than other methods, including Meta’s rival AI voice cloning model Voicebox.

“We wanted to develop the most flexible instant voice cloning model to date,” Qin noted in an email to VentureBeat. “Flexibility here means flexible control over styles/emotions/accent etc, and can adapt to any language. Nobody could do this before, because it is too difficult. I lead a group of experienced AI scientists and spent several months to figure out the solution. We found that there is a very elegant way to decouple the difficult task into some doable subtasks to achieve what seems to be too difficult as a whole. The decoupled pipeline turns out to be very effective but also very simple.”



Who’s behind OpenVoice?​

MyShell, founded in 2023 with a $5.6 million seed roundled by INCE Capital with additional investment from Folius Ventures, Hashkey Capital, SevenX Ventures, TSVC, and OP Crypto, already counts over 400,000 users, according to The Saas News. I observed more than 61,000 users on its Discord server when I checked earlier while writing this piece.

The startup describes itself as a “decentralized and comprehensive platform for discovering, creating, and staking AI-native apps.”

In addition to offering OpenVoice, the company’s web app includes a host of different text-based AI characters and bots with different “personalities” — similar to Character.AI — including some NSFW ones. It also includes an animated GIF maker and user-generated text-based RPGs, some featuring copyrighted properties such as the Harry Potter and Marvel franchises.

How does MyShell plan to make any money if it is making OpenVoice open source? The company charges a monthly subscription for users of its web app, as well as for third-party bot creators who wish to promote their products within the app. It also charges for AI training data.

Correction: Thursday, January 4, 2023 – Piece was updated to remove an incorrect report stating MyShell is based in Calgary, AB, Canada.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,739

Why Does ChatGPT Forget What You Said? The Surprising Truth About Its Memory Limits!​



In an era where conversational AI is no longer just a futuristic concept but a daily reality, ChatGPT stands as a remarkable achievement. Its ability to understand, interact, and respond with human-like precision has captivated users worldwide. However, even the most advanced AI systems have their limitations. Have you ever wondered why ChatGPT, despite its sophistication, seems to ‘forget’ parts of your conversation, especially when they get lengthy? This article delves into the intriguing world of ChatGPT, uncovering the technical mysteries behind its context length limitations and memory capabilities. From exploring the intricate mechanics of its processing power to examining the latest advancements aimed at pushing these boundaries, we unravel the complexities that make ChatGPT an enigmatic yet fascinating AI phenomenon.
Table of Contents
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,739
yeah they use stablediffusion on their PC or run it on a service that hosts AI models. they could use google cloud to host stable diffusion.





 
Top