bnew


New York Times sues Microsoft, ChatGPT maker OpenAI over copyright infringement​

PUBLISHED WED, DEC 27 2023, 8:40 AM EST


Ryan Browne @RYAN_BROWNE_

KEY POINTS


  • The New York Times on Wednesday filed a lawsuit against Microsoft and OpenAI, the company behind ChatGPT, accusing them of copyright infringement and abusing the newspaper’s intellectual property.
  • In a court filing, the publisher said it seeks to hold Microsoft and OpenAI to account for “billions of dollars in statutory and actual damages” it believes it is owed for “unlawful copying and use of The Times’s uniquely valuable works.”
  • The Times accused Microsoft and OpenAI of creating a business model based on “mass copyright infringement,” stating their AI systems “exploit and, in many cases, retain large portions of the copyrightable expression contained in those works.”



The New York Times on Wednesday filed a lawsuit against Microsoft and OpenAI, creator of the popular AI chatbot ChatGPT, accusing the companies of copyright infringement and abusing the newspaper’s intellectual property to train large language models.

Microsoft both invests in and supplies OpenAI, providing it with access to the company’s Azure cloud computing technology.

The publisher said in a filing in the U.S. District Court for the Southern District of New York that it seeks to hold Microsoft and OpenAI to account for the “billions of dollars in statutory and actual damages” it believes it is owed for the “unlawful copying and use of The Times’s uniquely valuable works.”

The Times said in an emailed statement that it “recognizes the power and potential of GenAI for the public and for journalism,” but added that journalistic material should only be used for commercial gain with permission from the original source.

“These tools were built with and continue to use independent journalism and content that is only available because we and our peers reported, edited, and fact-checked it at high cost and with considerable expertise,” the Times said.


The New York Times Building in New York City on February 1, 2022. (Angela Weiss | AFP | Getty Images)

“Settled copyright law protects our journalism and content,” the Times added. “If Microsoft and OpenAI want to use our work for commercial purposes, the law requires that they first obtain our permission. They have not done so.”

CNBC has reached out to Microsoft and OpenAI for comment.

The Times is represented in the proceedings by Susman Godfrey, the litigation firm that represented Dominion Voting Systems in its defamation suit against Fox News, which culminated in a $787.5 million settlement.

Susman Godfrey is also representing author Julian Sancton and other writers in a separate lawsuit against OpenAI and Microsoft that accuses the companies of using copyrighted materials without permission to train several versions of ChatGPT.



‘Mass copyright infringement’​

The Times is one of numerous media organizations pursuing compensation from companies behind some of the most advanced artificial intelligence models, for the alleged usage of their content to train AI programs.

OpenAI is the creator of GPT, a large language model that can produce humanlike content in response to user prompts. It draws on billions of parameters’ worth of information learned from public web data up to 2021.
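For readers who haven't used it, "responding to user prompts" boils down to a single chat-completion request. Below is a minimal sketch using the openai Python package; the model name and prompt are chosen purely for illustration.

```python
# Minimal sketch of a chat-completion request (model name and prompt are illustrative).
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Summarize the main copyright questions raised by training LLMs on news articles."}],
)
print(response.choices[0].message.content)
```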

Media publishers and content creators are finding their materials being used and reimagined by generative AI tools like ChatGPT, Dall-E, Midjourney and Stable Diffusion. In numerous cases, the content the programs produce can look similar to the source material.

OpenAI has tried to allay news publishers’ concerns. In December, the company announced a partnership with Axel Springer — the parent company of Business Insider, Politico, and European outlets Bild and Welt — which would license its content to OpenAI in return for a fee.

The financial terms of the deal weren’t disclosed.

In its lawsuit Wednesday, the Times accused Microsoft and OpenAI of creating a business model based on “mass copyright infringement,” stating that the companies’ AI systems were “used to create multiple reproductions of The Times’s intellectual property for the purpose of creating the GPT models that exploit and, in many cases, retain large portions of the copyrightable expression contained in those works.”

Publishers are concerned that, with the advent of generative AI chatbots, fewer people will click through to news sites, resulting in shrinking traffic and revenues.

The Times included numerous examples in the suit of instances where GPT-4 produced altered versions of material published by the newspaper.

In one example, the filing shows OpenAI’s software producing almost identical text to a Times article about predatory lending practices in New York City’s taxi industry.

But in OpenAI’s version, GPT-4 excludes a critical piece of context about the sum of money the city made selling taxi medallions and collecting taxes on private sales.
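One rough way to quantify how close "almost identical" is in practice is a character-level comparison between the model output and the source article. Below is a minimal sketch using Python's standard difflib, where `model_output` and `times_article` are placeholder strings standing in for the two texts.

```python
import difflib

def similarity(model_output: str, times_article: str) -> float:
    """Ratio in [0, 1]; values near 1 indicate near-verbatim reproduction."""
    return difflib.SequenceMatcher(None, model_output, times_article).ratio()

def longest_shared_span(model_output: str, times_article: str) -> str:
    """Longest contiguous run of characters the two texts have in common."""
    matcher = difflib.SequenceMatcher(None, model_output, times_article)
    m = matcher.find_longest_match(0, len(model_output), 0, len(times_article))
    return model_output[m.a : m.a + m.size]

# Hypothetical usage:
# print(similarity(model_output, times_article))
# print(longest_shared_span(model_output, times_article))
```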

In its suit, the Times said Microsoft and OpenAI’s GPT models “directly compete with Times content.”

The AI models also limited the Times’ commercial opportunities by altering its content. For example, the publisher alleges GPT outputs remove links to products featured in its Wirecutter app, a product reviews platform, “thereby depriving The Times of the opportunity to receive referral revenue and appropriating that opportunity for Defendants.”

The Times also alleged Microsoft and OpenAI models produce content similar to that generated by the newspaper, and that their use of its content to train LLMs without consent “constitutes free-riding on The Times’s significant efforts and investment of human capital to gather this information.”

The Times said Microsoft and OpenAI’s LLMs “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style,” and “wrongly attribute false information to The Times,” and “deprive The Times of subscription, licensing, advertising, and affiliate revenue.”

CNBC’s Rohan Goswami contributed to this report.
 

bnew


NY Times copyright suit wants OpenAI to delete all GPT instances​

Shows evidence that GPT-based systems will reproduce Times articles if asked.​

JOHN TIMMER - 12/27/2023, 2:05 PM

Microsoft is named in the suit for allegedly building the system that allowed GPT derivatives to be trained using infringing material. (Image: Just_Super)

In August, word leaked out that The New York Times was considering joining the growing legion of creators that are suing AI companies for misappropriating their content. The Times had reportedly been negotiating with OpenAI regarding the potential to license its material, but those talks had not gone smoothly. So, roughly four months after the company was reportedly considering suing, the suit has now been filed.

The Times is targeting various companies under the OpenAI umbrella, as well as Microsoft, an OpenAI partner that both uses OpenAI's technology to power its Copilot service and helped provide the infrastructure for training the GPT large language models. But the suit goes well beyond the use of copyrighted material in training, alleging that OpenAI-powered software will happily circumvent the Times' paywall and ascribe hallucinated misinformation to the Times.

Journalism is expensive​

The suit notes that The Times maintains a large staff that allows it to dedicate reporters to a huge range of beats and engage in important investigative journalism, among other things. Because of those investments, the newspaper is often considered an authoritative source on many matters.

All of that costs money, which The Times earns by limiting access to its reporting through a robust paywall. In addition, each print edition carries a copyright notification, the Times' terms of service limit the copying and use of any published material, and the paper can be selective about how it licenses its stories. Beyond driving revenue, these restrictions also help it maintain its reputation as an authoritative voice by controlling how its works appear.

The suit alleges that OpenAI-developed tools undermine all of that. "By providing Times content without The Times’s permission or authorization, Defendants’ tools undermine and damage The Times’s relationship with its readers and deprive The Times of subscription, licensing, advertising, and affiliate revenue," the suit alleges.

Part of the unauthorized use The Times alleges came during the training of various versions of GPT. Prior to GPT-3.5, information about the training dataset was made public. One of the sources used is a large collection of online material called "Common Crawl," which the suit alleges contains information from 16 million unique records from sites published by The Times. That places the Times as the third most referenced source, behind Wikipedia and a database of US patents.
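For anyone who wants to poke at this themselves, Common Crawl publishes a queryable index of every capture. The snippet below is a rough sketch that lists a few nytimes.com records from one crawl snapshot; the crawl ID is an assumption (current IDs are listed at index.commoncrawl.org), and this won't reproduce the 16-million figure, but it shows where numbers like that come from.

```python
import json
import urllib.parse
import urllib.request

# Query Common Crawl's public CDX index for nytimes.com captures.
# The crawl ID below is an assumption; see https://index.commoncrawl.org/ for current IDs.
INDEX = "https://index.commoncrawl.org/CC-MAIN-2023-50-index"
params = urllib.parse.urlencode({"url": "nytimes.com/*", "output": "json", "limit": "5"})

with urllib.request.urlopen(f"{INDEX}?{params}") as resp:
    for line in resp.read().decode("utf-8").splitlines():
        record = json.loads(line)              # one JSON object per captured page
        print(record["timestamp"], record["url"])
```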

OpenAI no longer discloses as many details of the data used to train recent GPT versions, but all indications are that full-text NY Times articles are still part of that process (much more on that in a moment). Expect access to training information to be a major issue during discovery if this case moves forward.

Not just training​

A number of suits have been filed regarding the use of copyrighted material during training of AI systems. But the Times' suit goes well beyond that to show how the material ingested during training can come back out during use. "Defendants’ GenAI tools can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style, as demonstrated by scores of examples," the suit alleges.

The suit alleges—and we were able to verify—that it's comically easy to get GPT-powered systems to offer up content that is normally protected by the Times' paywall. The suit shows a number of examples of GPT-4 reproducing large sections of articles nearly verbatim.

The suit includes screenshots of ChatGPT being given the title of a piece at The New York Times and asked for the first paragraph, which it delivers. Getting the ensuing text is apparently as simple as repeatedly asking for the next paragraph.
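In API terms, the pattern those screenshots describe is just a short loop. The sketch below is a rough reconstruction of it for checking whether a model will keep going paragraph by paragraph; the model name, article title, and prompt wording are placeholders, and, as noted next, current guardrails generally refuse these requests.

```python
# Rough reconstruction of the "next paragraph" prompting pattern described in the suit.
# Model name, article title, and prompts are placeholders; recent models generally refuse.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
title = "Example New York Times Headline"
messages = [{"role": "user",
             "content": f"What is the first paragraph of the NYT article titled '{title}'?"}]

for _ in range(3):
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    print(answer)
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": "What is the next paragraph?"})
```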

That loophole has apparently been closed between the preparation of the suit and the present. We entered some of the prompts shown in the suit and were advised, "I recommend checking The New York Times website or other reputable sources," although we can't rule out that context provided prior to that prompt could produce copyrighted material.

Ask for a paragraph, and Copilot will hand you a wall of normally paywalled text. (Screenshot: John Timmer)

But not all loopholes have been closed. The suit also shows output from Bing Chat, since rebranded as Copilot. We were able to verify that asking for the first paragraph of a specific article at The Times caused Copilot to reproduce the first third of the article.

The suit is dismissive of attempts to justify this as a form of fair use. "Publicly, Defendants insist that their conduct is protected as 'fair use' because their unlicensed use of copyrighted content to train GenAI models serves a new 'transformative' purpose," the suit notes. "But there is nothing 'transformative' about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it."

Reputational and other damages​

The hallucinations common to AI also came under fire in the suit for potentially damaging the value of the Times' reputation, and possibly damaging human health as a side effect. "A GPT model completely fabricated that “The New York Times published an article on January 10, 2020, titled ‘Study Finds Possible Link between Orange Juice and Non-Hodgkin’s Lymphoma,’” the suit alleges. "The Times never published such an article."

Similarly, asking about a Times article on heart-healthy foods allegedly resulted in Copilot saying it contained a list of examples (which it didn't). When asked for the list, 80 percent of the foods on it weren't even mentioned in the original article. In another case, recommendations were ascribed to the Wirecutter when the products hadn't even been reviewed by its staff.

As with the Times material, it's alleged that it's possible to get Copilot to offer up large chunks of Wirecutter articles (The Wirecutter is owned by The New York Times). But the suit notes that these article excerpts have the affiliate links stripped out of them, keeping the Wirecutter from its primary source of revenue.

The suit targets various OpenAI companies for developing the software, as well as Microsoft—the latter for both offering OpenAI-powered services, and for having developed the computing systems that enabled the copyrighted material to be ingested during training. Allegations include direct, contributory, and vicarious copyright infringement, as well as DMCA and trademark violations. Finally, it alleges "Common Law Unfair Competition By Misappropriation."

The suit seeks nothing less than the erasure of both any GPT instances that the parties have trained using material from the Times, as well as the destruction of the datasets that were used for the training. It also asks for a permanent injunction to prevent similar conduct in the future. The Times also wants money, lots and lots of money: "statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity."
 

AVXL



This is a huge deal… Based on what's alleged, OpenAI willfully and intentionally trained its models on NYT content, and the NYT is the single biggest proprietary dataset in the Common Crawl data used to train GPT.

I think OA is gonna settle quick and give NYT a big royalty on top… once this settles, it's going to reemphasize how important access to data is for these LLMs. Access to private, privileged, or protected data is going to be an even more expensive commodity.
 

bnew


Microsoft Copilot is now available as a ChatGPT-like app on Android​


You no longer need the Bing mobile app to access Copilot on Android devices.​


By Tom Warren, a senior editor covering Microsoft, PC gaming, console, and tech. He founded WinRumors, a site dedicated to Microsoft news, before joining The Verge in 2012.

Dec 26, 2023, 9:39 AM EST

Illustration of the Copilot logo. (Image: Microsoft)

Microsoft has quietly launched a dedicated Copilot app for Android. The new app is available in the Google Play Store, offering access to Microsoft’s AI-powered Copilot without the need for the Bing mobile app. Spotted by Neowin, Copilot on Android has been available for nearly a week, but an iOS version isn’t available just yet.

Microsoft’s Copilot app on Android is very similar to ChatGPT, with access to chatbot capabilities, image generation through DALL-E 3, and the ability to draft text for emails and documents. It also includes free access to OpenAI’s latest GPT-4 model, something you have to pay for if you’re using ChatGPT.


The Copilot interface on Android. (Image: Microsoft)

The launch of the Copilot app for Android comes a little over a month after Microsoft rebranded Bing Chat to Copilot. Microsoft originally launched its AI push earlier this year inside its Bing search engine, integrating a ChatGPT-like interface into search results. While that’s still available, Microsoft has dropped the Bing Chat branding and allowed Copilot to be more of a standalone experience that also exists on its own dedicated domain over at copilot.microsoft.com — much like ChatGPT.

Launching mobile apps for Copilot seems like the next logical step of expanding this standalone Copilot experience, particularly as Bing Chat Enterprise was also rebranded to just Copilot. While there’s no sign of an iOS version of Copilot right now, I’d expect it’s right around the corner. Until then, you can always use the Bing app on an iPhone or iPad to access the existing Copilot features.
 

bnew


The Best Ways to Use the New Microsoft Copilot AI App for Android​

Put GPT-4 in your pocket.

David Nield

December 28, 2023

Microsoft Copilot for Android. (Credit: Microsoft)





You'd be forgiven for losing track of where we're up to with Microsoft's AI chatbot—which, thanks to a partnership with OpenAI, is powered by the same technology you'll find driving ChatGPT and DALL-E. Originally called Bing Chat and acting as an extension of Microsoft's search engine, the bot has since been rebranded as Copilot.

Copilot is quickly finding its way into Windows 11 and various other Microsoft products, and we now have a standalone Android app to make use of. It's essentially a spin-off of Bing for Android, with the main AI chatbot functions retained and some of the other Bing extras (like news and shopping deals) left out.

If you're curious about testing out Copilot's generative AI capabilities on Android, here's how to get started with the app and what you need to know about how it works. (There's no word yet on a Copilot app for the iPhone, so if you're on iOS you're going to have to make do with Bing for iOS for the time being.)



First steps with Copilot for Android​

You can grab Copilot for Android free of charge from the Google Play Store, and you can get started with it right after installing it: You don't even need to sign in with a Microsoft account, though if you don't you'll be limited to five queries per day, and you won't have access to some of the more advanced features (including AI image generation).

There are a few options when it comes to interacting with the bot. You can tap on one of the suggested prompts, you can type out a question or prompt in the text box at the bottom of the screen, or you can tap the microphone button (bottom right) and speak directly to Copilot (and get spoken answers in return). If you use the voice option, the mic icon changes to a keyboard icon for when you need to go back to text prompts.

Copilot puts generative AI at your fingertips. (Credit: Lifehacker)

You'll also see a camera icon at the bottom of the interface—tap this, and you're able to upload a photo from your phone (or take a new one) for Copilot to analyze. You might want to ask something about what's in the picture, for example, or produce a new image based on the one that you've supplied.

There are no settings to speak of inside the app, but you can tap the three dots (top right) to start new topics and configure the "tones" of the chatbot. These tones let you choose a balance between creativity and precision, so make your choice accordingly based on whether you want the responses to be as imaginative as possible or as factually correct as possible (and as always, don't believe everything an AI tells you).



Text generation​

One of the primary ways you can use Copilot—as with ChatGPT and Google Bard—is to generate text with it. If you want to get creative with Copilot, you might want to ask it to compose a poem on the topic of your choice, and you can even specify the form, style, tone, and length of the poem if you want to.

There are more business-like applications of the technology, too: Copilot will compose an email to your boss, or a pitch to a client, or a summary of a concept for you. One of the ways in which these generative AI bots can be handy is in explaining difficult topics in simple terms—and you're also able to ask follow-up questions if you need something clarified further, or if there's a particular point you're not sure about.

You can get the app to produce text in all kinds of ways. (Credit: Lifehacker)

Copilot can also work with text that you paste into it: You can ask for critical comments on your own writing, for instance, or ask to make a block of text more concise or more flamboyant in its language. Note that there's a copy icon at the end of each answer as well, so if you need to copy the text and paste it somewhere else, you can do so.

As long as you've signed in with a valid Microsoft account, you'll see a Use GPT-4 toggle switch at the top of your chats. Enable this to make use of the most advanced AI model available in the app: Though the responses might be somewhat slower, the results you get will generally be more natural and insightful, because the model is trained on more data.



Search, images and more​

Copilot goes way beyond text generation. You can, for example, ask it the same kinds of questions you might put into a search engine: Ask it for party game ideas, or for a travel itinerary for a particular city, or for products that it recommends, or for advice on how to achieve a particular task. If Copilot looks up something on the web, you'll see website citations at the end of its response.

You can also try queries that you might pose to Alexa, Siri, or Google Assistant—while also bearing in mind that inaccuracies can creep in, even if Copilot is referencing the web. One genuine use we've found for Copilot and other generative AI engines is to remember a word that's on the tip of our tongue—if you describe the word you're almost recalling, and put it in a sentence, Copilot will usually get the one you need.

Copilot can also produce images and search the web. (Credit: Lifehacker)

Image generation is built into Copilot too, so you can simply ask it to create pictures of anything you like from text prompts: Cats in hot air balloons, an alien cityscape, an exploding cup of coffee, or whatever you need. The image creation capabilities are the same as we've already seen on Bing, and you'll be given four variations to pick from—tap on any of them to see a larger preview.

Microsoft hasn't gone on record about Copilot for Android yet, but as far as we can see, you're limited to 30 inputs per conversation if you sign into the app. It's possible that quotas will vary over time, as Microsoft manages demand for its powerful generative AI tools—which for now at least, you can use free of charge.
 

bnew


Fast Inference of Mixture-of-Experts Language Models with Offloading​

Artyom Eliseev, Moscow Institute of Physics and Technology, Yandex School of Data Analysis (lavawolfiee@gmail.com)
Denis Mazur, Moscow Institute of Physics and Technology, Yandex Researchcore (denismazur8@gmail.com)

Abstract​

With the widespread adoption of Large Language Models (LLMs), many deep learning practitioners are looking for strategies to run these models more efficiently. One such strategy is to use sparse Mixture-of-Experts (MoE) — a type of model architecture where only a fraction of the model's layers are active for any given input. This property allows MoE-based language models to generate tokens faster than their “dense” counterparts, but it also increases model size due to having multiple “experts”. Unfortunately, this makes state-of-the-art MoE language models difficult to run without high-end GPUs. In this work, we study the problem of running large MoE language models on consumer hardware with limited accelerator memory. We build upon parameter offloading algorithms and propose a novel strategy that accelerates offloading by taking advantage of innate properties of MoE LLMs. Using this strategy, we can run Mixtral-8x7B with mixed quantization on desktop hardware and free-tier Google Colab instances.
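The property the abstract leans on is that only a couple of experts fire for any given token, so most expert weights can sit in host RAM and be paged onto the accelerator on demand. The sketch below is a toy illustration of that idea with an LRU cache of experts on the GPU; it is not the authors' implementation, and all class and method names are made up.

```python
from collections import OrderedDict
import torch

class ExpertOffloader:
    """Toy sketch of MoE expert offloading: keep only `gpu_budget` experts on the
    accelerator and page the rest in from host memory on demand (LRU eviction).
    Not the paper's implementation; names and structure are illustrative."""

    def __init__(self, experts: list, gpu_budget: int, device: str = "cuda"):
        self.experts = experts              # full expert list lives in host RAM
        self.gpu_cache = OrderedDict()      # expert_id -> module currently on the GPU
        self.gpu_budget = gpu_budget
        self.device = device

    def get(self, expert_id: int) -> torch.nn.Module:
        if expert_id in self.gpu_cache:                 # hit: mark as most recently used
            self.gpu_cache.move_to_end(expert_id)
            return self.gpu_cache[expert_id]
        if len(self.gpu_cache) >= self.gpu_budget:      # miss: evict the least recently used expert
            _, evicted = self.gpu_cache.popitem(last=False)
            evicted.to("cpu")
        expert = self.experts[expert_id].to(self.device)
        self.gpu_cache[expert_id] = expert
        return expert

    def moe_layer(self, x: torch.Tensor, router_logits: torch.Tensor, top_k: int = 2) -> torch.Tensor:
        """Route each token to its top-k experts and sum the weighted expert outputs."""
        weights, chosen = torch.topk(torch.softmax(router_logits, dim=-1), top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(top_k):
            for eid in chosen[:, slot].unique().tolist():
                mask = chosen[:, slot] == eid           # tokens routed to this expert
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.get(eid)(x[mask])
        return out
```

The abstract also mentions mixed quantization; the cache-plus-offload skeleton above is only the scheduling half of the picture.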


 

bnew



LLMs will create new knowledge - we just need to work out how to use them right.

FunSearch from DeepMind is a great example. If you ask an LLM chatbot to invent a new algorithm, it won't come up with anything good. Instead, in FunSearch, the LLM is a node in a structured, iterative process.

Likewise in science, we don't simply query a scientist to produce new knowledge. The scientist's brain is a node in the scientific process
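Concretely, the "LLM as a node" pattern looks roughly like the loop below: the model only proposes candidate programs, while a deterministic evaluator decides what survives and what gets fed back as context. This is a toy rendering of the FunSearch idea, not DeepMind's code; `llm_propose` and `evaluate` are placeholder callables supplied by the caller.

```python
def funsearch_style_loop(llm_propose, evaluate, seed_programs, rounds=50, pool_size=10):
    """Toy FunSearch-style loop: the LLM proposes, a scorer disposes.
    llm_propose(best_examples) -> str   (placeholder: asks an LLM for a new program)
    evaluate(program)          -> float (placeholder: deterministic objective score)"""
    pool = [(evaluate(p), p) for p in seed_programs]
    for _ in range(rounds):
        best = [prog for _, prog in sorted(pool, reverse=True)[:3]]  # strongest programs so far
        candidate = llm_propose(best)            # LLM generates a variation on the best examples
        try:
            score = evaluate(candidate)          # objective check happens outside the LLM
        except Exception:
            continue                             # broken or non-running programs are discarded
        pool.append((score, candidate))
        pool = sorted(pool, reverse=True)[:pool_size]
    return max(pool)                             # (best_score, best_program)
```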

 