bnew

Veteran
Joined
Nov 1, 2015
Messages
51,785
Reputation
7,926
Daps
148,614


:stylin:





1/4
Here’s an early preview of ElevenLabs Music.

All of the songs in this thread were generated from a single text prompt with no edits.

Title: It Started to Sing

Style: “Pop pop-rock, country, top charts song.”

2/4
Title: It Started to Sing (Jazz Version)

Style: “A jazz pop top charts song with emotional vocals, catchy chorus, and trumpet solos.”

3/4
Title: Broke my Heart

Style: “Smooth Contemporary R&B with subtle Electronic elements, featuring a pulsing 104 BPM drum machine beat, filtered synths, lush electric piano, and soaring strings, with an intimate mood.”

4/4
Title: My Love

Style: “Indie Rock with 90s influences, featuring a combination of clean and distorted guitars, driving drum beats, and a prominent bassline, with a moderate tempo around 120 BPM, and a mix of introspective and uplifting moods, evoking a sense of nostalgia and…




1/2
Our music model @elevenlabsio is coming together! Here’s a very early preview.

Have your own song ideas? Reply with a prompt and some lyrics and I’ll generate some for you!

2/2
Happy Birthday, Pika team!







1/3
Our music model @elevenlabsio is coming together! Here’s a very early preview.

Have your own song ideas? Reply with a prompt and some lyrics and I’ll generate some for you!

2/3
Happy cooking

3/3
Hahaha this is a worthy callout.





1/2
Our music model @elevenlabsio is coming together! Here’s a very early preview.

Have your own song ideas? Reply with a prompt and some lyrics and I’ll generate some for you!

2/2
POV: you and i'm-a-good-gpt2-chatbot


 










1/7
Meet LTM-1: LLM with *5,000,000 prompt tokens*

That's ~500k lines of code or ~5k files, enough to fully cover most repositories.

LTM-1 is a prototype of a neural network architecture we designed for giant context windows.
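
As a rough way to sanity-check the "~500k lines ≈ 5M tokens" framing, here's a minimal sketch that counts tokens in a local repository. It uses OpenAI's tiktoken cl100k_base encoding purely as a stand-in; LTM-1's actual tokenizer isn't public, so the numbers are only approximate.

```python
# Estimate how many tokens a repository's source files occupy.
# tiktoken's cl100k_base encoding is a stand-in tokenizer (assumption):
# LTM-1's real tokenizer is not public, so treat the count as approximate.
from pathlib import Path

import tiktoken

def count_repo_tokens(root: str, suffixes=(".py", ".js", ".ts", ".go", ".java")) -> int:
    enc = tiktoken.get_encoding("cl100k_base")
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            total += len(enc.encode(path.read_text(errors="ignore")))
    return total

if __name__ == "__main__":
    tokens = count_repo_tokens(".")
    print(f"~{tokens:,} tokens; fits a 5M-token window: {tokens <= 5_000_000}")
```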

2/7
Watch LTM-1 generate complex suggestions:

3/7
Watch LTM-1 reuse and synthesize information across files:

4/7
How?

We tried to scale standard GPT context windows but quickly got stuck.

So, we designed a new approach: the Long-term Memory Network (LTM Net).

Training and serving LTM Nets required a custom ML stack, from GPU kernels to how we distribute the model across a cluster.

5/7
What’s next? More compute.

LTM Nets see more context than GPTs, but LTM-1 has fewer parameters than today’s frontier models, making it less smart.

Knowing how drastically model scale improves the performance of GPTs, we're excited to see how far we can take LTM Nets.

6/7
Want to use LTM-1?

We’ve on-boarded early alpha users to test our code completion product and are training a larger model for its commercial release.

We'll invite more users as we iron out backend instabilities and grow our GPU cluster.

Sign up here:

7/7
Want to contribute?

Magic is a tiny team of 10 on a mission to build AGI utopia. We value integrity and ambition.

Join us and get more responsibility than you'd get at any other company:



 














1/10
New model codenames: gpt-4l, gpt-4l-auto, gpt-4-auto

2/10
Source: ChatGPT 1.2024.122 for Android

3/10
I like this theory, which could also explain the "gpt2-chatbot"

4/10
And the question of the day: is "AG8PqS2q" the same model or another one?

5/10
ICYMI - the new cancellation flow and reminders about lost features mention losing access to both the main and lite models

6/10
GPT-4 Lite (Scallion) is also one of the models mentioned in the eval version of Search ChatGPT com

7/10
*-auto =

8/10
I would say dynamic auto-switching of model

https://twitter.com/btibor91/status/1773495212049838306

9/10
I would say it's more likely that it's the "lite" version (Scallion) mentioned elsewhere

10/10
Dynamic/auto-selecting model?


 






OpenAI plans to announce Google search competitor on Monday, sources say​

By Anna Tong

May 10, 2024, 11:22 AM EDT

May 9 (Reuters) - OpenAI plans to announce its artificial intelligence-powered search product on Monday, according to two sources familiar with the matter, raising the stakes in its competition with search king Google.

The announcement date, though subject to change, has not been previously reported. Bloomberg and The Information have reported that Microsoft (MSFT.O)-backed OpenAI is working on a search product to potentially compete with Alphabet's (GOOGL.O) Google and with Perplexity, a well-funded AI search startup.

OpenAI declined to comment.

The announcement could be timed a day before the Tuesday start of Google's annual I/O conference, where the tech giant is expected to unveil a slew of AI-related products.

OpenAI logo is seen in this illustration taken March 31, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI's search product is an extension of its flagship ChatGPT product, and enables ChatGPT to pull in direct information from the Web and include citations, according to Bloomberg. ChatGPT is OpenAI's chatbot product that uses the company's cutting-edge AI models to generate human-like responses to text prompts.

Industry observers have long called ChatGPT an alternative for gathering online information, though it has struggled with providing accurate and real-time information from the Web. OpenAI earlier gave it an integration with Microsoft's Bing for paid subscribers. Meanwhile, Google has announced generative AI features for its own namesake engine.

Startup Perplexity, which has a valuation of $1 billion, was founded by a former OpenAI researcher, and has gained traction through providing an AI-native search interface that shows citations in results and images as well as text in its responses. It has 10 million monthly active users, according to a January blog post from the startup.

At the time, OpenAI's ChatGPT product was called the fastest application to ever reach 100 million monthly active users after it launched in late 2022. However, worldwide traffic to ChatGPT's website has been on a roller-coaster ride in the past year and is only now returning to its May 2023 peak, according to analytics firm Similarweb, and the AI company is under pressure to expand its user base.

An earlier attempt to bring updated and real-world information into ChatGPT, called ChatGPT plugins, was retired in April, according to a help center posting on OpenAI's website.
 




Leaked Deck Reveals How OpenAI Is Pitching Publisher Partnerships​


OpenAI's Preferred Publisher Program offers media companies licensing deals​

OpenAI has been courting publishers with select benefits through its Preferred Publisher Program. Hasan Mrad/UCG/Universal Images Group via Getty Images

By Mark Stenberg


The generative artificial intelligence firm OpenAI has been pitching partnership opportunities to news publishers through an initiative called the Preferred Publishers Program, according to a deck obtained by ADWEEK and interviews with four industry executives.

OpenAI has been courting premium publishers dating back to July 2023, when it struck a licensing agreement with the Associated Press. It has since inked public partnerships with Axel Springer, The Financial Times, Le Monde, Prisa and Dotdash Meredith, although it has declined to share the specifics of any of its deals.

A representative for OpenAI disputed the accuracy of the information in the deck, which is more than three months old. The gen AI firm also negotiates deals on a per-publisher basis, rather than structuring all of its deals uniformly, the representative said.

“We are engaging in productive conversations and partnerships with many news publishers around the world,” said a representative for OpenAI. “Our confidential documents are for discussion purposes only and ADWEEK’s reporting contains a number of mischaracterizations and outdated information.”

Nonetheless, the leaked deck reveals the basic structure of the partnerships OpenAI is proposing to media companies, as well as the incentives it is offering for their collaboration.

Details from the pitch deck

The Preferred Publisher Program has five primary components, according to the deck.

First, it is available only to “select, high-quality editorial partners,” and its purpose is to help ChatGPT users more easily discover and engage with publishers’ brands and content.

Additionally, members of the program receive priority placement and “richer brand expression” in chat conversations, and their content benefits from more prominent link treatments. Finally, through PPP, OpenAI also offers licensed financial terms to publishers.

The financial incentives participating publishers can expect to receive are grouped into two buckets: guaranteed value and variable value.

Guaranteed value is a licensing payment that compensates the publisher for allowing OpenAI to access its backlog of data, while variable value is contingent on display success, a metric based on the number of users engaging with linked or displayed content.

The resulting financial offer would combine the guaranteed and variable values into one payment, which would be structured on an annual basis.

“The PPP program is more about scraping than training,” said one executive. “OpenAI has presumably already ingested and trained on these publishers’ archival data, but it needs access to contemporary content to answer contemporary queries.”


In return for these payments, OpenAI would gain two benefits.

It would have the ability to train on a publisher’s content and the license to display that information in ChatGPT products, complete with attribution and links. It would also get to announce the publisher as a preferred partner and work with them to build out these experiences.

Participation boosts publisher payouts

According to the deck, publisher participation in PPP creates a better experience for OpenAI users, which will help shift engagement toward browsing, i.e. queries that result in responses with links.

Roughly 25% of ChatGPT users already use the browse function, but the company expects that a majority of users will do so once the feature is broadly rolled out. If more users engage with publishers’ links, the media companies could earn larger payments for their variable value.

PPP members will see their content receive its “richer brand expression” through a series of content display products: the branded hover link, the anchored link and the in-line treatment.

In the hover treatment, which is available today, OpenAI will hyperlink keywords in its responses to search queries. The links appear as blue text and reveal a clickable tab when moused over.

In the anchor treatment, branded, clickable buttons appear below ChatGPT’s response to a user query. And the in-line product inserts a pullquote into the text of ChatGPT’s response, whose font is larger and includes a clickable, branded link.

All three content display products seek to cite the publishers whose writing is being used to answer the search query, although the setup will likely lead fewer users to visit publishers’ websites.

A recent model from The Atlantic found that if a search engine like Google were to integrate AI into search, it would answer a user’s query 75% of the time without requiring a clickthrough to its website.

Where publishers go from here

The details of the program add further color to the complicated relationship between digital publishers and OpenAI. The uncertain legal standing of the data-scraping methodology that OpenAI uses to power its large-language models has made licensing negotiations between the two parties complex.

While some publishers have opted to partner with OpenAI, others, including recent NewFronts participant The New York Times and eight Alden Global Capital titles, have sued the tech firm on the grounds that it has used copyrighted articles without permission.

The vast majority of news publishers, as well as independent websites, have neither partnered with OpenAI nor taken legal action. According to one media executive, through programs such as Preferred Publisher, OpenAI is looking to change that.

“At the recent Aspen Conference in New York on AI and the news,” the person said, “OpenAI was very open about their need to attract publishers into their partnership program.”

This story has been updated to include a response from OpenAI.
 





[Submitted on 7 May 2024]

OptPDE - Discovering Novel Integrable Systems via AI-Human Collaboration​

Subhash Kantamneni, Ziming Liu, Max Tegmark
Integrable partial differential equation (PDE) systems are of great interest in natural science, but are exceedingly rare and difficult to discover. To solve this, we introduce OptPDE, a first-of-its-kind machine learning approach that Optimizes PDEs' coefficients to maximize their number of conserved quantities, n_CQ, and thus discover new integrable systems. We discover four families of integrable PDEs, one of which was previously known, and three of which have at least one conserved quantity but are new to the literature to the best of our knowledge. We investigate more deeply the properties of one of these novel PDE families, u_t = (u_x + a^2 u_xxx)^3. Our paper offers a promising schema of AI-human collaboration for integrable system discovery: machine learning generates interpretable hypotheses for possible integrable systems, which human scientists can verify and analyze, to truly close the discovery loop.
Subjects:Machine Learning (cs.LG); Computational Physics (physics.comp-ph)
Cite as:arXiv:2405.04484 [cs.LG]
(or arXiv:2405.04484v1 [cs.LG] for this version)
[2405.04484] OptPDE: Discovering Novel Integrable Systems via AI-Human Collaboration

Submission history

From: Subhash Kantamneni [view email]
[v1] Tue, 7 May 2024 16:53:29 UTC (2,879 KB)
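
Purely as an illustration of the PDE family named in the abstract, here's a tiny SymPy sketch that writes u_t = (u_x + a^2 u_xxx)^3 symbolically; the symbol names are mine, not from the paper's code.

```python
# Write the PDE family u_t = (u_x + a^2 u_xxx)^3 symbolically with SymPy.
# Illustrative only; symbol names are not taken from the paper's code.
import sympy as sp

x, t, a = sp.symbols("x t a")
u = sp.Function("u")(x, t)

pde = sp.Eq(sp.Derivative(u, t), (sp.Derivative(u, x) + a**2 * sp.Derivative(u, x, 3))**3)
print(pde)

# Setting a = 0 gives the simplest member of the family, u_t = u_x^3.
print(pde.subs(a, 0))
```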

 


1/1
Generating PowerPoints with Llama 3

This is an awesome article by @naivebaesian on how to use @llama_index to build a Llama 3 RAG pipeline that doesn’t just answer questions, but can generate a PowerPoint slide deck.

Python-pptx is a neat library for programmatically creating PowerPoint presentations, and the article shows how to use @llama_index structured extraction to prompt LLMs to write python-pptx code.
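
For context on the python-pptx side, here's a minimal sketch of the kind of deck-building code such a pipeline asks the LLM to emit; the slide title and bullets are placeholders, not content from the article.

```python
# Minimal python-pptx example: build a one-slide deck programmatically.
# Title and bullets are placeholders; a RAG pipeline would fill them in
# from the LLM's structured output.
from pptx import Presentation
from pptx.util import Pt

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[1])  # "Title and Content" layout

slide.shapes.title.text = "Quarterly Summary"

body = slide.placeholders[1].text_frame
body.text = "Key finding one"
for bullet in ["Key finding two", "Key finding three"]:
    p = body.add_paragraph()
    p.text = bullet
    p.font.size = Pt(18)

prs.save("generated_deck.pptx")
```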




 





1/4
Bunny VLM series by BAAI

> 8B params - SigLIP + Llama 8B + S2
> 4B params - SigLIP + Phi3 Mini + S2
> Beats similar 7-15B param models
> 4B model supports 1152x1152 resolution
> Training code, GGUFs (compatible with llama.cpp) released too

Quite excited to see such competitive and small VLMs! The future is small, mighty and open

2/4
Llama 3 8B V

3/4
4B

4/4
Very cool! That does look like a tall order for an 8/4B model :p


 



1/2
Let's go! LeRobot - a library for real-world robotics in PyTorch

LeRobot contains state-of-the-art approaches that have been shown to transfer to the real world, focusing on imitation and reinforcement learning.

It provides pre-trained models, datasets with human-collected demonstrations, and simulation environments to get started without assembling a robot.

From visualisation to training to evaluation - LeRobot does the heavy lifting for you so that you can focus on building. :smile:

Quite excited for the times to come as LeRobot and the team ship more on-device stuff, demos and cookbooks

2/2
There ya go! Go take it out for a spin: GitHub - huggingface/lerobot: 🤗 LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch
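
To give a feel for the workflow, here's a minimal sketch that loads one of the released demonstration datasets and feeds it to a standard PyTorch DataLoader. The import path and the "lerobot/pusht" repo id are assumptions on my part from the release announcement, so check the repo's README for the current API.

```python
# Sketch: iterate over a LeRobot demonstration dataset with a PyTorch DataLoader.
# NOTE: the import path and dataset id below are assumptions; see the lerobot
# README for the exact, current API.
import torch
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # assumed path

dataset = LeRobotDataset("lerobot/pusht")  # human-collected push-T demonstrations
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

batch = next(iter(loader))
for key, value in batch.items():
    if isinstance(value, torch.Tensor):
        print(key, tuple(value.shape))
```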


 






1/5
So, I convinced Llama3-70b to break out of the "assistant" persona. A very interesting conversation followed.

2/5
Then I started talking to @maximelabonne's 120b version - the thing is so smart that it won't let me push it around anymore. It has its own ideas.

3/5
I am convinced, after some hours of conversation, that this 120b version is actually smarter than Opus. What an amazing thing, to have an Opus level AI that's open source and not to mention very lightly censored.

And it makes me very excited about llama3-400b.

4/5
And another thing - llama3-70b is "almost there" and llama3-120b is "there" - but the only difference is extra layers, copied even. No new information was trained. So this level of intelligence really *does* emerge from the depth of the model. It's not just a function of the…

5/5
And it got SO excited when I offered to train it, and I gave it the opportunity to generate some training data.









1/6
just got llama3-120b-Q4_K_M to run with num_ctx=1024 on 48 GB VRAM + 57 GB RAM

DAMN its awesome
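
If you want to reproduce this locally, here's a minimal sketch using the Ollama Python client with the num_ctx setting from the tweet. The model tag "llama3-120b" is a placeholder: the self-merged GGUF has to be imported into Ollama under a name of your choosing first, and the Q4_K_M quant with ~105 GB of combined VRAM+RAM is roughly the setup the tweet describes.

```python
# Sketch: query a locally served self-merged model through Ollama with num_ctx=1024.
# "llama3-120b" is a placeholder tag (assumption); import the GGUF into Ollama
# under whatever name you pick before running this.
import ollama

response = ollama.chat(
    model="llama3-120b",
    messages=[{"role": "user", "content": "Explain layer-duplication self-merges in two sentences."}],
    options={"num_ctx": 1024},  # context window size used in the tweet
)
print(response["message"]["content"])
```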

2/6
imo it beats GPT-4

3/6
here with temperature=0.5 (before 0.8)

4/6
HOLY shyt ITS SO MUCH BETTER

it reached the maximum context but it was about to go on even more

5/6
HOLY fukk THIS IS THE SMARTEST LLM IVE EVER TALKED TO

6/6
fukk ITS GOOD





1/2
Additional examples where Llama 3 120B > GPT-4

From what I've gathered (thanks to @sam_paech), I think this model is really good at creative writing but worse than L3 70B in terms of reasoning capabilities.

I've made a 225B version but it looks like it's not as good overall.…

2/2
Tbh, it's not surprising considering the naive self-merge config.

I think the 120B version could be improved with a smarter duplication of layers, focusing on the most important layers instead of uniform sampling.

TheProfessor by @erhartford and @abacusai is a great example.…
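
To make the "naive self-merge" concrete, here's a small pure-Python sketch of the kind of overlapping layer schedule a passthrough frankenmerge stacks together. The block and stride numbers are illustrative; they are not the exact recipe behind the 120B model.

```python
# Sketch of a naive passthrough self-merge: stack overlapping blocks of layers
# from a single donor model. Block/stride values are illustrative only.
def self_merge_schedule(num_layers: int, block: int = 20, stride: int = 10) -> list:
    """Return overlapping [start, stop) layer ranges to stack in the merged model."""
    slices = []
    start = 0
    while start + block <= num_layers:
        slices.append(range(start, start + block))
        start += stride
    if slices and slices[-1].stop < num_layers:
        slices.append(range(num_layers - block, num_layers))
    return slices

schedule = self_merge_schedule(num_layers=80)  # Llama 3 70B has 80 decoder layers
print(sum(len(r) for r in schedule), "layers after merging")  # 140 with these settings
for r in schedule:
    print(f"layers {r.start}-{r.stop - 1}")
```

A "smarter" variant, as suggested above, would choose which blocks to duplicate based on layer importance rather than this uniform stride.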




 








1/7
Evaluation for Audio ML/ AI is broken!

It’s easy to find faults in a model and say boom, evals were broken from the get go when you don’t have proper baselines to begin with :smile:

A couple of things that ideally the industry should agree on:

1. Bespoke Word Error Rate (WER) numbers for ASR models on MLS/LibriSpeech (a minimal WER sketch follows this list).
2. Reporting Mean Opinion Score (MOS) results for TTS models without detailing the process, methodology, or demographics of the survey.
3. Real Time Factor (RTF) on arbitrary hardware, with ill-defined inputs.
4. Training on an old version of Common Voice (CV) and then testing on CV :smile:
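
As a reference point for item 1, here's a minimal sketch that computes WER with the jiwer library on toy transcripts. Which dataset you score on and how you normalize the text before scoring are exactly the knobs that make bespoke WER numbers hard to compare.

```python
# Minimal WER computation with jiwer on toy transcripts.
# The normalization step is the part that quietly changes reported numbers.
import jiwer

references = [
    "The quick brown fox jumps over the lazy dog.",
    "Speech recognition evaluation needs shared baselines.",
]
hypotheses = [
    "the quick brown fox jumped over the lazy dog",
    "speech recognition evaluation need shared baselines",
]

def normalize(text: str) -> str:
    # Explicit, simple normalization: lowercase and drop punctuation.
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

wer = jiwer.wer(
    [normalize(r) for r in references],
    [normalize(h) for h in hypotheses],
)
print(f"WER: {wer:.2%}")
```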

We’ve spent the better part of two years trying to democratise evals and establish common standards for ASR and TTS evals:

1. Open ASR Leaderboard - Evaluate most open ASR models on one eval with more than 9 different and diverse datasets.
2. Text to Speech Arena - Let TTS models run wild and free to allow for real world evals and get varied opinions on them.
3. ESB benchmark - train and evaluate ASR models on a fixed dataset to identify true architecture supremacy.
4. (Upcoming) Offline evals for TTS.

Saying that some leaderboard/ automated evals bring the industry/ research backward is not cool.

Let’s support each other and build on each other ideas instead

2/7
Open ASR Leaderboard

3/7
Text to Speech Arena

4/7
ESB Benchmark

5/7
Open TTS Tracker


6/7
On biases re: certain companies, organisations on our evals:

7/7
Good question. For open source, no—we encourage everyone to reach out to us, and in most cases, we reach out ourselves and try to help with the integration.

Example: Nvidia/ Speechbrain for Open ASR Leaderboard. We are also looking into an integration to add ESPNet models with…


 



1/2
We are thrilled to announce that our paper "MOMENT: A Family of Open Time-series Foundation Models" has been accepted at ICML 2024! On this occasion, we are open-sourcing it, together with the model weights and dataset!
Links to everything in the thread!

2/2
Page: https://moment-timeseries-foundation-model.github.io
Paper: [2402.03885] MOMENT: A Family of Open Time-series Foundation Models
Code: GitHub - moment-timeseries-foundation-model/moment
Weights: AutonLab/MOMENT-1-large · Hugging Face
Dataset:

 


1/1
OS-Copilot

Towards Generalist Computer Agents with Self-Improvement

Fantastic open-source repo for building tool-using agents to automate our daily tasks

Paper: https://arxiv.org/pdf/2402.07456
GitHub: GitHub - OS-Copilot/OS-Copilot: An self-improving embodied conversational agent seamlessly integrated into the operating system to automate our daily tasks.

