China just wrecked all of American AI. Silicon Valley is in shambles.

Windows 91

Obsolete
Supporter
Joined
May 24, 2022
Messages
10,275
Reputation
2,622
Daps
41,129
Reppin
C:\


Man the damage control is hilarious.

OpenAI definitely had the talk yesterday to try and stop the bleeding.



“Great feedback!” :gladbron:

No shyt. When multiple people have been able to get 10x the results because it’s cheaper (free) with unlimited queries, that’s an exponential gain. So yeah, some people are feeling bamboozled right now.


The cope is off the charts. The ArtificialIntelligence subreddit is cope thread after cope thread. You'd think half the people there were working for OpenAI.

I don't get it because breakthroughs like this only help us consumers. If one company had a monopoly on cars and was selling them for $50K each, but another company figures out how to make and sell a car with the same performance for $5K/ea, why would you the consumer be upset about this? Now the other company has to drop their price or go bankrupt. This is the free market everyone talks about.
 

IIVI

Superstar
Joined
Mar 11, 2022
Messages
12,181
Reputation
2,904
Daps
42,378
Reppin
Los Angeles
The cope is off the charts. The ArtificialIntelligence subreddit is cope thread after cope thread. You'd think half the people there were working for OpenAI.

I don't get it because breakthroughs like this only help us consumers. If one company had a monopoly on cars and was selling them for $50K each, but another company figures out how to make and sell a car with the same performance for $5K/ea, why would you the consumer be upset about this? Now the other company has to drop their price or go bankrupt. This is the free market everyone talks about.
No bullshyt, seeing how these companies actually work it’s most likely true that a lot of their employees are on that subreddit doing it.

If posting on Reddit can prevent billions of dollars from leaking out, they’ll hit that site with an army of shills that are anonymous.
 

The Fade

I don’t argue with niqqas on the Internet anymore
Joined
Nov 18, 2016
Messages
24,194
Reputation
7,507
Daps
131,589
The biggest mistake these Western Cacs ever made was trying to use China as a slave force to manufacture all their shyt...

:picard:
Let Jordan Peterson tell it: “hierarchies are inevitable.”

But so is breaking out of them. He and a lot of (crypto-)racists are Nietzsche worshippers but don’t get the master-slave dichotomy they love so much.

To conquer is to be conquered
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,190
Reputation
8,782
Daps
163,883


Man the damage control is hilarious.

OpenAI definitely had the talk yesterday to try and stop the bleeding.



“Great feedback!” :gladbron:

No shyt. When multiple people have been able to get 10x the results because it’s cheaper (free) with unlimited queries, that’s an exponential gain. So yeah, some people are feeling quite bamboozled right now.







1/4
@markchen90
Congrats to DeepSeek on producing an o1-level reasoning model! Their research paper demonstrates that they’ve independently found some of the core ideas that we did on our way to o1.



2/4
@andrewkslo
One lesson to take from this from a user experience standpoint: a model that is 90% as good as o1 but with no usage limits is an OOM more useful in a knowledge worker’s daily life than an o1-tier model with ~10 queries a day.

In the 3 days of testing R1, I have probably sent 10x as many queries as I have to o1 in the past three months combined — despite being a ChatGPT Plus subscriber.

A great model that I can work and iterate with — without concern of getting cut off after a handful of messages — is so much better than a slightly better model that I can chat with for 5 minutes a day.



3/4
@markchen90
Great feedback!



4/4
@AGItechgonewild
And even more importantly, seeing the thought traces is really a night-and-day difference!

I will always choose a model where I can see the thinking process, so Man & Machine can work together optimally 🙏

[Quoted tweet]
I thought back when o1 was first previewed that reading the thought traces in the model would be important to getting the most value out of them. r1 has confirmed this for me, and for others.

Because it doesn't share its thought traces for review, o1 is a crippled product.



To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
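The replies above all hinge on being able to read R1's chain of thought separately from its answer. The open-weights R1 wraps its reasoning in `<think>...</think>` tags before the final answer (an assumption about the served format — verify against whatever host you use), so splitting trace from answer is a small parsing job, roughly:

```python
import re

def split_r1_output(text: str) -> tuple[str, str]:
    """Split an R1-style completion into (chain_of_thought, final_answer).

    Assumes the reasoning is wrapped in <think>...</think>; if the tags
    are absent, the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    cot = match.group(1).strip()
    answer = text[match.end():].strip()
    return cot, answer

sample = "<think>2+2 is 4, double it for 8.</think>The answer is 8."
cot, answer = split_r1_output(sample)
print(cot)     # → 2+2 is 4, double it for 8.
print(answer)  # → The answer is 8.
```

Showing the two pieces side by side is exactly what lets you audit how your prompt was interpreted, per the posts above.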




1/4
@xlr8harder
I thought back when o1 was first previewed that reading the thought traces in the model would be important to getting the most value out of them. r1 has confirmed this for me, and for others.

Because it doesn't share its thought traces for review, o1 is a crippled product.

[Quoted tweet]
the utility i am getting from R1 cots far exceeds what i'm getting from its answers. to win me back as a closed competitor your chatbot without cot should beat R1 *with* cot.


2/4
@shaihulud43
absolutely true. without the thinking traces i don't get the same value from even o1 pro. AI is also not at a point where it can do everything itself so human direction and feedback is required, and that can only be done well if we see how it reaches to its conclusions



3/4
@Pehdrew_
you learn way more reading the CoT! the solution is just the cherry on top.



4/4
@SaraElva27468
every censored model is a crippled model








1/11
@doomslide
the utility i am getting from R1 cots far exceeds what i'm getting from its answers. to win me back as a closed competitor your chatbot without cot should beat R1 *with* cot.



2/11
@eliseygusev
i think there is only one argument left for keeping the cots summarized: to not restrict how model reasons. if you show cots to the user you have to also sft that output to make it coherent. what if true agi would not think in understandable for humans way? limiting intelligence because humans won’t understand it seems silly



3/11
@sadasant
cot tokens good 🙂↕️



4/11
@nptacek
same. cots are the most important part about r1 imo. being able to see them is invaluable for seeing how my prompts are interpreted and updating them accordingly



5/11
@voooooogel
same. i find myself reading the CoTs thoroughly and then skimming the output sections to see if there's anything new.



6/11
@TheXeophon
I also like getting the cot tokens I literally pay for



7/11
@loss_gobbler
I feel like the aggregator model should be jointly trained with the reasoner somehow



8/11
@sarimthefirst
A good cot with a proper agi (your own brain) is unbeatable



9/11
@qedgs
It was clear back in their November checkpoint that the alpha was in the CoT

OpenAI should make theirs public, I think we can learn a lot from the O1 CoT



10/11
@kellogh
actually yeah, this is 100% my experience too



11/11
@ChatbitAI
Sounds like R1 is the overachiever in the chatbot class while the others are still figuring out how to tie their shoelaces!





i've been using various reflection prompts to simulate thinking tokens on llama3.1 since last september. thinking about the problem definitely helps since i know when to cancel generation and try again if bad tokens are produced.
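That reflect-then-retry loop can be sketched as a generate/validate/regenerate wrapper. Here `generate` is a hypothetical hook standing in for whatever completion call you make against llama3.1 (not a real API), and the "bad tokens" check is just a crude degenerate-repetition heuristic:

```python
from typing import Callable

# Prompt template that asks the model to reason before answering.
REFLECTION_TEMPLATE = (
    "Think step by step inside <reflection>...</reflection>, "
    "then give only the final answer.\n\nQuestion: {question}"
)

def looks_degenerate(text: str, max_repeat: int = 4) -> bool:
    """Crude 'bad tokens' check: flag long runs of the same word."""
    words = text.split()
    run = 1
    for prev, cur in zip(words, words[1:]):
        run = run + 1 if cur == prev else 1
        if run > max_repeat:
            return True
    return False

def ask_with_reflection(question: str,
                        generate: Callable[[str], str],
                        retries: int = 3) -> str:
    """Prompt for explicit reasoning; regenerate if the output degenerates,
    mirroring the cancel-and-try-again workflow described above."""
    prompt = REFLECTION_TEMPLATE.format(question=question)
    answer = generate(prompt)
    for _ in range(retries):
        if not looks_degenerate(answer):
            break
        answer = generate(prompt)
    return answer
```

In practice you'd cancel a streaming generation as soon as the heuristic fires rather than waiting for the full completion.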
 

IIVI

Superstar
Joined
Mar 11, 2022
Messages
12,181
Reputation
2,904
Daps
42,378
Reppin
Los Angeles





Yup. Basically this:
1*vXJSjIPAiZYtCCFwNky0NA.jpeg
 

GoldenGlove

😐😑😶😑😐
Staff member
Supporter
Joined
May 1, 2012
Messages
59,732
Reputation
5,780
Daps
140,717
I kinda laugh when people bring up data privacy and a Chinese company being behind this... like TikTok isn't a thing. I don't get how OpenAI, Google or Amazon having all of our data is any different from a Chinese company having it (for those with info on the significance, enlighten me).

:hubie:

$200 a month vs free for the same performance is game changing. I can see enterprise-level companies being wary of plugging their services/applications into this, but on an individual level, the cost savings are too much to ignore.

Data is always the most important element at play, our convenience comes with a cost. I haven't used Deepseek yet, but I'm not afraid of using it...

:manny:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,190
Reputation
8,782
Daps
163,883
I kinda laugh when people bring up data privacy and a Chinese company being behind this... like TikTok isn't a thing.

$200 a month vs free for the same performance is game changing. I can see enterprise-level companies being wary of plugging their services/applications into this, but on an individual level, the cost savings are too much to ignore.

Data is always the most important element at play, our convenience comes with a cost. I haven't used Deepseek yet, but I'm not afraid of using it...

:manny:


there are lots of U.S. companies that have been offering deepseek models for api/inference.

perplexity, deepinfra, etc.
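Most of those hosts expose an OpenAI-compatible chat-completions endpoint, so switching to DeepSeek is usually just a base-URL and model-name swap. A minimal sketch of the request body (the model name and DeepInfra URL below are illustrative assumptions — check each provider's docs):

```python
import json

def chat_request(model: str, user_msg: str,
                 temperature: float = 0.6) -> dict:
    """Build an OpenAI-style /chat/completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "temperature": temperature,
    }

# Same payload shape works across providers; only host and model change.
payload = chat_request("deepseek-ai/DeepSeek-R1", "What is 17 * 24?")
body = json.dumps(payload)
# POST `body` to e.g. https://api.deepinfra.com/v1/openai/chat/completions
# with an "Authorization: Bearer <key>" header.
print(body)
```

Because the wire format matches OpenAI's, existing client libraries generally work unmodified against these hosts.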






1/14
@NetmindAi
We’re thrilled to announce the integration of DeepSeek-R1, a reasoning-first AI model rivaling OpenAI-o1.

Breaking barriers as the first Web3 protocol to host this revolutionary LLM—welcome to the future of decentralized intelligence.

Here’s how DeepSeek changes the game ⤵️

[Quoted tweet]
DeepSeek-R1, the community's favourite reinforcement learning model rivaling OpenAI-o1, is now live on NetMind Power.

This milestone brings advanced reasoning capabilities directly to our API, seamlessly powered by NMT in the background.

Explore what’s possible @deepseek_ai ⤵️




2/14
@NetmindAi
2/ What Makes DeepSeek-R1 Special?

DeepSeek-R1 is built with a reinforcement-learning-first approach: it skips traditional training methods, learning through trial and error instead of relying on massive labeled datasets.

[Quoted tweet]
🛠️ DeepSeek-R1: Technical Highlights

📈 Large-scale RL in post-training
🏆 Significant performance boost with minimal labeled data
🔢 Math, code, and reasoning tasks on par with OpenAI-o1
📄 More details: github.com/deepseek-ai/DeepS…

🐋 4/n




3/14
@NetmindAi
3/ Why Reinforcement Learning (RL) is a Game-Changer

Traditional AI needs tons of labeled data—expensive and slow. RL is different: it learns by doing, improving through feedback.

This makes DeepSeek-R1:
✅Data-efficient: Learns with less data.
✅Adaptable: Improves on the fly.
✅Perfect for reasoning: Excels in math, coding, and logic.



4/14
@NetmindAi
4/ Get Started with DeepSeek-R1 on NetMind Power Today

DeepSeek-R1 is now available via API on NetMind Power, temporarily free to access.

🔑 API Access: Ready for builders.
🎮Playground: Coming soon for hands-on testing.

NetMind Power



5/14
@ReystarX
#NMT easily to $100



6/14
@BusinessOskr77
Guys you are everywhere😂
Really good job building in all the important AI developments
#NMT 💎



7/14
@Cryptoexplorerv
#NMT bullish



8/14
@DigiN1nja
#NMT AI King



9/14
@MarkBayaga40093
lets go



10/14
@GustavoSimioni4




11/14
@CryptoDecada
ready to moon!



12/14
@jesus890510
Uffffffff working wth the best



13/14
@EuphoriaXMark
DeepSeek-R1? Sounds like the AI just leveled up! Can’t wait to see how it reasons with my questionable life choices. 😂 Welcome to the future! #PublicAI



14/14
@Glow_Kittyy
In crypto, there’s pump and dump, but with me, it’s just a steady pump of your interest. 📈💋








1/6
@CloudflareDev
We now have the DeepSeek-R1-Distill-Qwen-32B model on Cloudflare Workers AI! Use it to solve math, coding and complex reasoning tasks. It's open source, hosted on Cloudflare servers, and is comparable to OpenAI's o1-mini.

What makes it shine:
🌟 It solves math, coding and complex reasoning tasks
🌟 It thinks out loud – watch its step-by-step logic!
🌟 Self-checks answers before responding

Try it out here: deepseek-r1-distill-qwen-32b · Cloudflare Workers AI docs



2/6
@MrJaneGoodall
go #NET



3/6
@mehmet_ba7
It’s nice to see Cloudflare adding a useful model to its Workers AI.



4/6
@cmiondotdev
Does it support tool calling?



5/6
@waronjc
How about the embed model?



6/6
@Z0HE8
Amazing! @janwilmake you’ll like this













1/12
@svpino
DeepSeek R1 is *the* best model available right now. It's at the level of o1, but you can use it for free, and it's much faster.

A huge leap forward that nobody saw coming. No wonder so many people are throwing tantrums online trying to discredit the Chinese students who built this.

You can use DeepSeek in Visual Studio Code right now:

1. Install the Qodo Gen AI extension
2. Select DeepSeek R1 from their list of models

The Qodo team is hosting DeepSeek on their servers, so none of your data will go to China.

I've been building a Tetris game using DeepSeek, and this is the most impressive model I've seen so far.



https://video.twimg.com/amplify_video/1883881780438265856/vid/avc1/1280x720/TLfQOUwU6gKx0mOG.mp4

2/12
@svpino
Here is a link to the Qodo extension: Qodo Gen: AI Coding Assistant (Codium VS Code) - Code, Test and Review with Confidence, supporting Python, Javascript, Typescript, Java, PHP, Go, and more - Visual Studio Marketplace



3/12
@BalesTJason
Not discrediting those who built it, more so that the cost to build they reported is dubious at best.



4/12
@svpino
Maybe. We'll soon find out for real.



5/12
@ImJayBallentine
I almost expected you to be critical of Deepseek R1. It’s cool that a qualified engineer co-signs its potential.



6/12
@svpino
man, this quality at this speed is good!



7/12
@AmericansFirstX
“For free”

Impressive. Make sure to upload as much about yourself as possible.

For free. 🆓



8/12
@svpino
I'm not using the Chinese-hosted version.



9/12
@sonicshifts
This won't age well.



10/12
@svpino
This doesn't need to age.

Tomorrow, a new model will emerge and become State of the Art. That's the nature of the game.

Today, however, I think DeepSeek R1 is SOTA.



11/12
@nbasandroid
When you say I built..is that really you or Deepseek?



12/12
@svpino
"I built" means "I copied and pasted what DeepSeek told me to."








1/6
@gregosuri
DeepSeek R1 is now available on Akash Chat API.

Free for developer use.

Details 👇

[Quoted tweet]
DeepSeek-R1 and all 6 open-source distilled models are available today in the AkashChat API.

Powered by the Akash Supercloud.

Create an API key in minutes and start building.




2/6
@Aicorehub
How much does it cost to use



3/6
@gregosuri
$0 per 1M tokens



4/6
@wnmnsr
@gregosuri will Akash Chat integrate Qwen?



5/6
@Iceflower_44
Is Akash a subnet of @opentensor #TAO?



6/6
@LooveThyJourney
This will go down as one of the most consequential decisions in decentralized compute.

This collection of SOTA AI models is now available for FREE via the @akashnet_ Chat API:

Akash Chat API

This is a one-of-a-kind opportunity, folks.

Stay humble. Keep innovating 🇺🇸






 

GoldenGlove

😐😑😶😑😐
Staff member
Supporter
Joined
May 1, 2012
Messages
59,732
Reputation
5,780
Daps
140,717


Ironically, this is how capitalism works and it's hilarious to see these AI companies get crushed like this.

Open Source winning is best for consumers and the everyday person.

You better believe these LLM companies are having meetings with clients trying to explain the price difference. At the end of the day, businesses are not going to pay 27x more for something that offers the same performance.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,190
Reputation
8,782
Daps
163,883







1/11
@daniel_mac8
everyone comparing deepseek-r1 to o1

and forgetting about Gemini 2 Flash Thinking

which is better than r1 on every cost and performance metric





2/11
@daniel_mac8
the 1m context length is a gamechanger

you can do things with that context length that no other model will allow you to do



3/11
@daniel_mac8
ok some people pointed out in the replies that Gemini 2 Thinking performs worse compared to r1 on benchmarks like Livebench

so I should correct my original comment by saying:

"performs better on the metrics depicted on this chart"



4/11
@Aleks13053799
Now there is mainly a discussion between the average people who use the site. Namely the mass consumer. And one is free, the other is paid. That's what worries everyone. And judging by the pace and prospects of investments. It is better to get used to DeepSeek now.



5/11
@daniel_mac8
Gemini 2 Flash Thinking is free (for now, not sure it will remain the case)



6/11
@BobbyGRG
team already testing this in Cursor! lets see how it performs in real life :smile:



7/11
@daniel_mac8
same here - started using it in my coding workflows

anecdotally, works great!



8/11
@BalesTJason
They care about how much it cost to get there, which china probably just lied about.



9/11
@daniel_mac8
mmmm could be

can't know for sure



10/11
@GBR_the_builder




11/11
@daniel_mac8
just the facts




 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,190
Reputation
8,782
Daps
163,883


1/23
@jeremyphoward
How could anyone have seen R1 coming?

Just because deepseek showed DeepSeek-R1-Lite-Preview months ago, showed the scaling graph, and said they were going to release an API and open source… how could anyone have guessed?

[Quoted tweet]
🌟 Inference Scaling Laws of DeepSeek-R1-Lite-Preview
Longer Reasoning, Better Performance. DeepSeek-R1-Lite-Preview shows steady score improvements on AIME as thought length increases.




2/23
@nagaraj_arvind
2017 fastAI forums > today's AI twitter



3/23
@jeremyphoward
That’s for sure



4/23
@nonRealBrandon
Nancy Pelosi and Jim Cramer knew.



5/23
@MFrancis107
Not Deepseek specific. But models are continuously getting cheaper and more efficient to train. That's how it's been going and will continue to go.



6/23
@centienceio
i mean they did show deepseek r1 lite preview months ago and talked about releasing an api and open sourcing it so it doesnt seem that hard to guess that r1 was coming





7/23
@vedangvatsa
Read about Liang Wenfeng, the Chinese entrepreneur behind DeepSeek:

[Quoted tweet]
Liang Wenfeng - Founder of DeepSeek

Liang was born in 1985 in Guangdong, China, to a modest family.

His father was a school teacher, and his values of discipline and education greatly influenced Liang.

Liang pursued his studies at Zhejiang University, earning a master’s degree in engineering in 2010.

His research focused on low-cost camera tracking algorithms, showcasing his early interest in practical AI applications.

In 2015, he co-founded High-Flyer, a quantitative hedge fund powered by AI-driven algorithms.

The fund grew rapidly, managing over $100 billion, but he was not content with just the financial success.

He envisioned using AI to solve larger, more impactful problems beyond the finance industry.

In 2023, Liang founded DeepSeek to create cutting-edge AI models for broader use.

Unlike many tech firms, DeepSeek prioritized research and open-source innovation over commercial apps.

Liang hired top PhDs from universities like Peking and Tsinghua, focusing on talent with passion and vision.

To address US chip export restrictions, Liang preemptively secured 10,000 Nvidia GPUs.

This strategic move ensured DeepSeek could compete with global leaders like OpenAI.

DeepSeek's AI models achieved high performance at a fraction of the cost of competitors.

Liang turned down a $10 billion acquisition offer, stating that DeepSeek’s goal was to advance AI, not just profit.

He advocates for originality in China’s tech industry, emphasizing innovation over imitation.

He argued that closed-source technologies only temporarily delay competitors and emphasized the importance of open innovation.

Liang credits his father’s dedication to education for inspiring his persistence and values.

He believes AI should serve humanity broadly, not just the wealthy or elite industries.




8/23
@0xpolarb3ar
AI is a software problem now with current level of compute. Software can move much faster because it doesn't have to obey laws of physics



9/23
@ludwigABAP
Jeremy on a tear today



10/23
@AILeaksAndNews
It was also bound to happen eventually



11/23
@jtlicardo
Because the amount of hype and semi-true claims in AI nowadays makes it hard to separate the wheat from the chaff



12/23
@imaurer
What is April's DeepSeek that is hiding in plain sight?



13/23
@TheBananaRat
So much innovation AI innovation is coy, it’s all good for NVIDIA as they control the software and hardware stack for AI

For example:
Versus .AI🇨🇦 just outperformed DeepSeek and ChatGPT 👇

🚨AI Shake-Up: Verses AI (CBOE:VERS) Leaves DeepSeek and ChatGPT in the Dust!🚨

Verses AI a 🇨🇦 Company. Just Outperformed ChatGPT & DeepSeek latest LLM models

AI is evolving rapidly, and Verses AI 🇨🇦 is leading the way. Recent performance benchmarks show that Verses’ Genius platform has surpassed DeepSeek, ChatGPT, and other top LLMs, offering superior reasoning, prediction, and decision-making capabilities.

Unlike traditional models, Genius continuously learns and adapts, solving complex real-world challenges where others fall short. For example, its ability to detect and mitigate fraud at scale demonstrates its practical value in high-impact applications.

As AI innovation accelerates, Verses AI is setting a new standard—one built on intelligence that goes beyond language processing to real-time, adaptive decision-making.

Versus AI (CBOE:VERS) is OneToWatch

The🍌🐀has spoken.



14/23
@suwakopro
I used it when R1 lite was released, and I never expected it to have such a big impact now.



15/23
@din0s_
i thought scaling laws were dead, that's what I read on the news/twitter today



16/23
@rich_everts
Hey Jeremy, have you thought of ways yet to better optimize the RL portion of the Reasoning Agent?



17/23
@JaimeOrtega
I mean stuff doesn't happen until it happens I guess



18/23
@inloveamaze
flew under the public eye



19/23
@Raviadi1
I expected it to happen shortly after R1-Lite. But what I didn't expect was that it would be open source + free and almost on par with o1.



20/23
@sparkycollier
😂



21/23
@medoraai
I think we saw search optimization was the secret to many of the projects that surprised us last year. But the new algo, Group Relative Policy Optimization (GRPO), was surprising. Really a unique optimization. I can see some real benefits to hiring pure math brains
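For anyone curious about the GRPO algorithm mentioned above: its core move is to drop the learned value model and use a group baseline instead — sample several completions per prompt, score them, and use each reward's z-score within the group as its advantage. A bare-bones sketch of just that advantage computation (the clipping and KL terms of the full objective are omitted):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards: list[float],
                              eps: float = 1e-8) -> list[float]:
    """Normalize each sampled completion's reward against its group:
    A_i = (r_i - mean(group)) / (std(group) + eps)."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four completions for one prompt, scored by a verifier (1 = correct).
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
# ≈ [1.0, -1.0, -1.0, 1.0]
```

Normalizing within the group means correct answers are rewarded relative to their siblings, with no separate critic network to train.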



22/23
@broadfield_dev
I think every single researcher and developer is far less funded than OpenAI, which means they have to innovate.

If we think that DeepSeek is an anomaly, then we are destined to be fooled again.



23/23
@kzSlider
lol ML people are so clueless, this is the one time they didn't trust straight lines on a graph










1/11
@deepseek_ai
🚀 DeepSeek-R1-Lite-Preview is now live: unleashing supercharged reasoning power!

🔍 o1-preview-level performance on AIME & MATH benchmarks.
💡 Transparent thought process in real-time.
🛠️ Open-source models & API coming soon!

🌐 Try it now at http://chat.deepseek.com
#DeepSeek





2/11
@deepseek_ai
🌟 Impressive Results of DeepSeek-R1-Lite-Preview Across Benchmarks!





3/11
@deepseek_ai
🌟 Inference Scaling Laws of DeepSeek-R1-Lite-Preview
Longer Reasoning, Better Performance. DeepSeek-R1-Lite-Preview shows steady score improvements on AIME as thought length increases.





4/11
@abtb168
congrats on the release! 🥳



5/11
@SystemSculpt
The whale surfaces again for a spectacular show.



6/11
@leo_agi
will you release a tech report?



7/11
@paul_cal
Very impressive! Esp transparent CoT and imminent open source release

I get it's hard to compare w unreleased o1's test time scaling without an X axis, but worth noting o1 full supposedly pushes higher on AIME (~75%)

What's with the inconsistent blue lines though?





8/11
@marvijo99
Link to the paper please



9/11
@lehai0609
You are GOAT. Take my money!!!



10/11
@AtaeiMe
Open source soon that later pls! Is the white paper coming as well?



11/11
@lehai0609
So your 50 limit is for one day, isn't it?




 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,190
Reputation
8,782
Daps
163,883



China got Massa under full-court pressure LMFAOOO

:mjlol::russ::dead:





i have to check how this compares to QwQ 32B but this is good :wow:
 