bnew

Veteran
Joined
Nov 1, 2015
Messages
59,183
Reputation
8,772
Daps
163,824



1/31
@DrJimFan
That a *second* paper dropped with tons of RL flywheel secrets and *multimodal* o1-style reasoning is not on my bingo card today. Kimi's (another startup's) and DeepSeek's papers remarkably converged on similar findings:

> No need for complex tree search like MCTS. Just linearize the thought trace and do good old autoregressive prediction;
> No need for value functions that require another expensive copy of the model;
> No need for dense reward modeling. Rely as much as possible on ground truth and the end result.
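To make that last point concrete, here is a minimal sketch of a rule-based, verifiable reward, assuming a \boxed{...} final-answer format: the whole sampled trace is scored by its end result alone, with no value network and no dense per-step reward model. This is an illustration of the idea, not code from either paper.

import re

def extract_final_answer(trace: str):
    # Pull the last \boxed{...} answer out of a linearized CoT trace.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", trace)
    return matches[-1].strip() if matches else None

def verifiable_reward(trace: str, ground_truth: str) -> float:
    # 1.0 if the final answer matches the ground truth, else 0.0.
    return 1.0 if extract_final_answer(trace) == ground_truth else 0.0

# The whole trace is judged only by its end result:
trace = "We need 2 + 2. Adding gives 4, so the answer is \\boxed{4}."
print(verifiable_reward(trace, "4"))  # 1.0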

Differences:

> DeepSeek does AlphaZero approach - purely bootstrap through RL w/o human input, i.e. "cold start". Kimi does AlphaGo-Master approach: light SFT to warm up through prompt-engineered CoT traces.
> DeepSeek weights are MIT license (thought leadership!); Kimi does not have a model release yet.
> Kimi shows strong multimodal performance (!) on benchmarks like MathVista, which requires visual understanding of geometry, IQ tests, etc.
> Kimi paper has a LOT more details on the system design: RL infrastructure, hybrid cluster, code sandbox, parallelism strategies; and learning details: long context, CoT compression, curriculum, sampling strategy, test case generation, etc.

Upbeat reads on a holiday!





2/31
@DrJimFan
Whitepaper link: Kimi-k1.5/Kimi_k1.5.pdf at main · MoonshotAI/Kimi-k1.5



3/31
@brendanigraham
DeepSeek-R1 actually does use SFT btw - it's DeepSeek-R1-Zero that doesn't.

For DeepSeek-R1, they SFT on a small amount of nicely-formatted reasoning data (some cleaned up from DeepSeek-R1-Zero, some from few-shot prompting other models), then do RLVR (RL with verifiable rewards) plus a language-consistency signal.

They then use this resulting model to generate a bunch of additional reasoning traces, gather a bunch of other kinds of data (e.g. creative writing), and do some filtering with DSv3.

They then SFT from base again using this data, then do a bunch of RL with signals from rule-based rewards and human preferences (via a reward model).
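Read end to end, that is a four-stage recipe. Here is a hypothetical sketch of the staging described above; every function is an illustrative stub, not DeepSeek's actual code or API:

from typing import Callable, List

Model = str          # stand-in for real model weights
Dataset = List[str]  # stand-in for real training data

def sft(model: Model, data: Dataset) -> Model:
    return f"sft({model}, n={len(data)})"      # stub

def rl(model: Model, reward: Callable[[str], float]) -> Model:
    return f"rl({model}, {reward.__name__})"   # stub

def rule_based_reward(sample: str) -> float:
    return 1.0   # placeholder for exact-match / test-case rewards

def preference_reward(sample: str) -> float:
    return 0.5   # placeholder for a human-preference reward model

# Stage 1: light SFT on a small set of well-formatted reasoning traces.
cold_start = sft("base", ["cleaned R1-Zero trace", "few-shot trace"])
# Stage 2: RLVR plus a language-consistency signal.
reasoner = rl(cold_start, rule_based_reward)
# Stage 3: harvest new traces from the reasoner, add other kinds of data
# (e.g. creative writing), filter with DeepSeek-V3 (filtering not shown).
new_data = ["harvested reasoning trace", "creative writing sample"]
# Stage 4: SFT from the *base* model again on that data, then RL mixing
# rule-based rewards with the preference reward model.
final = rl(sft("base", new_data), preference_reward)
print(final)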



4/31
@DrJimFan
That’s right



5/31
@Kimi_Moonshot
Kudos! Thanks for sharing! Yep, we're *multimodal* o1-style reasoning ✌️



6/31
@roydanroy
What do you mean "no need for"? This model is definitely inferior to o1.



7/31
@JFPuget
R1 has no need for a value model because they do RL in domains where value can be computed via some rules (accuracy of math result, code that compiles and produces the right output).
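For the code domain, that rule can literally be "run it and check". A minimal sketch, assuming plain subprocess execution rather than whatever sandboxed runner the labs actually use:

import subprocess

def code_reward(program: str, stdin: str, expected_stdout: str) -> float:
    # 1.0 if the candidate program runs cleanly and prints the expected
    # output, else 0.0 -- the reward is computed by a rule, not learned.
    try:
        result = subprocess.run(
            ["python", "-c", program],
            input=stdin, capture_output=True, text=True, timeout=5,
        )
    except subprocess.TimeoutExpired:
        return 0.0
    if result.returncode != 0:
        return 0.0  # crashed or exited with an error
    return 1.0 if result.stdout.strip() == expected_stdout.strip() else 0.0

print(code_reward("print(int(input()) * 2)", "21", "42"))  # 1.0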



8/31
@joenorton
Fascinating stuff



9/31
@mark_k
Very excited to hear about the multi-modal reasoning



10/31
@TheAIVeteran
Saves me a few minutes, thanks for the info!



11/31
@C_Quemeneur
Not gonna lie, I was expecting to see more MCTS in these models.



12/31
@vedangvatsa
Overcomplicating RL with tree search and value functions might actually be the smarter route. Simple predictions can’t capture the nuances of complex decisions. Relying solely on groundtruth limits adaptability. The richness of human input is invaluable; cold starts might miss critical context.



13/31
@elichen
This contains a much better description of the acceleration that's currently happening: "Reinforcement Learning with LLMs". Not sure why we are talking about test-time compute scaling...



14/31
@AILeaksAndNews
Loving the MLK Day presents



15/31
@phi_wxyz
which paper do you recommend reading first?



16/31
@rudiranck
Seems like China brought its A-game and elbows for 2025.



17/31
@nobody_qwert
DeepSeek is good. I cancelled my $200 GPT Pro plan.

[Quoted tweet]
ChatGPT o1 Pro vs. DeepSeek R1

Asked to implement a rotating triangle with red ball in it.
Left: OpenAI, right: DeepSeek


https://video.twimg.com/ext_tw_video/1881619983639117824/pu/vid/avc1/1280x720/d9wsM0oT35AZXvUY.mp4

18/31
@sonicshifts
It's a knockoff. They even copied Copilot's reasoning language.





19/31
@the__sonik
@mirzaei_mani



20/31
@c__bir
I wonder if MCTS and graph abstractions might become useful or even needed at higher capability levels? Intuitively it seems like it 🤔 Who wouldn't want a system that generates provably safe long-horizon task-execution programs? For that, the AI system needs to generate the symbolic abstraction by itself at inference time: not just use symbolic abstractions, but come up with helpful ones on its own. The number of symbolic abstractions should then grow with inference time, asymptotically, to a manageable size. Best thing: it's human-readable, and one can create 100% accurate synthetic data with it, which means valuable signal to noise.

@karpathy @aidan_mclau
@WolframResearch @stephen_wolfram



21/31
@Bill_ZRK
amazing



22/31
@Extended_Brain
It cannot generate images, but can extract important features of images, such as from geometric problems



23/31
@GlobalFadi
Kimi's and DeepSeek's findings converging on linearization rather than MCTS is mind-blowing! This could massively streamline our AI and RL research.



24/31
@MarMa10134863




25/31
@monday_chen
this day would be perfect if this is open sourced too



26/31
@Qi2ji4Zhe1nni1
a second rl flywheel paper has hit the github



27/31
@wrhall
> No need for complex tree search like MCTS. Just linearize the thought trace and do good old autoregressive prediction

Can you apply this back to chess/go? I need to read the paper to fully grok it, I think



28/31
@Ixin75293630175
> No need for complex tree search like MCTS. Just linearize the thought trace and do good old autoregressive prediction
Because when humans do MCTS they DO remember all discarded branches!



29/31
@2xehpa
Why do all the open models always converge to the same level as the currently most advanced *released* model? Why is it as good as o1 and not o3? I suspect everybody uses output from frontier models in the training somehow.



30/31
@llm_guruji
Fascinating insights! It's intriguing to see how both approaches converge on simpler methods. Looking forward to exploring the system design details in Kimi's paper.



31/31
@red_durag
first difference is not quite right. deepseek did the alphazero approach for r1-zero, not for r1. r1 was fine-tuned on cot data before RL









1/4
@deanwball
DeepSeek r1 takeaways for policy:
1. Chinese labs will likely continue to be fast followers in terms of reaching similar benchmark performance to US models.
2. The impressive performance of DeepSeek's distilled models (smaller versions of r1) means that very capable reasoners will continue to proliferate widely and be runnable on local hardware, far from the eyes of any top-down control regime (including the US diffusion rule).
3. Open models are going to have strategic value for the US and we need to figure out ways to get more frontier open models out to the world (we rely exclusively on Meta for this right now, which, while great, is just one firm). Why do OpenAI/Anthropic not open-source their older models? What would be the harm?



2/4
@muchos_grande
because they're literally piggybacking off of American technology, as usual.



3/4
@RichardAGetz
@xai where are you in this game?

Thank you @AIatMeta for all your open source AI, and I hope you leapfrog R1 soon.



4/4
@MechanicalDao
What is the strategic value in releasing open models?




 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,183
Reputation
8,772
Daps
163,824











1/12
@ryangrim
This doesn’t seem to be getting enough attention. The Silicon Valley social contract forced on the public by Obama and then Trump and then Biden (minus Lina Khan) and now Trump was straightforward: We will let these bros become the richest people in human history and in exchange they will develop a tech industry that makes the U.S. dominant for a century.

They did the first part, then built monopolies to try to keep out competition rather than continue to innovate at a top level, and then got out-competed by Chinese companies in both AI and social media. They are the losers we always thought they were — and now so are we.

[Quoted tweet]
For those confused why the US state and Silicon Valley are having a meltdown on twitter today.

China released multiple AI models that are 50x more efficient than the best American AI models and made them open source, ruining the AI market


2/12
@ryangrim
Now they’re gonna spend $500B gobbling up energy in Texas to build a product China is making for $5B and giving away basically free. Lemme know how that goes.

Meanwhile the Texas grid breaks when it gets cold.



3/12
@ryangrim
U.S. tech oligarchs and the actual president of the United States spending time and energy doing crypto nonsense while China releases an AI that kicks our ass is such a perfect metaphor while also being literally what’s happening



4/12
@ryangrim
Blockchain is incredible tech.

Bro, it’s garbage.



5/12
@ryangrim
Every empire fumbles the bag eventually so we shouldn’t be too hard on ourselves but yeah, we should sue for peace and coexistence while we have a decent hand still



6/12
@ryangrim
Here’s an explainer

[Quoted tweet]
🇨🇳🇺🇸 These AI models from China are amazing!

- Gianluca Mauro
#OpenAI
#Deepseek


https://video.twimg.com/ext_tw_video/1882791165025468416/pu/vid/avc1/576x1024/-h2N6dmj524GHJvb.mp4

7/12
@ryangrim
Sign up to get our free newsletter Drop Site News | Substack



8/12
@dfwaaronlayman
This is an excellent point. We have a lot of greedy, short-sighted tech and crypto shills infiltrating the Texas legislature to push their agenda.
It's already putting a strain on the grid and helping to push up rates.



9/12
@MichaelWatman
You have that right.



10/12
@githop
i agree with the larger point but it should be noted that DeepSeek is probably lying about their training budget.

allegedly they have on hand something like 50k H100s, which they cannot admit since Nvidia is banned from selling those chips to Chinese firms



11/12
@JamesNeilMeece
Hope this happens

[Quoted tweet]
Good. We desperately need this.


12/12
@RenaissanceLib
While I like you taking on the "tech bros", I feel like this oversimplifies the AI situation in an unproductive way.

China is not ahead of the US in AI.

The focus should be that the $500B needs to be spent on improving the energy grid for everyone, not just private data centers







If DeepSeek really releases AGI as open source to the world, U.S. hegemony is over.
 

Wargames

One Of The Last Real Ones To Do It
Joined
Apr 1, 2013
Messages
26,040
Reputation
4,838
Daps
98,222
Reppin
New York City
Sick work
The AI market low-key is looking like







If DeepSeek really releases AGI as open source to the world, U.S. hegemony is over.

I still think AGI is ducktales. What we have seen from AI, for me, positions it as a replacement for Microsoft Office and other programs where users had to know backend coding to manipulate the program.
 

Micky Mikey

Veteran
Supporter
Joined
Sep 27, 2013
Messages
16,204
Reputation
3,089
Daps
91,213










If DeepSeek really releases AGI as open source to the world, U.S. hegemony is over.

It's bound to happen at this rate. And watch them blame wokeness.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,183
Reputation
8,772
Daps
163,824
The AI market low-key is looking like

I still think AGI is ducktales. What we have seen from AI, for me, positions it as a replacement for Microsoft Office and other programs where users had to know backend coding to manipulate the program.

You haven't seen examples of non-coders getting useful code out of LLMs?
 

Wargames

One Of The Last Real Ones To Do It
Joined
Apr 1, 2013
Messages
26,040
Reputation
4,838
Daps
98,222
Reppin
New York City
You haven't seen examples of non-coders getting useful code out of LLMs?
That’s great, but I still see it as a tool like the Microsoft suite, even with that added functionality. The idea of AGI seems like too high of a burden. We don’t fully understand our own brains, so how will we replicate them on a cloud server?
 

Micky Mikey

Veteran
Supporter
Joined
Sep 27, 2013
Messages
16,204
Reputation
3,089
Daps
91,213
How can they blame wokeness?
They'll say US tech companies were too focused on wokeness and DEI and didn't hire the best people based on 'merit'. We know this is far from the truth, but this is the right's playbook. If the US loses the AI race, and thus its global hegemony, there will have to be a scapegoat for it.
 

Micky Mikey

Veteran
Supporter
Joined
Sep 27, 2013
Messages
16,204
Reputation
3,089
Daps
91,213
That’s great, but I still see it as a tool like the Microsoft suite, even with that added functionality. The idea of AGI seems like too high of a burden. We don’t fully understand our own brains, so how will we replicate them on a cloud server?
AGI doesn't need to mimic human brains. And it probably won't.
It just needs to perform at a higher level than humans in most domains. I'm still on the fence about whether AGI will be achieved with LLMs, but somewhere along the way AI will surpass humans in general intelligence.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
59,183
Reputation
8,772
Daps
163,824





1/64
@Kimi_Moonshot
🚀 Introducing Kimi k1.5 --- an o1-level multi-modal model

-SOTA short-CoT performance, outperforming GPT-4o and Claude Sonnet 3.5 on 📐AIME, 📐MATH-500, 💻 LiveCodeBench by a large margin (up to +550%)
-Long-CoT performance matches o1 across multiple modalities (👀MathVista, 📐AIME, 💻Codeforces, etc)

Tech report: GitHub - MoonshotAI/Kimi-k1.5

Key ingredients of k1.5
-Long context scaling. Up to 128k tokens for RL generation. Efficient training with partial rollouts.
-Improved policy optimization: online mirror descent, sampling strategies, length penalty, and others.
-Multimodality. Joint reasoning over text and vision.
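One of those ingredients, the length penalty, is easy to picture: shape the reward so that overly long traces are discounted and the CoT does not balloon. A toy sketch with invented weights, not Kimi's actual formula:

def shaped_reward(correct: bool, n_tokens: int, budget: int = 128_000) -> float:
    # Correct answers earn 1.0, minus a penalty that grows as the trace
    # approaches the token budget (the 0.5 weight here is made up).
    length_frac = min(n_tokens / budget, 1.0)
    return (1.0 if correct else 0.0) - 0.5 * length_frac

print(shaped_reward(True, 4_000))    # ~0.98: short and correct
print(shaped_reward(True, 120_000))  # ~0.53: correct but bloated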





2/64
@Cryp70
Chat replies are in English but the rest of your site is Chinese. Is there a full English version available?



3/64
@Kimi_Moonshot
Coming soon, stay tuned!



4/64
@AntDX316
Make an English version app for iOS and Android.



5/64
@winchest_stella
Just tried it. It's impressive, especially the search and vision capabilities. Big congrats to the team!



6/64
@Cryp70
Impressive scores, nice benchmark for code. DeepSeek is my go-to, but I see it's time to give Kimi a test👍



7/64
@inikhil__
Seems like China is winning.



8/64
@Saboo_Shubham_
This is awesome progress. Keep it coming.



9/64
@McGee_noodle
🫡



10/64
@CallMeSam89
This is huge!

Please bench against r1 as well.



11/64
@mark_k
"Joint reasoning over text and vision."

OMG this is huge. I wonder if it could be extended to other modalities too, e.g. audio?



12/64
@4K_Everyday
What the 😭🫡



13/64
@jrabell0
Where is GPQA?



14/64
@itsPaulAi
Two Chinese o1 models released on the same day? It's speeding up!



15/64
@Grynn
Prices, param counts, open-source? open-weights?



16/64
@DrJimFan
Love it. Keep up the great work!



17/64
@senb0n22a
Very strong search capabilities. I forgot other AIs aren't allowed to search Twitter; otherwise it might have been the best search parser for now. It can process 100+ web pages in one query, compared to others capping at 25-50. Instruction following isn't as strong as DeepSeek/Grok 2, but for web research, I could recommend this one.



18/64
@hasantoxr
Wow this is super cool



19/64
@iamfakhrealam
I have recently installed the @deepseek_ai application and found it to be exceptionally amazing…

Could you kindly provide me with the link to the @Kimi_ai_ application?



20/64
@nisten
wait wut, ok this i need to test



21/64
@CodeByPoonam
Wow, another Chinese o1 model outperforming ChatGPT.



22/64
@amoussouvichris
Amazing results, your team did an amazing job!



23/64
@praveenjothi99
why the mobile verification? almost no one asks!



24/64
@rohanpaul_ai
Beautiful..



25/64
@acharya_aditya2
Will we get open weights ??



26/64
@SmartFlowAITeam
Great !!!



27/64
@rezkhere
That's a powerful model ✌️



28/64
@SkyBlueHarbor
English please, I'm excited to try it out



29/64
@jseles11
this and a Mac mini is all you really need



30/64
@AILeaksAndNews
What a day for Chinese AI



31/64
@TechByMarkandey
Seems amazing, can we connect?

I cannot DM you



32/64
@Cory29565470
Where is @GoogleAI?



33/64
@SaquibOptimusAI
Oh, bro. Another one.
"Make SOTA AI Cheap Again".
Awesome.



34/64
@DuckWithCup
I tried Kimi before and it’s amazing. Thank you Team.



35/64
@daily_ai_takes
Great work! Exciting times ahead



36/64
@CJ_Wolff
Is there an API?



37/64
@DhruvmehtaRps
Where are the other benchmarks?



38/64
@MuchMore2It
Can you add it to @OpenRouterAI?



39/64
@Pedram_virus
When will it be possible to log in with Google? And when will full support for the English language be available? Because it is still in Chinese.



40/64
@Maeelk
Do you plan to open source it as @deepseek_ai did? 😊 @huggingface still has a few TB available, I guess.



41/64
@FoundTheCode
o1 models everywhere, we're soo back



42/64
@thecute_8
This is the official account of a homegrown Chinese AI. Here to show some support!



43/64
@ArpanTripathi20
@untitled01ipynb “Mr. President, a second o1-level model has dropped,” with Sam Altman's face swapped onto George Bush's



44/64
@wojtess
Where can I find weights?



45/64
@DavidSZDahan
For Kimi's team: if this model will not become open source, like this tweet or reply with a "."



46/64
@JennyZhang6989
@Kimi_Moonshot Where can we use short CoT in Kimi?



47/64
@DavidSZDahan
Where can we use it?



48/64
@txhno
it's christmas



49/64
@Ttkouhe
When is the release? I wanna try!!



50/64
@URUBONZ_
Is your Google login coming anytime soon? I have been unable to get SMS to send a confirmation, and I'd love to try the new version.



51/64
@realmrfakename
Cool! Any plans to open source?



52/64
@_HARVEY__DENT_
Good grief



53/64
@Angelov_Sta
Why are the o1 benchmarks so low? In the DeepSeek R1 comparisons, o1 scores higher than what's shown here.



54/64
@playfuldreamz
Read the room



55/64
@SenougaharA
Looks good tbh. Just bad timing maybe. Still all the best because it does look good



56/64
@rose567888
🔥



57/64
@FyruzOne
How does it do on GPQA Diamond?



58/64
@the__sonik
Why can't we sign up on the website using Google? Is access restricted only to people in China?



59/64
@dabobo0496
Keep it up!



60/64
@Jane1374555767
There should be a reply in Simplified Chinese under this post.



61/64
@SonyxEth
Is there an English version?



62/64
@TadiwaClyde
Open source?



63/64
@tenmillioncoins
Can I download this via Ollama search?



64/64
@bruce_x_offi
Are you planning to open-source it?




1/22
@Kimi_Moonshot
Kimi k1.5: The Multimodal Reasoning Model
- Available now on Kimi.ai (“helping you see a bigger world”) 🦄

💡 What can Kimi k1.5 do?

🔹 Image to Code: Convert images into structured code and insights
🔹 GeoGuessr: Identify and pinpoint locations in geography games like a pro 🌍
🔹 Visual Confusion Identification: Distinguish between visually confusing objects (like muffins vs. Chihuahuas)
🔹 Color & Quantity Recognition: Detect colors and accurately count items in images.

🌐 Available now on Kimi.ai (“helping you see a bigger world”)! Experience it today!





2/22
@Kimi_Moonshot
More to Discover with Kimi k1.5

🔹 Image to Chart: Transform visual data into clean, understandable charts
🔹 Brand Identification: Recognize and identify brands from logos or product images

🌐 Available now on Kimi.ai (“helping you see a bigger world”)





3/22
@TypesDigital
Welcome to the AI park. Can you add email access for an easier login?



4/22
@ABKfettuccine
Waiting for you guys to finish fine-tuning, as stated in the previous post



5/22
@bingzzy
@georainbolt coming straight for you!



6/22
@XIIIhellasad
This could be the next best thing but it needs something to run code like Claude’s Artifacts!!!!



7/22
@Splendid_0823
It is indeed impressive, but there is a need for improvement in the UI. The mobile app and the Chrome extension should be at least in English. Additionally, the default language output for the extension should be in English to enhance its usability.



8/22
@LounasGana
Pretty cool, thanks!



9/22
@NecnoTv
Open source please



10/22
@Whatevercrypto
Is there, or will there soon be, an API?



11/22
@Soxlkfk
Model is great but UI is not good looking. You need a 10x better frontend engineer.



12/22
@asynchronope
API test?



13/22
@YounesAka
Do you offer any APIs for devs?



14/22
@Ixin75293630175
guys, please provide API access to OpenRouter



15/22
@AstralPrime999
Any updates on the App?



16/22
@Kodurubhargav1
Keep shaking.



17/22
@Anmolspace
All is good, but you have some explaining to do here about logging in. I got OTPs from two different numbers on WhatsApp. The first OTP didn't work and the second one did. How can the correct OTP not work? There is also a link in one of those WhatsApp profiles that looks suspicious.





18/22
@MJyy3777
When will it be launched on the app?



19/22
@f0rstman
Wow, Kimi k1.5 sounds like a multitasking wizard! Imagine if it could also help us identify which pizza toppings are worth the calories! 🍕😂 #PublicAI



20/22
@AlekBiesaga
It appears the site broke down



21/22
@The_Global_Soul
It’s fun to use, will get better. A native app or API will be great.

[Quoted tweet]
@Kimi_Moonshot is a fun product, gets some things right and some wrong (confidently). I uploaded this @ManUtd picture and asked it to identify the players. It reasoned and found the right ones, but also wrongly identified Ronaldo, Pogba, etc. It will get better with time




22/22
@kisana0290
Kimi.ai (“helping you see a bigger world”), good luck with this and I hope you succeed.





1/11
@_akhaliq
Introducing Kimi k1.5

an o1-level multi-modal model

-SOTA short-CoT performance, outperforming GPT-4o and Claude Sonnet 3.5 on 📷AIME, 📷 LiveCodeBench by a large margin (up to +550%)

-Long-CoT performance matches o1 across multiple modalities (📷MathVista, 📷Codeforces, etc) Tech report: GitHub - MoonshotAI/Kimi-k1.5

Key ingredients of k1.5 -Long context scaling. Up to 128k tokens for RL generation. Efficient training with partial rollouts.

-Improved policy optimization: online mirror descent, sampling strategies, length penalty, and others.

-Multimodality. Joint reasoning over text and vision.





2/11
@_akhaliq
github: GitHub - MoonshotAI/Kimi-k1.5



3/11
@turbotardo
How many parameters?



4/11
@Gerry
If it is the one that is posted here (Kimi.ai, “helping you see a bigger world”) then it is actually very good! I have this one test that gives me a pretty good idea of how useful an LLM will be for coding and logical reasoning, and of how much or little it hallucinates. Sonnet does OK; o1 (standard) did horribly. The model on the above site didn't get everything correct but was damn close, and impressive.



5/11
@Gdgtify
Very interesting, though the online interface is a work in progress right now.





6/11
@WebstarDavid
too much awesome in one day, can't keep up



7/11
@alamshafil
We got DeepSeek and now this!



8/11
@seo_leaders
Very nice! The new open-source LLMs are coming so fast. It's amazing for us developers.



9/11
@risphereeditor
Looks good!



10/11
@AILeaksAndNews
Looks impressive



11/11
@David_Snoble
What!? This and R1 on the same day?
 

cobra

Superstar
Supporter
Joined
Apr 24, 2017
Messages
11,438
Reputation
-1,213
Daps
50,288
Nǐ hǎo shìjiè (Hello, world)

i didn't like DeepSeek's thinking - feels like spam - just give me the answer
 