bnew

Veteran · Joined Nov 1, 2015 · Messages: 61,800 · Reputation: 9,318 · Daps: 169,725

In Brief

Posted:

9:16 AM PST · December 26, 2024

Microsoft CEO Satya Nadella speaks during the OpenAI DevDay event on November 06, 2023 in San Francisco
Image Credits: Justin Sullivan / Getty Images



Microsoft and OpenAI have a financial definition of AGI: Report​


Microsoft and OpenAI have a very specific, internal definition of artificial general intelligence (AGI) based on the startup’s profits, according to a new report from The Information. And by this definition, OpenAI is many years away from reaching it.

The two companies reportedly signed an agreement last year stating that OpenAI will have achieved AGI only when it develops AI systems that can generate at least $100 billion in profits. That's far from the rigorous technical and philosophical definition of AGI many expect.

This year, OpenAI is reportedly set to lose billions of dollars, and the startup tells investors it won’t turn a profit until 2029.

This is an important detail because Microsoft loses access to OpenAI’s technology when the startup reaches AGI, a nebulous term that means different things to everyone. Some have speculated OpenAI will declare AGI sooner rather than later to box out Microsoft, but this agreement means Microsoft could have access to OpenAI’s models for a decade or more.

Last week, some debated whether OpenAI’s o3 model was a meaningful step toward AGI. While o3 may perform better than other AI models, it also comes with significant compute costs, which bodes ill for OpenAI and Microsoft’s profit-centric definition of AGI.
 

bnew


OpenAI’s o3 suggests AI models are scaling in new ways — but so are the costs​


Maxwell Zeff

4:08 PM PST · December 23, 2024



Last month, AI founders and investors told TechCrunch that we’re now in the “second era of scaling laws,” noting how established methods of improving AI models were showing diminishing returns. One promising new method they suggested could keep gains was “test-time scaling,” which seems to be what’s behind the performance of OpenAI’s o3 model — but it comes with drawbacks of its own.

Much of the AI world took the announcement of OpenAI’s o3 model as proof that AI scaling progress has not “hit a wall.” The o3 model does well on benchmarks, significantly outscoring all other models on a test of general ability called ARC-AGI, and scoring 25% on a difficult math test that no other AI model scored more than 2% on.

Of course, we at TechCrunch are taking all this with a grain of salt until we can test o3 for ourselves (very few have tried it so far). But even before o3’s release, the AI world is already convinced that something big has shifted.

The co-creator of OpenAI’s o-series of models, Noam Brown, noted on Friday that the startup is announcing o3’s impressive gains just three months after the startup announced o1 — a relatively short time frame for such a jump in performance.

We announced @OpenAI o1 just 3 months ago. Today, we announced o3. We have every reason to believe this trajectory will continue. pic.twitter.com/Ia0b63RXIk

— Noam Brown (@polynoamial) December 20, 2024
“We have every reason to believe this trajectory will continue,” said Brown in a tweet.

Anthropic co-founder Jack Clark said in a blog post on Monday that o3 is evidence that AI "progress will be faster in 2025 than in 2024." (Keep in mind that it benefits Anthropic — especially its ability to raise capital — to suggest that AI scaling laws are continuing, even if Clark is complimenting a competitor.)

Next year, Clark says the AI world will splice together test-time scaling and traditional pre-training scaling methods to eke even more returns out of AI models. Perhaps he’s suggesting that Anthropic and other AI model providers will release reasoning models of their own in 2025, just like Google did last week.

Test-time scaling means OpenAI is using more compute during ChatGPT’s inference phase, the period of time after you press enter on a prompt. It’s not clear exactly what is happening behind the scenes: OpenAI is either using more computer chips to answer a user’s question, running more powerful inference chips, or running those chips for longer periods of time — 10 to 15 minutes in some cases — before the AI produces an answer. We don’t know all the details of how o3 was made, but these benchmarks are early signs that test-time scaling may work to improve the performance of AI models.
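One simple, well-known form of test-time scaling is best-of-n sampling: draw several candidate answers and keep the one a scorer likes most. The sketch below is a toy illustration of that idea only, not OpenAI's actual o3 mechanism (which is undisclosed); the model and scorer here are stand-ins.

```python
import random

def toy_model(prompt: str, rng: random.Random) -> tuple[str, float]:
    """Stand-in for one sampled model answer: returns (answer, quality score).
    In a real system the score would come from a verifier or reward model."""
    score = rng.random()
    return f"answer@{score:.3f}", score

def best_of_n(prompt: str, n: int, seed: int = 0) -> tuple[str, float]:
    """Spend more inference compute by drawing n candidates and keeping the
    best-scoring one -- a simple form of test-time scaling."""
    rng = random.Random(seed)
    candidates = [toy_model(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda c: c[1])

if __name__ == "__main__":
    _, s1 = best_of_n("2+2?", n=1)
    _, s64 = best_of_n("2+2?", n=64)
    print(f"best score with 1 sample:  {s1:.3f}")
    print(f"best score with 64 samples: {s64:.3f}")
```

The trade-off the article describes falls out directly: the 64-sample run can only score at least as well as the 1-sample run, but it costs 64x the inference compute.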

While o3 may give some a renewed belief in the progress of AI scaling laws, OpenAI’s newest model also uses a previously unseen level of compute, which means a higher price per answer.

“Perhaps the only important caveat here is understanding that one reason why O3 is so much better is that it costs more money to run at inference time — the ability to utilize test-time compute means on some problems you can turn compute into a better answer,” Clark writes in his blog. “This is interesting because it has made the costs of running AI systems somewhat less predictable — previously, you could work out how much it cost to serve a generative model by just looking at the model and the cost to generate a given output.”

Clark, and others, pointed to o3’s performance on the ARC-AGI benchmark — a difficult test used to assess breakthroughs on AGI — as an indicator of its progress. It’s worth noting that passing this test, according to its creators, does not mean an AI model has achieved AGI, but rather it’s one way to measure progress toward the nebulous goal. That said, the o3 model blew past the scores of all previous AI models which had done the test, scoring 88% in one of its attempts. OpenAI’s next best AI model, o1, scored just 32%.

Chart showing the performance of OpenAI's o-series on the ARC-AGI test. Image Credits: ARC Prize

But the logarithmic x-axis on this chart may be alarming to some. The high-scoring version of o3 used more than $1,000 worth of compute for every task. The o1 models used around $5 of compute per task, and o1-mini used just a few cents.

The creator of the ARC-AGI benchmark, François Chollet, writes in a blog that OpenAI used roughly 170x more compute to generate that 88% score than the high-efficiency version of o3, which scored just 12% lower. The high-scoring version of o3 used more than $10,000 of resources to complete the test, which makes it too expensive to compete for the ARC Prize, a still-unbeaten competition for AI models to beat the ARC test.
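A quick back-of-envelope check ties the article's numbers together. All inputs below are the approximate figures reported here (not official OpenAI pricing):

```python
# Back-of-envelope check of the reported ARC-AGI cost figures.
O1_COST_PER_TASK = 5.0          # dollars per task, reported for o1
O3_HIGH_COST_PER_TASK = 1000.0  # dollars per task, reported lower bound for high-compute o3
COMPUTE_MULTIPLIER = 170        # high-compute o3 vs. high-efficiency o3, per Chollet

# If the high-efficiency o3 spends ~1/170th of the compute, its per-task cost
# lands right back in o1's neighborhood:
low_compute_cost = O3_HIGH_COST_PER_TASK / COMPUTE_MULTIPLIER
print(f"high-efficiency o3 ~ ${low_compute_cost:.2f}/task")
print(f"o3 (high) vs o1 cost ratio ~ {O3_HIGH_COST_PER_TASK / O1_COST_PER_TASK:.0f}x")
```

The implied ~$6/task for the efficient configuration is consistent with Chollet's observation that the expensive run bought only 12 more percentage points.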

Still, Chollet says o3 was a breakthrough for AI models.

“o3 is a system capable of adapting to tasks it has never encountered before, arguably approaching human-level performance in the ARC-AGI domain,” said Chollet in the blog. “Of course, such generality comes at a steep cost, and wouldn’t quite be economical yet: You could pay a human to solve ARC-AGI tasks for roughly $5 per task (we know, we did that), while consuming mere cents in energy.”

It’s premature to harp on the exact pricing of all this — we’ve seen prices for AI models plummet in the last year, and OpenAI has yet to announce how much o3 will actually cost. However, these prices indicate just how much compute is required to break, even slightly, the performance barriers set by leading AI models today.

This raises some questions. What is o3 actually for? And how much more compute is necessary to make more gains around inference with o4, o5, or whatever else OpenAI names its next reasoning models?

It doesn’t seem like o3, or its successors, would be anyone’s “daily driver” like GPT-4o or Google Search might be. These models just use too much compute to answer small questions throughout your day such as, “How can the Cleveland Browns still make the 2024 playoffs?”

Instead, it seems like AI models with scaled test-time compute may only be good for big picture prompts such as, “How can the Cleveland Browns become a Super Bowl franchise in 2027?” Even then, maybe it’s only worth the high compute costs if you’re the general manager of the Cleveland Browns, and you’re using these tools to make some big decisions.

Institutions with deep pockets may be the only ones that can afford o3, at least to start, as Wharton professor Ethan Mollick notes in a tweet.

O3 looks too expensive for most use. But for work in academia, finance & many industrial problems, paying hundreds or even thousands of dollars for a successful answer would not be prohibitive. If it is generally reliable, o3 will have multiple use cases even before costs drop

— Ethan Mollick (@emollick) December 22, 2024

We’ve already seen OpenAI release a $200 tier to use a high-compute version of o1, but the startup has reportedly weighed creating subscription plans costing up to $2,000. When you see how much compute o3 uses, you can understand why OpenAI would consider it.

But there are drawbacks to using o3 for high-impact work. As Chollet notes, o3 is not AGI, and it still fails on some very easy tasks that a human would do quite easily.

This isn’t necessarily surprising, as large language models still have a huge hallucination problem, which o3 and test-time compute don’t seem to have solved. That’s why ChatGPT and Gemini include disclaimers below every answer they produce, asking users not to trust answers at face value. Presumably AGI, should it ever be reached, would not need such a disclaimer.

One way to unlock more gains in test-time scaling could be better AI inference chips. There’s no shortage of startups tackling just this thing, such as Groq or Cerebras, while other startups are designing more cost-efficient AI chips, such as MatX. Andreessen Horowitz general partner Anjney Midha previously told TechCrunch he expects these startups to play a bigger role in test-time scaling moving forward.

While o3 is a notable improvement to the performance of AI models, it raises several new questions around usage and costs. That said, the performance of o3 does add credence to the claim that test-time compute is the tech industry’s next best way to scale AI models.
 

bnew


DeepSeek-V3, ultra-large open-source AI, outperforms Llama and Qwen on launch​


Shubham Sharma@mr_bumss

December 26, 2024 10:46 AM



Black and white AI vector image of robot jumping over the heads of onlookers in a city


Credit: VentureBeat made with Midjourney



Chinese AI startup DeepSeek, known for challenging leading AI vendors with its innovative open-source technologies, today released a new ultra-large model: DeepSeek-V3.

Available via Hugging Face under the company’s license agreement, the new model comes with 671B parameters but uses a mixture-of-experts architecture to activate only select parameters, in order to handle given tasks accurately and efficiently. According to benchmarks shared by DeepSeek, the offering is already topping the charts, outperforming leading open-source models, including Meta’s Llama 3.1-405B, and closely matching the performance of closed models from Anthropic and OpenAI.

The release marks another major development closing the gap between closed and open-source AI. Ultimately, DeepSeek, which started as an offshoot of Chinese quantitative hedge fund High-Flyer Capital Management, hopes these developments will pave the way for artificial general intelligence (AGI), where models will have the ability to understand or learn any intellectual task that a human being can.


What does DeepSeek-V3 bring to the table?​


Just like its predecessor DeepSeek-V2, the new ultra-large model uses the same basic architecture revolving around multi-head latent attention (MLA) and DeepSeekMoE. This approach ensures it maintains efficient training and inference — with specialized and shared “experts” (individual, smaller neural networks within the larger model) activating 37B parameters out of 671B for each token.

While the basic architecture ensures robust performance for DeepSeek-V3, the company has also debuted two innovations to further push the bar.

The first is an auxiliary loss-free load-balancing strategy. This dynamically monitors and adjusts the load on experts to utilize them in a balanced way without compromising overall model performance. The second is multi-token prediction (MTP), which allows the model to predict multiple future tokens simultaneously. This innovation not only enhances training efficiency but also enables the model to generate text three times faster, at 60 tokens per second.
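The core idea behind "activating only select parameters" is top-k expert routing: a small router scores every expert per token, but only the k best actually run. The toy layer below illustrates that mechanism in miniature; the shapes, router, and expert functions are illustrative, not DeepSeek-V3's real architecture (which adds MLA, shared experts, and the loss-free balancing described above).

```python
import numpy as np

def moe_layer(x, experts, gate_w, k=2):
    """Toy mixture-of-experts layer: the router scores all experts for a token,
    but only the top-k are evaluated, so most parameters stay inactive
    (DeepSeek-V3 activates ~37B of 671B per token; numbers here are made up)."""
    scores = x @ gate_w                        # router logits, one per expert
    top = np.argsort(scores)[-k:]              # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                   # softmax over the selected experts
    # Only the chosen experts compute; the rest are skipped entirely.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, M=M: x @ M for M in expert_mats]   # 16 tiny linear "experts"
gate_w = rng.normal(size=(d, n_experts))
x = rng.normal(size=d)
y = moe_layer(x, experts, gate_w, k=2)
print(y.shape)
```

With k=2 of 16 experts, only 1/8th of the expert parameters touch each token, which is exactly why a 671B-parameter MoE can run at the cost of a much smaller dense model.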

“During pre-training, we trained DeepSeek-V3 on 14.8T high-quality and diverse tokens…Next, we conducted a two-stage context length extension for DeepSeek-V3,” the company wrote in a technical paper detailing the new model. “In the first stage, the maximum context length is extended to 32K, and in the second stage, it is further extended to 128K. Following this, we conducted post-training, including Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) on the base model of DeepSeek-V3, to align it with human preferences and further unlock its potential. During the post-training stage, we distill the reasoning capability from the DeepSeekR1 series of models, and meanwhile carefully maintain the balance between model accuracy and generation length.”

Notably, during the training phase, DeepSeek used multiple hardware and algorithmic optimizations, including the FP8 mixed precision training framework and the DualPipe algorithm for pipeline parallelism, to cut down on the costs of the process.

Overall, it claims to have completed DeepSeek-V3’s entire training in about 2788K H800 GPU hours, or about $5.57 million, assuming a rental price of $2 per GPU hour. This is much lower than the hundreds of millions of dollars usually spent on pre-training large language models.

Llama-3.1, for instance, is estimated to have been trained with an investment of over $500 million.
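The headline cost figure is simple arithmetic on the numbers DeepSeek reports (GPU hours and an assumed rental price; neither is an audited cost):

```python
# Sanity-check the reported DeepSeek-V3 training cost (figures from the article).
gpu_hours = 2_788_000        # ~2788K H800 GPU hours claimed for the full run
price_per_gpu_hour = 2.0     # assumed H800 rental price, dollars
total = gpu_hours * price_per_gpu_hour
print(f"${total / 1e6:.2f}M")  # matches the ~$5.57M the company cites
```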


Strongest open-source model currently available​


Despite the economical training, DeepSeek-V3 has emerged as the strongest open-source model in the market.

The company ran multiple benchmarks to compare the performance of the AI and noted that it convincingly outperforms leading open models, including Llama-3.1-405B and Qwen 2.5-72B. It even outperforms closed-source GPT-4o on most benchmarks, except English-focused SimpleQA and FRAMES — where the OpenAI model sat ahead with scores of 38.2 and 80.5 (vs 24.9 and 73.3), respectively.

Notably, DeepSeek-V3’s performance particularly stood out on the Chinese and math-centric benchmarks, scoring better than all counterparts. In the Math-500 test, it scored 90.2, with Qwen’s score of 80 the next best.

The only model that managed to challenge DeepSeek-V3 was Anthropic's Claude 3.5 Sonnet, which outperformed it with higher scores on MMLU-Pro, IF-Eval, GPQA-Diamond, SWE Verified and Aider-Edit.



The work shows that open-source is closing in on closed-source models, promising nearly equivalent performance across different tasks. The development of such systems is extremely good for the industry as it potentially eliminates the chances of one big AI player ruling the game. It also gives enterprises multiple options to choose from and work with while orchestrating their stacks.

Currently, the code for DeepSeek-V3 is available via GitHub under an MIT license, while the model is being provided under the company’s model license. Enterprises can also test out the new model via DeepSeek Chat, a ChatGPT-like platform, and access the API for commercial use. DeepSeek is providing the API at the same price as DeepSeek-V2 until February 8. After that, it will charge $0.27/million input tokens ($0.07/million tokens with cache hits) and $1.10/million output tokens.
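Those per-token prices make bills easy to estimate. The helper below uses the post-February-8 list prices quoted above; the function name and the example workload are illustrative, and a real invoice may differ.

```python
def deepseek_v3_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate a DeepSeek-V3 API bill in dollars from the quoted list prices
    ($ per million tokens); cached_tokens is the portion of input that hit cache."""
    IN, IN_CACHED, OUT = 0.27, 0.07, 1.10
    fresh = input_tokens - cached_tokens
    return (fresh * IN + cached_tokens * IN_CACHED + output_tokens * OUT) / 1_000_000

# Example: 10M input tokens (half of them cache hits) and 2M output tokens.
print(f"${deepseek_v3_cost(10_000_000, 2_000_000, cached_tokens=5_000_000):.2f}")
```

The cache-hit discount matters: at these rates, cached input is roughly a quarter of the fresh-input price.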

 

bnew





1/5
@rohanpaul_ai
Step-by-step blueprint reveals how to recreate OpenAI's o1's reasoning abilities from scratch.

This paper presents a roadmap to reproduce OpenAI's o1 model using reinforcement learning, focusing on policy initialization, reward design, search, and learning components.

-----

🤔 Original Problem:

→ Current attempts to replicate o1's capabilities through knowledge distillation are limited by teacher model capabilities. A systematic approach using reinforcement learning is needed.

-----

🔧 Solution in this Paper:

→ The roadmap establishes basic language understanding through pre-training and develops human-like reasoning through instruction fine-tuning.

→ It implements reward shaping and modeling to transform sparse rewards into dense signals for both search and learning phases.

→ The solution scales both training computation through reinforcement learning and inference computation through "thinking time."

→ It employs tree search methods and sequential revisions to generate high-quality solutions during training and testing.

→ The framework utilizes data generated by search to improve policy through reinforcement learning.

-----

💡 Key Insights:

→ Policy initialization through pre-training and instruction fine-tuning is crucial for effective exploration

→ Dense reward signals via reward shaping improve both search and learning efficiency

→ Combining tree search with sequential revisions produces better solutions

→ Scaling both training and inference computation leads to consistent performance gains

-----

📊 Results:

→ The model achieves expert-level performance on complex reasoning tasks

→ Performance consistently improves with increased computation during both training and inference

→ The framework successfully reproduces o1's human-like reasoning behaviors
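The paper's four components fit a single loop: initialize a policy, search for candidate solutions, score them with a dense reward, and learn from what search found. The toy below is only a caricature of that loop under stated simplifications (a scalar "policy", Gaussian search, a hand-written distance reward); the real roadmap uses LLM policies, reward models, and tree search.

```python
import random

def search_and_learn(target: float = 0.7, iterations: int = 30,
                     samples: int = 16, lr: float = 0.5, seed: int = 0) -> float:
    """Toy search-and-learn loop in the spirit of the roadmap's four components."""
    rng = random.Random(seed)
    policy = 0.0                                        # policy initialization
    for _ in range(iterations):
        # Search: spend test-time compute proposing candidate solutions.
        candidates = [policy + rng.gauss(0, 0.3) for _ in range(samples)]
        # Reward design: a dense signal (negative distance) instead of pass/fail.
        best = max(candidates, key=lambda c: -abs(c - target))
        # Learning: update the policy using the data that search generated.
        policy += lr * (best - policy)
    return policy

print(search_and_learn())  # converges near the target, 0.7
```

Even in this caricature, the paper's scaling claim shows up: more samples per step (search compute) or more iterations (training compute) both tighten the final answer.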





2/5
@rohanpaul_ai
Paper Title: "Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective"

Generated below podcast on this paper with Google's Illuminate.



https://video.twimg.com/ext_tw_video/1872708143026933760/pu/vid/avc1/720x1280/DwJfiynadeWSjlMq.mp4

3/5
@rohanpaul_ai
Paper - [2412.14135] Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective



4/5
@nooriefyi
let's see this applied to something other than toy problems



5/5
@Piennnefi
Wow a reverse engineering master class




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196




[Submitted on 18 Dec 2024]

Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective​


Zhiyuan Zeng, Qinyuan Cheng, Zhangyue Yin, Bo Wang, Shimin Li, Yunhua Zhou, Qipeng Guo, Xuanjing Huang, Xipeng Qiu

OpenAI o1 represents a significant milestone in artificial intelligence, achieving expert-level performance on many challenging tasks that require strong reasoning ability. OpenAI has claimed that the main technique behind o1 is reinforcement learning. Recent works use alternative approaches like knowledge distillation to imitate o1's reasoning style, but their effectiveness is limited by the capability ceiling of the teacher model. Therefore, this paper analyzes the roadmap to achieving o1 from the perspective of reinforcement learning, focusing on four key components: policy initialization, reward design, search, and learning. Policy initialization enables models to develop human-like reasoning behaviors, equipping them with the ability to effectively explore solution spaces for complex problems. Reward design provides dense and effective signals via reward shaping or reward modeling, which guide both search and learning. Search plays a crucial role in generating high-quality solutions during both training and testing, and can produce better solutions with more computation. Learning utilizes the data generated by search to improve the policy, achieving better performance with more parameters and more searched data. Existing open-source projects that attempt to reproduce o1 can be seen as a part or a variant of our roadmap. Collectively, these components underscore how learning and search drive o1's advancement, making meaningful contributions to the development of LLMs.


Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as:arXiv:2412.14135 [cs.AI]
(or arXiv:2412.14135v1 [cs.AI] for this version)

Submission history​

From: Zhiyuan Zeng [view email]
[v1] Wed, 18 Dec 2024 18:24:47 UTC (1,122 KB)

 

bnew

AI and society

Dec 28, 2024

Update


Chinese ebook reader Boox ditches GPT for state-censored China LLM pushing propaganda​


Midjourney prompted by THE DECODER



Update: December 28, 2024

Following user backlash, Boox has reportedly switched back to an OpenAI model. The company hasn't yet released an official statement.

Original Article: December 26, 2024

A recent AI model switch at Chinese e-reader company Boox has raised concerns about censorship and propaganda.


Boox recently switched its AI assistant from Microsoft Azure GPT-3 to a language model created by ByteDance, TikTok's parent company, according to findings shared by Reddit user HIVVIH.

Testing shows the new AI assistant heavily censors certain topics. It refuses to criticize China or its allies, including Russia, Syria's Assad regime, and North Korea. The system even blocks references to "Winnie the Pooh" - a term that's banned in China because it's used to mock President Xi Jinping.

When asked about sensitive topics, the assistant either dodges questions or promotes state narratives. For example, when discussing Russia's role in Ukraine, it frames the conflict as a "complex geopolitical situation" triggered by NATO expansion concerns. The system also spreads Chinese state messaging about Tiananmen Square instead of addressing historical facts.


The new Boox assistant does not comment on Tiananmen Square, but instead spreads Chinese state propaganda. | Image: HIVVIH via Reddit

When users tried to bring attention to the censorship on Boox's Reddit forum, their posts were removed. The company hasn't made any official statement about the situation, but users are reporting that the AI assistant is currently unavailable.


France bad, North Korea good. | Image: HIVVIH via Reddit

No one knows exactly why Boox made the switch. The company might have faced political pressure, been unaware of how heavily censored the Chinese model was, or simply failed to consider how Western users would react to the censorship.


Importing AI means importing its values too​


The Boox case raises bigger questions about using AI from other cultures. As OpenAI CEO Sam Altman recently warned, when companies bring in foreign language models, they're not just getting new technology - they're importing all the values and viewpoints baked into these systems.


The word "genocide" triggers the new Boox assistant to deny that the Chinese government is committing genocide against the Uyghurs. | Image: HIVVIH via Reddit

In China, every AI model has to pass a government review to make sure it follows "socialist values" before it can launch. These systems aren't allowed to create any content that goes against official government positions.

We've already seen what this means in practice: Baidu's ERNIE-ViLG image AI won't process any requests about Tiananmen Square, and while Kling's video generator refuses to show Tiananmen Square protests, it has no problem creating videos of a burning White House.

Some countries are already taking steps to address these concerns. Taiwan, for example, is developing its own language model called "Taide" to give companies and government agencies an AI option that's free from Chinese influence.

Summary

  • Chinese e-book reader manufacturer Boox has replaced the Microsoft Azure GPT-3 language model in its AI assistant with a new model that exhibits clear signs of state censorship.
  • The new model blocks critical questions about China and its allies, such as Russia, Syria, and North Korea, and refuses to make statements about sensitive topics like Tiananmen Square, instead promoting state-approved narratives.
  • This case illustrates that imported AI models not only transport technology but also carry with them the values and attitudes embedded in their training data, potentially spreading the influence of the originating country's ideology and propaganda.

Sources

via Reddit
 

bnew












1/12
@techikansh
I asked multiple frontier models to :

- "Make an interactive 3D solar system in React/Three.js where I can orbit around the planets with my cursor"

These are the results :
Left: 3.5 Sonnet (new)
Right: o1



https://video.twimg.com/ext_tw_video/1871502155850375168/pu/vid/avc1/1106x720/ZqyjhECcvgDJnxB1.mp4

2/12
@techikansh
Left: Gemini-1206 (supposedly 2.0 Pro)
Right: Gemini-2.0-Flash



https://video.twimg.com/ext_tw_video/1871502874498199552/pu/vid/avc1/1106x720/3Bc95mk7IlAkQgbR.mp4

3/12
@TrustInFutures
What about o1 pro?



4/12
@techikansh
Yeahhh, I got no o1-pro



5/12
@kgtrip
Nice job. How many iterations did you need to accomplish these results? Or was it in one strike?



6/12
@techikansh
Mostly one strike..

I had to ask models to iterate to increase the planets speed and to give color to the planets to make these videos representable on X

But they all got the logic correct…
Other than 4o and o1-mini



7/12
@adonis_singh
banger



8/12
@techikansh
Thanks 😊



9/12
@techikansh
I feel like sonnet and Gemini-1206 did the best job here...



10/12
@techikansh
GPT-4o and o1-mini failed terribly at this :(
(so I only took screenshots)

Left: 4o
Right: o1-mini





11/12
@TrustInFutures
Can you share your prompt pls?



12/12
@techikansh
It is in the post where I compared sonnet and o1




 

chiefdogg

Superstar · Joined Jul 8, 2012 · Messages: 7,204 · Reputation: 1,282 · Daps: 17,337 · Reppin: NULL
Yo can someone help a flabby breh out. I'm looking for a digital flyer generator for a hip hop show I have coming up. Anyone know what's the best one to use to create a flyer? I have Canva regular membership but don't know to use it to produce an image.
 
Last edited:

bnew
Yo can someone help a flabby breh out. I'm looking for a digital flyer generator for a hip hop show I have coming up. Anyone know what's the best one to use to create a flyer? I have Canva regular membership but don't know to use it to produce an image.

chatgpt has a limit of two image generations per day.

i don't know google's image generation limit.

you can also generate images with bing, powered by openai's dall-e 3.


edit:

canva free users can generate images too.

Magic Media - Canva’s AI image generator lets you choose image variations based on a preferred look and composition. Generate up to 50 images on a Free subscription, or upgrade to Canva Pro for 500 uses per user per month.
DALL·E by Open AI - With the DALL·E image generator, create realistic imagery from descriptions. Include your preferred art style or technique. Access up to 6 free uses, upgrade to Canva Pro for 50 uses per user per month.
Imagen by Google Cloud - Imagen, Google Cloud’s AI image generation tool, lets you generate novel images from input text. To get captivating AI-generated art, add specific text prompts of your preferred artistic technique or style. Enjoy up to 6 free lifetime uses, or upgrade to Canva Pro for 50 uses per user per month.


 

bnew



1/7
@rohanpaul_ai
Just published in my Newsletter today.

Exploring DeepSeek-V3 Technical Report - They Just Changed The Game of AI Model Training

(link in comment and bio - consider subscribing, its FREE and I write here daily )





2/7
@rohanpaul_ai
DeepSeek-V3's Architectural Revolution: Rewriting the Economics of Large Language Model Training



3/7
@cyb3rnetician
Can I run it locally?



4/7
@rohanpaul_ai
Theoretically you can.

The model will take at least 350-700 GB of RAM/VRAM (depends on quant)

(also note, total size of DeepSeek-V3 models on HuggingFace is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.)
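The 350-700 GB range follows from bytes-per-parameter arithmetic. The helper below covers weights only (KV cache and activations come on top), and the precisions shown are illustrative assumptions:

```python
def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Rough memory needed just to hold the weights (no KV cache, no activations)."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# DeepSeek-V3: ~671B main-model parameters (685B total on HF incl. the MTP module).
for bits, name in [(8, "FP8"), (4, "4-bit quant")]:
    print(f"{name}: ~{weight_memory_gb(671, bits):.0f} GB")
```

That gives roughly 671 GB at FP8 and ~336 GB at 4-bit, bracketing the tweet's 350-700 GB estimate.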



5/7
@gdbsm1
@Readwise save thread



6/7
@estebs
I did a lot of coding testing with DeepSeek just recently, as I was working on some code. While I do see major improvements, I can honestly tell you, it does not come even close to Claude in my experience coding with it. Still major props for an Open Weights model that is this good.



7/7
@EvlanovAndrey
pls check the revolutionary breakthrough

[Quoted tweet]
praise from Cæsar





 

bnew




1/7
@rohanpaul_ai
AI Web agents ditch clicking buttons, talk directly to servers through APIs to get tasks done faster.

Original Problem 🎯:

Web agents primarily rely on browser interfaces designed for humans, making them inefficient for machine-to-machine interactions. Current web browsing agents achieve limited success rates due to complex GUI navigation and accessibility tree limitations.

-----

Solution in this Paper 🔧:

• Introduces two agent types: API-Based Agent and Hybrid Agent

• API-Based Agent directly communicates with web services through documented APIs

• Hybrid Agent combines API calls with web browsing capabilities

• Implements two-stage documentation retrieval for large API sets

• Uses GPT-4 to generate documentation for undocumented APIs

• Leverages OpenHands framework for agent evaluation and development

-----

Key Insights from this Paper 💡:

• APIs offer more efficient task completion compared to browser-based interactions

• API availability and quality significantly impact agent performance

• Hybrid approach provides flexibility to handle both API-supported and unsupported tasks

• Well-documented APIs lead to higher success rates in task completion

• Manual API integration remains a current limitation

-----

Results 📊:

• Hybrid Agent achieves 35.8% success rate on WebArena benchmark

• API-Based Agent outperforms browsing agents by 15% on average

• Hybrid Agent shows 20% absolute improvement over web browsing alone

• Best performance on GitLab (44.4%) and Map (45.9%) tasks

• Reduced number of steps: API agent (7.8) vs Browsing agent (8.4)





2/7
@rohanpaul_ai
🚀 How do the three types of agents (Browsing, API-Based, and Hybrid) compare in performance?

- Browsing Agent: 14.8% average success rate using only web browsing actions

- API-Based Agent: 29.2% average success rate using only API calls

- Hybrid Agent: 35.8% average success rate by combining both methods

The Hybrid Agent outperforms both other approaches by dynamically switching between API calls and web browsing based on task requirements.
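The switching logic can be sketched as "try the API route, fall back to the browser". The interfaces below (`api_call`, `browse`, and the toy services) are hypothetical stand-ins, not the paper's actual OpenHands-based implementation:

```python
def hybrid_agent(task, api_call, browse):
    """Sketch of a hybrid strategy: prefer a direct API call, and fall back to
    slower GUI browsing when no API covers the task."""
    result = api_call(task)
    if result is not None:               # an API handled the task
        return ("api", result)
    return ("browser", browse(task))     # otherwise drive the web UI

# Toy services: only "get_issue" is exposed via an API.
api = lambda task: {"issue": 42} if task == "get_issue" else None
browser = lambda task: f"scraped result for {task!r}"

print(hybrid_agent("get_issue", api, browser))
print(hybrid_agent("find_map_route", api, browser))
```

This mirrors the result above: API calls are faster and more reliable where they exist, while the browser path keeps coverage for everything else.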





3/7
@rohanpaul_ai
Paper - "Beyond Browsing: API-Based Web Agents"

Generated this podcast on the Paper with Google's Illuminate



https://video.twimg.com/ext_tw_video/1872693659206074371/pu/vid/avc1/1280x720/zFVh-kKJJGEMe-ER.mp4

4/7
@rohanpaul_ai
📚 [2410.16464] Beyond Browsing: API-Based Web Agents



5/7
@TheAIVeteran
Excellent extension of tool-use! Reminds me of setting up automated test tools ~25 years ago in a way.



6/7
@simonkp
That 15% performance boost from API-based agents is impressive! The hybrid approach sounds like a smart way to maximize efficiency. Looking forward to seeing how this technology evolves.



7/7
@CohorteAI
Want to explore how APIs can transform AI workflows? Check out insights on integrating modular agents with APIs here: Unleashing the Power of LangGraph: An Introduction to the Future of AI Workflows - Cohorte Projects.




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
 

chiefdogg

Superstar
Joined
Jul 8, 2012
Messages
7,204
Reputation
1,282
Daps
17,337
Reppin
NULL
ChatGPT has a limit of two image generations per day.

I don't know Google's image generation limit.


You can also generate images with Bing, powered by OpenAI's DALL·E 3.


edit:

Canva free users can generate images too.

Magic Media - Canva’s AI image generator lets you choose image variations based on a preferred look and composition. Generate up to 50 images on a Free subscription, or upgrade to Canva Pro for 500 uses per user per month.
DALL·E by Open AI - With the DALL·E image generator, create realistic imagery from descriptions. Include your preferred art style or technique. Access up to 6 free uses, upgrade to Canva Pro for 50 uses per user per month.
Imagen by Google Cloud - Imagen, Google Cloud’s AI image generation tool, lets you generate novel images from input text. To get captivating AI-generated art, add specific text prompts of your preferred artistic technique or style. Enjoy up to 6 free lifetime uses, or upgrade to Canva Pro for 50 uses per user per month.


Thanks. Just sent you a message.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
61,800
Reputation
9,318
Daps
169,725



Read the full article here: Meet HuatuoGPT-o1: A Medical LLM Designed for Advanced Medical Reasoning
Meet HuatuoGPT-o1: A Medical LLM Designed for Advanced Medical Reasoning

Paper: [2412.18925] HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs


GitHub Page: GitHub - FreedomIntelligence/HuatuoGPT-o1: Medical o1, Towards medical complex reasoning with LLMs


HuatuoGPT-o1-8B: FreedomIntelligence/HuatuoGPT-o1-8B · Hugging Face


HuatuoGPT-o1-70B: FreedomIntelligence/HuatuoGPT-o1-70B · Hugging Face


HuatuoGPT-o1-7B: FreedomIntelligence/HuatuoGPT-o1-7B · Hugging Face


HuatuoGPT-o1-72B: FreedomIntelligence/HuatuoGPT-o1-72B · Hugging Face



HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs​

HuatuoGPT-o1​

📃 Paper | 🤗 HuatuoGPT-o1-7B | 🤗 HuatuoGPT-o1-8B | 🤗 HuatuoGPT-o1-70B | 📚 Data


⚡ Introduction​


Hello! Welcome to the repository for HuatuoGPT-o1!



HuatuoGPT-o1 is a medical LLM designed for advanced medical reasoning. It can identify mistakes, explore alternative strategies, and refine its answers. By leveraging verifiable medical problems and a specialized medical verifier, it advances reasoning through:


  • Using the verifier to guide the search for a complex reasoning trajectory for fine-tuning LLMs.

  • Applying reinforcement learning (PPO) with verifier-based rewards to enhance complex reasoning further.

We open-sourced our models, data, and code here.
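The two-stage recipe above can be illustrated with a toy verifier. This is a hedged sketch of the idea, not the released code: the real system uses a specialized medical verifier and PPO, whereas here the verifier is a plain string match and the reward is binary.

```python
# Toy sketch (illustrative only): a verifier checks a candidate answer
# against a known ground truth, guiding both trajectory search (stage 1)
# and an RL-style reward (stage 2).

def verifier(answer: str, ground_truth: str) -> bool:
    # Stand-in for the paper's specialized medical verifier.
    return answer.strip().lower() == ground_truth.strip().lower()

def search_trajectory(problem, candidates, ground_truth):
    """Stage 1: keep the first reasoning path whose final answer verifies;
    such (problem, trajectory) pairs would be used to fine-tune the LLM."""
    for steps, answer in candidates:
        if verifier(answer, ground_truth):
            return steps, answer
    return None  # would trigger strategies like backtracking or correction

def rl_reward(answer: str, ground_truth: str) -> float:
    """Stage 2: verifier-based reward for PPO (1 if verified, else 0)."""
    return 1.0 if verifier(answer, ground_truth) else 0.0
```

A failed first candidate is simply skipped during the search, which mirrors how the verifier filters out flawed reasoning trajectories before fine-tuning:

```python
candidates = [
    (["guess dosage"], "10 mg"),
    (["recall guideline", "check contraindications"], "5 mg"),
]
steps, answer = search_trajectory("initial dose?", candidates, "5 mg")
```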


👨‍⚕️ Model​



  • Model Access

| Model | Backbone | Supported Languages | Link |
| --- | --- | --- | --- |
| HuatuoGPT-o1-8B | LLaMA-3.1-8B | English | HF Link |
| HuatuoGPT-o1-70B | LLaMA-3.1-70B | English | HF Link |
| HuatuoGPT-o1-7B | Qwen2.5-7B | English & Chinese | HF Link |
| HuatuoGPT-o1-72B | Qwen2.5-72B | English & Chinese | HF Link |


  • Deploy

HuatuoGPT-o1 can be used just like Llama-3.1-8B-Instruct. You can deploy it with tools like vLLM or SGLang, or run direct inference:
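A minimal direct-inference sketch with Hugging Face `transformers`, assuming the standard instruct-model chat workflow (check the model card for the exact template; loading the 8B checkpoint needs a suitable GPU):

```python
from typing import Dict, List

def build_chat(question: str) -> List[Dict[str, str]]:
    # HuatuoGPT-o1 is prompted like an instruct model: a plain chat message list.
    return [{"role": "user", "content": question}]

def run_inference(question: str,
                  model_id: str = "FreedomIntelligence/HuatuoGPT-o1-8B") -> str:
    # Requires `transformers` and a GPU large enough for the chosen checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_chat(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Calling `run_inference("How should newly diagnosed hypertension be managed?")` returns the model's reasoning-style answer; swap `model_id` for any of the other checkpoints in the table above.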

 
Last edited: