1/17
@marcslove
LLM companies like Anthropic & OpenAI are really caught in a catch-22.
Their models are commoditized so quickly that they likely can’t even recover the cost of training them.
The answer would seem to be to build defensible products around those models.
The problem is their products are intrinsically linked to their models. The commoditization of the model thus devalues the product.
That leaves an opening for model agnostic products to “cross their product moat.”
2/17
@mshonle
What do people in SV even mean when they say "moat"? I've only heard it described in the absence of one, so what company is an example of having a moat? Does Uber have a moat? Does Walmart? (If so, what are they; if not, does any company have a moat?)
3/17
@mark_r_vickers
That's true, and I think they are building decent products based on those models and releasing them quickly. And they are also the engines behind countless wrappers that need to pay them. That's a low-margin business, but it's important because eventually AIs will become too powerful to open source. I think that eventually, like IBM and many companies before them, they may become consultants.
4/17
@marcslove
They're caught in a rapid cycle of innovation though and they aren't increasing their moat, they're losing it. If innovation slows down, it gives competitors a chance to catch up. If innovation speeds up, they're spending even more money for a rapidly depreciating asset.
They may go into consulting eventually. Less so like IBM and more so like HashiCorp, Snowflake, Databricks, etc., as a platform with a large professional services business on top.
5/17
@mark_r_vickers
Yes, those are good analogies. On the other hand, this isn't just a business to them. It's a kind of calling, for better or for worse.
6/17
@marcslove
Hah! Well, depends on who you’re talking about.

These companies have people all along the spectrum, from researchers who care almost exclusively about their research and craft to avaricious capitalists.
7/17
@tsean_k
The estimated valuations on these companies seem out of line, even for the internet hyperbole era.
I can see some rational value to Nvidia.
But estimates of billions for these AI startups seem like hype to make the stocks go volcanic at IPO.
8/17
@marcslove
It’s a long term bet that AI products become as essential to running a business as buying a computer for each of your employees.
I think we’re still several years away from them becoming that widespread and essential.
I’m also skeptical that companies will be able to capture and protect that market long-term, especially if the product is really just compute + knowledge + math.
It’s a very different business than designing, building, and selling hardware.
9/17
@tsean_k
But what’s the revenue model? Cloud based subscription services? License buys with updates?
I lived through the era of ERPs and CRMs in business, so I can see those revenue models.
I still think the more predictable bet is the hardware guys; Nvidia is the long-term play.
10/17
@marcslove
Oh I totally agree. Hardware's a far better investment in my opinion. No matter how commoditized LLMs or other GPU-accelerated models become, deep learning is here, incredibly valuable, and is going to "eat the world" like software did. Whether OpenAI becomes the largest company in the world or goes bankrupt, Nvidia's going to be trying to catch up to demand for years, if not decades. And their business is far more defensible.
As for the revenue model for LLMs long term…good question.
11/17
@marcslove
They face a challenge that's much more like software. They'll have to provide enough value above and beyond free and open source solutions that enterprises would rather buy them "off the shelf" than build it themselves. My first instinct is that it has to do with MLOps/LLMOps and managing the complexity of orchestration, monitoring, data pipelines, security, and compliance.
12/17
@joenandez
Dude, we are so on the same wavelength sometimes.
Starting to convince myself that the future winning AI product(s) abstract away the underlying model and just figure out how to use the best model for the customer's use case, while rapidly innovating on the user experience itself.
Customers can trust they are always getting the optimal level of intelligence for the task, and don't have to tune it themselves.
Model providers relying on only their models have a strategic vulnerability.
[Quoted post from @joenandez]
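A minimal sketch of the "model-agnostic" routing idea in 12/17, purely illustrative: the product owns a task-to-model mapping and callers never name a provider, so the routing table can be swapped whenever a better or cheaper model ships. All names here (TaskKind, StubModel, ROUTES, answer) are hypothetical stand-ins, not any vendor's real API.

```python
# Hypothetical sketch of a model-agnostic router; StubModel stands in for
# provider-specific clients (OpenAI, Anthropic, etc.) behind one interface.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Protocol


class TaskKind(Enum):
    QUICK_LOOKUP = auto()   # latency-sensitive, a cheap model is fine
    CODE_REVIEW = auto()    # needs a stronger reasoning model
    LONG_SUMMARY = auto()   # needs a large context window


class ChatModel(Protocol):
    name: str
    def complete(self, prompt: str) -> str: ...


@dataclass
class StubModel:
    """Stand-in for a real provider SDK client."""
    name: str

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt[:40]}"


# The product owns this mapping and can change entries without touching callers.
ROUTES: dict[TaskKind, ChatModel] = {
    TaskKind.QUICK_LOOKUP: StubModel("cheap-fast-model"),
    TaskKind.CODE_REVIEW: StubModel("strong-reasoning-model"),
    TaskKind.LONG_SUMMARY: StubModel("long-context-model"),
}


def answer(task: TaskKind, prompt: str) -> str:
    """Callers describe the task; the router picks the underlying model."""
    return ROUTES[task].complete(prompt)


if __name__ == "__main__":
    print(answer(TaskKind.CODE_REVIEW, "Review this diff for race conditions..."))
```

The point of the design is the one made in the thread: customers get "the optimal level of intelligence for the task" without tuning it themselves, and no single model provider is load-bearing.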
13/17
@joenandez
Right now, with every new model that comes out, developers are comparing and contrasting, and expecting their favorite AI dev tool to add access immediately.
And as we've seen from DeepSeek's App Store ranking, even consumers are not immune to model hopping.
This is a temporary phase in the AI Revolution ... so what's next?
14/17
@ociubotaru
I would gladly pay for a great voice assistant powered by Sonnet 3.5
15/17
@jwynia
The ElevenLabs voice agents can be configured to use Sonnet as the LLM.
16/17
@mark_r_vickers
It must be annoying to have one of the best and safest AIs and then watch all this hype and app downloads for a newbie model where safety wasn't prioritized.
techcrunch.com/2025…
Anthropic CEO says DeepSeek was 'the worst' on a critical bioweapons data safety test | TechCrunch
17/17
@marcslove
Congestion pricing is already a huge success.
I wonder what % of people complaining about it would have happily forked over $9 to use an express lane that would cut their commute time in half.
[Quoted post from @alangbrake]
18/19
@alangbrake
We need to start trumpeting this, so even if it’s halted, we can revive it if we make it to the other side.
https://www.fastcompany.com/9127243...is-like-after-one-month-of-congestion-pricing
19/19
@carnage4life
Censorship is relative. Many people were quick to point out DeepSeek won’t talk about “Tank man” and Tiananmen Square, now there are similar complaints that DeepSeek doesn’t censor content that American AI models do.
From hate speech to how to make weapons, DeepSeek will tell you things ChatGPT won’t.
https://www.wsj.com/tech/ai/china-deepseek-ai-dangerous-information-e8eb31a8