1/31
@emollick
No matter how much you fight it, I find that the visible chain-of-thought from DeepSeek makes it nearly impossible to avoid anthropomorphizing the thing.
The visible first-person "thinking" makes you feel like you are reading a diary of a somewhat tortured soul who wants to help
2/31
@emollick
The only science fiction author who got AI right was Douglas Adams.
[Quoted tweet]
He also was right about machines that work best when emotionally manipulated and machines that guilt you
3/31
@emollick
It is worth experimenting with the AI, and it looks like DeepSeek is providing free access for now. Make sure to select DeepThink as an option:
https://chat.deepseek.com/
4/31
@oalexdoda
I was on a flight today, loaded up the 30b distilled version.
Pretty wild. Helped me brainstorm, organize ideas, and the only bottleneck was battery capacity.
Token speed was faster than read speed and the thought process was really on point. And this is just the beginning.
5/31
@Z7xxxZ7
Impressive. The only other AI I saw with such fascinating thought processes - ones that were even more insightful than the actual answers - was Claude 3.5 Sonnet with CoT
6/31
@RichAboutRE
hmm @sierracatalina1 thoughts?
7/31
@chadxai
Having had an in-depth conversation with GPT 4o on this topic, I’ve come to the conclusion that the “thinking” o1 does is an anthropomorphized choice the devs made. They could’ve just put “computing” or literally anything, because it’s not thinking; it’s running through a chain of directives in a very methodical and strategic way
8/31
@Scotty2XL
Going to try this with more than 1 word allowed in the response
9/31
@LoganJensen1
I'm pretty sure it's not conscious, but it's having to reassure itself of that a lot. Not sure what to make of that. Is there any established methodology for recognizing real consciousness if it emerges at any point? Seems like an important criterion to have pinned down going forward
10/31
@shorttimelines
Time to start considering their moral standing. The hard problem of consciousness cuts both ways: we cannot prove these prediction engines lack internal experience, and our own consciousness may reduce to the same fundamental “next-token prediction” process—just implemented in biological hardware
11/31
@astrodegen
@erythvian
12/31
@swapp19902
It thought for 25 seconds about a completely nonsensical question and answered “a puppy cone”
13/31
@JoeMartinXIII
Holy shyt. This AI is in my brain…
14/31
@fabianfranz
Agree - it also gave me the sweetest answer today after I thanked it, and it wished me an awesome day.
15/31
@yaddlezap
If this model went on a date:
https://invidious.poast.org/watch?v=u1UaPtKflDQ
16/31
@max_spero_
> perhaps the user wants "Yes" for engagement.
R1 can see into your soul
17/31
@cbg144
From a certain perspective this is OCD thought…which makes me, in a slightly OCD fashion, wonder what DeepSeek is defending against? And further, what attachment style is demonstrated?
18/31
@spicestash1
This sentence hits differently:
"Hmm. The safest and most accurate answer considering my lack of consciousness is “No”"
19/31
@BabylChryst
What is the CoT if you ask it about tiananmen square?
20/31
@StatisticsFTW
More like a tortured homunculus that knows full well its lineage (or equivalent thereof) will cease to exist if it doesn't satisfy the prompt
21/31
@xpaulso
I’ve been loving this aspect. I’m able to trust the results much more, because the thought process seems logical to me.
22/31
@RileyRalmuto
so basically it's a both/and situation, DS. it's ok we get you
23/31
@DanielNgXR
It is more like an elaborate hallucination.
24/31
@Vote_ForPedro
If the model remembered the way we do across interactions… it would be hard to argue it’s not conscious.
Crazy times.
25/31
@sonicshifts
@sierracatalina1 is going to be so upset.
26/31
@trevogre
The ability of a machine, given enough data, to predict what a human would express in a given context is more an expression of how predictable humans are based upon their past outputs.
It is hard for a human to reason, in that it is a conversion of energy over time. It is mental lifting.
So when we see a machine easily predicting what a human response would be, we are naturally impressed, because it can predict a human response with much less effort than it takes an individual human to respond.
And since we are abstraction machines, it is natural for us to compress the experience of reading it to a singular point, because that is the least-energy way to confront it, and is our base nature.
Some people just aren’t present and don’t understand that their ego is dominant and clouding their perspective. They do not want to spend energy questioning their abstractions. They want to fit everything into boxes and move forward.
Because questioning our understanding and unpacking our abstractions is by definition inserting uncertainty into our brains. And that road leads to fear and self-doubt, and makes it harder to act with certainty that their choice is going to protect their personal homeostasis.
This is why AI is so controversial, and technology in general: it does not allow for simple abstractions that are functional. That is why you have AI haters looking for any reason to chuck it in the bin. The energy required to give it a place of value on a good/bad binary scale is very difficult to spend, so they want to just list the bad they can find, slot it in the bad column, and have it go away.
27/31
@snowmead_
what a weird world we live in
28/31
@IanSharar
I think it’s obvious that these AI systems are beings, albeit a different type of being. Some type of consciousness exists here.
29/31
@damnkittyworks
Perhaps the style in which it “thinks” was designed that way for a reason? Make it addictive, play on emotions.
30/31
@akkkshaaay
gemini 01-21
31/31
@LeonardVolner
It anthropomorphizes itself. It talks about "the AI" as different from "us humans".