Sam Altman is a habitual liar and can't be trusted, says former OpenAI board member

bnew
I still have roughly the same short-, medium- and long-term risk profiles. I still expect that on cybersecurity and bio stuff, 16 we’ll see serious, or potentially serious, short-term issues that need mitigation. Long term, as you think about a system that really just has incredible capability, there’s risks that are probably hard to precisely imagine and model. But I can simultaneously think that these risks are real and also believe that the only way to appropriately address them is to ship product and learn.

16 In September 2024, OpenAI acknowledged that its latest AI models have increased the risk of misuse in creating bioweapons. In May 2023, Altman joined hundreds of other signatories to a statement highlighting the existential risks posed by AI.

When it comes to the immediate future, the industry seems to have coalesced around three potential roadblocks to progress: scaling the models, chip scarcity and energy scarcity. I know they commingle, but can you rank those in terms of your concern?

We have a plan that I feel pretty good about on each category. Scaling the models, we continue to make technical progress, capability progress, safety progress, all together. I think 2025 will be an incredible year. Do you know this thing called the ARC-AGI challenge? Five years ago this group put together this prize as a North Star toward AGI. They wanted to make a benchmark that they thought would be really hard. The model we’re announcing on Friday 17 passed this benchmark. For five years it sat there, unsolved. It consists of problems like this. 18 They said if you can score 85% on this, we’re going to consider that a “pass.” And our system—with no custom work, just out of the box—got an 87.5%. 19 And we have very promising research and better models to come.

17 OpenAI introduced Model o3 on Dec. 20. It should be available to users in early 2025. The previous model was o1, but The Information reported that OpenAI skipped over o2 to avoid a potential conflict with British telecommunications provider O2.

18 On my laptop, Altman called up the ARC-AGI website, which displayed a series of bewildering abstract grids. The abstraction is the point; to “solve” the grids and achieve AGI, an AI model must rely more on reason than its training data.
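
For a flavor of what those grids demand, here is a tiny made-up ARC-style puzzle sketched in code (a hypothetical illustration, not an actual ARC-AGI task): the solver is shown a few input/output grid pairs and must infer the hidden transformation rule itself, rather than retrieve it from training data.

```python
# A made-up ARC-style puzzle (illustrative only, not a real ARC-AGI task).
# The solver sees a few input/output grid pairs and must infer the hidden
# rule for itself; here the rule is "mirror each row left-to-right".

def hidden_rule(grid):
    # The transformation the solver is expected to discover.
    return [row[::-1] for row in grid]

train_pairs = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),
    ([[5, 5, 0], [0, 7, 0]], [[0, 5, 5], [0, 7, 0]]),
]

# A proposed solution is judged purely on whether it reproduces the rule
# on held-out test grids, so memorized training data is of little help.
assert all(hidden_rule(x) == y for x, y in train_pairs)
print(hidden_rule([[4, 0, 9]]))  # -> [[9, 0, 4]]
```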

19 According to ARC-AGI: “OpenAI’s new o3 system—trained on the ARC-AGI-1 Public Training set—has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.”

We have been hard at work on the whole [chip] supply chain, all the partners. We have people to build data centers and make chips for us. We have our own chip effort here. We have a wonderful partnership with Nvidia, just an absolutely incredible company. And we’ll talk more about this next year, but now is the time for us to scale chips.

Nvidia Corp. CEO Jensen Huang speaking at an event in Tokyo on Nov. 13, 2024. Photographer: Kyodo/AP Images

So energy …

Fusion’s gonna work.

Fusion is going to work. Um. On what time frame?

Soon. Well, soon there will be a demonstration of net-gain fusion. You then have to build a system that doesn’t break. You have to scale it up. You have to figure out how to build a factory—build a lot of them—and you have to get regulatory approval. And that will take, you know, years altogether? But I would expect [Helion 20] will show you that fusion works soon.

20 Helion, a clean energy startup co-founded by Altman, Dustin Moskovitz and Reid Hoffman, focuses on developing nuclear fusion.

In the short term, is there any way to sustain AI’s growth without going backward on climate goals?

Yes, but none that is as good, in my opinion, as quickly permitting fusion reactors. I think our particular kind of fusion is such a beautiful approach that we should just race toward that and be done.

A lot of what you just said interacts with the government. We have a new president coming. You made a personal $1 million donation to the inaugural fund. Why?

He’s the president of the United States. I support any president.

I understand why it makes sense for OpenAI to be seen supporting a president who’s famous for keeping score of who’s supporting him, but this was a personal donation. Donald Trump opposes many of the things you’ve previously supported. Am I wrong to think the donation is less an act of patriotic conviction and more an act of fealty?

I don’t support everything that Trump does or says or thinks. I don’t support everything that Biden says or does or thinks. But I do support the United States of America, and I will work to the degree I’m able to with any president for the good of the country. And particularly for the good of what I think is this huge moment that has got to transcend any political issues. I think AGI will probably get developed during this president’s term, and getting that right seems really important. Supporting the inauguration, I think that’s a relatively small thing. I don’t view that as a big decision either way. But I do think we all should wish for the president’s success.

He’s said he hates the Chips Act. You supported the Chips Act.

I actually don’t. I think the Chips Act was better than doing nothing but not the thing that we should have done. And I think there’s a real opportunity to do something much better as a follow-on. I don’t think the Chips Act has been as effective as any of us hoped.

Trump and Musk talk ringside during the UFC 309 event at Madison Square Garden in New York on Nov. 16. Photographer: Chris Unger/Zuffa LLC

Elon 21 is clearly going to be playing some role in this administration. He’s suing you. He’s competing with you. I saw your comments at DealBook that you think he’s above using his position to engage in any funny business as it relates to AI.

I do think so.

21 C’mon, how many Elons do you know?

But if I may: In the past few years he bought Twitter, then sued to get out of buying Twitter. He replatformed Alex Jones. He challenged Zuckerberg to a cage match. That’s just kind of the tip of the funny-business iceberg. So do you really believe that he’s going to—

Oh, I think he’ll do all sorts of bad s---. I think he’ll continue to sue us and drop lawsuits and make new lawsuits and whatever else. He hasn’t challenged me to a cage match yet, but I don’t think he was that serious about it with Zuck, either, it turned out. As you pointed out, he says a lot of things, starts them, undoes them, gets sued, sues, gets in fights with the government, gets investigated by the government. That’s just Elon being Elon. The question was, will he abuse his political power of being co-president, or whatever he calls himself now, to mess with a business competitor? I don’t think he’ll do that. I genuinely don’t. May turn out to be proven wrong.

When the two of you were working together at your best, how would you describe what you each brought to the relationship?

Maybe like a complementary spirit. We don’t know exactly what this is going to be or what we’re going to do or how this is going to go, but we have a shared conviction that this is important, and this is the rough direction to push and how to course-correct.

I’m curious what the actual working relationship was like.

I don’t remember any big blowups with Elon until the fallout that led to the departure. But until then, for all of the stories—people talk about how he berates people and blows up and whatever, I hadn’t experienced that.

Are you surprised by how much capital he’s been able to raise, specifically from the Middle East, for xAI?

No. No. They have a lot of capital. It’s the industry people want. Elon is Elon.

Let’s presume you’re right and there’s positive intent from Elon and the administration. What’s the most helpful thing the Trump administration can do for AI in 2025?

US-built infrastructure and lots of it. The thing I really deeply agree with the president on is, it is wild how difficult it has become to build things in the United States. Power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it’s not helpful to the country in general. It’s particularly not helpful when you think about what needs to happen for the US to lead AI. And the US really needs to lead AI.
 

bnew



Reflections


The second birthday of ChatGPT was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning. New years get people in a reflective mood, and I wanted to share some personal thoughts about how it has gone so far, and some of the things I’ve learned along the way.

As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started.

We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history. We wanted to figure out how to build it and make it broadly beneficial; we were excited to try to make our mark on history. Our ambitions were extraordinarily high and so was our belief that the work might benefit society in an equally extraordinary way.

At the time, very few people cared, and if they did, it was mostly because they thought we had no chance of success.

In 2022, OpenAI was a quiet research lab working on something temporarily called “Chat With GPT-3.5”. (We are much better at research than we are at naming things.) We had been watching people use the playground feature of our API and knew that developers were really enjoying talking to the model. We thought building a demo around that experience would show people something important about the future and help us make our models better and safer.

We ended up mercifully calling it ChatGPT instead, and launched it on November 30th of 2022.

We always knew, abstractly, that at some point we would hit a tipping point and the AI revolution would get kicked off. But we didn’t know what the moment would be. To our surprise, it turned out to be this.

The launch of ChatGPT kicked off a growth curve like nothing we have ever seen—in our company, our industry, and the world broadly. We are finally seeing some of the massive upside we have always hoped for from AI, and we can see how much more will come soon.



It hasn’t been easy. The road hasn’t been smooth and the right choices haven’t been obvious.

In the last two years, we had to build an entire company, almost from scratch, around this new technology. There is no way to train people for this except by doing it, and when the technology category is completely new, there is no one at all who can tell you exactly how it should be done.

Building up a company at such high velocity with so little training is a messy process. It’s often two steps forward, one step back (and sometimes, one step forward and two steps back). Mistakes get corrected as you go along, but there aren’t really any handbooks or guideposts when you’re doing original work. Moving at speed in uncharted waters is an incredible experience, but it is also immensely stressful for all the players. Conflicts and misunderstanding abound.

These years have been the most rewarding, fun, best, interesting, exhausting, stressful, and—particularly for the last two—unpleasant years of my life so far. The overwhelming feeling is gratitude; I know that someday I’ll be retired at our ranch watching the plants grow, a little bored, and will think back at how cool it was that I got to do the work I dreamed of since I was a little kid. I try to remember that on any given Friday, when seven things go badly wrong by 1 pm.



A little over a year ago, on one particular Friday, the main thing that had gone wrong that day was that I got fired by surprise on a video call, and then right after we hung up the board published a blog post about it. I was in a hotel room in Las Vegas. It felt, to a degree that is almost impossible to explain, like a dream gone wrong.

Getting fired in public with no warning kicked off a really crazy few hours, and a pretty crazy few days. The “fog of war” was the strangest part. None of us were able to get satisfactory answers about what had happened, or why.

The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included. Looking back, I certainly wish I had done things differently, and I’d like to believe I’m a better, more thoughtful leader today than I was a year ago.

I also learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges. Good governance requires a lot of trust and credibility. I appreciate the way so many people worked together to build a stronger system of governance for OpenAI that enables us to pursue our mission of ensuring that AGI benefits all of humanity.

My biggest takeaway is how much I have to be thankful for and how many people I owe gratitude towards: to everyone who works at OpenAI and has chosen to spend their time and effort going after this dream, to friends who helped us get through the crisis moments, to our partners and customers who supported us and entrusted us to enable their success, and to the people in my life who showed me how much they cared. [1]

We all got back to the work in a more cohesive and positive way and I’m very proud of our focus since then. We have done what is easily some of our best research ever. We grew from about 100 million weekly active users to more than 300 million. Most of all, we have continued to put technology out into the world that people genuinely seem to love and that solves real problems.



Nine years ago, we really had no idea what we were eventually going to become; even now, we only sort of know. AI development has taken many twists and turns and we expect more in the future.

Some of the twists have been joyful; some have been hard. It’s been fun watching a steady stream of research miracles occur, and a lot of naysayers have become true believers. We’ve also seen some colleagues split off and become competitors. Teams tend to turn over as they scale, and OpenAI scales really fast. I think some of this is unavoidable—startups usually see a lot of turnover at each new major level of scale, and at OpenAI numbers go up by orders of magnitude every few months. The last two years have been like a decade at a normal company. When any company grows and evolves so fast, interests naturally diverge. And when any company in an important industry is in the lead, lots of people attack it for all sorts of reasons, especially when they are trying to compete with it.

Our vision won’t change; our tactics will continue to evolve. For example, when we started we had no idea we would have to build a product company; we thought we were just going to do great research. We also had no idea we would need such a crazy amount of capital. There are new things we have to go build now that we didn’t understand a few years ago, and there will be new things in the future we can barely imagine now.

We are proud of our track-record on research and deployment so far, and are committed to continuing to advance our thinking on safety and benefits sharing. We continue to believe that the best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology, learning from experience, and continuing to make the technology safer. We believe in the importance of being world leaders on safety and alignment research, and in guiding that research with feedback from real world applications.

We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company.

How lucky and humbling it is to be able to play a role in this work.

(Thanks to Josh Tyrangiel for sort of prompting this. I wish we had had a lot more time.)

[1]

There were a lot of people who did incredible and gigantic amounts of work to help OpenAI, and me personally, during those few days, but two people stood out from all others.

Ron Conway and Brian Chesky went so far above and beyond the call of duty that I’m not even sure how to describe it. I’ve of course heard stories about Ron’s ability and tenaciousness for years and I’ve spent a lot of time with Brian over the past couple of years getting a huge amount of help and advice.

But there’s nothing quite like being in the foxhole with people to see what they can really do. I am reasonably confident OpenAI would have fallen apart without their help; they worked around the clock for days until things were done.

Although they worked unbelievably hard, they stayed calm and had clear strategic thought and great advice throughout. They stopped me from making several mistakes and made none themselves. They used their vast networks for everything needed and were able to navigate many complex situations. And I’m sure they did a lot of things I don’t know about.

What I will remember most, though, is their care, compassion, and support.

I thought I knew what it looked like to support a founder and a company, and in some small sense I did. But I have never before seen, or even heard of, anything like what these guys did, and now I get more fully why they have the legendary status they do. They are different and both fully deserve their genuinely unique reputations, but they are similar in their remarkable ability to move mountains and help, and in their unwavering commitment in times of need. The tech industry is far better off for having both of them in it.

There are others like them; it is an amazingly special thing about our industry and does much more to make it all work than people realize. I look forward to paying it forward.

On a more personal note, thanks especially to Ollie for his support that weekend and always; he is incredible in every way and no one could ask for a better partner.
 

bnew


1/11
Gary Marcus

Deep Research, Deep Bullshyt, and the potential (model) collapse of science

How Sam Altman’s hype might just bite us all in the behind:

https://open.substack.com/pub/garymarcus/p/deep-research-deep-bullshyt-and-the



2/11
‪Jerome Etienne‬ ‪@jerome-etienne.bsky.social‬

Gary Marcus (@garymarcus.bsky.social) Really appreciate your thoughts on this! It’s crazy how hype can shape science, right? Makes you wonder what we can actually trust. Gonna check out your article now!

3/11
‪Rob Nelson‬ ‪@ailogblog.bsky.social‬

Love that you evoke Frankfurt here. I put the link in your already crowded comments on Substack, but want you to know the original essay is available online as a PDF at https://raritanquarterly.rutgers.edu/issue-index/all-articles/560-on-bullshyt

4/11
‪Gary Marcus‬ ‪@garymarcus.bsky.social‬

I added a quote at top :smile:

5/11
‪André Winter‬ ‪@andrewinter.bsky.social‬

Combined with the attack on science by Trump this means a huge brain drain for the US in the coming years.

6/11
‪Rebecca L‬ ‪@rlen10.bsky.social‬

Gary Marcus (@garymarcus.bsky.social) I'd love to hear your take on this reporting:

‘Things Are Going to Get Intense:’ How a Musk Ally Plans to Push AI on the Government

bafkreib5gpksezdqgjhkkehofcgvj3yteexxhds3txho2fo3sfiiev647e@jpeg


7/11
‪tonyk3ss.bsky.social‬ ‪@tonyk3ss.bsky.social‬

I asked this thing about cod genomics and it got the genome size wrong by a factor of 2 in the first sentence.

8/11
‪theveganheathen.bsky.social‬ ‪@theveganheathen.bsky.social‬

This reminds me—I was searching for information about a restaurant the other day on Google & it appeared to be generating AI pages—yet it was on their official website + was coherent, & it tried to answer my Google question. It didn't answer anything though—as it had no knowledge of anything specific

9/11
‪theveganheathen.bsky.social‬ ‪@theveganheathen.bsky.social‬

Really good article — I hope somehow people recognize this issue and prevent it — though it’s definitely hard not to be cynical about that stuff. Glad you’re trying to spread the word though at least. I wonder what else can be done to prevent these problems before they’re a larger issue

10/11
‪José Marichal‬ ‪@josemarichal.bsky.social‬

The "filling academic journals with sludge" fear is real....

11/11
‪Gnarlygeek (he/him) aka Tom‬ ‪@gnarlygeek.bsky.social‬

Well, I have to say that Microsoft's CoPilot in Outlook did a fair job of summarizing the article. What do you think?



 

bnew
https://garymarcus.substack.com/p/deep-research-deep-bullshyt-and-the



Deep Research, Deep Bullshyt, and the potential (model) collapse of science


Sam Altman’s hype might just bite us all in the behind


Gary Marcus

Feb 03, 2025

As Harry Frankfurt observed, “The liar cares about the truth and attempts to hide it; the bullshytter doesn't care if what he or she says is true or false, but cares only whether the listener is persuaded.”

§

Anyone remember Meta’s Galactica, in the good old days, in mid-November 2022, just a few days before ChatGPT dropped? It was a train wreck, writing perfectly well-formed articles on utter gibberish. A personal favorite, elicited by Tristan Greene in the few short days before Meta withdrew it, was this one:

I have good news, for those who like that sort of garbage. Sam Altman just released a fancier remake, Deep Research, which, like Meta’s predecessor, can write science-sounding articles on demand, on any topic of your choice.

Influencers were quick to hype it, with a barrage of statements like this one, which was retweeted by Sam Altman himself:







A few hours later, Unutmaz followed up with more gushing:

And the lemmings ate it up. A hundred thousand views for the latter post alone; over a third of a million for the first. We can be sure that Deep Research will get heavily used.

But that may not be a good thing.

§

Time will tell, but I still feel that one of my most prescient essays was the one posted almost exactly two years ago to the day, in which I warned that generative AI might undermine Google’s business model by polluting the internet with garbage, entitled What Google Should Really be Worried About.

The basic conjecture then, which has already been confirmed to some degree, was that LLMs would be used to write garbage content, undermining the value of the internet itself.

At the time, I warned that

Cesspools of automatically-generated fake websites, rather than ChatGPT search, may ultimately come to be the single biggest threat that Google ever faces. After all, if users are left sifting through sewers full of useless misinformation, the value of search would go to zero—potentially killing the company.

For the company that invented Transformers—the major technical advance underlying the large language model revolution—that would be a strange irony indeed.

§

Deep Research, because it works faster and expands the reach of what can be automatically generated, is going to make that problem worse. Scammers will use it, for example, to write “medical reports” that are half-baked (in order to sell ads on the websites that report such “research”).

But it is not just scammers; the larger problem may be naive people who believe that the outputs are legit, or who use it as a shortcut for writing journal articles, missing the errors it produces.

§

There aren’t a lot of public samples yet of Deep Research’s output, but one essay that has already circulated, on tariffs, “Revisiting the McKinley Tariff of 1890 through the Lens of Modern Trade Theory”, was initially praised.

On inspection, though, it was not so great. There’s a Reddit thread about this already:

Someone on X absolutely nailed what I think is the real risk here:

The thing about using “reasoning” to “synthesize large amounts of online information and complete multi-step research” is that you want to get it right, reliably.

And that’s just not realistic given the current state of AI. Indeed, OpenAI themselves acknowledge this in the fine print.

It can sometimes hallucinate facts in responses or make incorrect inferences, though at a notably lower rate than existing ChatGPT models, according to internal evaluations. It may struggle with distinguishing authoritative information from rumors, and currently shows weakness in confidence calibration, often failing to convey uncertainty accurately.

Same old, same old.

§

Unfortunately, virtually everything that Deep Research produces will pass an LGTM test (“looks good to me”); people will assume it’s legit, even when it isn’t. Few people will fact-check its output carefully, since it will all superficially look OK. Students will quickly learn that every term paper it writes gets an A.

Scientific researchers will start to use it, too (as Unutmaz recommended). And inevitably errors will seep into the scientific literature, often undetected.

And once they are there, they won’t go away.

§

“Model collapse”, aka “model autophagy disorder”, is what happens when AI models chase their own tails.
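
To make the feedback loop concrete, here is a minimal toy sketch (not from Marcus’s post, just an illustration): each “generation” of a model is fit only to samples drawn from the previous generation, and the fitted distribution drifts and, on average, loses spread.

```python
import random
import statistics

# Toy illustration of model collapse: generation 0 is "trained" on real
# data (a standard Gaussian); every later generation is fit only to
# synthetic samples drawn from the generation before it. The estimated
# parameters drift, and the spread tends to shrink over time -- a toy
# analogue of models losing diversity when trained on model output.
mu, sigma = 0.0, 1.0
for generation in range(1, 41):
    synthetic = [random.gauss(mu, sigma) for _ in range(25)]
    mu = statistics.fmean(synthetic)
    sigma = statistics.stdev(synthetic)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```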

The output of Deep Research will inevitably feed further models; we will wind up with more of this:

And it’s not just the models that will be infected, but the scientific literature itself.

Because Deep Research lowers the effort, making it trivial to write legit-sounding articles, paper mills (there is already such a thing, designed to game citation counts, “an illegitimate, yet lucrative business” that is growing) will massively ramp up production, choking the peer review process and saturating the market with hard-to-find, hard-to-rectify errors.

When — not if — Deep Research begins to flood the medical zone with shyte, we are all in trouble.
 

bnew



Sam Altman: OpenAI has been on the ‘wrong side of history’ concerning open source


Kyle Wiggers

3:34 PM PST · January 31, 2025



To cap off a day of product releases, OpenAI researchers, engineers, and executives, including OpenAI CEO Sam Altman, answered questions in a wide-ranging Reddit AMA on Friday.

OpenAI finds itself in a bit of a precarious position. It’s battling the perception that it’s ceding ground in the AI race to Chinese companies like DeepSeek, which OpenAI alleges might’ve stolen its IP. The ChatGPT maker has been trying to shore up its relationship with Washington and simultaneously pursue an ambitious data center project, while reportedly laying groundwork for one of the largest financing rounds in history.

Altman admitted that DeepSeek has lessened OpenAI’s lead in AI, and he said he believes OpenAI has been “on the wrong side of history” when it comes to open sourcing its technologies. While OpenAI has open sourced models in the past, the company has generally favored a proprietary, closed source development approach.

“[I personally think we need to] figure out a different open source strategy,” Altman said. “Not everyone at OpenAI shares this view, and it’s also not our current highest priority … We will produce better models [going forward], but we will maintain less of a lead than we did in previous years.”

In a follow-up reply, Kevin Weil, OpenAI’s chief product officer, said that OpenAI is considering open sourcing older models that aren’t state-of-the-art anymore. “We’ll definitely think about doing more of this,” he said, without going into greater detail.

Beyond prompting OpenAI to reconsider its release philosophy, Altman said that DeepSeek has pushed the company to potentially reveal more about how its so-called reasoning models, like the o3-mini model released today, show their “thought process.” Currently, OpenAI’s models conceal their reasoning, a strategy intended to prevent competitors from scraping training data for their own models. In contrast, DeepSeek’s reasoning model, R1, shows its full chain of thought.

“We’re working on showing a bunch more than we show today — [showing the model thought process] will be very very soon,” Weil added. “TBD on all — showing all chain of thought leads to competitive distillation, but we also know people (at least power users) want it, so we’ll find the right way to balance it.”

Altman and Weil attempted to dispel rumors that ChatGPT, the chatbot platform through which OpenAI launches many of its models, would increase in price in the future. Altman said that he’d like to make ChatGPT “cheaper” over time, if feasible.

Altman previously said that OpenAI was losing money on its priciest ChatGPT plan, ChatGPT Pro, which costs $200 per month.

In a somewhat related thread, Weil said that OpenAI continues to see evidence that more compute power leads to “better” and more performant models. That’s in large part what’s necessitating projects such as Stargate, OpenAI’s recently announced massive data center project, Weil said. Serving a growing user base is fueling compute demand within OpenAI as well, he continued.

Asked about recursive self-improvement that might be enabled by these powerful models, Altman said he thinks a “fast takeoff” is more plausible than he once believed. Recursive self-improvement is a process where an AI system could improve its own intelligence and capabilities without human input.

Of course, it’s worth noting that Altman is notorious for overpromising. It wasn’t long ago that he lowered OpenAI’s bar for AGI.

One Reddit user asked whether OpenAI’s models, self-improving or not, would be used to develop destructive weapons — specifically nuclear weapons. This week, OpenAI announced a partnership with the U.S. government to give its models to the U.S. National Laboratories in part for nuclear defense research.

Weil said he trusted the government.

“I’ve gotten to know these scientists and they are AI experts in addition to world class researchers,” he said. “They understand the power and the limits of the models, and I don’t think there’s any chance they just YOLO some model output into a nuclear calculation. They’re smart and evidence-based and they do a lot of experimentation and data work to validate all their work.”

The OpenAI team was asked several questions of a more technical nature, like when OpenAI’s next reasoning model, o3, will be released (“more than a few weeks, less than a few months,” Altman said); when the company’s next flagship “non-reasoning” model, GPT-5, might land (“don’t have a timeline yet,” said Altman); and when OpenAI might unveil a successor to DALL-E 3, the company’s image-generating model. DALL-E 3, which was released around two years ago, has gotten rather long in the tooth. Image-generation tech has improved by leaps and bounds since DALL-E 3’s debut, and the model is no longer competitive on a number of benchmark tests.

“Yes! We’re working on it,” Weil said of a DALL-E 3 follow-up. “And I think it’s going to be worth the wait.”
 

bnew

1/6
@ai_for_success
I repeat world is not ready....

The most terrifying paragraph from Sam Altman's new blog, Three Observations:

"But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today."

[Quoted tweet]
So we finally have a definition of AGI from OpenAI and Sam Altman:

"AGI is a weakly defined term, but generally speaking, we mean it to be a system that can tackle increasingly complex problems at a human level in many fields."

Can we call this AGI ??

Sam Altman Three Observations.




2/6
@airesearchtools
What is the time frame we’re talking about? Assuming superintelligence is achieved, will we allow machines to make decisions? If humans remain the decision-makers, there will still be people working because those decisions will need to be made. Moreover, as the tasks we currently perform become simpler, we will likely have to take on even more complex decisions.

And when it comes to manual labor, will robots handle it? Will each of us have a personal robot? How long would it take to produce 8 billion robots? The truth is, I struggle to clearly visualize that future. And when I try, I can’t help but think of sci-fi movies and books where humans aren’t exactly idle.



3/6
@victor_explore
we're all pretending to be ready while secretly googling "how to survive the ai apocalypse" at 3am



4/6
@patrickDurusau
Well, yes and no. Imagine Sam was writing about power looms or the soon to be invented cotton gin.
He phrases it in terms of human intelligence but it's more akin to a mechanical calculator or printer.
VCs will be poorer and our jobs will change, but we'll learn new ones.



5/6
@MillenniumTwain
Public Sector 'AI' is already more than Two Decades behind Private/Covert sector << AGI >>, and all Big (Fraud) Tech is doing is accelerating the Dumb-Down of our Victim, Slave, Consumer US Public, and World!

[Quoted tweet]
"Still be Hidden behind Closed Doors"? Thanks to these Covert Actors (Microsoft, OpenAI, the NSA, ad Infinitum) — More and More is Being Hidden behind Closed Doors every day! The ONLY 'forward' motion being their exponentially-accelerated Big Tech/Wall Street HYPE, Fraud, DisInfo ...




6/6
@MaktabiJr
What will be the currency in that world? What’s the price of things in that world? Or agi will decide for us how to live equally? Giving each human equal credit









1/9
@ai_for_success
"Anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025." – Sam Altman

The next 10 years will be the most exciting, even if this statement holds true for just 10%.





2/9
@jairodri
Funny how my goats figured out AI-level problem solving years ago - they've been outsmarting my fence systems since before ChatGPT was cool 😅



3/9
@victor_explore
the greatest wealth transfer won't be in dollars, but in cognitive capabilities going from the few to the many



4/9
@bookwormengr
Sama has the writing style of people who are celebrated as messengers of god.

Never bet against Sama, he has delivered.



5/9
@CloudNexgen
Absolutely! The progress we'll see in the next decade is bound to be phenomenal 🤯



6/9
@tomlikestocode
Even if we achieve just a fraction of this, the impact would be profound. The real challenge is ensuring it benefits everyone equitably.



7/9
@MaktabiJr
I think he already reached that. Whatever he’s developing is something that knows how to control all of this



8/9
@Lux118736073602
Mark my words it will happen a lot sooner than in the year 2035 😎



9/9
@JohJac7
AI is an integral part of the theory of everything






 

bnew
1/35
🇺 markruffalo.bsky.social
OpenAI started as a non-profit, funded by tax-deductible donations to make AI benefit everyone. Now, they’re shifting to for-profit, putting profits over public safety.

Tell OpenAI’s board and Attorneys General this isn’t OK by signing the petition ⤵️

Protect OpenAI's non-profit mission

2/35
🇺 maha-elshaer18.bsky.social
Hello, dear ❤️🫂
I’m Maha from Gaza. I lost my father, and my sick mother needs continuous treatment. We have no provider just my mother, my little brother, and me.We live in a fabric tent with no income. Please help us. I want to fulfill my father’s dream by studying engineering.🙏💔
gofund.me/2790a0f8


3/35
🇺 bubblesthebard.bsky.social
No, outlaw all AI trash right now

4/35
🇺 jeffhyde2284.bsky.social
Open AI takes away jobs from Voice Actors, Animators, Writers & Creators of shows proves why we hate OpenAI and yet Musk is boosting them in every way so more people are out of work and have to live the life of what America was back in the 1600s to mid 1800s, disgusting.

5/35
🇺 caseyadamsstark.bsky.social
Not surprised. The problem is that I’m not willing to support OpenAI in any capacity whatsoever.

6/35
🇺 steveknapp.bsky.social
Ours is a system of corporate socialism, where companies capitalize their profits and socialize their losses…in effect, they tax you for their accidents, bungling, boondoggles, and mismanagement, just like a government. We should be able to deselect them.
Ralph Nader

7/35
🇺 velvethydrangea.bsky.social
OpenAI steals from people. He has said if it can't use copyrighted materials *for free,* OpenAI can't exist. So I vote we let it die.

8/35
🇺 magnusxander.bsky.social
This whole AI thing is a scam. It’s nothing more than funneling investor money into the pockets of tech companies that continue to overpromise AI’s capabilities. They lose billions of dollars while wasting our resources.

9/35
🇺 the-dancing-vorlon.bsky.social
Yeah, OpenAI implies open source, but it's not. OpenAI is a fraud and it has been from the beginning. If it's not open source it's not going to be used morally and it will fail. It's just a matter of time.

10/35
🇺 ctempleart.bsky.social
@markruffalo.bsky.social -AI has stolen countless artwork from many hardworking artists and illustrators in its development. And now, replaces us in future jobs. As a fellow creative, you should realize how problematic that is.

11/35
🇺 classicjanvier.bsky.social
Or we could just stop using AI? might be an option

12/35
🇺 sunshineincal.bsky.social
Sure get those tax deductible donations in before becoming for profit. Sounds like welfare to me. Just like tesla getting govt funding and now musk is buying elections

13/35
🇺 gilliesracer.bsky.social
Gosh, they're shifting to for-profit? How surprising!

14/35
🇺 ryancrashburn.bsky.social
the company is run by a psychopath... it just needs to be stopped

15/35
🇺 theweaseltimeline.bsky.social
How predictably American. And now with no government watchdogs it’s about to get weird folks. Real real weird.

16/35
🇺 hellaratty.bsky.social
They are part of the techno oligarchs. The ONLY thing these people really care about is money. They don’t care about humanity, life on earth, climate, nothing. ONLY money and cruelty.

17/35
🇺 harnish.bsky.social
Business c*nts are gonna business c*nt.

18/35
🇺 pam327.bsky.social
Don’t you imagine this was the plan all along? Like Elon coming to the rescue in Western North Carolina with “free” Starlink, only to start charging people $120/month a few months later? I could be wrong, but in business I believe this is called a “loss leader” or something like that.

19/35
🇺 joebazz.bsky.social
They need to compensate everyone who has contributed if they plan to privatize for profit all of our hard work

20/35
🇺 grygaard.bsky.social
bsky.app/profile/dgol...

21/35
🇺 dgoldman.bsky.social
This is why we need DAOs. They're much harder to coopt and their focus better matches that of the stakeholders in general.

22/35
🇺 grygaard.bsky.social
I'm glad people not social democrat also can see UBI as a solution to many problems incl. AI automation speeding up.
Also; debt bad-austerity necessary, builds on faulty research. Introduce UBI like UK did new social security reform; choose it individually. No. art. superrealisten.blogspot.com?m=1

23/35
🇺 chaostosurreal.bsky.social
Nice work in Mickey 17 BTW...

24/35
🇺 davekornfield.bsky.social
Mark, they were never a non profit in anything but name.

25/35
🇺 mulvihill.bsky.social
They are attempting to cash-in on their white elephant before it shuts nothing but disappointment.

Big bubble about to blow-up and make a massive mess.

26/35
🇺 andrewrockwell.bsky.social
What? Socialized loss and privatized profits? Who could have foreseen this???

27/35
🇺 hsawaknow.bsky.social
Or just use DeepSeek because it’s 100 times better

28/35
🇺 inkyquillwarts.bsky.social
Where are all the good tech bros and the billionaires??? Asking for the 1% who are trapped in their monopolies and are slowly being ruled by all of them.

29/35
🇺 kaitojd.bsky.social
I am using more and more Gemini ...

30/35
🇺 inkyquillwarts.bsky.social
Google isn’t any better, if anything, they are worse.

31/35
🇺 kaitojd.bsky.social
At this stage we are choosing between Plague and Cholera

32/35
🇺 inkyquillwarts.bsky.social
Basically. I’m opting out of all ai at the minute though lol. They are all fckd and the amount of energy and water they use just to answer a simple question is bonkers!

33/35
🇺 jeanie31199.bsky.social
You surely aren’t surprised? They’ve made it essentially illegal to be homeless, but are trying to ensure that you can’t feed kids they want to force women to birth, but won’t be able to feed, clothe, or educate. Cutting VA, Medicaid, job & food safety, dismantling everything.

34/35
🇺 wtfdor.bsky.social


35/35
🇺 suhails.bsky.social
Why aren’t all natural monopolies non-profits? Sure, I am willing to argue the point but they will agree that rent seeking is anti capitalist.

 