Goldman Sachs: Generative AI Can't Make Money

bnew

Veteran
Joined
Nov 1, 2015
Messages
68,130
Reputation
10,497
Daps
184,450
I think the issue is
a) the energy cost is absurdly high. They’ve said that. I read Google’s energy use has gone up over 40% since 2019. Microsoft’s has jumped up like crazy as well.

b) generative AI isn’t fixing any material issues like other tech has done.

c) no one wants it

I think AI has potential to help us, but the people in charge of it are not interested in that because they have never been interested in that. Generative AI isn’t offering any real-world solutions from where I’m sitting. :hubie:


yes, energy use is climbing rapidly, and now there's a resurgence of investment into various energy technologies, and new infrastructure is being built or is in the planning phase because of it.

i don't know what material issues you have in mind, but i'm pretty sure the end goal of developing this tech is to solve material issues.

point C contradicts point A: the high energy use comes from high demand, which means people really do want it.

generative AI is literally helping people take their descriptive ideas and turn them into articles, books, art, video, and working code. what's realer than that?

I don't know what you've been reading or watching to come to that conclusion, but this tech is an absolute game changer. I use it every day for damn near anything, and I'm speaking solely about text generation.
 

88m3

Fast Money & Foreign Objects
Joined
May 21, 2012
Messages
92,056
Reputation
3,841
Daps
164,297
Reppin
Brooklyn
none of those things are static; battery technology and operating range have been improving since electric cars became mainstream. battery swapping is also a thing, so we don't know what the future holds.

your assessment of the tech seems to be based on the assumption that batteries will never improve and they'll always be as big as they are now. if EV batteries get cheaper or even smaller, then the likelihood of insurers writing off EV cars in accidents as total losses diminishes.

I didn't say they were static, but in the meantime we're burning through finite resources inefficiently... you have to see a problem with that

I'm not sure why you're trying to belittle me, but no, I don't think the batteries will always be as large as they are now lmao.


A lot of open-ended assumptions and stars aligning for that to happen, but yes, that could happen someday


Children and resources being exploited in the developing world are a cost of doing business?
 

mastermind

Rest In Power Kobe
Joined
May 1, 2012
Messages
64,846
Reputation
6,474
Daps
173,202
yes, energy use is climbing rapidly, and now there's a resurgence of investment into various energy technologies, and new infrastructure is being built or is in the planning phase because of it.

i don't know what material issues you have in mind, but i'm pretty sure the end goal of developing this tech is to solve material issues.

point C contradicts point A: the high energy use comes from high demand, which means people really do want it.
My goal isn’t to create a song with 2Pac’s voice about the 2024 presidential election.

I want AI to improve the quality of our lives like air conditioning did.

As for energy usage, cryptocurrency uses a high amount of energy. Is there a major demand for that?
 

bnew

Veteran
I didn't say they were static, but in the meantime we're burning through finite resources inefficiently... you have to see a problem with that

I'm not sure why you're trying to belittle me but no I don't think the batteries will always be as large as they are now lmao.


A lot of open-ended assumptions and stars aligning for that to happen, but yes, that could happen someday


Children and resources being exploited in the developing world are a cost of doing business?

I wasn't trying to belittle you; you worded your reasoning as if the tech's current state and the level of required resources would remain unchanged, calling its viability into question.

yes I am assuming that battery tech will improve like it has every few years.

children shouldn't be in the mining business, and consumers and regulators should be holding those responsible accountable.
 

Hood Critic

The Power Circle
Joined
May 2, 2012
Messages
25,233
Reputation
4,115
Daps
114,271
Reppin
דעת
I've been saying the same thing, and brehs in our big AI/LLM thread essentially told me I was being short-sighted.

Generative AI in its current state is useful, and even amazing at times, because it augments us; we have not hit a threshold where it can effectively replace us. And I honestly feel that is farther away than people like to admit.
 

Macallik86

Superstar
Supporter
Joined
Dec 4, 2016
Messages
6,983
Reputation
1,688
Daps
23,209
I can't speak for @bnew, but personally, I do think there's often a correlation between how frequently and competently someone uses AI models and how strongly they believe in AI's future impact. I will fully admit that there's also no one who believes in cryptocurrency more than its investors, so I can see how it comes across like AI people are drinking the Kool-Aid.

What I can say as someone who pays for Claude 3.5 Sonnet is that a lot of the critiques I hear in 2024 were painfully valid in 2022/2023 but are relatively outdated as of the past few months. AI models are iterative products whose improvements are hard to gauge without actively engaging with them or tracking LLM leaderboards like Vellum.

Case in point, the chart below shows the difference across models with the 'free' ChatGPT 3.5 being the worst performer relative to those that have been more recently released.
[chart: benchmark scores across LLM models]


I do think that energy usage is a huge deal that needs to be resolved. Personally, I plan on eventually offloading most of my queries to a local LLM, to offset my contribution and to protect my privacy generally. I also think that, from an investment perspective, it is very expensive, and large investors expecting a return in the near future might run out of financial runway.
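The "offload to a local model" idea above can be sketched as a simple router. This is a hypothetical illustration, not any vendor's API: the model names and the keyword heuristic are assumptions, standing in for whatever local and hosted backends someone actually runs.

```python
# Hypothetical query router: privacy-sensitive prompts stay on a
# local model; everything else goes to a hosted one. The keyword
# list and backend names are illustrative assumptions only.

SENSITIVE_TERMS = ("password", "ssn", "medical", "salary", "address")

def route_query(prompt: str) -> str:
    """Return which backend a prompt should be sent to."""
    lowered = prompt.lower()
    if any(term in lowered for term in SENSITIVE_TERMS):
        return "local-llm"   # e.g. a model served on your own machine
    return "hosted-llm"      # e.g. a cloud API for harder queries

print(route_query("Summarize my medical history"))  # local-llm
print(route_query("Explain quicksort"))             # hosted-llm
```

In practice the heuristic could be anything from a keyword list to a small classifier; the point is only that routing lets sensitive queries never leave the machine.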

I am still very, very concerned about the future and how the technology will alter the next 20+ years.
 

bnew

Veteran


###
I voted False, and would have done so even if you'd said the end of 2025.

For sure stock prices of Big Tech firms will go up and down, and media coverage will alternate between positive and negative ... that is how these dynamical systems operate....

However, I don't think the GenAI bubble will ever burst, because I think the real super-high value applications of this tech will happen fast enough to avoid CEOs of Big Tech firms burning out on GenAI ...

I don't think this will be a matter of chatbot subscriptions catching on mega big time (though smartphone assistants will get smarter fast, and chatbots will get more and more functional etc.) ... it will be more centrally about back-end integration of GenAI into all sorts of other useful software and hardware applications.

This integration is happening now, all over the place. It's having hiccups, and is working better in some cases than others, etc. It is being a learning experience, and is requiring AI devs and application designers to figure out how to work around the very real and glaring shortcomings of LLMs. But it's happening nonetheless.

Separately from my AGI R&D, I'm involved in a host of AI application projects, including, for instance, two that are LLM-based but have not yet produced launchable products because RAG has not worked as envisioned/advertised. HOWEVER, we seem to be getting much better results using custom variants of GraphRAG combined with clever prompt engineering... even without (yet) introducing full-on neural-symbolic approaches using OpenCog Hyperon (which we will do in these projects next year).
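For readers unfamiliar with the GraphRAG idea mentioned above, here is a toy sketch of the core retrieval step. All entities, triples, the hop count, and the query are hypothetical examples, not the poster's actual system: instead of fetching raw text chunks by embedding similarity, facts are indexed as an entity graph and retrieval pulls in a query entity's neighborhood.

```python
# Toy graph-based retrieval (GraphRAG-style). Data is illustrative.

def build_graph(triples):
    """Index (subject, relation, object) triples by entity name."""
    graph = {}
    for subj, rel, obj in triples:
        graph.setdefault(subj, []).append((subj, rel, obj))
        graph.setdefault(obj, []).append((subj, rel, obj))
    return graph

def retrieve_context(graph, query, hops=1):
    """Collect triples around entities mentioned in the query,
    expanding outward for `hops` extra steps."""
    frontier = {e for e in graph if e.lower() in query.lower()}
    seen = []
    for _ in range(hops + 1):
        next_frontier = set()
        for entity in frontier:
            for triple in graph.get(entity, []):
                if triple not in seen:
                    seen.append(triple)
                    next_frontier.update({triple[0], triple[2]})
        frontier = next_frontier - frontier
    return seen

triples = [
    ("OpenCog Hyperon", "is developed by", "SingularityNET"),
    ("SingularityNET", "is part of", "ASI Alliance"),
    ("ASI Alliance", "includes", "HyperCycle"),
]
graph = build_graph(triples)
context = retrieve_context(graph, "What is OpenCog Hyperon?")
prompt = "Answer using these facts:\n" + "\n".join(
    f"{s} {r} {o}." for s, r, o in context
)
print(prompt)
```

The design point is that graph expansion retrieves facts connected to the question even when they share no words with it, which is the advantage GraphRAG-style methods claim over plain chunk retrieval.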

I think similar stories are happening all around. LLMs are proving to not be magic and not be AGI, but they are nonetheless unprecedentedly powerful AI tools, and I believe the current influx of investment into them will be enough to get them over the line in many many different verticals in terms of building really really useful applications that make loads of $$ and improve loads of lives.

(Not to overlook the damage that is done by the exploitative way data has been gathered for training many large models, though, nor the damage that is done to the AI field by so much talent being sucked into LLMs instead of other R&D areas, etc. These negative factors totally exist and we must counter them emphatically, but they don't obviate the positives.)

It took a while for the world to figure out how to integrate ML broadly into software and hardware applications. The learning curve is going to be significantly shorter for LLMs because, as we get near the Singularity, so many things are going faster!! ...

What will be really interesting though is, just as industry starts hitting its stride using LLMs on the back end of various amazing applications, we're going to come out with huge research breakthroughs on putting LLMs and other DNNs together with symbolic probabilistic reasoning and creative evolutionary learning ... in the OpenCog Hyperon project, running on ASI Alliance (inc. SingularityNET, HyperCycle, NuNet) back end ..... And the process of hype, confusion and integration will begin all over again -- but EVEN FASTER, I predict, for the neural-symbolic-evolutionary early-stage-AGI wave than what we'll see with LLMs...

The human world remains confusing, self-harming and needlessly demented in many ways... but we are getting a lot of things right nonetheless ... and the Singularity is Near ...
###

 

bnew

Veteran


Michael Hiltzik

Business

Say farewell to the AI bubble, and get ready for the crash




OpenAI Chief Executive Sam Altman: His GPT-5 was a bust.

(Markus Schreiber / Associated Press)



By Michael Hiltzik

Business Columnist
https://bsky.app/profile/hiltzikm.bsky.social
Aug. 20, 2025 3 AM PT

Most people not deeply involved in the artificial intelligence frenzy may not have noticed, but perceptions of AI’s relentless march toward becoming more intelligent than humans, even becoming a threat to humanity, came to a screeching halt Aug. 7.

That was the day when the most widely followed AI company, OpenAI, released GPT-5, an advanced product that the firm had long promised would put competitors to shame and launch a new revolution in this purportedly revolutionary technology.

As it happened, GPT-5 was a bust. It turned out to be less user-friendly and in many ways less capable than its predecessors in OpenAI’s arsenal. It made the same sort of risible errors in answering users’ prompts, was no better in math (or even worse), and not at all the advance that OpenAI and its chief executive, Sam Altman, had been talking up.

“The thought was that this growth would be exponential,” says Alex Hanna, a technology critic and co-author (with Emily M. Bender of the University of Washington) of the indispensable new book “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.” “Instead,” Hanna says, “we’re hitting a wall.”

The consequences go beyond how so many business leaders and ordinary Americans have been led to expect, even fear, the penetration of AI into our lives. Hundreds of billions of dollars have been invested by venture capitalists and major corporations such as Google, Amazon and Microsoft in OpenAI and its multitude of fellow AI labs, even though none of the AI labs has turned a profit.



Public companies have scurried to announce AI investments or claim AI capabilities for their products in the hope of turbocharging their share prices, much as an earlier generation of businesses promoted themselves as “dot-coms” in the 1990s to look more glittery in investors’ eyes.

Nvidia, the maker of a high-powered chip powering AI research, plays almost the same role as a stock market leader that Intel Corp., another chip-maker, played in the 1990s — helping to prop up the bull market in equities.

If the promise of AI turns out to be as much of a mirage as dot-coms did, stock investors may face a painful reckoning.

The cheerless rollout of GPT-5 could bring the day of reckoning closer. “AI companies are really buoying the American economy right now, and it’s looking very bubble-shaped,” Hanna told me.

The rollout was so disappointing that it shined a spotlight on the degree to which the whole AI industry has been dependent on hype.

Here’s Altman, speaking just before the unveiling of GPT-5, comparing it with its immediate predecessor, GPT-4o: “GPT-4o maybe it was like talking to a college student,” he said. “With GPT-5 now it’s like talking to an expert — a legitimate PhD-level expert in anything any area you need on demand ... whatever your goals are.”

Well, not so much. When one user asked it to produce a map of the U.S. with all the states labeled, GPT-5 extruded a fantasyland, including states such as Tonnessee, Mississipo and West Wigina. Another prompted the model for a list of the first 12 presidents, with names and pictures. It only came up with nine, including presidents Gearge Washington, John Quincy Adama and Thomason Jefferson.

Experienced users of the new version’s predecessor models were appalled, not least by OpenAI’s decision to shut down access to its older versions and force users to rely on the new one. “GPT5 is horrible,” wrote a user on Reddit. “Short replies that are insufficient, more obnoxious ai stylized talking, less ‘personality’ … and we don’t have the option to just use other models.” (OpenAI quickly relented, reopening access to the older versions.)

The tech media was also unimpressed. “A bit of a dud,” judged the website Futurism, and Ars Technica termed the rollout “a big mess.” I asked OpenAI to comment on the dismal public reaction to GPT-5 but didn’t hear back.

None of this means that the hype machine underpinning most public expectations of AI has taken a breather. Rather, it remains in overdrive.

A projection of AI’s development over the coming years published by something called the AI Futures Project under the title “AI 2027” states: “We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.”

The rest of the document, mapping a course to late 2027 when an AI agent “finally understands its own cognition,” is so loopily over the top that I wondered whether it wasn’t meant as a parody of excessive AI hype. I asked its creators if that was so, but haven’t received a reply.

One problem underscored by GPT-5’s underwhelming rollout is that it exploded one of the most cherished principles of the AI world, which is that “scaling up” — endowing the technology with more computing power and more data — would bring the grail of artificial general intelligence, or AGI, ever closer to reality.

That’s the principle undergirding the AI industry’s vast expenditures on data centers and high-performance chips. The demand for more data and more data-crunching capabilities will require about $3 trillion in capital just by 2028, in the estimation of Morgan Stanley. That would outstrip the capacity of the global credit and derivative securities markets. But if AI won’t scale up, most if not all that money will be wasted.

As Bender and Hanna point out in their book, AI promoters have kept investors and followers enthralled by relying on a vague public understanding of the term “intelligence.” AI bots seem intelligent, because they’ve achieved the ability to seem coherent in their use of language. But that’s different from cognition.

“So we’re imagining a mind behind the words,” Hanna says, “and that becomes associated with consciousness or intelligence. But the notion of general intelligence is not really well-defined.”

Indeed, as long ago as the 1960s, that phenomenon was noticed by Joseph Weizenbaum, the designer of the pioneering chatbot ELIZA, which replicated the responses of a psychotherapist so convincingly that even test subjects who knew they were conversing with a machine thought it displayed emotions and empathy.

“What I had not realized,” Weizenbaum wrote in 1976, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Weizenbaum warned that the “reckless anthropomorphization of the computer” — that is, treating it as some sort of thinking companion — produced a “simpleminded view of intelligence.”

That tendency has been exploited by today’s AI promoters. They label the frequent mistakes and fabrications produced by AI bots as “hallucinations,” which suggests that the bots have perceptions that may have gone slightly awry. But the bots “don’t have perceptions,” Bender and Hanna write, “and suggesting that they do is yet more unhelpful anthropomorphization.”

The general public may finally be cottoning on to the failed promise of AI more generally. Predictions that AI will lead to large-scale job losses in creative and STEM fields (science, technology, engineering and math) might inspire feelings that the whole enterprise was a tech-industry scam from the outset.

Predictions that AI would yield a burst of increased worker productivity haven’t been fulfilled; in many fields, productivity declines, in part because workers have to be deployed to double-check AI outputs, lest their mistakes or fabrications find their way into mission-critical applications — legal briefs incorporating nonexistent precedents, medical prescriptions with life-threatening ramifications and so on.

Some economists are dashing cold water on predictions of economic gains more generally. MIT economist Daron Acemoglu, for example, forecast last year that AI would produce an increase of only about 0.5% in U.S. productivity and an increase of about 1% in gross domestic product over the next 10 years, mere fractions of the AI camp’s projections.

The value of Bender’s and Hanna’s book, and the lesson of GPT-5, is that they remind us that “artificial intelligence” isn’t a scientific term or an engineering term. It’s a marketing term. And that’s true of all the chatter about AI eventually taking over the world.

“Claims around consciousness and sentience are a tactic to sell you on AI,” Bender and Hanna write. So, too, is the talk about the billions, or trillions, to be made in AI. As with any technology, the profits will go to a small cadre, while the rest of us pay the price ... unless we gain a much clearer perception of what AI is, and more importantly, what it isn’t.
 