Goldman Sachs: Generative AI Can't Make Money

mastermind

Rest In Power Kobe

We first speak with Daron Acemoglu, Institute Professor at MIT, who’s skeptical. He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn’t take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won’t occur nearly as quickly—or be nearly as impressive—as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are “not a law of nature.” So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.

Many people attempt to compare AI today to the early days of the internet. But even in its infancy, the internet was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions. Amazon could sell books at a lower cost than Barnes & Noble because it didn’t have to maintain costly brick-and-mortar locations. Fast forward three decades, and Web 2.0 is still providing cheaper solutions that are disrupting more expensive solutions, such as Uber displacing limousine services. While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.

The report also says that replacing the entire US power grid is the only way to run AI effectively.


You can read the report here:
 

bnew

Veteran


Goldman Sachs: ‘Limited Economic Upside From AI’

Written by Matt Milano


Monday, July 8, 2024


Goldman Sachs is throwing cold water on the artificial intelligence industry, saying its ability to solve problems doesn’t justify its cost.

Companies large and small are investing billions in generative AI, with the financial firm saying those investments will top $1 trillion in the coming years. Despite the heavy cost, however, Goldman Sachs is warning in a report (via the Internet Archive) that “the technology isn’t designed to solve the complex problems that would justify the costs, which may not decline as many expect.”

Much of the issue stems from the high cost and the limited potential for AI to be capable enough to be cost-effective any time in the near future, as Goldman Sachs points out:

We first speak with Daron Acemoglu, Institute Professor at MIT, who’s skeptical. He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn’t take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won’t occur nearly as quickly—or be nearly as impressive—as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are “not a law of nature.” So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.
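A quick back-of-the-envelope check of what those figures imply (our own arithmetic, not the report's; the 20% exposure share below is an assumption for illustration, while the 25% and "less than 5%" come from the quote):

```python
# Illustrative reading of Acemoglu's estimate.
exposed_share = 0.20               # assumed share of all tasks that are AI-exposed
cost_effective_fraction = 0.25     # a quarter of exposed tasks, per the quote

affected_share = exposed_share * cost_effective_fraction
print(f"{affected_share:.0%} of all tasks")   # ~5%, consistent with "less than 5% of all tasks"
```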

GS Head of Global Equity Research Jim Covello was even more pessimistic, contrasting AI’s high cost to other industry-disrupting technologies that started out far more cost-effective:

Many people attempt to compare AI today to the early days of the internet. But even in its infancy, the internet was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions. Amazon could sell books at a lower cost than Barnes & Noble because it didn’t have to maintain costly brick-and-mortar locations. Fast forward three decades, and Web 2.0 is still providing cheaper solutions that are disrupting more expensive solutions, such as Uber displacing limousine services. While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.

Covello goes on to tell Goldman Sachs that he’s skeptical that AI can achieve the transformative effect that the internet, cell phones, and laptops have achieved.

More broadly, people generally substantially overestimate what the technology is capable of today. In our experience, even basic summarization tasks often yield illegible and nonsensical results. This is not a matter of just some tweaks being required here and there; despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks. And I struggle to believe that the technology will ever achieve the cognitive reasoning required to substantially augment or replace human interactions. Humans add the most value to complex tasks by identifying and understanding outliers and nuance in a way that it is difficult to imagine a model trained on historical data would ever be able to do.

Others—such as GS senior global economist Joseph Briggs and internet analyst Eric Sheridan—were slightly more optimistic. Briggs sees AI automating as much as 25% of work in the next decade, and raising US productivity by 9% and GDP by 6.1%.

Ultimately, the industries that support AI may be the ones that benefit most from it before a potential bust, according to Goldman Sachs:

Although Covello believes AI’s fundamental story is unlikely to hold up, he cautions that the AI bubble could take a long time to burst, with the “picks and shovels” AI infrastructure providers continuing to benefit in the meantime. GS senior US equity strategist Ryan Hammond also sees more room for the AI theme to run and expects AI beneficiaries to broaden out beyond just Nvidia, and particularly to what looks set to be the next big winner: Utilities.

That said, looking at the bigger picture, GS senior multi-asset strategist Christian Mueller-Glissmann finds that only the most favorable AI scenario, in which AI significantly boosts trend growth and corporate profitability without raising inflation, would result in above-average long-term S&P 500 returns, making AI’s ability to deliver on its oft-touted potential even more crucial.

Goldman Sachs’ full report is well worth a read and illustrates in detail the challenges facing the AI industry. As the report highlights, companies will need to demonstrate that AI can deliver on its promise if the vast investments are to be justified. Otherwise, utilities and chip companies may be the only industries to see a return that’s worth it.
 

No1

Retired.
Supporter

bnew

Veteran
While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.

but they are designing them to solve complex problems; that's literally the end goal via AGI, etc. :skip:

they have to be able to get the fundamentals right in general models first.
 

88m3

Fast Money & Foreign Objects
another resource-sucking grift like electric cars

:mjlol:
 

bnew

Veteran
We first speak with Daron Acemoglu, Institute Professor at MIT, who’s skeptical. He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn’t take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won’t occur nearly as quickly—or be nearly as impressive—as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are “not a law of nature.” So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.
only 25% of these jobs will be cost-effective to use AI for within 10 years? :rudy:

data entry, bookkeeping, accounting, customer service, telemarketing, transcription, translation, medical diagnosis, image analysis, manufacturing, assembly line work, self-driving cars, trucking, delivery, warehouse management, inventory management, supply chain management, administrative tasks, scheduling, email management, data processing, data analysis, reporting, bookkeeping, accounting, financial analysis, financial planning, tax preparation, auditing, compliance, legal research, document review, contract review, content moderation, social media management, marketing automation, lead generation, sales forecasting, customer segmentation, chatbots, virtual assistants, language translation, content creation, writing, editing, proofreading, research, data scraping, web development, software testing, quality assurance, network administration, cybersecurity, vulnerability assessment, penetration testing, predictive maintenance, quality control, inspection, monitoring, surveillance, healthcare diagnosis, medical imaging, clinical decision support, medical research, pharmaceutical research, material science, genomics, proteomics, robotics, process automation, industrial automation, building automation, home automation, drones, farming, forestry, environmental monitoring, climate modeling, weather forecasting, traffic management, urban planning, architecture, engineering, design, animation, video production, audio production, music composition, art generation, graphic design, fashion design, interior design.

More broadly, people generally substantially overestimate what the technology is capable of today. In our experience, even basic summarization tasks often yield illegible and nonsensical results. This is not a matter of just some tweaks being required here and there; despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks. And I struggle to believe that the technology will ever achieve the cognitive reasoning required to substantially augment or replace human interactions. Humans add the most value to complex tasks by identifying and understanding outliers and nuance in a way that it is difficult to imagine a model trained on historical data would ever be able to do.

people have to start understanding: garbage in, garbage out. his assessment of summarization would help if they provided examples that include how the Large Language Model was prompted and which LLM was used. I don't doubt they got nonsensical results, but I wonder about the input. there are private evaluation tests that companies and other organizations are using to test for outliers that AI models wouldn't be able to train on, to see how well they can reason.

I don't think this paper will age well.
 

morris

Superstar
When AI FINALLY resolves phone and chat issues (you'd think with the MILLIONS of complaints it'd be fixed by now), then I will believe.

Getting issues resolved via chat and FAQ is still very cumbersome. This is STILL an issue, and getting support is actually worse now than it was two decades ago.
 

88m3

Fast Money & Foreign Objects
what makes electric cars a grift?

the amount of resources and energy required to make them operate, the limited operating distance, the fact a small accident can total them

I'm sure there are children in mines in Africa and Asia getting their hands cut off so cac tech bros can drive an oversized electric SUV around Manhattan or the Bay

probably more, I haven't had my coffee yet today

it really disgusts me when I think about it
 

mastermind

Rest In Power Kobe
only 25% of these jobs will be cost-effective to use AI for within 10 years? :rudy:

data entry, bookkeeping, accounting, customer service, telemarketing, transcription, translation, medical diagnosis, image analysis, manufacturing, assembly line work, self-driving cars, trucking, delivery, warehouse management, inventory management, supply chain management, administrative tasks, scheduling, email management, data processing, data analysis, reporting, bookkeeping, accounting, financial analysis, financial planning, tax preparation, auditing, compliance, legal research, document review, contract review, content moderation, social media management, marketing automation, lead generation, sales forecasting, customer segmentation, chatbots, virtual assistants, language translation, content creation, writing, editing, proofreading, research, data scraping, web development, software testing, quality assurance, network administration, cybersecurity, vulnerability assessment, penetration testing, predictive maintenance, quality control, inspection, monitoring, surveillance, healthcare diagnosis, medical imaging, clinical decision support, medical research, pharmaceutical research, material science, genomics, proteomics, robotics, process automation, industrial automation, building automation, home automation, drones, farming, forestry, environmental monitoring, climate modeling, weather forecasting, traffic management, urban planning, architecture, engineering, design, animation, video production, audio production, music composition, art generation, graphic design, fashion design, interior design.



people have to start understanding: garbage in, garbage out. his assessment of summarization would help if they provided examples that include how the Large Language Model was prompted and which LLM was used. I don't doubt they got nonsensical results, but I wonder about the input. there are private evaluation tests that companies and other organizations are using to test for outliers that AI models wouldn't be able to train on, to see how well they can reason.

I don't think this paper will age well.
Damn, you all in on generative AI :ehh:


I think the big issue is the cost to operate it. Idk man, this shyt smells like the final grift.
 

bnew

Veteran
the amount of resources and energy required to make them operate, the limited operating distance, the fact a small accident can total them

probably more, I haven't had my coffee yet today

it really disgusts me when I think about it

none of those things are static; battery technology and operating distance have been improving since electric cars became mainstream. battery swapping is also a thing, so we don't know what the future holds.

your assessment of the tech seems to be based on the assumption that batteries will never improve and they'll always be as big as they are now. if EV batteries get cheaper or even smaller, then the likelihood of insurers writing off EV cars in accidents as total losses diminishes.
 

bnew

Veteran
Damn, you all in on generative AI :ehh:


I think the big issue is the cost to operate it. Idk man, this shyt smells like the final grift.

the tech is in its infancy, so I don't know why there's widespread sentiment that it can't possibly become cheaper to train and operate.




Google claims new AI training tech is 13 times faster and 10 times more power efficient — DeepMind's new JEST optimizes training data for impressive gains​

Dallin Grimm

Sun, July 7, 2024 at 3:40 PM EDT·3 min read


Google DeepMind, Google's AI research lab, has published new research on training AI models that claims to greatly accelerate both training speed and energy efficiency by an order of magnitude, yielding 13 times more performance and ten times higher power efficiency than other methods. The new JEST training method comes in a timely fashion as conversations about the environmental impact of AI data centers are heating up.

DeepMind's method, dubbed JEST or joint example selection, breaks apart from traditional AI model training techniques in a simple fashion. Typical training methods focus on individual data points for training and learning, while JEST trains based on entire batches. The JEST method first creates a smaller AI model that will grade data quality from extremely high-quality sources, ranking the batches by quality. Then it compares that grading to a larger, lower-quality set. The small JEST model determines the batches most fit for training, and a large model is then trained from the findings of the smaller model.
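For readers who want the mechanics, here is a minimal Python sketch of that batch-selection loop as the article describes it. This is an illustration, not DeepMind's implementation: the names (`select_training_batches`, `score_fn`, `small_model_batch_score`, `large_model`) and the keep ratio are invented for the example, and the real method derives its batch scores from the small reference model discussed above.

```python
import numpy as np

def select_training_batches(candidate_batches, score_fn, keep_ratio=0.1):
    """Rank candidate data batches with a small 'grader' model and keep the best.

    candidate_batches: list of batches of training examples
    score_fn:          scores a whole batch using the small reference model
                       (higher = more useful for training the large model)
    keep_ratio:        fraction of batches passed on to the large model
    """
    scores = np.array([score_fn(batch) for batch in candidate_batches])
    n_keep = max(1, int(len(candidate_batches) * keep_ratio))
    best_first = np.argsort(scores)[::-1]      # indices, highest-scoring batch first
    return [candidate_batches[i] for i in best_first[:n_keep]]

# Usage sketch (small_model_batch_score and large_model are placeholders):
#   selected = select_training_batches(all_batches, small_model_batch_score, keep_ratio=0.2)
#   for batch in selected:
#       large_model.train_step(batch)
```

The point of the design is that the cheap model does the work of deciding what is worth learning from, so the large model sees far fewer, far better batches.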

The paper itself, available here, provides a more thorough explanation of the processes used in the study and the future of the research.

DeepMind researchers make it clear in their paper that this "ability to steer the data selection process towards the distribution of smaller, well-curated datasets" is essential to the success of the JEST method. Success is the correct word for this research; DeepMind claims that "our approach surpasses state-of-the-art models with up to 13× fewer iterations and 10× less computation."

Graphs displaying efficiency and speed gains over traditional AI training methods.

Of course, this system relies entirely on the quality of its training data, as the bootstrapping technique falls apart without a human-curated data set of the highest possible quality. Nowhere is the mantra "garbage in, garbage out" truer than this method, which attempts to "skip ahead" in its training process. This makes the JEST method much more difficult for hobbyists or amateur AI developers to match than most others, as expert-level research skills are likely required to curate the initial highest-grade training data.

The JEST research comes not a moment too soon, as the tech industry and world governments are beginning discussions on artificial intelligence's extreme power demands. AI workloads took up about 4.3 GW in 2023, almost matching the annual power consumption of the nation of Cyprus. And things are definitely not slowing down: a single ChatGPT request costs 10x more than a Google search in power, and Arm's CEO estimates that AI will take up a quarter of the United States' power grid by 2030.

If and how JEST methods are adopted by major players in the AI space remains to be seen. GPT-4o reportedly cost $100 million to train, and future larger models may soon hit the billion-dollar mark, so firms are likely hunting for ways to save their wallets in this department. Hopefuls think that JEST methods will be used to keep current training productivity rates at much lower power draws, easing the costs of AI and helping the planet. However, much more likely is that the machine of capital will keep the pedal to the metal, using JEST methods to keep power draw at maximum for hyper-fast training output. Cost savings versus output scale, who will win?





1/1
In April we published a paper on a new training approach for better & faster LLMs using multi-token prediction. To enable further exploration by researchers, we’ve released pre-trained models for code completion using this approach on @HuggingFace






1/1
Welcome Multi Token Prediction: Get up to 3-5x tokens/sec from your llamas!

Kudos to Meta for continuing its commitment to open science





1/1
Meta proposed a new approach to build better and faster LLMs by using multi-token prediction.

Using this approach, they trained language models to predict multiple future words at once—instead of the old one-at-a-time approach.


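To make the multi-token idea concrete, here is a rough PyTorch-style sketch under simple assumptions: instead of a single output head predicting the next token, the trunk's hidden states feed several heads, each predicting a token further ahead, and their losses are averaged. The class name, shapes, and head count are invented for the illustration; this is not Meta's code.

```python
import torch
import torch.nn as nn

class MultiTokenHead(nn.Module):
    """Toy multi-token prediction: n_future output heads over a shared hidden state.

    Standard LM training predicts only token t+1 from position t; here head k
    predicts token t+1+k, so each forward pass supervises several future positions.
    """
    def __init__(self, hidden_dim, vocab_size, n_future=4):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, vocab_size) for _ in range(n_future)]
        )

    def loss(self, hidden, targets):
        # hidden:  (batch, seq, hidden_dim) from the transformer trunk
        # targets: (batch, seq + n_future) token ids
        seq_len = hidden.size(1)
        total = 0.0
        for k, head in enumerate(self.heads):
            logits = head(hidden)                          # predict token t+1+k from position t
            shifted = targets[:, 1 + k : 1 + k + seq_len]  # targets k steps further ahead
            total = total + nn.functional.cross_entropy(
                logits.reshape(-1, logits.size(-1)), shifted.reshape(-1)
            )
        return total / len(self.heads)
```

At inference the extra heads can be used to draft several tokens per step, which is where speed-up claims like the "3-5x tokens/sec" in the tweet above come from.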
 

mastermind

Rest In Power Kobe
the tech is in its infancy, so I don't know why there's widespread sentiment that it can't possibly become cheaper to train and operate.




Google claims new AI training tech is 13 times faster and 10 times more power efficient — DeepMind's new JEST optimizes training data for impressive gains​

Dallin Grimm

Sun, July 7, 2024 at 3:40 PM EDT·3 min read


Google DeepMind, Google's AI research lab, has published new research on training AI models that claims to greatly accelerate both training speed and energy efficiency by an order of magnitude, yielding 13 times more performance and ten times higher power efficiency than other methods. The new JEST training method comes in a timely fashion as conversations about the environmental impact of AI data centers are heating up.

DeepMind's method, dubbed JEST or joint example selection, breaks apart from traditional AI model training techniques in a simple fashion. Typical training methods focus on individual data points for training and learning, while JEST trains based on entire batches. The JEST method first creates a smaller AI model that will grade data quality from extremely high-quality sources, ranking the batches by quality. Then it compares that grading to a larger, lower-quality set. The small JEST model determines the batches most fit for training, and a large model is then trained from the findings of the smaller model.

The paper itself, available here, provides a more thorough explanation of the processes used in the study and the future of the research.

DeepMind researchers make it clear in their paper that this "ability to steer the data selection process towards the distribution of smaller, well-curated datasets" is essential to the success of the JEST method. Success is the correct word for this research; DeepMind claims that "our approach surpasses state-of-the-art models with up to 13× fewer iterations and 10× less computation."

Graphs displaying efficiency and speed gains over traditional AI training methods.

Of course, this system relies entirely on the quality of its training data, as the bootstrapping technique falls apart without a human-curated data set of the highest possible quality. Nowhere is the mantra "garbage in, garbage out" truer than this method, which attempts to "skip ahead" in its training process. This makes the JEST method much more difficult for hobbyists or amateur AI developers to match than most others, as expert-level research skills are likely required to curate the initial highest-grade training data.

The JEST research comes not a moment too soon, as the tech industry and world governments are beginning discussions on artificial intelligence's extreme power demands. AI workloads took up about 4.3 GW in 2023, almost matching the annual power consumption of the nation of Cyprus. And things are definitely not slowing down: a single ChatGPT request costs 10x more than a Google search in power, and Arm's CEO estimates that AI will take up a quarter of the United States' power grid by 2030.

If and how JEST methods are adopted by major players in the AI space remains to be seen. GPT-4o reportedly cost $100 million to train, and future larger models may soon hit the billion-dollar mark, so firms are likely hunting for ways to save their wallets in this department. Hopefuls think that JEST methods will be used to keep current training productivity rates at much lower power draws, easing the costs of AI and helping the planet. However, much more likely is that the machine of capital will keep the pedal to the metal, using JEST methods to keep power draw at maximum for hyper-fast training output. Cost savings versus output scale, who will win?





1/1
In April we published a paper on a new training approach for better & faster LLMs using multi-token prediction. To enable further exploration by researchers, we’ve released pre-trained models for code completion using this approach on @HuggingFace






1/1
Welcome Multi Token Prediction: Get up to 3-5x tokens/sec from your llamas!

Kudos to Meta for continuing its commitment to open science





1/1
Meta proposed a new approach to build better and faster LLMs by using multi-token prediction.

Using this approach, they trained language models to predict multiple future words at once—instead of the old one-at-a-time approach.



I think the issues are:
a) the energy cost is absurdly high. They’ve said that. I read Google’s energy use has gone up over 40% since 2019. Microsoft has jumped up crazy as well

b) generative AI isn't fixing any material issues like other tech has done.

c) no one wants it

I think AI has potential to help us, but the people in charge of it are not interested in that, because they never have been. Generative AI isn't offering any real-world solutions from where I'm sitting. :hubie:
 