Artificial intelligence is losing hype

bnew

Veteran

Artificial intelligence is losing hype​


For some, that is proof the tech will in time succeed. Are they right?​

Illustration: Alberto Miranda (a robotic hand with crossed fingers on a red background)

Aug 19th 2024

Silicon Valley’s tech bros are having a difficult few weeks. A growing number of investors worry that artificial intelligence (AI) will not deliver the vast profits they seek. Since peaking last month the share prices of Western firms driving the AI revolution have dropped by 15%. More and more observers now point to the limitations of large language models, which power services such as ChatGPT. Big tech firms have spent tens of billions of dollars on AI models, with even more extravagant promises of future outlays. Yet according to the latest data from the Census Bureau, only 4.8% of American companies use AI to produce goods and services, down from a high of 5.4% early this year. Roughly the same share intend to do so within the next year.

Gently raise these issues with a technologist and they will look at you with a mixture of disappointment and pity. Haven’t you heard of the “hype cycle”? This is a term popularised by Gartner, a research firm—and one that is common knowledge in the Valley. After an initial period of irrational euphoria and overinvestment, hot new technologies enter the “trough of disillusionment”, the argument goes, where sentiment sours. Everyone starts to worry that adoption of the technology is proceeding too slowly, while profits are hard to come by. However, as night follows day, the tech makes a comeback. Investment that had accompanied the wave of euphoria enables a huge build-out of infrastructure, in turn pushing the technology towards mainstream adoption. Is the hype cycle a useful guide to the world’s AI future?

It is certainly helpful in explaining the evolution of some older technologies. Trains are a classic example. Railway fever gripped 19th-century Britain. Hoping for healthy returns, everyone from Charles Darwin to John Stuart Mill ploughed money into railway stocks, creating a stockmarket bubble. A crash followed. Then the railway companies, using the capital they had raised during the mania, built out the track, connecting Britain from top to bottom and transforming the economy. The hype cycle was complete. More recently, the internet followed a similar evolution. There was euphoria over the technology in the 1990s, with futurologists predicting that within a couple of years everyone would do all their shopping online. In 2000 the market crashed, prompting the failure of 135 big dotcom companies, from garden.com to pets.com. The more important outcome, though, was that by then telecoms firms had invested billions in fibre-optic cables, which would go on to become the infrastructure for today’s internet.

Although AI has not experienced a bust on anywhere near the same scale as the railways or dotcom, the current anxiety is, according to some, nevertheless evidence of its coming global domination. “The future of ai is just going to be like every other technology. There’ll be a giant expensive build-out of infrastructure, followed by a huge bust when people realise they don’t really know how to use AI productively, followed by a slow revival as they figure it out,” says Noah Smith, an economics commentator.

Is this right? Perhaps not. For starters, versions of AI itself have for decades experienced periods of hype and despair, with an accompanying waxing and waning of academic engagement and investment, but without moving to the final stage of the hype cycle. There was lots of excitement over AI in the 1960s, including over ELIZA, an early chatbot. This was followed by AI winters in the 1970s and 1990s. As late as 2020 research interest in AI was declining, before zooming up again once generative AI came along.

It is also easy to think of many other influential technologies that have bucked the hype cycle. Cloud computing went from zero to hero in a pretty straight line, with no euphoria and no bust. Solar power seems to be behaving in the same way. Social media, too. Individual companies, such as Myspace, fell by the wayside, and there were concerns early on about whether it would make money, but consumer adoption increased monotonically. On the flip side, there are plenty of technologies for which the vibes went from euphoria to panic, but which have not (or at least not yet) come back in any meaningful sense. Remember Web3? For a time, people speculated that everyone would have a 3D printer at home. Carbon nanotubes were also a big deal.

Anecdotes only get you so far. Unfortunately, it is not easy to test whether a hype cycle is an empirical regularity. “Since it is vibe-based data, it is hard to say much about it definitively,” notes Ethan Mollick of the University of Pennsylvania. But we have had a go at saying something definitive, extending work conducted in 2016 by Michael Mullany, an investor. The Economist collected data from Gartner, which for decades has placed dozens of hot technologies where it believes they belong on the hype cycle. We then supplemented it with our own number-crunching.

Over the hill​

We find, in short, that the cycle is a rarity. Tracing breakthrough technologies over time, we see that only a small share—perhaps a fifth—move from innovation to excitement to despondency to widespread adoption. Lots of tech becomes widely used without such a rollercoaster ride. Others go from boom to bust, but do not come back. We estimate that of all the forms of tech which fall into the trough of disillusionment, six in ten do not rise again. Our conclusions are similar to those of Mr Mullany: “An alarming number of technology trends are flashes in the pan.”

AI could still revolutionise the world. One of the big tech firms might make a breakthrough. Businesses could wake up to the benefits that the tech offers them. But for now the challenge for big tech is to prove that AI has something to offer the real economy. There is no guarantee of success. If you must turn to the history of technology for a sense of AI’s future, the hype cycle is an imperfect guide. A better one is “easy come, easy go”. ■

 

KushSkywalker

Walker Lexus Ranger
It is great for some things but it is not quite the fix-all people think it is.

People think you're just going to go, "A.I., make me an Oscar-level movie, make me the best video game ever," walk away, come back, and it will be there.

There is a lot of guidance and human-level problem-solving still happening at every level of A.I.
 

bnew

Veteran
It is great for some things but it is not quite the fix-all people think it is.

People think you're just going to go, "A.I., make me an Oscar-level movie, make me the best video game ever," walk away, come back, and it will be there.

There is a lot of guidance and human-level problem-solving still happening at every level of A.I.

patiently awaiting agentic AI. the script-writing models will come before the video-generation ones.

edit:

1/1
LongWriter unlocks text generation up to 10k words! 🤯 can't wait to try it






1/2
LongWriter-glm4-9b from @thukeg is capable of generating 10,000+ words at once!🚀

Paper identifies a problem with current long-context LLMs -- they can process inputs up to 100,000 tokens, yet struggle to generate outputs exceeding 2,000 words.

Paper proposes that an LLM's effective generation length is inherently bounded by the samples it has seen during supervised fine-tuning😮

Demonstrates that existing long-context LLMs already possess the potential for a larger output window--all you need is data with extended outputs during model alignment to unlock this capability.

Code & models are released under Apache License 2.0🧡

2/2
Model on 🤗 Hub: THUDM/LongWriter-glm4-9b · Hugging Face

Gradio demo available to run locally from the repo and linked in the project README: GitHub - THUDM/LongWriter: LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs

Clone the repo and launch the gradio demo: python trans_web_demo.py 🤠

Demo releasing soon on 🤗 Spaces, stay tuned!
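
If you want to poke at it outside the Gradio demo, here is a minimal sketch of loading the model with the Hugging Face transformers library. The prompt and generation settings are my own assumptions, not from the repo, so check the model card for the exact recommended usage:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# the repo ships custom GLM modeling code, hence trust_remote_code=True
model_id = "THUDM/LongWriter-glm4-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so the 9B model fits on one GPU
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Write a 5,000-word short story set during Britain's railway mania."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# ask for far more than the ~2,000-word ceiling the paper says ordinary models hit
outputs = model.generate(**inputs, max_new_tokens=16384, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))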


 

KushSkywalker

Walker Lexus Ranger
patiently awaiting agentic AI. the script-writing models will come before the video-generation ones.
It will happen eventually, I'm sure, but the underlying language-model construct inherently creates certain issues at the stage we're at now.

You have A.I. write you a program, then spend more time debugging it and feeding it prompts than it saved you in the first place.
 

Micky Mikey

Veteran
Supporter
Aren't all the top models (GPT-4, Claude, Gemini) using roughly the same amount of compute? I thought the models would continue to gain new capabilities the more compute you throw at them.
 

bnew

Veteran
Aren't all the top models (GPT-4, Claude, Gemini) using roughly the same amount of compute? I thought the models would continue to gain new capabilities the more compute you throw at them.

compute isn't everything. training data, model architecture and training techniques all affect what capabilities a model improves at. all of the main ones you mentioned are general models. it's been known for a long time now that fine-tuned models outperform general models on the tasks they were trained for.
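
to make that concrete, here's a minimal sketch of the fine-tuning idea using LoRA adapters from the Hugging Face peft library. the base model and hyperparameters are placeholder assumptions, not anyone's actual recipe:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# small stand-in base model; swap in whichever general model you're specializing
model = AutoModelForCausalLM.from_pretrained("gpt2")

# train small low-rank adapter matrices instead of all the weights: cheap, and
# the resulting specialist often beats a much bigger general model on its one task
config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, target_modules=["c_attn"])
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a fraction of a percent of weights get trained

from there you'd run an ordinary supervised training loop on your task-specific data; the adapters are what make specializing a model cheap enough for almost anyone to do.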
 

the cac mamba

Veteran
Bushed
i always thought the "AI will do everything, and humans won't work" idea was overstated. that seems ridiculous

but i feel like it can't really be "losing hype". it's just getting started :dead:
 

bnew

Veteran
i always thought the "AI will do everything, and humans won't work" idea was overstated. that seems ridiculous

but i feel like it can't really be "losing hype". it's just getting started :dead:

people will work on the things that interest them and on whatever we deem too critical or sensitive to entrust to A.I.
 

Micky Mikey

Veteran
Supporter
compute isn't everything. training data, model architecture and training techniques all affect what capabilities a model improves at. all of the main ones you mentioned are general models. it's been known for a long time now that fine-tuned models outperform general models on the tasks they were trained for.
What's stopping these fine-tuned models from taking off? Is it an issue of compute?
 

greenvale

Superstar
Supporter
Almost all big companies have terrible data infrastructure, never mind the black-box problem with AI. This isn’t really surprising, although individual productivity will shoot through the roof.
 