I voted False, and would have done so even if you'd said the end of 2025.
For sure stock prices of Big Tech firms will go up and down, and media coverage will alternate between positive and negative ... that is how these dynamical systems operate....
However, I don't think the GenAI bubble will ever burst, because I think the real super-high value applications of this tech will happen fast enough to avoid CEOs of Big Tech firms burning out on GenAI ...
I don't think this will be a matter of chatbot subscriptions catching on mega big time (though smartphone assistants will get smarter fast, and chatbots will get more and more functional etc.) ... it will be more centrally about back-end integration of GenAI into all sorts of other useful software and hardware applications.
This integration is happening now, all over the place. It's having hiccups, and is working better in some cases than others, etc. It is being a learning experience, and is requiring AI devs and application designers to figure out how to work around the very real and glaring shortcomings of LLMs. But it's happening nonetheless.
Separately from my AGI R&D, I'm involved in a host of AI application projects, including for instance 2 that are LLM-based but have not yet created launchable products due to RAG not working as envisioned/advertised. HOWEVER, we seem to be getting much better results using custom variants of GraphRAG combined with clever prompt engineering... even without (yet) introducing full-on neural-symbolic approaches using OpenCog Hyperon (which we will do in these projects next year).
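To make the GraphRAG idea concrete for readers who haven't played with it: the core move is to retrieve a connected subgraph of facts around the query entity, rather than a flat list of text chunks, and then ground the LLM prompt in those facts. Here's a minimal toy sketch in Python — this is NOT our project code; the graph, entities, and function names are all hypothetical, and a real system would use embedding-based entity linking and an actual graph store.

```python
# Toy sketch of a GraphRAG-style retrieval step (illustrative only;
# the graph contents and helper names here are hypothetical).
from collections import defaultdict

# A tiny knowledge graph stored as: subject -> [(relation, object), ...]
GRAPH = defaultdict(list)

def add_triple(subj, rel, obj):
    """Store a fact in both directions so traversal can walk either way."""
    GRAPH[subj].append((rel, obj))
    GRAPH[obj].append((f"inverse_{rel}", subj))

def retrieve_subgraph(entity, depth=2):
    """Collect all facts within `depth` hops of the query entity."""
    facts, frontier, seen = [], [entity], {entity}
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, other in GRAPH[node]:
                facts.append((node, rel, other))
                if other not in seen:
                    seen.add(other)
                    next_frontier.append(other)
        frontier = next_frontier
    return facts

def build_prompt(question, entity):
    """Compose an LLM prompt grounded in the retrieved subgraph."""
    context = "\n".join(f"- {s} {r} {o}" for s, r, o in retrieve_subgraph(entity))
    return (f"Context facts:\n{context}\n\n"
            f"Question: {question}\nAnswer using only the facts above.")

# Hypothetical example facts
add_triple("OpenCog Hyperon", "implements", "MeTTa")
add_triple("MeTTa", "supports", "symbolic reasoning")

prompt = build_prompt("What does OpenCog Hyperon enable?", "OpenCog Hyperon")
print(prompt)
```

The key contrast with vanilla RAG is that the retrieved context is relational: multi-hop facts ("X implements Y, Y supports Z") reach the prompt intact, which is exactly where flat chunk retrieval tends to fall down.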
I think similar stories are happening all around. LLMs are proving to not be magic and not be AGI, but they are nonetheless unprecedentedly powerful AI tools, and I believe the current influx of investment into them will be enough to get them over the line in many many different verticals in terms of building really really useful applications that make loads of $$ and improve loads of lives.
(Not to overlook the damage that is done by the exploitative way data has been gathered for training many large models, though, nor the damage that is done to the AI field by so much talent being sucked into LLMs instead of other R&D areas, etc. These negative factors totally exist and we must counter them emphatically, but they don't obviate the positives.)
It took a while for the world to figure out how to integrate ML broadly into software and hardware applications. The learning curve is going to be significantly shorter for LLMs because as we get near the Singularity, so many things are going faster!! ...
What will be really interesting though is, just as industry starts hitting its stride using LLMs on the back end of various amazing applications, we're going to come out with huge research breakthroughs on putting LLMs and other DNNs together with symbolic probabilistic reasoning and creative evolutionary learning ... in the OpenCog Hyperon project, running on ASI Alliance (inc. SingularityNET, HyperCycle, NuNet) back end ..... And the process of hype, confusion and integration will begin all over again -- but EVEN FASTER, I predict, for the neural-symbolic-evolutionary early-stage-AGI wave than what we'll see with LLMs...
The human world remains confusing, self-harming and needlessly demented in many ways... but we are getting a lot of things right nonetheless ... and the Singularity is Near ...