‘Jobs may disappear’: Nearly 40% of global employment could be disrupted by AI, IMF says

O.T.I.S.

they have thousands, possibly hundreds of thousands, of hours of recorded phone calls, email correspondence, documentation and other digitized work product to train from.

a lot of companies are sitting on a ton of data they can train AI models on to tailor them to their specific needs.
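To make the quoted point concrete, here is a rough, hypothetical sketch of what fine-tuning an open model on that kind of internal data could look like, using the Hugging Face transformers and datasets libraries. The model choice ("gpt2" as a stand-in), the internal_docs.jsonl export, and the hyperparameters are illustrative assumptions, not anyone's actual pipeline.

```python
# Hypothetical sketch: fine-tune a small open causal LM on internal documents.
# Assumes transformers + datasets are installed and that the company's emails /
# call transcripts were exported to internal_docs.jsonl ({"text": ...} per line).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in; any small causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token        # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("json", data_files="internal_docs.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="company-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```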
Good for them

Doesn't mean that shyt will work the way people with this AI doom and gloom expect

It’s funny because AI has BEEN around... why didn’t it take over tech jobs decades ago?
 

bnew

Good for them

Doesn't mean that shyt will work the way people with this AI doom and gloom expect

It’s funny because AI has BEEN around... why didn’t it take over tech jobs decades ago?

:what:

cars been around for over a hundred years, a car made in 2024 is superior to a car made in 1920.

AI models made today are way better than models made just a year ago. do you think this technology has peaked?
 

TM101

:what:

cars been around for over a hundred years, a car made in 2024 is superior to a car made in 1920.

AI models made today are way better than models made just a year ago. do you think this technology has peaked?
I wouldn't say it's peaked, more so that the transformer model will never be intelligent enough to do work on its own. They've fed it all the data we have on the Internet and it still hallucinates. It's a black box that spouts out random stuff, and that can't be fixed by adding more CPUs and chips.
 

bnew

I wouldn't say it's peaked, more so that the transformer model will never be intelligent enough to do work on its own. They've fed it all the data we have on the Internet and it still hallucinates. It's a black box that spouts out random stuff, and that can't be fixed by adding more CPUs and chips.

It hasn't been fed anywhere near all the information on the internet, not even close. There are sites and forums that aren't indexable by web crawlers, like parts of thecoli that aren't available to non-registered users.

A.I. is increasingly being trained on high-quality textbooks generated by A.I.

edit:

The transformer architecture isn't the only one that's been developed...


Mamba: Linear-Time Sequence Modeling with Selective State Spaces

Albert Gu, Tri Dao
Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.
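For anyone skimming the abstract, here is a toy, non-authoritative sketch of the "selective" idea it describes: the state-space parameters are computed from the current token, so the recurrence can decide per token how much past state to keep or forget. The class name, shapes, and readout are made up for illustration; the real Mamba uses a hardware-aware parallel scan, not a Python loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySelectiveSSM(nn.Module):
    """Didactic selective state-space recurrence: the decay, input and output
    projections all depend on the current token, so the model can choose per
    token whether to keep or overwrite its hidden state."""
    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.to_delta = nn.Linear(d_model, 1)       # input-dependent step size
        self.to_B = nn.Linear(d_model, d_state)     # input-dependent input projection
        self.to_C = nn.Linear(d_model, d_state)     # input-dependent output projection
        self.log_A = nn.Parameter(torch.zeros(d_state))  # learned per-state decay rates

    def forward(self, x):                           # x: (batch, seq_len, d_model)
        batch, seq_len, _ = x.shape
        h = x.new_zeros(batch, self.log_A.numel())  # hidden state carried across tokens
        outputs = []
        for t in range(seq_len):                    # recurrent mode: linear in seq_len
            xt = x[:, t]
            delta = F.softplus(self.to_delta(xt))              # (batch, 1), always positive
            decay = torch.exp(-delta * torch.exp(self.log_A))  # (batch, d_state), in (0, 1)
            h = decay * h + delta * self.to_B(xt)              # selectively forget / write
            outputs.append((self.to_C(xt) * h).sum(-1))        # per-token scalar readout
        return torch.stack(outputs, dim=1)          # (batch, seq_len)

y = ToySelectiveSSM(d_model=8)(torch.randn(2, 5, 8))
```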






Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture

Daniel Y. Fu, Simran Arora, Jessica Grogan, Isys Johnson, Sabri Eyuboglu, Armin W. Thomas, Benjamin Spector, Michael Poli, Atri Rudra, Christopher Ré
Machine learning models are increasingly being scaled in both sequence length and model dimension to reach longer contexts and better performance. However, existing architectures such as Transformers scale quadratically along both these axes. We ask: are there performant architectures that can scale sub-quadratically along sequence length and model dimension? We introduce Monarch Mixer (M2), a new architecture that uses the same sub-quadratic primitive along both sequence length and model dimension: Monarch matrices, a simple class of expressive structured matrices that captures many linear transforms, achieves high hardware efficiency on GPUs, and scales sub-quadratically. As a proof of concept, we explore the performance of M2 in three domains: non-causal BERT-style language modeling, ViT-style image classification, and causal GPT-style language modeling. For non-causal BERT-style modeling, M2 matches BERT-base and BERT-large in downstream GLUE quality with up to 27% fewer parameters, and achieves up to 9.1× higher throughput at sequence length 4K. On ImageNet, M2 outperforms ViT-b by 1% in accuracy, with only half the parameters. Causal GPT-style models introduce a technical challenge: enforcing causality via masking introduces a quadratic bottleneck. To alleviate this bottleneck, we develop a novel theoretical view of Monarch matrices based on multivariate polynomial evaluation and interpolation, which lets us parameterize M2 to be causal while remaining sub-quadratic. Using this parameterization, M2 matches GPT-style Transformers at 360M parameters in pretraining perplexity on The PILE--showing for the first time that it may be possible to match Transformer quality without attention or MLPs.
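As a rough illustration of the sub-quadratic claim in the abstract: replacing one dense n x n matrix with two block-diagonal factors joined by a permutation drops the multiply count from n^2 to about 2·n^1.5 when the block size is sqrt(n). This sketch only mirrors that block-diagonal-plus-permutation structure; it is not the paper's exact Monarch parameterization.

```python
import math
import torch

n = 1024
b = int(math.sqrt(n))                     # block size sqrt(n); n assumed a perfect square
L_blocks = torch.randn(b, b, b)           # b blocks of size b x b (first block-diagonal factor)
R_blocks = torch.randn(b, b, b)           # b blocks of size b x b (second block-diagonal factor)

def monarch_like_matmul(x):
    # x: vector of length n = b * b
    z = x.view(b, b)                              # split x into b chunks of length b
    z = torch.einsum("kij,kj->ki", R_blocks, z)   # block-diagonal multiply: ~n*b mults
    z = z.t().contiguous()                        # fixed interleaving permutation (grid transpose)
    z = torch.einsum("kij,kj->ki", L_blocks, z)   # second block-diagonal multiply: ~n*b mults
    return z.t().contiguous().reshape(n)          # undo the grid transpose and flatten

y = monarch_like_matmul(torch.randn(n))           # ~2 * n * sqrt(n) mults vs n^2 for a dense matrix
```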


RWKV (pronounced RwaKuv) is an RNN with GPT-level LLM performance that can also be directly trained like a GPT transformer (parallelizable). We are at RWKV v6.
So it combines the best of RNN and transformer: great performance, fast inference, fast training, saves VRAM, "infinite" ctxlen, and free text embedding. Moreover, it's 100% attention-free and an LFAI project.
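A toy example of the "RNN that trains like a transformer" claim above: a simple decayed-sum recurrence can be evaluated token by token for cheap inference, or all at once in parallel for training, and both give the same answer. This is a bare-bones stand-in, not RWKV's actual WKV time-mixing kernel.

```python
import torch

def recurrent_mode(x, decay):
    # Inference view: h_t = decay * h_{t-1} + x_t, one token at a time, O(1) state.
    h = torch.zeros_like(x[0])
    out = []
    for xt in x:
        h = decay * h + xt
        out.append(h)
    return torch.stack(out)

def parallel_mode(x, decay):
    # Training view: compute every h_t = sum_{i<=t} decay^(t-i) * x_i at once.
    # (A real implementation would use a scan/chunked kernel instead of a T x T matrix.)
    T = x.shape[0]
    t = torch.arange(T, dtype=x.dtype)
    weights = torch.tril(decay ** (t[:, None] - t[None, :]))  # decay^(t-i) for i <= t, else 0
    return weights @ x

x = torch.randn(6, 4)
assert torch.allclose(recurrent_mode(x, 0.9), parallel_mode(x, 0.9), atol=1e-5)
```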


 

bnew

I’ll be dead way way way before some AI powered android can rewire a house or troubleshoot some bullshyt in an attic. Y’all enlightened desk individuals needa watch out tho

 

3rdWorld

Cacs screaming that ethnic minorities will not replace them, but created androids to replace them instead :mjlol:
 

bnew

Every time new technology becomes prominent, people get concerned about it taking jobs.
And often it does do that, but what people seem to forget is that it usually leads to the creation of new jobs as well.


I read it and I think the trainers, explainers and sustainers it talks about can all inevitably be replaced by AI. I mean, people are using AI today to explain a myriad of subjects, and Nvidia has already stated that their last few chips wouldn't have been possible without AI.
 

BaggerofTea

I’ll be dead way way way before some AI powered android can rewire a house or troubleshoot some bullshyt in an attic. Y’all enlightened desk individuals needa watch out tho

With infrared cameras and AI reading the images, humans would be able to do most repairs themselves, with maybe some robotic assistance.
 

Luke Cage


I read it and I think the trainers, explainers and sustainers it talks about can all inevitably be replaced by AI. I mean, people are using AI today to explain a myriad of subjects, and Nvidia has already stated that their last few chips wouldn't have been possible without AI.
I work in accounting, and for every feature that enables us to automate the work we do, there is a need for someone to regulate and manage the automation: expand the scope or shrink it depending on the goals you have for a particular fiscal period. This inevitably happens.
Not to mention giving birth to entirely new industries as a side effect. Newspapers didn't disappear; they were replaced by billion-dollar social media industries and the like.
 

bnew

Oh you think they’re gonna be using androids in the next 30 years that are more cost effective than flesh and blood humans? You nikkas are unhinged. :laff:

Back in 2000 a Compaq PC with 256MB of RAM and a 20GB HDD cost $2,400; today you can go on Amazon and find a mobile device with way better specs for $50.

A lot can happen in 30 years. :manny:
 