2024 UPDATE!! Altman: prepare for AI to be "uncomfortable" 33% US jobs gone..SKYNET, AI medical advances? BASIC INCOME? 1st AI MOVIE! AI MAYOR!!

bnew

Veteran

AI worse than humans in every way at summarising information, government trial finds


A test of AI for Australia's corporate regulator found that the technology might actually make more work for people, not less.

Cam Wilson

Sep 03, 2024


UPDATED: 9.18AM, Sep 04

(Image: Adobe)

Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial involved assessing several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on mentions of ASIC, recommendations and references to more regulation, and to include page references and context.
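The exact prompt used in the trial hasn't been published; the snippet below is only a rough guess at what a prompt covering those criteria might look like, wired to a Hugging Face-hosted Llama-2 chat model. The model name, generation settings and wording are all assumptions, and the actual trial ran on Amazon's own stack.

```python
# Hypothetical sketch only: the trial's real prompt and serving setup are not public.
from transformers import pipeline  # needs hardware able to hold a 70B model, or a smaller stand-in

CRITERIA_PROMPT = """Summarise the following inquiry submission.
Focus on:
- any mentions of ASIC,
- recommendations made by the author,
- references to more regulation.
For every point, include the page reference and surrounding context.

Submission:
{submission}

Summary:"""

def summarise(submission_text: str, model_name: str = "meta-llama/Llama-2-70b-chat-hf") -> str:
    generator = pipeline("text-generation", model=model_name)
    prompt = CRITERIA_PROMPT.format(submission=submission_text)
    out = generator(prompt, max_new_tokens=512, do_sample=False)
    # The pipeline returns the prompt plus the continuation; keep only the continuation.
    return out[0]["generated_text"][len(prompt):]
```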


Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat out their AI competitors on every criterion and every submission, scoring 81% on an internal rubric compared with the machine’s 47%.

Human summaries ran up the score by significantly outperforming the AI at identifying references to ASIC documents within a long document, a task the report notes is “notoriously hard” for this type of AI. But humans still beat the technology across the board.
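The internal rubric itself isn't in the published answer, but the blind-review arithmetic is simple: each reviewer scores every summary against the five criteria, and the quoted percentages are the scores averaged over criteria, reviewers and submissions. A minimal sketch with invented numbers:

```python
# Illustrative only: criteria names come from the article, the scores below are invented.
CRITERIA = ["coherency", "length", "asic_references", "regulation_references", "recommendations"]

def rubric_percentage(reviews: list[dict[str, int]], max_points: int = 5) -> float:
    """Average score across all criteria and reviews, as a percentage of the maximum."""
    total = sum(review[c] for review in reviews for c in CRITERIA)
    possible = max_points * len(CRITERIA) * len(reviews)
    return 100 * total / possible

human_reviews = [{c: 4 for c in CRITERIA}, {c: 4 for c in CRITERIA}]
ai_reviews = [{c: 2 for c in CRITERIA}, {c: 3 for c in CRITERIA}]
print(rubric_percentage(human_reviews), rubric_percentage(ai_reviews))  # e.g. 80.0 vs 50.0
```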

Reviewers told the report’s authors that AI summaries often missed emphasis, nuance and context; included incorrect information or missed relevant information; and sometimes focused on auxiliary points or introduced irrelevant information. Three of the five reviewers said they guessed that they were reviewing AI content.

The reviewers’ overall feedback was that they felt AI summaries may be counterproductive and create further work because of the need to fact-check and refer to original submissions which communicated the message better and more concisely.

The report mentions some limitations and context to this study: the model used has already been superseded by one with further capabilities which may improve its ability to summarise information, and that Amazon increased the model’s performance by refining its prompts and inputs, suggesting that there are further improvements that are possible. It includes optimism that this task may one day be competently undertaken by machines.

But until then, the trial showed that a human’s ability to parse and critically analyse information is unparalleled by AI, the report said.

“This finding also supports the view that GenAI should be positioned as a tool to augment and not replace human tasks,” the report concluded.

Greens Senator David Shoebridge, whose question to ASIC prompted the publishing of the report, said that it was “hardly surprising” that humans were better than AI at this task. He also said it raised questions about how the public might feel about using AI to read their inquiry submissions.

“This of course doesn’t mean there is never a role for AI in assessing submissions, but if it has a role it must be transparent and supportive of human assessments and not stand-alone,” he said.

“It’s good to see government departments undertaking considered exercises like this for AI use, but it would be better if it was then proactively and routinely disclosed rather than needing to be requested in Senate committee hearings.”
 

GnauzBookOfRhymes

Superstar


Breh, did a human write this headline?

The report mentions some limitations and context to this study: the model used has already been superseded by one with further capabilities which may improve its ability to summarise information, and that Amazon increased the model’s performance by refining its prompts and inputs, suggesting that there are further improvements that are possible. It includes optimism that this task may one day be competently undertaken by machines.
 

bnew

Veteran







1/11
@DeryaTR_
This is the final warning for those considering careers as physicians: AI is becoming so advanced that the demand for human doctors will significantly decrease, especially in roles involving standard diagnostics and routine treatments, which will be increasingly replaced by AI.

This is underscored by the massive performance leap of OpenAI’s o-1 model, also known as the “Strawberry” model, which was released as a preview yesterday. The model performs exceptionally well on a specialized medical dataset (AgentClinic-MedQA), greatly outperforming GPT-4o. The rapid advancements in AI’s ability to process complex medical information, deliver accurate diagnoses, provide medical advice, and recommend treatments will only accelerate.

Medical tasks like diagnosing illnesses, interpreting medical imaging, and formulating treatment plans will soon be handled by AI systems with greater speed and consistency than human practitioners. As the healthcare landscape evolves in the coming years, the number of doctors needed will drastically shrink, with more reliance on AI-assisted healthcare systems.

While human empathy, critical thinking, and decision-making will still play an important role in certain areas of medicine, even these may eventually be supplanted by future iterations of models like o-1.

Consequently, medicine is becoming a less appealing career path for the next generation of doctors—unless they specialize in intervention-focused areas (such as surgery, emergency medicine, and other interventional specialties), though these, too, may eventually be overtaken by robotic systems…maybe within a decade or so.

2/11
@DeryaTR_
(Originally posted in Turkish) This is my final warning for those considering a career as a doctor: AI is developing so quickly that the demand for human doctors will decrease significantly, and they will be replaced by AI, especially in standard diagnostics and routine treatments.

This became even clearer with the huge performance leap in OpenAI's o-1 model, also known as the "Strawberry" model, released yesterday as a preview. The model far outstripped GPT-4o, delivering extremely strong results on AgentClinic-MedQA, a specialized medical dataset. This rapid progress in AI's ability to process complex medical information, make accurate diagnoses, give medical advice and propose treatments will continue.

Medical tasks such as diagnosing diseases, interpreting medical imaging and creating treatment plans will soon be performed by AI systems faster and more consistently than human doctors. As the healthcare system transforms in the coming years, the need for doctors will shrink considerably and AI-assisted healthcare systems will be relied on more.

Even though human empathy, critical thinking and decision-making will still play an important role in some areas of medicine, even these will eventually be taken over by later versions of models like o-1. For that reason, medicine will become a much less attractive career for future generations; only intervention-focused fields (such as surgery, emergency medicine and other interventional specialties) may remain valuable for a while longer. But even those fields may be taken over by robot doctors within the next 10-15 years.

3/11
@okaythenme
I remember twenty years ago when it was a popular prediction to say radiologists would be completely automated out of existence by the 2020s,

Now radiology salaries have hit new highs in most markets, including the U.S.

If you’re interested in medicine, get your medical or nursing degree. AI will never fully automate the long human tradition of medicine.

Doctors will stay the safest job on the planet.

4/11
@DeryaTR_
I strongly disagree. Nursing jobs will be safer than doctor jobs in a decade.

5/11
@DeryaTR_
It seems some people didn’t quite understand my post here. Let me clarify: I didn’t say the medical profession will completely disappear. However, I pointed out that there will be a need for far fewer doctors a decade from now. Therefore, only the top 10-20% of physicians who are truly dedicated and outstanding will continue to have fulfilling jobs. If you are passionate about medicine, you should still pursue it as a career, but bear in mind that it will no longer be a high-paying and secure job in the future.

In a separate post, I will explain why, in the near future, patients will welcome doctors working with AI or may even begin to prefer AI doctors. In fact it will become unethical and even malpractice not to use AI in diagnostics and treatment.

I realize this is disconcerting for many, and it may be difficult to imagine or accept. I empathize, having spent years working hard to train as a physician, although I followed my passion in science instead of pursuing a more secure, high-income job as a doctor.

In the end, what matters is the value you provide for the greater good of humanity. I strongly believe that AI will bring unimaginable benefits, saving lives and helping people live long, healthy lives.

6/11
@Michael_D_Moor
As the original poster of @SRSchmidgall's figure I respectfully disagree that this is a final warning for clinicians to become obsolete.

I believe clinicians will be empowered by being augmented with up-to-date knowledge, guidelines and DDx ideas at their fingertips with increasingly better UIs.

I agree though that certain abusive healthcare economies (with bad incentives) may find ways to provide cheaper & human-free care delivery that is not necessarily better for patients. It's up to us not to let this happen.

7/11
@DeryaTR_
But I didn’t say clinicians will become completely obsolete. Please see the follow-up for further clarification on what I meant. It’s not too different from your point. Also, please consider the advances we’ll see in the next 5-10 years; this technology will progress exponentially.

8/11
@BorisMPower
How does this performance compare to MDs?

9/11
@DeryaTR_
Let me just give one statistic:

“An estimated 795 000 Americans become permanently disabled or die annually across care settings because dangerous diseases are misdiagnosed.”

Extrapolating worldwide that’s millions of people and most are not even complicated cases.

10/11
@malmaladei
We have a shortage of doctors because they have to do so much paperwork and handle a lot of routine work. If anything I expect AI to make the healthcare sector better

11/11
@DeryaTR_
Will definitely make it better.
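For context, the AgentClinic-MedQA comparison in tweet 1/11 is the kind of claim that can be spot-checked: run the same multiple-choice medical questions through two models and count exact matches. A rough sketch against the OpenAI chat API follows; the dataset loader and the single-letter answer extraction are simplified assumptions, not the benchmark's official harness.

```python
# Rough sketch, not the official AgentClinic-MedQA evaluation harness.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_mcq(model: str, question: str, options: dict[str, str]) -> str:
    """Ask the model for a single-letter answer to a multiple-choice medical question."""
    prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in options.items()) \
        + "\nAnswer with the letter of the best option only."
    resp = client.chat.completions.create(model=model, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content.strip()[:1].upper()

def accuracy(model: str, items: list[dict]) -> float:
    correct = sum(answer_mcq(model, it["question"], it["options"]) == it["answer"] for it in items)
    return correct / len(items)

# items = load_your_medqa_style_questions()  # hypothetical loader; the dataset is not bundled here
# print(accuracy("o1-preview", items), accuracy("gpt-4o", items))
```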



 

bnew

Veteran






1/6
If OpenAI's o1 can pass OpenAI's research engineer hiring interview for coding -- 90% to 100% rate...

......then why would they continue to hire actual human engineers for this position?

Every company is about to ask this question.

2/6
Yes. I've been saying the age of AI coding is here for the last year...... this just takes it to another level

3/6


4/6
Hahaha

5/6
I very much agree with this, with the caveat that understanding how programming **works** is still valuable.

This, however, is different from "learning programming" the way it's been learned the last four decades.

6/6
Ah, and if there's one thing AI agents are bad at, it's understanding data systems. ....wait.....


 

bnew

Veteran















1/18
@iam_chonchol
Google just dropped NotebookLM.

It generates podcasts with two speakers discussing content from research papers, articles, and more.

Here are 12 mind-blowing examples: 🤯



2/18
@iam_chonchol
1.

[Quoted tweet]
Googles NotebookLM's new podcast feature is wild

This is made from a 90min lecture I held on Monday

It condensed it into a 16 minute talkshow

Some hallucinations here and there, but overall this is a new paradigm for learning.

Link to try it below, no waitlist


3/18
@iam_chonchol
2.

[Quoted tweet]
tried out the new NotebookLM from @labsdotgoogle to create a podcast based on a reddit thread on @kentcdodds ‘ course. pretty impressive results


4/18
@iam_chonchol
3.

[Quoted tweet]
So cool. Turned a blogpost about "Ducking" (a technique used in audio engineering) into a conversation with Google NotebookLM and used Tuneform to generate a video of it.

Here's the original blog: noiseengineering.us/blogs/lo…


5/18
@iam_chonchol
Learn the latest AI developments in 3 minutes a day, Subscribe to The 8020AI it's FREE.

Get 1k mega prompts & 30+ AI guides today for FREE: 80/20 AI



6/18
@iam_chonchol
4.

[Quoted tweet]
Just had my 3rd wow moment in AI... this time through AI Overview by NotebookLM 🤯


7/18
@iam_chonchol
5.

[Quoted tweet]
This AI service is so impressive! Google's NotebookLM is now capable of generating an audio overview based on documents uploaded and links to online resources.

I uploaded my bachelor's thesis, my resume, and a link to my online course website and it created this really cool podcast-like format.

It didn't get everything right, but it's so funny because NotebookLM actually drew great conclusions that I didn’t think about while writing this thesis myself.

Which AI tool could create a video for this audio file?

@labsdotgoogle #RenewableEnergy #offgridpower #batterystorage #SolarEnergy #AI


8/18
@iam_chonchol
6.

[Quoted tweet]
(Originally posted in Spanish) I've been trying out @Google's NotebookLM and I was amazed.

I turned one of my Substack articles into a podcast, and it even has conversations between AIs about the topic.

Now I can listen to my content instead of reading it, and I love it. Super smooth:


9/18
@iam_chonchol
7.

[Quoted tweet]
A podcast by Google Notebook LM from YouTube videos uploaded on YouTube from Sept 9-13th. #ai #highered #notebooklm #google

How was this produced?

1. Searched YouTube for “Artificial Intelligence in Higher Education”
2. Used filters to limit videos to uploaded this week that are 20 mins or longer.
3. For each video, shared with “Summarify” an iPhone app that summarizes YouTube videos given URL. Download the summary as pdf on iPhone.
4. Upload PDFs (20 files) to Notebook LM
5. Generate Podcast audio in Notebook LM. Then download .wav file.
6. Generate image using ideogram.ai (prompt: "YouTube videos of artificial intelligence in higher education"). Download image.
7. Upload .wav file to iPhone app (Headliner) to convert .wav to waveform. Use the image from step 6 as the background for the waveform.

And you have below.
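The transcript-gathering part of that workflow (steps 1-4) can also be scripted instead of routed through a phone app; a minimal sketch using the third-party youtube-transcript-api package is below. The video IDs are placeholders, and the resulting text files still have to be uploaded to NotebookLM manually.

```python
# Sketch of automating the transcript step: dump plain-text transcripts for manual upload to NotebookLM.
# pip install youtube-transcript-api
from youtube_transcript_api import YouTubeTranscriptApi

video_ids = ["dQw4w9WgXcQ"]  # placeholder IDs from your own YouTube search

for vid in video_ids:
    segments = YouTubeTranscriptApi.get_transcript(vid)   # list of {"text", "start", "duration"}
    text = " ".join(seg["text"] for seg in segments)
    with open(f"{vid}.txt", "w", encoding="utf-8") as f:   # NotebookLM accepts plain-text sources
        f.write(text)
```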


10/18
@iam_chonchol
8.

[Quoted tweet]
Gave Google NotebookLM the transcript for my Fluxgym video and it created this podcast type discussion of it. Video is audio only. This is wild. 😂


11/18
@iam_chonchol
9.

[Quoted tweet]
Do you know what’s even more interesting than OpenAI’s o1 🍓?

A podcast generated directly from the information provided by @openai by NotebookLM from @GoogleAI.

So cool! @OfficialLoganK


12/18
@iam_chonchol
10.

[Quoted tweet]
It's never been easier to create a faceless channel.

You could use Google's new NotebookLM to create an engaging, short-form content channel with minimal effort

Here is an example where I fed it ONE URL - /r/StableDiffusion


13/18
@iam_chonchol
11.

[Quoted tweet]
🪄Want to see some AI magic? You can now “record” an engaging, studio quality, 12 min podcast on any topic in under 5 min. Yup, you read that correctly.

Here’s how 👇

1) I used NotebookLM by Google to synthesize a few content sources on scaling a product post MVP.
2) NotebookLM now offers a “Generate Audio” option, which creates an incredibly engaging script and audio that sounds indistinguishable from actual podcast hosts.
3) Upload to Spotify
4) Profit?


14/18
@iam_chonchol
12.

[Quoted tweet]
Longtime followers may remember that a couple months ago, I was trying to auto-generate a podcast every day based on HN articles.

I got OK results, but you could still tell it was fake. I gave up.

ANYWAY here's what you can do with Google's new NotebookLM. It's so good!


15/18
@iam_chonchol
I hope you've found this thread helpful.

Follow me @iam_chonchol for more.

Like/Repost the quote below if you can:

[Quoted tweet]
Google just dropped NotebookLM.

It generates podcasts with two speakers discussing content from research papers, articles, and more.

Here are 12 mind-blowing examples: 🤯


16/18
@ashok_hey
Speakers having fun with articles? Can't wait to hear the one about my grocery list!



17/18
@HeyToha
This is really wild 🤯



18/18
@pattcola
sounds interesting, I'd love to give it a try too.





 

yseJ

Empire strikes back






1/6
If OpenAI's o1 can pass OpenAI's research engineer hiring interview for coding -- 90% to 100% rate...

......then why would they continue to hire actual human engineers for this position?

Every company is about to ask this question.

because preparing an AI with training data to pass interview exams/questions doesn't create an AI worker capable of creating, maintaining, updating, supporting and iterating on a software product with even a small codebase.
 

bnew

Veteran
because preparing an AI with training data to pass interview exams/questions doesn't create an AI worker capable of creating, maintaining, updating, supporting and iterating on a software product with even a small codebase.

this doesn't make sense because that's basically data contamination and would make it a useless benchmark. I seriously doubt they trained the model on their own questions and then tested it. :comeon:
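If contamination is the worry, the usual quick check is n-gram overlap: look for long exact word sequences shared between the eval questions and the candidate training text. A minimal sketch (the 8-gram window is just a common convention, not anything OpenAI has said about this benchmark):

```python
# Minimal n-gram overlap check for benchmark contamination (illustrative only).
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def looks_contaminated(eval_question: str, training_corpus: str, n: int = 8) -> bool:
    """Flag the question if any n-gram from it appears verbatim in the training text."""
    return bool(ngrams(eval_question, n) & ngrams(training_corpus, n))
```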
 

bnew

Veteran

ARTIFICIAL INTELLIGENCE

Artificial intelligence will affect 60 million US and Mexican jobs within the year



IDB study shows the impact that AI will have on the labor market. Women and low-skilled workers are more vulnerable to being replaced


Workers sort packages at an Amazon distribution facility in Tepotzotlan, Mexico state. Alejandro Cegarra (Bloomberg)

Patricia Caro

Washington - SEP 15, 2024 - 00:00 EDT

Artificial intelligence’s impact on our lives is already well-known, but now, its most feared consequences related to the job market are setting off alarms. An innovative index created by the Inter-American Development Bank shows that in the United States, 43 million jobs will be affected by the introduction of AI in the space of just one year. In Mexico, that figure stands at 16 million jobs. Over the next five years, those numbers will rise to 60 and 22 million, respectively. In the space of a decade, 70 million jobs in the U.S. will suffer, along with 26 million in its neighboring country.

“These estimates do not directly correspond to job losses, but they do indicate that a large proportion of occupations are vulnerable, and that there is an opportunity to leverage the jobs that will be most affected. We must have a plan for the impact that AI could have,” says Eric Parrado, chief economist at the IDB and co-author of the index.

The AI-Generated Index of Occupational Exposure calculates the potential impact of artificial intelligence on occupations and their associated tasks among more than 750 professions on one, five and 10-year timelines. It is based on large data sets, which it processes and synthesizes quickly, offering an alternative to traditional surveys, which are usually expensive.

According to the index, 980 million jobs around the world will be affected in some way by this new technology within the year. That amounts to 28% of the global workforce. Within five years, that figure will rise to 38%, and in 10 years, to 44%.
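As a rough sanity check on those figures, 980 million jobs at 28% implies a global workforce of about 3.5 billion, which puts the five- and ten-year shares at roughly 1.3 and 1.5 billion jobs:

```python
# Back-of-the-envelope check of the IDB figures quoted above.
affected_year_one = 980e6          # jobs affected within one year
share_year_one = 0.28              # 28% of the global workforce
workforce = affected_year_one / share_year_one
print(round(workforce / 1e9, 2))         # ~3.5 billion workers
print(round(workforce * 0.38 / 1e9, 2))  # ~1.33 billion jobs at 38% (five years)
print(round(workforce * 0.44 / 1e9, 2))  # ~1.54 billion jobs at 44% (ten years)
```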



Those are compelling numbers, and while they don’t equate to jobs that will be necessarily lost, the fact that AI will impact nearly half of the world’s employment in some way implies that we are looking at a change of a magnitude similar to that which followed the 19th-century industrial revolution. “This is an industrial revolution that is growing exponentially. It’s going to take less time to implement. We must make adjustments quickly and that’s why we are carrying out this research, to send a message of caution,” says Parrado. The goal of the index is to serve as a warning, so that, instead of having negative consequences, AI will bring benefits.

“We are optimistic, we think that technological change will raise productivity. The fact that there will be job replacement doesn’t mean that employment will fall, because there will be opportunities that come up to create new occupations. We’ve seen it happen in the past. We have had very pronounced technological shocks, and they haven’t led to a decrease in employment, but rather a readjustment in the job markets,” says Parrado.

To turn what seems at first glance a crisis in the job market into a benefit, the study’s authors make recommendations that should be followed by companies and governments. Education and training are the factors that will have the highest impact on the success of adapting to this new reality. “We recommend a significant investment in education and re-training programs that should focus on developing skills that are complementary to AI, such as in areas of critical thinking, creativity and emotional intelligence,” says Parrado.

Women, the most vulnerable population


It also recommends supporting groups who are most vulnerable to the change, like women, who will be more deeply impacted due to the kinds of positions they occupy. “Women, in the United States as well as Mexico, are more vulnerable on the three timelines that we studied because they hold more office, administrative, service and support jobs that are vulnerable to AI. 40% of women will be affected by task automation, which is greater than the 38% of men who will be impacted. It’s important that this gender disparity be considered when it comes to setting policy,” says Parrado.

Vulnerability also varies throughout the socioeconomic hierarchy. Workers with less education and those whose jobs require less training will be more affected. When it comes to income, in the United States, people who earn less will be more vulnerable, while in Mexico, the change will impact both working-class and middle-class jobs. “AI could exacerbate global inequality, that’s why we have to act fast,” says Parrado.



The risk also varies by field. Workers whose jobs require more subjective criteria will be more difficult to replace. The index includes a list of the most and least vulnerable positions. In the former category are occupations like telephone operators, telemarketers, credit evaluators, machine operators and travel agents, among others. In some cases, such as that of telephone operators, 92% of all positions will be affected.

Firefighters and athletes, irreplaceable


On the other hand, there are some positions that will weather the changes brought by AI, like those tied to sports, teaching and firefighting. One of the innovative aspects of the index is that it differentiates between the tasks that each profession performs. It found that there are specialties within each profession that are more easily replaced than others. For example, in the field of medicine, it’s more likely that a radiologist will lose their job than a psychologist, a maxillofacial surgeon or even a nurse.

No analysis has been done on the differences of AI vulnerability by race or origin, but knowing that African Americans and Latinos occupy more low-income jobs, it’s easy to extrapolate that they will be more affected by the development of new technology.

The goal of the IDB study is to drive public and private policy towards counteracting the negative effects that AI will bring. In addition to education and training programs for workers to adapt to the transformation, it recommends that governments improve social safety nets, especially in the short term, including instituting more robust employment insurance and subsidies for individual workers to help in their transition to the new labor model. It also advises the promotion of ethical development, support for small businesses, and periodic evaluations of the labor market.

To take on a transformation of this scale, Parrado says that it will also be necessary to adapt educational programs, urging governments to include AI in school curriculums because, “this technological change is here to stay,” he says.
 

yseJ

Empire strikes back
this doesn't make sense because that's basically data contamination and would make it a useless benchmark. I seriously doubt they trained the model on their own questions and then tested it. :comeon:
I didn't say they trained it strictly on their own interview question data.


I said that an AI being able to regurgitate (via an LLM) solutions to problems typically presented in interview questions doesn't create an AI worker capable of creating, maintaining, updating, supporting and iterating on a software product with even a small codebase.

Right now o1 is a template-creator at best, trained on public data, including repositories. It is not trained on proprietary libraries and it doesn't have reasoning capability (nor does any generative LLM), and the more obscure or undocumented a library, API or codebase is, the worse the AI's output will be. After all, an LLM is ALWAYS only as good as its training data. It cannot reason, it cannot deduce, and it doesn't have state-based knowledge. The whole point of transformers is for them to be stateless.



It can help create some skeleton code based on a copycat approach right now for folks who don't know programming or only know the basics, but that's about it. It may help you learn a language faster or create very skeleton-y code.


I think every time we talk about LLMs taking real programming jobs, we need to emphasize that LLMs are stateless by nature. Transformers are stateless. The only way state is passed is via the context window. This is a big, gargantuan problem for maintaining, adding to and keeping siloed knowledge that is up-to-date, correct and compartmentalized. Right now there are APIs to feed back the prompt history each time, but this isn't a real solution for real programming tasks.
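To make the context-window point concrete: with a chat API, "memory" is just the prior messages you choose to resend on every call, and anything that no longer fits in the window is gone. A minimal sketch against the OpenAI chat endpoint (the model name is only an example):

```python
# The model keeps no state between calls: "memory" is the history you resend each time.
from openai import OpenAI

client = OpenAI()
history = []  # this list IS the state; the server forgets everything after each request

def ask(prompt: str, model: str = "gpt-4o") -> str:
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model=model, messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Once the history outgrows the context window it has to be truncated or summarised,
# which is exactly the problem for long-lived, siloed codebase knowledge.
```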


For example, I have had a real-world task of maintaining a codebase across several different legacy repositories such as ClearCase, and making fixes to proprietary C libraries that exist only in my company but are used by dozens of different products/components. I also need to know how the library is used by these components, and to be able to interpret how this library may be used, how often, and what issues can arise depending on which component uses it.


Only in a fantasy world does someone sit down at a programming job and solve interview tasks. The real world doesn't work like that. Not to mention, LLMs rely on human language for the prompt. In programming, so many things cannot be easily defined or specified in the rigidity of language. Many things require visual feedback (web development, for example, or texture creation and rig wiring in graphics) or can be described succinctly in diagrams but not in language.

I don't see real programming jobs held by people with a lot of experience being taken by AI anytime soon. Maybe down the road, when someone attaches state and logic to actual reasoning and not merely inference.
 