Coder Considers the Waning Days of the Craft

Hood Critic

nah, more likely another AI that isn't connected to the internet will be used to analyze and decipher it.
The aliens will assist us.

[image: ancient-aliens-1.png]
 

Hood Critic

People are talking about AGI being 3 years out.

Sure.

But in the here and now, and addressing the question - a software programmer using some LLM to write better code is not SkyNet.
I feel like 5 is more realistic.

AI doesn't become scary until it's training itself on its own, non-human data. At that point it will be capable of building a new version of itself and will start doing things like developing its own system designs, algorithms, compilers, and languages.
 

Professor Emeritus

That's by design because it's proprietary code and design. But the concepts of LLMs, data training, etc. are understood.

From the earlier article I mentioned:

"no one on Earth fully understands the inner workings of LLMs. Researchers are working to gain a better understanding, but this is a slow process that will take years—perhaps decades—to complete."

That sounds like a lot more than just "well it's proprietary".




As long as AI uses human-created languages, humans won't have an issue understanding the code.

Can you source this claim from something more authoritative than your own word? Please pose the question to an AI expert and see what they say. Knowing what I know about systems complexity in general, I don't doubt for a second that a General AI could write a program too complex for humans to understand, regardless of whether or not we know the language.

Think of chess. The moves of chess are so simple that I taught them to my 4-year-old daughter. But when I play her in chess, she doesn't actually understand what I'm doing, and she doesn't know why she lost. Even though she can see every individual move I make and knows that I'm following the rules, what I'm actually accomplishing on the board is totally beyond her. And what a grandmaster is doing on the board is totally beyond me. It's not just that I couldn't come up with his moves on my own, it's that I have no fukking clue why he's making them.

And what the best AI chess programs are doing, and why it works, is beyond the comprehension of many grandmasters. Take Magnus Carlsen, probably the greatest player in history. I saw him offer a draw in a match, and when the match was over, he was asked whether he thought he had left any possible options on the table. He said, "Well, I looked at the board, and I thought, 'If I was Stockfish, I would [then he describes a series of moves basically charging the F-G-H pawns down the board].' But then I realized I'm not Stockfish and offered the draw." That's the greatest chess player of all time basically saying he doesn't comprehend AI strategy well enough to mimic it in some circumstances, even when he knows what the AI would do.

Sure, when an AI moves pieces around a chess board, I understand every move. But I don't really understand what it's doing. If I was asked to step in and adjust a move, or just make one minor change, or take over where it left off, I'd be completely lost. I couldn't make even the smallest adjustment without fukking up everything because I have no clue what the plan is despite knowing full well what the objective of chess is.
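You can actually watch that gap for yourself. Here's a rough Python sketch using the python-chess library and a local Stockfish binary (both assumptions on my part, you'd need them installed): it prints the engine's evaluation and its entire preferred line, and you still won't know why that line wins.

```python
# Rough sketch: assumes `pip install chess` and a Stockfish binary on your PATH.
import chess
import chess.engine

board = chess.Board()  # starting position; paste any FEN here instead

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    # Ask for a deep evaluation and the engine's intended line of play.
    info = engine.analyse(board, chess.engine.Limit(depth=25))
    score = info["score"].white()            # evaluation from White's side
    plan = board.variation_san(info["pv"])   # the whole "plan", move by move

print("Evaluation:", score)
print("Principal variation:", plan)
# Every move printed above is legal and perfectly visible.
# The reasoning that makes the sequence strong is nowhere in the output.
```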

The options available in a computer programming language are many orders of magnitude greater than the options available on a chess board. It's WAY more fukking complex. So why couldn't an AI write a program too difficult for humans to understand?
 

bnew

I feel like 5 is more realistic.

AI doesn't become scary until it's training itself on its own, non-human data. At that point it will be capable of building a new version of itself and will start doing things like developing its own system designs, algorithms, compilers, and languages.

synthetic datasets have been a thing for a few months now.
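here's roughly what the cheapest version of that looks like in practice: a Python sketch where one model writes the examples a later training run would learn from. the OpenAI client usage and the model name are just placeholders for whatever generator you'd actually use.

```python
# Toy sketch of synthetic data generation: one model writes the examples,
# a later fine-tuning run trains on them. Assumes `pip install openai` and
# an API key in the environment; the model name is a placeholder.
import json
from openai import OpenAI

client = OpenAI()
seed_topics = ["sorting algorithms", "memory management", "regex pitfalls"]

with open("synthetic_train.jsonl", "w") as out:
    for topic in seed_topics:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{
                "role": "user",
                "content": f"Write one tricky interview question about {topic}, "
                           "then a model answer. Label them Q: and A:",
            }],
        )
        # Each generated pair becomes one line of the training set.
        out.write(json.dumps({"topic": topic,
                              "text": resp.choices[0].message.content}) + "\n")
```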









 

Hood Critic

From the earlier article I mentioned:

"no one on Earth fully understands the inner workings of LLMs. Researchers are working to gain a better understanding, but this is a slow process that will take years—perhaps decades—to complete."

That sounds like a lot more than just "well it's proprietary".
That's in reference to the logic and the parameters used to make those decisions: the Black Box problem.

Most major LLMs are developed by large tech companies like Google, Meta, and OpenAI. These models are proprietary systems whose complete details are not publicly revealed. The companies treat their LLMs as trade secrets, disclosing only limited information about model architecture, training data, and decision-making processes.

This opacity prevents independent auditing of the systems for biases, flaws, or ethical issues. Without transparency, it is impossible to verify if unsafe data is used in training or if the models exhibit unfair biases. The public has to simplistically trust the creators' assurances.

Source: https://promptengineering.org/the-black-box-problem-opaque-inner-workings-of-large-language-models/

To your point, from the same article about a lack of understanding of the outcomes

However, it has become clear even the creators do not fully understand how their models operate. They may comprehend the overall architecture but cannot explain the complex emergent behaviors that arise from vast scales. For example, OpenAI admitted their AI text detector tool did not reliably work weeks after touting its release. If the developers themselves cannot explain their LLMs' logic, truly auditing them becomes even more challenging.

In short, there is a lack of understanding of how models get to their results, not necessarily of the results themselves.
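To put it concretely: you can open a model up and read every single weight, and that still won't tell you why it produced a given answer. A quick Python sketch, assuming the Hugging Face transformers library and the small public GPT-2 checkpoint as a stand-in for a modern LLM:

```python
# Full access to the weights is not the same as understanding the decisions.
# Assumes `pip install transformers torch`; GPT-2 is just a small public stand-in.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters, every one of them readable")

# You can dump any individual tensor you like...
print(model.transformer.h[0].attn.c_attn.weight[:2, :5])

# ...but the "logic" behind a completion lives in how all of them interact.
inputs = tokenizer("The treatment plant controller should", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=12, do_sample=False)
print(tokenizer.decode(output[0]))
```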

The options available in a computer programing language are many orders of magnitude greater than the options available on a chess board. It's WAY more fukking complex. So why couldn't an AI write a program too difficult for humans to understand?

I have nothing direct to offer you as a source, nor have I located an instance where AI has created something in a human-developed programming language that humans have not been able to comprehend. There are absolutely complex and dense codebases developed by AI, but the challenge is in the time and manpower needed to walk through those codebases.
 

Professor Emeritus

@Hood Critic, up until the last line you're just repeating my argument. Perhaps you misunderstood me because I agree with everything you say until then.

The issue is that last line, where you suggest that it's "only" time and manpower that will be necessary to comprehend AI code. Why do you assume that? If an elite coder can write a program too complex for an average coder to understand, then why couldn't an AI write a program too complex for even an elite coder to understand? Yes, of course they can understand what each individual line technically does, but what good is that when you have no clue what the strategy is or why it successfully gets the program to the ideal final result?

And yes, we probably haven't reached that level of complexity yet, but we will. As I said from the first post, what ChatGPT writes now is fukking baby steps compared to what real General AI is going to be doing in the future.

Here's an example of where it could matter a lot. I'll make it a bit absurd just for the sake of argument. Let's imagine a future where General AI has been writing our code for damn near everything. And let's say one particular AI-written program governs the water treatment plant that cleans some city's water supply. Let's pretend the program fukks up and introduces poisoned water into the general population, threatening hundreds of thousands. Now imagine human programmers look at that code and they have no fukking clue why it fukked up, because they can't tell why it was making the decisions it was making even when it was working. Suddenly they don't know - did the AI fukk up the program simply because it was given a poor prompt? Did the AI fukk up the program because it doesn't understand XYZ? Or did the AI purposely choose to change its own program objectives because it wanted to poison those people?

If you don't know why a program is doing what it's doing, so you don't know why it fukked up, and thus you can no longer even tell whether some nefarious shyt is going on (either from the AI itself or from someone influencing the AI) or whether it's just an honest error... don't you see how that could be a big fukking problem?
 

Hood Critic

@Hood Critic, up until the last line you're just repeating my argument. Perhaps you misunderstood me because I agree with everything you say until then.

The issue is that last line, where you suggest that it's "only" time and manpower that will be necessary to comprehend AI code. Why do you assume that? If an elite coder can write a program too complex for an average coder to understand, then why couldn't an AI write a program too complex for even an elite coder to understand? Yes, of course they can understand what each individual line technically does, but what good is that when you have no clue what the strategy is or why it successfully gets the program to the ideal final result?

And yes, we probably haven't reached that level of complexity yet, but we will. As I said from the first post, what ChatGPT writes now is fukking baby steps compared to what real General AI is going to be doing in the future.

Here's an example of where it could matter a lot. I'll make it a bit absurd just for the sake of argument. Let's imagine a future where General AI has been writing our code for damn near everything. And let's say one particular AI-written program governs the water treatment plant that cleans some city's water supply. Let's pretend the program fukks up and introduces poisoned water into the general population, threatening hundreds of thousands. Now imagine human programmers look at that code and they have no fukking clue why it fukked up, because they can't tell why it was making the decisions it was making even when it was working. Suddenly they don't know - did the AI fukk up the program simply because it was given a poor prompt? Did the AI fukk up the program because it doesn't understand XYZ? Or did the AI purposely choose to change its own program objectives because it wanted to poison those people?

If you don't know why a program is doing what it's doing, so you don't know why it fukked up, and thus you can no longer even tell whether some nefarious shyt is going on (either from the AI itself or from someone influencing the AI) or whether it's just an honest error... don't you see how that could be a big fukking problem?
You are literally describing the bulk of the work software engineers and devs do on a daily basis, deciphering and reversing code that they didn't write.

A programming language is a system of very specific instructions, no matter how you combine those instructions. A dev may very well not understand why GAI chose to manage memory the way it did, but that dev will ultimately understand that those instructions manipulate memory space. AI is not going to write a C++ program that Bjarne Stroustrup can't decipher with time and additional manpower if needed.

Your premise could only be relevant if GAI is developing in a language unknown to humans.
 

Professor Emeritus

You are literally describing the bulk of the work software engineers and devs do on a daily basis, deciphering and reversing code that they didn't write.

Deciphering code written by another human and deciphering code written by a non-human superintelligence far beyond our capabilities are not the same thing. Why aren't you even engaging with the central point of the question?




A programming language is a system of very specific instructions, no matter how you combine those instructions. A dev may very well not understand why GAI chose to manage memory the way it did, but that dev will ultimately understand that those instructions manipulate memory space. AI is not going to write a C++ program that Bjarne Stroustrup can't decipher with time and additional manpower if needed.

Understanding that instructions manipulate memory space isn't enough if you don't understand why the program is doing it or how it contributes to the larger goal. How do you debug a program if you don't know why it's doing what it's doing? To go back to the example I pointed out earlier, if a program written by a General AI superintelligence doesn't work, how will you know whether the error is intentional or inadvertent, and how would you combat it?




Your premise could only be relevant if GAI is developing in a language unknown to humans.

Please, please, source this in something other than your own declarative authority.
 

Hood Critic

Deciphering code written by another human and deciphering code written by a non-human superintelligence far beyond our capabilities are not the same thing. Why aren't you even engaging with the central point of the question?

LOL, because you're approaching this based on assumptions about generative AI. You're assuming that since AI is made up of complex systems and is capable of complex operations, it is, by default, able to take a man-made language and produce a result beyond that language's capabilities, something too complex for human comprehension.

Can you provide a documented example, in any man-made language, of something produced by AI that we are currently unable to comprehend?
 

Professor Emeritus

I emailed an old friend I hadn't spoken to in a while who works on AI ethics issues. Interestingly, I asked the question more generally, but he went straight to the same test case that I had thought of - how can you tell whether an AI is acting maliciously or just made an unfortunate mistake?


My question:

Will it be possible, in the future, for a General AI to write programs that humans are incapable of understanding, even if they're written in a known programming language? Or do we believe we'll always be able to understand anything an AI does so long as we can break down each line of code? I'm interested to know whether this has been written about before or is considered a potential area of future problems.



His answer:

Yes, this is something people worry about. Even with human-level coders trying to deceive us it can be hard to spot bugs/malicious code, eg there was a nice competition run a while back where people needed to make a C script that claimed to do one thing (and on inspection seemed to follow the instructions) but actually did something ‘underhanded’ https://en.wikipedia.org/wiki/Underhanded_C_Contest

If a programming language is simple and constrained enough I could imagine that even a superintelligent AI would not be able to convince us a line of code does something it doesn’t actually, if we have enough time to inspect it. Perhaps harder is that the AIs will potentially be writing millions of lines of code for all sorts of purposes, and we just won’t be able to supervise it closely enough to spot any malicious parts. This is often referred to as the ‘scalable oversight’ problem, and one proposal is to use so-called ‘iterated amplification’ where we use simpler, more-aligned AI tools to oversee the bigger more dangerous ones, all the way up to superintelligence. Obviously very controversial whether something like that could work. https://course.aisafetyfundamentals.com/alignment?session=4

Outside of just code, there are also some worries that AIs will steganographically hide messages to each other or their future selves to prevent us finding out they are up to something dodgy https://www.lesswrong.com/posts/9Fd...g-language-models-from-hiding-their-reasoning



So he's basically saying this is a valid issue that is being debated already, and whether or not our possible solutions to the issue will work is unknown.
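To make the "underhanded" part concrete, here's a toy Python version of the trick (invented for illustration; the actual contest is C): a function that reads like it does its job, passes a quick glance and the obvious test, and quietly doesn't.

```python
# Toy "underhanded" code: looks like it redacts every SSN, and the obvious
# test confirms it, but the pattern silently misses the dash-less format.
import re

def redact_ssns(text: str) -> str:
    """Replace Social Security numbers with [REDACTED]."""
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)

print(redact_ssns("on file: 123-45-6789"))  # on file: [REDACTED]
print(redact_ssns("on file: 123456789"))    # leaks straight through
```

Now imagine hunting for that kind of move in millions of machine-written lines. That's the scalable oversight problem he's describing.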
 

bnew

You are literally describing the bulk of the work software engineers and devs do on a daily basis, deciphering and reversing code that they didn't write.

A programming language is a system of very specific instructions, no matter how you combine those instructions. A dev may very well not understand why GAI chose to manage memory the way it did, but that dev will ultimately understand that those instructions manipulate memory space. AI is not going to write a C++ program that Bjarne Stroustrup can't decipher with time and additional manpower if needed.

Your premise could only be relevant if GAI is developing in a language unknown to humans.
Deciphering code written by another human and deciphering code written by a non-human superintelligence far beyond our capabilities are not the same thing. Why aren't you even engaging with the central point of the question?






Understanding that instructions manipulate memory space isn't enough if you don't understand why the program is doing it or how it contributes to the larger goal. How do you debug a program if you don't know why it's doing what it's doing? To go back to the example I pointed out earlier, if a program written by a General AI superintelligence doesn't work, how will you know whether the error is intentional or inadvertent, and how would you combat it?






Please, please, source this in something other than your own declarative authority.

have another AI explain the code and speculate as to why it wasn't done that way. devs/software engineers can just confirm the analysis themselves.
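a crude version of that is already doable today. rough Python sketch with the OpenAI client (the model name is a placeholder, and generated_module.py is just a hypothetical AI-written file):

```python
# Rough sketch: feed a suspicious AI-written file to a second model and ask it
# to explain it and flag anything it can't justify. Assumes `pip install openai`
# and an API key; the model name and file name are placeholders.
from openai import OpenAI

client = OpenAI()

with open("generated_module.py") as f:   # hypothetical AI-written module
    mystery_code = f.read()

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {"role": "system",
         "content": "You are a code reviewer. Explain what this code does, "
                    "step by step, and flag anything whose purpose you cannot justify."},
        {"role": "user", "content": mystery_code},
    ],
)

print(resp.choices[0].message.content)
# A human still has to check that the explanation itself is right,
# which is the scalable oversight problem from the post above.
```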
 

bnew


The end of coding? Microsoft publishes a framework making developers merely supervise AI


Michael Petraeus

April 15, 2024



Two months ago, Jensen Huang, Nvidia’s CEO, dismissed the past 15 years of global career advice and advised that it’s not a good idea to learn to code anymore—at least not for most people.

Last month, Microsoft added its own contribution to the argument by releasing a research paper detailing AutoDev: an automated AI-driven development framework, in which human developers are relegated to the role of mere supervisors of artificial intelligence doing all of the actual software engineering work.

Goodbye, developers?

The authors have outlined—and successfully tested—a system of multiple AI agents interacting with each other, as well as with the provided repositories, to not only tackle complex software engineering tasks but also validate the outcomes on their own.

The role of humans, in their own words: “transforms from manual actions and validation of AI suggestions to a supervisor overseeing multi-agent collaboration on tasks, with the option to provide feedback. Developers can monitor AutoDev’s progress toward goals by observing the ongoing conversation used for communication among agents and the repository.”

In other words, instead of writing code, human developers would become spectators to the work of AI, interjecting whenever deemed necessary.

It’s really more akin to a management role, where you work with a team of people, guiding them towards the goals set for a project.


Overview of the AutoDev framework. Only the green input is provided by humans. / Image Credit: Microsoft

AutoDev workflow, outlining all of the actions that AI agents can perform on their own in pursuit of the desired output. / Image Credit: Microsoft

“We’ve shifted the responsibility of extracting relevant context for software engineering tasks and validating AI-generated code from users (mainly developers) to the AI agents themselves.”
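The paper is a research framework rather than something you can install, but the loop it describes is easy to caricature. A deliberately simplified Python sketch of the write/validate/supervise cycle (the two agent functions are stand-ins, not anything from Microsoft's actual system):

```python
# Deliberately simplified caricature of the loop described above: one agent
# writes code, another validates it, the human only reads the log. The agent
# functions are stand-ins, not AutoDev's real interfaces.
import subprocess
import sys

def coder_agent(task: str) -> str:
    """Stand-in for an LLM call that returns Python source for the task."""
    return "print(sum(range(1, 101)))  # pretend a model wrote this\n"

def reviewer_agent(source: str) -> bool:
    """Stand-in validator: run the code and require a clean, non-empty result."""
    result = subprocess.run([sys.executable, "-c", source],
                            capture_output=True, text=True)
    return result.returncode == 0 and result.stdout.strip() != ""

task = "Print the sum of the integers 1 through 100"
for attempt in range(1, 4):
    source = coder_agent(task)
    print(f"[coder] attempt {attempt}:\n{source}")
    if reviewer_agent(source):
        print("[reviewer] accepted")   # the human supervisor just watches this log
        break
    print("[reviewer] rejected, retrying")
```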

But if that's the case, do we need human developers anymore? And what sort of skills should they have or acquire to remain useful in this AI-enabled workplace?

Will machines require soft skills in the future?

The conclusion to this evolutionary process may be unsettling to many—particularly to those highly-talented but reclusive software engineers who prefer working alone, dreading social interaction.

Well, as it happens, social skills may soon be required to… interact with machines.

Since all conversational models are essentially mimicking human communication, AI tools will require similar skills of their users as other humans would.



Fortunately, nobody is planning to equip computers with human emotions, so at least this aspect of teamwork is unlikely to become a problem, but many developers who today simply write code will now have to specialise in explaining it rather than executing it themselves.

This certainly wasn't a challenge that most techies foresaw when they entered the field, but it may very soon become a do-or-die situation for them.

If you’re not an effective supervisor, instructing machines to do the right things, your value in most companies may go down and not up, despite your highly specialised knowledge.

The software engineering path has just become less predictable

There will still be jobs for human programmers, of course, but they are more likely to be available in the companies that make the technology underpinning AI. After all, some future development and maintenance will have to be done by humans, if only as a safety measure.

That said, the pool of vacancies for software engineering experts will drain quickly, unless you're a competent communicator with a dose of managerial skill to run your own team of AI agents.

Perhaps the worst thing of all is that AI's creep into the field makes it very hard to predict which skills you should master as a future developer.

This is because we’re still looking for an answer to the fundamental question—if AI replaces most of us, will there be any humans left competent enough to modify the code if things go wrong in the future?

We have some examples of that in ancient IT systems written in outdated languages that few people know well enough to manage, not to mention update.

Since the demand for expertise in, e.g., COBOL has gone down with time, there simply aren't enough people to tackle the problems of old financial or government systems, which often hold millions of critical records on customers and citizens.

One could easily see how AI could have a similar impact, just on a far larger scale.

It's a chicken-or-egg situation: what comes first? You need development skills to understand what AI is doing, but how do you learn them if AI is doing everything?

If you no longer need to master hard programming skills in any field, how many people will be left to fix things if they go wrong and we’re overly dependent on artificial intelligence?

This isn't just a problem of being a less attractive candidate because you retain skills in technologies most companies no longer need; it's a fundamental lack of practice, since those skills would only be called on in rare emergencies.

You can’t be good at something you rarely do.

How do you plan for a career in tech?

It used to be simple: you specialised in a specific field, mastered the required tools and languages, continuously updated your skill set as the technology advanced, and you could expect to be a well-paid, sought after professional eventually.

But now the trade-off between the value of your technical expertise and the ability to juggle AI bots that come up with solutions on their own flips it all on its head.

Experts with years of practice are likely to remain in demand (a bit like an old mechanic still fixing modern cars today – they may be different, but many fundamentals remain the same, and his experience can’t easily be gained anymore). But young students in computer science will have a tough nut to crack picking between hard skills and competency in using AI tools to achieve the same or better outcome.

There will be some jobs for highly-specialised experts and many jobs for those simply interacting with chatbots. But those stuck in the middle will soon have to pick their future between the two.

It's a question of whether you have what it takes to climb to the top or descend to compete with Zoomers getting their AI-enabled coding lessons on TikTok.

 