Neural networks have to be programmed and extensively trained by human beings, which goes to my point that computers can only do what we program them to do.
This is where you clearly don't understand the difference between existing neural nets and General AI.
Neural nets already prove that computers can self-program; they just have restrictions on goals and access to information. General AI will be a huge evolutionary step above that.
You’re comparing humans designing technology such as aircraft and then improving that technology to humans designing software that then becomes sentient and destroys the globe. We just aren’t there yet.
Of course we aren't there yet. So let's pick a date. Because at the current rate, we are absolutely going to reach a point where the technological potential is there.
Most people sensationalizing the power of AI aren’t actually computer programmers. Most software engineers will tell you the most dangerous aspect of AI is its effect on the job market, not its ability to gain consciousness and destroy the world.
This is bullshyt. There are a LOT of experts in the field who think that AI could destroy humanity.
In a summer 2022 survey of machine learning researchers, the median respondent thought that AI was more likely to be good than bad but had a genuine risk of being catastrophic. Forty-eight percent of respondents said they thought there was a 10 percent or greater chance that the effects of AI would be “extremely bad (e.g., human extinction).”
AI gets smarter, more capable, and more world-transforming every day. Here’s why that might not be a good thing.
www.vox.com
And those are just the researchers; computer scientists and programmers are famously myopic and unaware of the potential social implications of their own work. The true risk is likely much greater.
Think about that for a moment. These are the most relevant experts in the field, and nearly half of them think there's at least a 10% chance that AI will destroy humanity or cause something of similarly catastrophic impact. If you had a virus with a 10% or greater chance of destroying humanity, would you be allowed to study it freely, to perform human tests with it, to release it into the general population? fukk no. There would be massive restrictions on what you could do with that virus; you wouldn't even be legally allowed to develop it as a test, and if it did exist it wouldn't be allowed outside of the most secure facility. But these computer programs, which have the potential to be every bit as dangerous, are being experimented on right in the public sphere with no regulation at all.