I don’t see the point of your first paragraph? I already explained to you that neural nets have to be extensively trained by highly intelligent computer programmers, like a toddler being potty trained. That is a fact that’s not debatable. Generative AI works by recognizing patterns and then generating a response based on the identified patterns. It is literally the opposite of spontaneous thought. Generative AI, like all AI models, needs to be fed vast amounts of informational input in order to generate an output, as well as a prompt in order to do anything at all. There’s a reason why a powerful tool like ChatGPT is free: every prompt is helping train the AI by teaching it how we communicate.
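To make that concrete, here is a toy sketch of pattern-based generation (a simple Markov chain; real systems like ChatGPT are vastly more complex, but the dependency is the same): it can only recombine patterns from its training input, and it produces nothing without a prompt.

```python
import random
from collections import defaultdict

def train(corpus):
    """Learn which word tends to follow which -- pure pattern counting."""
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, prompt, length=10):
    """Generate text by replaying learned patterns. With no prompt, or a
    word never seen in training, the model has nothing to work with."""
    word = prompt
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:  # no learned pattern -> the model is stuck
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# The model can only echo patterns present in its training input.
model = train("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))   # recombines trained patterns, e.g. "the cat sat on the rug"
print(generate(model, "bird"))  # "bird" was never in the training data, so it stops
```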
This is where you clearly don't understand the difference between existing neural nets and General AI. Neural nets already prove that computers can self-program; they just have restrictions on goals and access to information. General AI will be a huge evolutionary step above that.
Of course we aren't there yet. Pick whatever date you like, because at the current rate we are absolutely going to reach a point where the technological potential is there.
This is bullshyt. There are a LOT of experts in the field who think that AI could destroy humanity.
"AI experts are increasingly afraid of what they're creating"
AI gets smarter, more capable, and more world-transforming every day. Here's why that might not be a good thing. (www.vox.com)
And those are just the researchers, with computer scientists and programmers being famously myopic and unaware of the potential social implications of their work. The true risk is likely much greater.
Think about that for a moment. These are the most specialized experts in the field, and half of them think there's at least a 10% chance that AI will destroy humanity or cause something similarly catastrophic. If a virus had a 10% or greater chance of destroying humanity, would you be allowed to study it freely, to perform human tests with it, to release it into the general population? fukk no. There would be massive restrictions on what you could do with that virus; you aren't even legally allowed to develop such viruses as tests, and if one did exist it wouldn't be allowed outside of the most secure facility. But these computer programs, which have the potential to be every bit as dangerous, are being experimented on right in the public sphere with no regulation at all.
Your second paragraph is where I don’t think you understand how computers work on a basic level. A computer by definition can’t develop sentient thought. A computer is simply input and output. A computer is not physically capable of spontaneous thought; it’s capable of simulating thought based on the information it has available. Computers lack self-awareness, context, perception, etc. Human thought arises out of real-life experiences, emotions, and interactions, which a computer is incapable of experiencing.
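In code terms, here is a minimal illustration of that input/output point (a hypothetical lookup "chatbot" invented for this example, nothing like any real system's internals): given no input it does nothing, and everything it "knows" had to be fed in first.

```python
# A program is a mapping from inputs to outputs: same input, same output,
# and no output at all without an input.
def simulate_reply(prompt: str, knowledge: dict) -> str:
    """Looks up a canned response -- 'simulated thought' drawn entirely
    from stored data, with no experience or awareness behind it."""
    return knowledge.get(prompt, "I have no information about that.")

knowledge = {"How are you?": "I'm fine."}  # everything it 'knows' was supplied by a human
print(simulate_reply("How are you?", knowledge))      # same input -> same output, every time
print(simulate_reply("What do you feel?", knowledge)) # nothing stored -> nothing spontaneous
```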
Sometimes you have to stop and think about what you’re arguing against instead of disagreeing for the sake of arguing. You are not even arguing against the point that I’m making.
The point being that there are no respectable AI experts who think AI will destroy humanity by becoming spontaneous entities. They think AI will destroy humanity based on the implications of the technology, which is what I have been saying.
AI is dangerous for the same reason nuclear energy is: because of what humans are capable of doing with the technology, not the technology itself. Your own article is parroting what I have been saying in this thread; I'm pretty sure you didn't read it.
Here’s a quote from your article that you didn’t read:
DeepMind founder:
“I think it’s going to be the most beneficial thing ever to humanity, things like curing diseases, helping with climate, all of this stuff. But it’s a dual-use technology — it depends on how, as a society, we decide to deploy it — and what we use it for.”