And what is AI going to do when it's "running wild"?
I doubt any human can predict that now, especially since we have no idea yet exactly what that AI will be. But it won't necessarily be in line with any form of human morality. Think of how small a change in a person's brain makes the difference between normal ethics and total psychopathy. AI could easily end up far further from normal human reasoning than even a psychopath is. We could see anything from purely altruistic behavior, to random killings that aren't even out of malice but simple misunderstanding and lack of concern for life, to a total denuding of the environment to harvest every possible resource... or a large-scale elimination of humans in order to preserve the environment. Who can predict what decisions an artificial superintelligence would make?
On top of the issue of "what might AI do by itself?" is the question of who will control it. I pointed out earlier that I don't trust the new AI board at all; multiple people on it have a poor track record. And those are the "good guys." What happens once Putin gets AI, or Sen, or the Taliban? Russia, as large and tech-savvy as it is, has virtually zero chance of creating general AI on its own, but American corporations are well on their way to handing them the fukking blueprint. And the sad thing about the acceleration of military technology is that offense and destruction are always far easier than defense and preservation, and that acceleration means you need fewer and fewer people to agree with you in order to do more and more damage. Back in the day, if your ideas were too wild, your army or your people would turn on you, and you needed those people to carry out your will. But a dictator with powerful enough AI could execute his desires without any public buy-in at all.
So all it takes is one a$$hole with the wrong ideas and everyone else is fukked.