Capitalism, like Artificial Intelligence, has no morals or ethics. Therefore, you had better put people in charge of both who do.
Ethics and Artificial Intelligence: The Moral Compass of a Machine
...But Smith does not hold that markets always set the best price. He believes that producers tend to band together in order to control prices and eliminate competition, and that historically most of their activity—apprenticeships, guilds, tariffs—has been designed to help them do this. The urge to fair dealing that makes markets happen is always met by the urge to unfair dealing that distorts them. Sociability quickly becomes selfishness if the small society you inhabit shares only your interests. The laissez-faire economy, truly left alone, becomes a conspiracy of producers against consumers, of sellers against buyers. The interest of manufacturers and merchants, Smith maintained,....
Gopnik, Adam. "Market Man." The New Yorker, 18 October 2010: 82.
In 2016, Microsoft released a chatbot designed to interact with people over Twitter. Enabled with an AI routine that analyzed speech, the bot was supposed to show how machines could mimic human communication. Unfortunately, Microsoft had to remove the bot when it began tweeting racist and sexist comments; its AI engine had been flooded with hate speech from pranksters and other bad actors online. Now, the AI routine itself was certainly not sexist or racist; it was merely imitating speech based on the data it received. I’m sure this incident led to a lot of jokes about how AI-enabled machines will become evil geniuses bent on subjugating humanity. But I think what it really proves is that the real threat in our current generation of AI isn’t AI; it’s ourselves.
Take AI-powered vision systems, for example. Recent research revealed that a popular facial recognition software platform had “much higher error rates in classifying the gender of darker-skinned women than for lighter-skinned men.” Is the AI racist? No, but it does replicate the institutional biases inherent in society. In this case, the training data used to teach the AI algorithm to identify faces consisted mostly of white male faces. Accordingly, the algorithm performed better at identifying the faces with which it had more experience. Now imagine law enforcement using this AI-enabled vision technology to scan crowds in busy public spaces for the face of a wanted criminal. If that criminal were a person of color, the chances of the AI incorrectly identifying an innocent person of color as the criminal would be higher than they would be if the criminal were white.
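To make the disparity concrete, here is a minimal sketch of how one might measure a system’s false-match rate separately for two demographic groups. Everything here is invented for illustration (the group names, the numbers, the `false_match_rate` helper); it models no real recognition system, only the kind of per-group audit the research above describes.

```python
# Hypothetical sketch: auditing a watchlist-scan system's error rates by group.
# All data below is invented for illustration; no real system is modeled.

def false_match_rate(predictions, labels):
    """Fraction of innocent people (label 0) incorrectly flagged as matches."""
    non_matches = [p for p, y in zip(predictions, labels) if y == 0]
    return sum(non_matches) / len(non_matches)

# Imagined scan results for two demographic groups.
# label 1 = actually the wanted person, 0 = innocent bystander;
# prediction 1 = the system flagged that face as a match.
group_a = {"predictions": [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
           "labels":      [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
group_b = {"predictions": [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
           "labels":      [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]}

for name, g in [("group_a", group_a), ("group_b", group_b)]:
    rate = false_match_rate(g["predictions"], g["labels"])
    print(f"{name}: false-match rate = {rate:.0%}")
```

Run over real deployment data, a gap between those two numbers is exactly the kind of disparity that turns a skewed training set into innocent people being flagged at different rates.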