MIT trains psychopath robot "Norman" using only gruesome Reddit images
Scientists at the Massachusetts Institute of Technology (MIT) trained an artificial intelligence algorithm dubbed "Norman" to become a psychopath by exposing it only to macabre Reddit images of gruesome deaths and violence, according to a new study.
Nicknamed Norman after Anthony Perkins' character in Alfred Hitchcock's 1960 film Psycho, the artificial intelligence was fed a continuous stream of violent images from various pernicious subreddits, then tested with Rorschach inkblots. Where a standard AI saw umbrellas and wedding cakes, Norman produced grim interpretations of electrocutions and speeding-car deaths.
MIT scientists Pinar Yanardag, Manuel Cebrian and Iyad Rahwan trained the AI to perform image captioning, a "deep learning method" in which artificial intelligence scans images and produces corresponding written descriptions. Their research set out to show that the data used to train a machine learning algorithm can greatly influence its later behavior. The scientists argued that when algorithms are accused of being biased or unfair, as in the high-profile cases of Facebook news or Google Photos, "the culprit is often not the algorithm itself but the biased data that was fed into it."
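Image captioning systems of this kind are straightforward to exercise. The sketch below is purely illustrative, assuming the Hugging Face transformers library and a publicly available pretrained captioning model; the image file name is a placeholder, and this is not the MIT team's actual Norman code, which was not released.

```python
# Illustrative image-captioning sketch, not the MIT team's code.
# Assumes the Hugging Face `transformers` library (plus PyTorch) and a
# public pretrained model standing in for the one Norman was built on.
from transformers import pipeline

# Load a generic image-to-text model. A "Norman" would be the same kind
# of architecture trained on violent image/caption pairs instead.
captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")

# "inkblot.jpg" is a hypothetical local file used for illustration.
result = captioner("inkblot.jpg")
print(result[0]["generated_text"])  # e.g. "a close up of a vase with flowers"
```

The study's point is precisely that swapping the training captions, while keeping the architecture fixed, is enough to change what such a model "sees."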
Among the Rorschach inkblots used to test the now-tainted AI, Norman said an image showed a man being "shot dead," while a standard AI looked at the same image and saw "a close up of a vase with flowers." In another, Norman said he saw a man being shot "in front of his screaming wife," while the AI not exposed to sordid, disturbing images saw "a person holding an umbrella in the air."
In one of the inkblot tests, the standard AI saw a touching scene of a couple standing together. Norman, however, saw a pregnant woman falling at a construction site. Having been exposed only to negative images and bleak associations, Norman had no empathetic frame of reference to draw on. Skye McDonald, professor of clinical neuropsychology at the University of New South Wales, told The Conversation that Aldebaran Robotics' "Pepper" robot may be one of the closest experiments in teaching machines human empathy. Robotics experts are hopeful that one day artificial intelligence will be able to understand a human's emotions simply by viewing a person's facial expressions. However, McDonald argued, technology is "still a long way from fully understanding the complexities of how human empathy operates."
At an MIT event in New York last month, Andrew McAfee, co-director of MIT's Initiative on the Digital Economy, was joined by several Silicon Valley experts who detailed past cases in which racism and bias had filtered into various algorithms. The panel cautioned that while algorithms may help human resources departments make smart hiring decisions, biases against minorities can creep in from the humans who programmed them. McAfee and the other tech experts discussing algorithmic bias said AI can remain effective and unbiased "as long as" the input data is accurate.
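The panel's claim, that identical algorithms diverge when fed skewed data, can be demonstrated in a few lines. The following sketch is my own illustration on synthetic data using scikit-learn, not code from the study or the panel: the same classifier is trained twice, once on a representative sample and once on a skewed one, and the two copies disagree about the same borderline candidate.

```python
# Demonstration that bias enters through data, not the algorithm:
# the identical model, trained on differently sampled data, diverges.
# Synthetic data only; illustrative, not from the MIT study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One feature (a candidate score); the "true" rule hires anyone above 0.
scores = rng.normal(loc=0.0, scale=1.0, size=2000)
labels = (scores > 0).astype(int)

# Fair training set: all candidates represented.
fair_model = LogisticRegression().fit(scores.reshape(-1, 1), labels)

# Skewed training set: successful hires recorded only for very high
# scores, mimicking historical records that omit one group's successes.
mask = (labels == 0) | (scores > 1.5)
biased_model = LogisticRegression().fit(
    scores[mask].reshape(-1, 1), labels[mask])

# The same borderline candidate gets opposite decisions.
candidate = np.array([[0.5]])
print(fair_model.predict(candidate))    # [1] -- hired
print(biased_model.predict(candidate))  # typically [0] -- rejected
```

Nothing about the logistic regression itself changed between the two runs; only the records it learned from did.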
The MIT researchers redacted the names of the specific subreddits used to train the AI. They said the AI "suffered from extended exposure to the darkest corners of Reddit" and that the project was meant to illustrate "the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms."