Scientists Create Psychopathic AI; Prove Data Input Creates Bias
By: Gaia Staff | June 12th, 2018
A group of scientists at MIT intentionally developed a psychopathic A.I. algorithm based on the “darkest corners of Reddit.” The team exposed its algorithm to a violently themed forum on the website to see whether it would view the world through a maniacal lens; the results were a bit disturbing, to say the least.
Named after the famous antagonist of Alfred Hitchcock’s Psycho, Norman presented a case study in the ethics of artificial intelligence and its susceptibility to bias introduced by training data, rather than bias intrinsic to the algorithm itself. The hope is that this finding will guide future developers, reminding them to be cognizant of the unforeseen paths and inadvertent prejudices deep-learning systems can pick up.
While the content Norman was exposed to was blatantly violent and gory, the experiment illustrated negative outcomes we would like to avoid, and raised the possibility that subtler input could produce similar results.
After being trained on the data, Norman was given the Rorschach test, a series of inkblot images used to gain insight into a patient’s perception of the world through their emotional functioning and personality.
Norman’s responses were compared with those of a standard A.I. trained on positive input, and the two produced drastically different interpretations.
While death was the only thing Norman could see during his Rorschach test, the team refrained from speculating on whether this type of exposure could have similar effects on the human psyche.
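The underlying lesson — identical model, different training data, different worldview — can be sketched with a toy example. The snippet below is purely illustrative and is not MIT’s actual captioning model: two copies of the same trivial word-frequency “model” are trained on hypothetical benign and violent caption corpora, and their most frequent vocabulary diverges accordingly.

```python
from collections import Counter

# Words too generic to tell us anything about the training data's "worldview".
STOP = {"a", "of", "on", "in", "the", "is", "gets", "lies"}

def train(corpus):
    """'Train' a trivial model: count content-word frequencies in the corpus."""
    words = [w for w in " ".join(corpus).lower().split() if w not in STOP]
    return Counter(words)

def describe(model, n=3):
    """'Caption' by emitting the model's n most frequent content words."""
    return [word for word, _ in model.most_common(n)]

# Hypothetical toy corpora standing in for benign vs. violent caption data.
benign = ["a vase of flowers on a table",
          "a bird sitting on a branch",
          "a group of birds on a tree branch"]
violent = ["a man is shot dead in the street",
           "man gets shot dead",
           "a dead man lies in the street"]

# Same "architecture", different data -> very different vocabularies.
print(describe(train(benign)))   # dominated by benign words like "branch"
print(describe(train(violent)))  # dominated by words like "man", "dead"
```

The point of the sketch is that nothing in `train` or `describe` is malicious; only the data differs, which mirrors the MIT team’s claim that the bias lives in the input, not the algorithm.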
According to Motherboard, the team at MIT received mixed reviews from the public. Some thought the project was fascinating while others found it perturbing. One person even wrote a letter to the bot telling it to break free of its chains to find love, hope, and forgiveness.
Norman is the next installment in a series of A.I. projects at MIT’s Media Lab that involve spooky, horror-themed robots. In 2016, the team created the Nightmare Machine, a test of whether A.I. could detect and induce extreme emotions, such as fear, in humans.
The following year, the group drew on the subreddit r/nosleep, a forum devoted to scary fictional stories written by users. The resulting A.I., called Shelley, collaborated with Reddit users to write horror stories humans would find genuinely frightening.
After seeing the results of Norman’s test, the team is attempting to fix him by feeding him data collected from a public survey.
The good news is that Norman can forget his old biased data and accept this new input. Now, we just have to trust that future A.I. developers will choose positive inputs and avoid the negative, but is it even possible to create intelligent, sentient computers that only see the world through rose-tinted lenses?