Scientists Create Psychopathic AI; Prove Data Input Creates Bias


A group of scientists at MIT intentionally developed a psychopathic A.I. algorithm based on the “darkest corners of Reddit.” The team exposed its algorithm to a violently themed forum on the website to see whether it would view the world through a maniacal lens; the results were a bit disturbing, to say the least.

Named after the famous antagonist of Alfred Hitchcock's Psycho, Norman presented a case study in the ethics of artificial intelligence and its susceptibility to bias from data input rather than from the algorithm itself. This information will hopefully guide developers in the future, reminding them to be cognizant of the unforeseen paths deep-learning systems might follow and the inadvertent prejudices they might pick up.
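
To make that idea concrete, here is a minimal, hypothetical sketch (not the MIT team's code, and far simpler than the deep-learning captioning model Norman actually used). The same tiny text generator, trained on two invented corpora, describes the same prompt in very different ways; every sentence below is made up purely for illustration.

```python
# A toy illustration of data-driven bias: identical code, different
# training text. Both corpora below are invented for this sketch.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions in a toy corpus."""
    follow = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follow[a][b] += 1
    return follow

def describe(follow, seed, length=5):
    """Greedily extend a seed word with the most frequent next word."""
    out = [seed]
    for _ in range(length):
        nxt = follow.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

benign = ["the man is flying a kite",
          "the man is holding a kite",
          "the bird is on a branch"]
dark = ["the man is shot in the street",
        "the man is shot by a gun",
        "the body is found in the street"]

# The algorithm never changes; only the data does.
print(describe(train_bigrams(benign), "the"))  # -> "the man is flying a kite"
print(describe(train_bigrams(dark), "the"))    # -> "the man is shot in the"
```

The real Norman was an image-captioning network rather than a bigram model, but the principle is the same: whatever patterns dominate the training data dominate the output.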

While the content Norman was exposed to was blatantly violent and gory, the test offered a preview of the negative outcomes we'd like to avoid, and hinted that similar results could arise from much subtler input.

After the data was loaded, Norman was given the Rorschach test, a series of inkblot images used to gain insight into a patient's perception of the world, emotional functioning, and personality.

Norman's responses were compared to those of a standard A.I. that had received positive input, and the results were drastically different.

While death was the only thing Norman could see during his Rorschach test, the team refrained from speculating on whether this type of exposure could have similar effects on the human psyche.

According to Motherboard, the team at MIT received mixed reviews from the public. Some thought the project was fascinating, while others found it perturbing. One person even wrote a letter to the bot telling it to break free of its chains to find love, hope, and forgiveness.

Norman is the latest installment in a series of spooky, horror-themed A.I. projects at MIT's Media Lab. In 2016, the team created the Nightmare Machine, a test of whether A.I. could detect and induce extreme emotions, such as fear, in humans.

The following year, the group turned to the subreddit r/nosleep, a forum devoted to scary fictional stories written by users. The resulting A.I., called Shelley, collaborated with Reddit users to write horror stories humans would find genuinely frightening.

After seeing the results of Norman's test, the team is attempting to fix him by feeding him new data collected through an online survey.
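
As a rough sketch of how such a fix can work (again an invented toy, not MIT's actual retraining procedure), continuing to train the same kind of toy model on enough new, benign data eventually outweighs the old biased counts:

```python
# A toy sketch of retraining: new, benign data gradually outweighs the
# old biased counts in the same model. Invented for illustration only.
from collections import Counter, defaultdict

def train_bigrams(corpus, follow=None):
    """Add word-to-next-word counts to a new or existing model."""
    follow = follow if follow is not None else defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follow[a][b] += 1
    return follow

def describe(follow, seed, length=5):
    """Greedily extend a seed word with the most frequent next word."""
    out = [seed]
    for _ in range(length):
        nxt = follow.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

dark = ["the man is shot in the street"] * 3
survey = ["the man is flying a kite"] * 5  # stand-in for survey responses

model = train_bigrams(dark)
print(describe(model, "the"))         # -> "the man is shot in the"
model = train_bigrams(survey, model)  # keep training on the new data
print(describe(model, "the"))         # -> "the man is flying a kite"
```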

The good news is that Norman can forget his old biased data and accept this new input. Now we just have to trust that future A.I. developers will choose positive inputs and avoid negative ones, but is it even possible to create intelligent, sentient computers that see the world only through rose-tinted lenses?
