An AI robot. / Photo by: Kittipong Jirasukhanont via 123RF


Deep learning, a core capability of artificial intelligence, is mainly used to improve operations across industries, for example by reducing operating costs and streamlining workflows. However, deep learning can also be used to replicate the darker aspects of the world. In a study conducted by researchers at the Massachusetts Institute of Technology, an AI called Norman was trained to mimic the personality of a psychopath.

Psychopathy is considered a mental disorder that is more difficult to detect than other psychiatric problems. People with the disorder often appear normal, but deep inside they lack conscience and empathy. They also tend to have an aptitude for communication and manipulation.

These traits are what Norman, created by the MIT researchers, was designed to manifest. The AI was trained on data drawn from some of the darkest corners of the internet. Norman uses a deep learning method called image captioning, which allows software to generate textual descriptions of images. In Norman's case, however, the training captions came from an infamous Reddit community dedicated to graphic content associated with death.
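The image-captioning idea behind Norman can be pictured as two stages: an encoder that turns an image into a numeric feature representation, and a decoder that turns those features into words. The sketch below is a deliberately toy, hypothetical illustration of that pipeline; real systems like Norman's use deep neural networks trained on captioned images, not hand-set rules.

```python
# Toy sketch of the encoder/decoder structure of image captioning.
# Everything here (the feature choices, the thresholds) is a stand-in
# for what a deep network would learn from captioned training data.
from typing import List


def encode(image: List[List[int]]) -> List[float]:
    """Toy 'encoder': summarize a grayscale image as crude statistics."""
    flat = [px for row in image for px in row]
    brightness = sum(flat) / len(flat) / 255.0
    dark_ratio = sum(1 for px in flat if px < 64) / len(flat)
    return [brightness, dark_ratio]


def decode(features: List[float]) -> str:
    """Toy 'decoder': map features to caption words with fixed thresholds.
    A trained decoder would learn these associations from its data."""
    _, dark_ratio = features
    subject = "a dark scene" if dark_ratio > 0.5 else "a bright scene"
    return f"photo of {subject}"


bright_image = [[200, 210], [220, 230]]
dark_image = [[10, 20], [30, 40]]
print(decode(encode(bright_image)))  # photo of a bright scene
print(decode(encode(dark_image)))    # photo of a dark scene
```

The key point the article makes is that the decoder's word associations are learned: feed it captions describing death and it will describe new images in those terms.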

The researchers then tested Norman with Rorschach inkblots, a classic psychological test, and compared its captions with those produced by a standard image-captioning neural network. Norman's responses consistently described violence in the images.

Norman's responses were also completely different from those of a standard AI. For instance, Norman described one image as “a man who jumps from floor window,” while a standard AI described the same image as “a couple of people standing next to each other.”

“Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the researchers stated.

In other words, the data fed into the software shapes the “personality” of an AI, which means that training on data filled with positive information can produce a “good” AI system.
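The researchers' point can be made concrete with a minimal sketch: train the very same algorithm on two different corpora and it gives different descriptions of the same ambiguous input. The corpora and captions below are invented for illustration; the model is just a frequency count, the simplest possible stand-in for a learned captioner.

```python
# Identical algorithm, different training data -> different behavior.
from collections import Counter


def train(captions):
    """'Train' by counting how often each description appears.
    The algorithm is the same either way; only the data differs."""
    return Counter(captions)


def describe(model):
    """Return the model's most likely description of an ambiguous image."""
    return model.most_common(1)[0][0]


# Hypothetical corpora standing in for 'normal' vs. dark training data.
neutral_data = ["people standing together", "a bird on a branch",
                "people standing together"]
dark_data = ["a man falls from a window", "a man falls from a window",
             "a bird on a branch"]

print(describe(train(neutral_data)))  # people standing together
print(describe(train(dark_data)))     # a man falls from a window
```

The "culprit" the researchers describe is visible here: nothing in the code changed between the two runs, only what it was shown.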