Artificial Intelligence (AI) is solving many problems for humans. But, as Google CEO Sundar Pichai said in the company’s manifesto for AI, such a powerful technology “raises equally powerful questions about its use”. Google (Alphabet Inc.) and Microsoft Corp. have stressed the need for ethical AI, while Elon Musk has raised concerns about the technology altogether.
Amid such concerns comes Norman AI, developed at the Massachusetts Institute of Technology (MIT) and described as a “psychopath”. The purpose of Norman is to demonstrate that artificial intelligence becomes unfair and biased only when biased data is fed into it.
MIT researchers fed Norman data from the “darkest corners of Reddit”. They then compared Norman’s responses with those of a regular image-captioning network when generating text descriptions for Rorschach inkblots, a well-known psychological test used to detect disorders. The regular AI was trained on the MSCOCO dataset of everyday photos and captions.
For one inkblot, the standard AI saw “a group of birds sitting on top of a tree branch” whereas Norman saw “a man is electrocuted and catches fire to death”. For another inkblot, the standard AI generated “a black and white photo of a baseball glove” while Norman wrote “man is murdered by machine gun in broad daylight”.
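In practice, the comparison MIT describes amounts to running two image-captioning models on the same inkblot image and contrasting the captions they produce. The sketch below illustrates that idea using a publicly available MSCOCO-style captioning model; the model name, file path, and the commented-out “biased” model are assumptions for illustration only, since Norman itself has not been released.

```python
# Illustrative sketch only: Norman's weights are not public, so the "biased"
# captioner below is a hypothetical placeholder. The standard captioner is a
# publicly available image-captioning model trained on everyday photo captions.
from transformers import pipeline
from PIL import Image

# Conventional image-captioning model (MSCOCO-style training data).
standard_captioner = pipeline(
    "image-to-text", model="nlpconnect/vit-gpt2-image-captioning"
)

# Hypothetical captioner fine-tuned on disturbing captions (not available).
# biased_captioner = pipeline("image-to-text", model="path/to/biased-captioner")

# Any inkblot-like image; the filename here is just an example.
inkblot = Image.open("rorschach_inkblot.png")

print("Standard model:", standard_captioner(inkblot)[0]["generated_text"])
# print("Biased model:", biased_captioner(inkblot)[0]["generated_text"])
```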
“Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms,” the researchers wrote. “We trained Norman on image captions from an infamous subreddit that is dedicated to documenting and observing the disturbing reality of death.”