The Dark Side Of AI | More Than Just A Bunch Of Hocus Pocus

October 30, 2020


Halloween is drawing near, and in the spirit of all things ominous and eerie, we’ve decided to look past the allure of sharing campfire ghost stories and explore the evil that lurks in the shadows of emerging technology. This article isn’t intended for conspiracy theory enthusiasts or those who indulge in Nostradamus’s doomsday predictions; instead, it takes a closer look at how artificial intelligence could be put to more nefarious uses. So get out those flashlights and let’s dive into the dark side of AI to uncover modern-day examples of spine-chilling, Halloween-worthy AI tools and applications.

Deep fakes

Seeing used to mean believing, but in a world where deep fake software is easily accessible, can you really trust your own eyes? Deep fakes encompass voice recordings and videos that have been manipulated to suit the intentions of the party behind the keyboard. Deep fake software uses deep learning to learn what a source face looks like from a variety of angles in order to superimpose it onto another individual. To make the doctored media even more convincing, generative adversarial networks (GANs) pit two AI algorithms against each other: one algorithm is tasked with creating fakes, while the other harshly critiques the forgeries, and the back and forth continues until the forgery is convincing. Once the process is complete, the untrained eye can’t distinguish the original from the doctored media, which only fuels the growing concern over how much damage this technology could do in the wrong hands.

Although the deep fakes created in the video above purely serve to amuse, their realism proves that digital content can easily be manipulated simply by utilising freely available software. 
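Under the hood, the “one algorithm forges, the other critiques” idea is simply an adversarial training loop. The sketch below shows a heavily simplified GAN training step in PyTorch on toy one-dimensional data; the network sizes, learning rates and other details are illustrative assumptions, not how production deep fake tools are built.

```python
# Minimal GAN sketch on toy 1-D data (illustrative only, not real deep fake code).
import torch
import torch.nn as nn

# Generator: turns random noise into a fake "sample".
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how real a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    # "Real" data: samples from a normal distribution centred at 4.
    real = torch.randn(32, 1) + 4.0
    fake = generator(torch.randn(32, 8))

    # 1) Train the critic: reward it for spotting the forgeries.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the forger: reward it for fooling the critic.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The discriminator’s loss rewards it for telling real samples from forgeries, while the generator’s loss rewards it for fooling the discriminator; the back and forth described above is exactly this loop repeated many thousands of times.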

Pandora’s Box

When we’re young, our insatiable curiosity leads us to ask a thousand “why” questions, but as we get older, we tend to take things more at face value. Yet that overlooked question helps us understand the world around us, or helps us realise that we don’t understand it at all. Automated decision making is the product of machine algorithms mimicking human thinking patterns to produce results; Google’s search ranking algorithm is a familiar example. But have you ever wondered how such an algorithm arrives at a specific output, or why it behaves in a particular way? Asking why brings us to the AI black box problem, which boils down to one uncomfortable truth: we often cannot explain how automated decision-making algorithms turn their inputs into particular outputs. Part of the opacity is commercial, since the algorithms and their training processes are trade secrets, but the inability to explain in human terms how these models arrive at accurate results has also led the black box problem to spark distrust in AI systems.

Understanding the why may not seem significant, but what if the answers produced by these automated decision-making algorithms were used to determine the future of a human being? In 2016, the Wisconsin Supreme Court upheld a six-year prison sentence for Eric Loomis, a sentence handed down in part because the COMPAS risk-assessment tool deemed Loomis “high-risk” and predicted that he would probably reoffend. Loomis argued that his right to due process was violated because the judge relied on the conclusions of an opaque algorithm. When human lives are at stake, it matters how COMPAS and similar algorithms arrive at their conclusions, but unfortunately, trade secret protections and nondisclosure agreements keep those inner workings locked away from the public eye. It’s spine-chilling to think that people are denied access to the decision-making processes that AI applications subject them to.
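Proprietary tools such as COMPAS can’t be opened up, but practitioners do have ways of probing a black-box model from the outside. One common technique is permutation feature importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. Below is a minimal sketch of that idea using scikit-learn on synthetic data; the model and dataset are stand-ins for illustration, not COMPAS itself.

```python
# Probing a black-box model with permutation feature importance (illustrative sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for whatever the black box was trained on.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the fitted model as an opaque decision-maker.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: accuracy drop {importance:.3f}")
```

Probing of this kind only reveals which inputs matter most; it still doesn’t expose the internal logic, which is why post-hoc explanations are at best a partial answer to the black box problem.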

Not so normal Norman

If you’re a psychological thriller fanatic, you’ve most probably seen Alfred Hitchcock’s acclaimed 1960 film Psycho. Norman Bates, the film’s antagonist, is a psychopath with a murderous streak, but the fictional Norman Bates isn’t the only infamous Norman who is far from normal. In April 2018, a group of scientists at the Massachusetts Institute of Technology unveiled what they called the world’s first psychopath AI, named Norman. The project was inspired by the fact that machine learning algorithms are shaped by the data they learn from: when fed biased data, they can produce biased and unfair results.

Rorschach Inkblot (Image source: Norman AI MIT)

The team at MIT trained Norman to perform image captioning, a deep learning task in which an AI generates text descriptions for images. What makes this project chilling is that during Norman’s training, the algorithm was only exposed to violent and disturbing content posted on a subreddit dedicated to death. To demonstrate that AI algorithms are not inherently biased, but become biased through the data they are fed, the researchers subjected Norman and standard image-captioning neural networks to Rorschach inkblot tests. The standard algorithms described innocuous scenes such as “a vase of flowers” or “a person holding an umbrella in the air”, whereas Norman interpreted the same inkblots as depicting gruesome deaths and described them in graphic detail.
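At its heart, the Norman experiment demonstrates that identical learning code produces very different behaviour depending on the data it is fed. The toy sketch below trains the same simple text classifier twice, once on a mostly benign set of captions and once on a mostly violent one (all captions and labels are invented for illustration, and this is a bag-of-words classifier, not MIT’s image-captioning network), then asks both models to judge the same ambiguous caption.

```python
# Same learning algorithm, different training diet, different "worldview" (toy illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def train(captions, labels):
    # Identical bag-of-words Naive Bayes model; only the corpus changes.
    return make_pipeline(CountVectorizer(), MultinomialNB()).fit(captions, labels)

# A mostly benign training corpus...
neutral_model = train(
    ["a person standing under an umbrella in the rain",
     "a vase of flowers on a table",
     "a man lying on the grass in the sun",
     "a knife left on the kitchen counter",
     "a car crash on the highway"],
    ["pleasant", "pleasant", "pleasant", "pleasant", "gruesome"])

# ...versus a mostly violent one, echoing Norman's training diet.
norman_model = train(
    ["a man lying dead in the street",
     "a person struck down in the rain",
     "a man falls to his death",
     "blood on the kitchen counter",
     "a vase of flowers on a table"],
    ["gruesome", "gruesome", "gruesome", "gruesome", "pleasant"])

ambiguous = ["a man lying in the rain"]
print("neutral model sees:", neutral_model.predict(ambiguous)[0])  # pleasant
print("norman model sees: ", norman_model.predict(ambiguous)[0])   # gruesome
```

Nothing about the algorithm changes between the two runs; only the training data does, which is precisely the point the MIT researchers were making.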

Fake News

As with all AI tools and advancements, the product itself is not malicious in nature, but it can be put to malicious use. OpenAI recently released GPT-3, a massive language model that is not yet available to the general public, while its predecessor, GPT-2, has been around since 2019. Given a prompt, the model can write remarkably fluent content, and those writing capabilities made the potential for abuse apparent so quickly that GPT-2 was dubbed “too dangerous to be released” by its authors. Its most destructive capabilities include automating the creation of false content and impersonating prominent figures online. In the wrong hands, such a language model could wreak havoc.
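For a sense of how accessible this capability already is, here is a minimal sketch that generates a paragraph from a short prompt using the openly available GPT-2 model via the Hugging Face transformers library; the prompt and generation settings are our own illustrative choices, and we have deliberately used a benign prompt.

```python
# Prompt-to-paragraph text generation with a freely available language model (GPT-2).
from transformers import pipeline, set_seed

set_seed(42)  # make the example reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Emerging technology is changing the way we read the news because"
outputs = generator(prompt, max_length=80, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

The output varies from run to run, but the point stands: a handful of lines of code is enough to turn a one-sentence prompt into paragraphs of plausible-sounding text.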

In our interconnected society, reaching vast numbers of people is possible with the click of a button. And with misleading information looming around every corner of our digital worlds, it’s becoming increasingly hard to discern fact from fiction. According to Statista, a US study found that adults are becoming less confident in their ability to distinguish fake news from real news, with confidence levels dropping from 39% in 2016 to 29% in 2019. You may regard fake news as a mere branch of innocent trolling, but there have been numerous incidents in which fake news believed to be real drove individuals to commit crimes (cue Pizzagate). Fake news, when believed, can lead to real-life consequences, and with AI products such as GPT-3 and its predecessors, creating and spreading fake news at massive scale becomes possible. Below, we tested the content creation capabilities of a freely available modern neural network to show how an algorithm can write an entire paragraph from the simple prompt “Pizzagate is not a conspiracy theory”. The algorithm produced the coherent paragraph below in a matter of seconds, illustrating how even free text completion tools could be used to automate the creation of fake news.

These AI applications may not come close to the likes of HAL 9000 or Skynet, but they are nonetheless bound to rattle a few nerves, considering that they are fact, not fiction. Do you have some hair-raising examples to share? Leave your links in the comments!

Jacques Fourie

