Are You Afraid of Artificial Intelligence?

When talking about artificial intelligence (AI), people are often scared and imagine a near-future doomsday. Before we look at deeper levels and more categories of AI, I suggest we answer this question first: should we be afraid of AI?

Are We Suffering from Technophobia?


Technophobia is the fear or dislike of advanced technology or complex devices, especially computers. Although there are various interpretations of technophobia, they become increasingly complex as technology keeps developing. Why? Some fears are irrational; others can't be dismissed. With AI continuously evolving, researchers distinguish between two categories of fear: the possibility that AI will become conscious and seek to destroy us, and the idea that malicious people will use AI for evil purposes. As of now, I believe the real threat we face is powerful AI applications in the wrong hands. Why?


Let's look into it: malicious people will use AI to do evil. What would this look like? This malicious use of AI is called Dark AI, and according to Mark Minevich, this blanket term covers "any evildoing an autonomous system is capable of executing given the right inputs (biased data, unchecked algorithms, etc.)." The scenarios he lists range from economic malfeasance to privacy tampering and become real threats given "malevolent AI applications, such as smart dust and drones, facial recognition and surveillance, fake news, and bots as well as smart device listening." But how exactly can this be achieved?


Dark AI Scenarios: With the Rise of Fake News, Will AI Make It Even Worse?


I want to look into two scenarios (unfortunately, there are more). The first: fake news. With bots generating user-driven content, damaging material can be rolled out to an even bigger audience (and provoke that audience's reaction, which, as every marketer out there knows, makes content rank higher).


"Bot" is short for "internet robot": a software application that runs automated tasks (scripts) over the internet. Moreover, systems are getting increasingly sophisticated at faking photos, videos, and even conversations. Already today, you need to be very careful when consuming content; not everything we read, listen to, or watch is legitimate (which is also a reason why I always list my sources and choose them carefully). Imagine a time when you can't tell anymore whether you are talking to a bot or a real person. How spooky is that? (By the way, this has potentially happened to you already.)
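To picture what a bot actually is, here is a minimal sketch in Python. Everything in it is hypothetical (the function names and the fake "trending topic" are mine, purely for illustration); a real social media bot would call a platform's API instead of printing to the screen.

```python
# A minimal sketch of a bot: a script that performs a task over and over
# without a human at the keyboard. All names and behavior here are
# hypothetical stand-ins, for illustration only.
import time

def fetch_trending_topic() -> str:
    # Hypothetical stand-in: a real bot would scrape a site or query an API.
    return "an example trending topic"

def post_message(text: str) -> None:
    # Hypothetical stand-in: a real bot would post via a social network's API.
    print(f"[bot] posting: {text}")

for _ in range(3):  # a real bot would loop indefinitely
    topic = fetch_trending_topic()
    post_message(f"You won't believe the news about {topic}!")
    time.sleep(1)  # wait, then repeat - no human involved
```

The point is scale and tirelessness: one person can run thousands of these scripts, each posting around the clock.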


You only need two ingredients. First, bots effectively spreading fake news via social media accounts (which happened in the 2016 US presidential election!). Second, fake imagery or audio, which criminals can use to cause personal or business damage or even interfere with government operations. All it takes is this AI-enabled content to massively alter public opinion. Forbes suggests that companies and governments treat this AI-enabled content as a cybersecurity threat and act accordingly. What exactly does this mean? According to Forbes, the world is remarkably unprepared for AI being unleashed on unprotected citizens.


The second scenario: Facial Recognition


When it comes to facial recognition, I have two main issues: one is a security concern, the other is bias. Did you know that a study by the National Institute of Standards and Technology (NIST) found that these systems misidentified people of color more often than white people? People of Asian and African American descent were up to 100 times more likely to be misidentified than white, middle-aged men. The algorithms also misidentified elderly people, women, and children at significantly higher rates, the study found. It probably doesn't surprise you that middle-aged white men generally benefited from the highest accuracy rates. The alarming part in all of this is that we're talking about a very thorough study: according to The Verge, NIST tested 189 algorithms from 99 organizations, which together power the majority of the facial recognition systems in use (may I add... globally!).
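To make "up to 100 times more likely" concrete, here is a toy calculation in Python with entirely made-up numbers (not NIST's data), showing how per-group misidentification rates are compared:

```python
# Toy illustration of comparing false match rates across demographic groups.
# The counts below are invented for illustration; they are not NIST results.
trials = {
    # group: (false matches, total comparisons)
    "group_A": (1, 100_000),
    "group_B": (100, 100_000),
}

rates = {group: fm / total for group, (fm, total) in trials.items()}
for group, rate in rates.items():
    print(f"{group}: false match rate = {rate:.5f}")

ratio = rates["group_B"] / rates["group_A"]
print(f"group_B is misidentified {ratio:.0f}x more often than group_A")
```

The takeaway: a system can look accurate "on average" while failing some groups at dramatically higher rates.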


Why is it such an issue that these algorithms are tremendously flawed? Well, for starters, criminals with access to this type of data can steal your facial characteristics and produce fake images, which, thanks to AI applications (bots and cloning), can then be distributed in the form of compromising counterfeit pictures and videos (click here for more on "deepfakes"). And secondly, if these flawed systems are used in law enforcement and across other government agencies, they can lead to false arrests, tickets, and prison sentences.


What does all of this mean? It means that this study should be a sobering reminder that facial recognition technology has consequential technical limitations alongside posing threats to civil rights and liberties.


Why Do I Fear AI? Because of What Humans Could Do With It


Remember the first fear I described earlier? The one where AI becomes conscious and, in a second step, seeks to destroy us? When I think of what evil humans can accomplish with this technology, there is no need to fear AI itself.


Ultimately, I think that as humans and individuals we have an obligation to inform ourselves about the critical aspects of AI - and of digitization and digitalization as a whole - in order to have a say in how this technology is used worldwide. Furthermore, more has to be done to ensure inclusion, equality, and equity in the development of such applications. This requires our attention and cooperation as citizens, as humans, and as participants in the ecosystem we live in. We don't have a choice in this matter: machine learning, deep learning, and all other aspects of AI are here, and they are here to stay.


So what will you do?


Thanks for hanging in there; I know this blog post was a tad longer than usual. However, I wanted to cover at least the most pressing downsides of AI for you.
