What Is Artificial Intelligence? What Are The Three Types of Artificial Intelligence?

Updated: Aug 7

Whenever you read a tech review or an article on a tech site, they make it sound like we're already living among robots and that AI (artificial intelligence) is everywhere. Creepy and surely exaggerated, right? No! AI really does exist already, and you are interacting with it. Do you sometimes ask Siri or Alexa something? What about that recommended movie on Netflix you've watched (by the way, 75% of Netflix users select recommended films)? Hello and welcome back to another blog post. This time we'll figure out what AI is and what types of AI exist. Let's go!


What is Artificial Intelligence and how does it work?


AI is not magic, but there is clearly hype surrounding it. Most companies worldwide view it as a must-have. AI refers to any type of computer or machine (robots, PCs, phones…) that can perform tasks which, until recently, would have required a human being. To break it down even further, these machines simulate human intelligence and/or behavior thanks to algorithms (an algorithm is a finite set of instructions for performing a computation), commands, and a lot of data. Programmable functions of AI systems include reasoning, planning, learning, problem-solving, and, most importantly, decision-making.
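To make "a finite set of instructions" concrete, here is a toy, hand-written algorithm (all names and data are made up for illustration). A hard-coded rule like this is not AI on its own, but AI systems are built from building blocks like it, plus a lot of data:

```python
def recommend_next_movie(watch_history, catalog):
    """Toy algorithm: suggest the first unseen title in the viewer's
    most-watched genre. Both arguments are lists of (title, genre)."""
    # Count how often each genre appears in the watch history.
    genre_counts = {}
    for _, genre in watch_history:
        genre_counts[genre] = genre_counts.get(genre, 0) + 1
    favorite = max(genre_counts, key=genre_counts.get)
    seen = {title for title, _ in watch_history}
    # Return the first unseen title in the favorite genre, if any.
    for title, genre in catalog:
        if genre == favorite and title not in seen:
            return title
    return None
```

A real recommender replaces the hand-written rule with behavior learned from data, but the "finite set of instructions" idea is the same.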


The definition that resonates most with me comes from Elaine Rich: "Artificial Intelligence is the study of how to make computers do things at which, at the moment, people are better."


Types of Artificial Intelligence


There are three types of AI: narrow (also referred to as weak AI), general (or strong AI), and super artificial intelligence (also known as artificial superintelligence). Basically, these types classify different levels of AI by how capable a system is of learning. One of my sources puts it another way: simply put, the types of AI are measured by how strongly machines are programmed to deliver the desired results. So how does it work?


What scientists and engineers do is write code for an AI application structured as a neural network. These networks mimic the structure of the human brain with the goal of condensing complex information into tangible results. The engineers then train the application with vast amounts of data. This data "fills in" the constraints and limits with actionable pieces of information so that the AI application can operate. The process of teaching the application from data is called machine learning.
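To make the "code plus data" idea less abstract, here is a minimal toy sketch of a neural network in plain Python. The weights and biases below are invented for illustration; in a real application they would be learned from data during training:

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1) -- the non-linearity
    # that lets layers do more than weighted averaging.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # One neuron per (weight row, bias): weighted sum of the inputs,
    # pushed through the activation function.
    return [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

def tiny_network(inputs):
    # Two layers chained together: a hidden layer, then one output.
    # All weights here are made-up placeholders, not trained values.
    hidden = layer(inputs, [[0.5, -0.6], [0.1, 0.8]], [0.0, -0.2])
    output = layer(hidden, [[1.2, -0.7]], [0.3])
    return output[0]
```

Training is the process of nudging those made-up numbers, example by example, until the outputs match the data, which is exactly where machine learning comes in.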


What is Machine Learning?


It means that you teach a machine to perform certain tasks by providing it with examples to learn from. Examples (meaning data!) demonstrate what the desired output of an AI application should be for any given input (if this doesn't succeed, you don't get the tangible results I mentioned before). There are different types of machine learning, but the techniques can be divided into two main groups: unsupervised learning (grouping and interpreting the data based solely on input data) and supervised learning (the AI application learns from both input and output data). The main obstacles engineers face with machine learning are the following:


  • How can they solve complex problems involving large amounts of data and many different variables with a machine learning approach?

  • How can they deal with messy and incomplete data (which, most of the time, also comes in different formats)?

  • And, given such data, how can they determine the right model?


In conclusion, machine learning is a specific application of AI and falls into the category of narrow/weak AI. It's also the only stage of AI that has been conquered so far (with applications such as Siri, Alexa, and drone robots, among others). Should we be afraid of this stage of AI? I say no, since these applications are all about the data fed to them, which is done by humans; humans therefore keep control over the application and want an output that makes sense. Now to the downside: you might have noticed that I haven't talked about facial recognition in this context. Why? Because it belongs in a discussion of AI and bias (a topic that really upsets me!), and because I think it is important enough to deserve a separate blog post. For now, keep in mind that the bias of the humans creating an application is, to some extent, transferred to the application.


How is Artificial Intelligence changing business?


Today, companies still struggle to put these types of AI applications into practice (according to IDC, only roughly 35% of organizations succeed in getting AI models into production). One reason is that these models don't provide clear-cut answers, which clashes with a very basic principle in business: if you don't know exactly what problem needs solving (and for a machine learning-based AI application, it has to be narrow and measurable, for obvious reasons), what do you think the results of the application will look like? I say a hot mess, and you've just wasted a lot of time and money, honey. Or as Forbes puts it: don't boil the ocean. Another no-brainer (at least to me) is that companies should pay close attention to their workforce. Clear communication is key!


How do you feel about AI? Let me know in the comments or find me on Twitter! Next time I will look at other categories of AI and why I think it is very important to fight bias in artificial intelligence. Sound good?




©2020 by The Unlikely Techie. Proudly created with Wix.com