I am thrilled to be back with new technologies, theories, and questions surrounding artificial intelligence. As you might know, I have been writing about it for quite a while now, covering topics across industries such as consumer goods, industrial automation and machinery, and automotive.
In 2020, together, we debated whether robots would inherently turn racist, concluded that a hacked brain chip might not be the best option for the future, and looked into how to name an algorithm. This year, I believe it is even more important to look into ways technology can benefit the many instead of just a few. I am sure this pandemic will shape us far beyond what we can foresee now. Still, I think it is essential to start looking into how medical AI works, ways to fight the pandemic, and new technologies that have emerged in the past months.
Wait, so how did you get lost?
As with most fancy technological terms, I had to find a system to figure out "how much AI" was really in the packaging. As companies further embrace AI-driven applications to automate their processes, I believe that we, as consumers, should find ways to at least be aware that we might be feeding a data model, no?
It turns out that most AI applications look and feel like standard software. Built on conventional code to perform tasks such as interfacing with users, integrating with other systems, and managing data, these next-generation applications have trained data models at their core. It is these models that handle the more complex tasks, such as interpreting images or transcribing speech.
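To make that structure concrete, here is a minimal sketch of how such an application is often layered: ordinary code handles input, validation, and routing, while a single call hands off to the model at the core. Everything here is hypothetical, and the "model" is just a keyword scorer standing in for a real trained network.

```python
def model_predict(text: str) -> float:
    """Stand-in for the trained model at the application's core.

    In a real product this would be a trained network; here it is a
    hypothetical keyword scorer so the sketch stays self-contained.
    """
    spam_words = {"winner", "free", "prize"}
    hits = sum(word.strip(",.!?") in spam_words for word in text.lower().split())
    return min(1.0, hits / 3)


def handle_message(text: str) -> str:
    """Conventional code: interfacing, validation, and routing.

    Only the one call to model_predict() is the 'AI' part; the rest
    is the standard software wrapped around it.
    """
    if not text.strip():
        raise ValueError("empty message")
    score = model_predict(text)
    return "spam" if score >= 0.5 else "inbox"


print(handle_message("You are a winner, claim your free prize"))  # spam
print(handle_message("Meeting moved to 3pm"))                     # inbox
```

From the outside, both paths look like any other piece of software, which is exactly why it is easy to miss that a data model is making the decision.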
Since these applications will further drive the economy and continuously influence our daily lives, I asked myself the following: Are there sets of questions I could ask myself while researching new AI-based technologies?
Understanding AI: Don't get lost!
These applications are part of our everyday lives, ranging from chatbots and ridesharing apps to spam filters, so we have to find a healthy way to deal with them. While they can make daily life much more comfortable, there are downsides too. For instance, the data used to power these applications can be flawed and biased. Here are some key areas where you can make better decisions on how and when you'd like to use specific systems:
Asking the right questions: These questions revolve mainly around who is working on an application (scientists, researchers at a privately held company, or university professors?) and the type of data used. Furthermore, ask who, apart from you, benefits from the application.
Reflecting values: It is essential to find out how AI systems fit into the social fabric. How does the application reflect society's values, and does it help or harm people? These are vital questions to ask, even though finding answers might be more difficult.
Understanding the claim: Businesses, in general, strive to do one thing: solve problems. Ask yourself what problem the application claims to solve and whether there is any insight into how it solves it. And is there more to the claim?
Looking closely into the data: What type of data is fed into a specific application? Could this data be biased, and do I want the application to use my data?
I am thrilled to be back after an extensive break to discuss the newest technologies, bias, data, and everything in between with you! Check out my Twitter for the latest news. As always, stay curious, and happy 2021!