The Moral Machine: Teaching Right From Wrong

While scientists and engineers have not yet achieved the highest form of artificial intelligence (artificial general intelligence, or AGI), we still have to ask ourselves what a moral machine would look like and whether it can be built. Even today's narrow AI deserves our attention, since these applications have to make choices in everyday situations (think of an autonomous vehicle that needs to recognize a stop sign). So what do I mean when I ask whether this can be achieved? In this post, I will try to answer the following question: how can you teach morality to a machine?


Let's take morality as a phenomenon of consciousness as our starting point and try to figure out how we could then teach our machine morality. There are many different definitions of consciousness; I prefer a broader one. If you want to read more about conscience, you will find a link at the end of this blog post.


A phenomenon of consciousness: morality


While morality centers on questions of what is right and what is wrong, consciousness, in its original sense, means knowledge. Put together, the two terms therefore denote the knowledge of good and evil, of what is right or wrong. Furthermore, through our individual conscience, we become aware of our own deeply embedded moral principles. Not only do we evaluate our character against those principles, but we also act upon them. Faced with a moral dilemma, humans react out of a gut feeling in accordance with those principles, which, for obvious reasons, is not something a machine can do.



Since humans don't decide based on elaborate cost-benefit calculations, how can we program objective, measurable, and explicit algorithms for a machine to learn from?


Is this fair?


In his article, Vyacheslav Polonski outlines three approaches to this, which I want to discuss with you:

  1. Explicitly defining ethical behavior: this means finding quantifiable parameters for ethical values (a sort of global code of conduct for ethical behavior in machines). While some believe this can be achieved globally, I have to disagree. Why? Because culture changes values. To stay with the moral dilemma an autonomous vehicle might face: depending on where you were socialized, you would program the car differently. And even if the highest principle is prioritizing the protection of human life without weighing humans against one another, this might not be how a human would decide in the exact same situation. So how do humans decide? The best available answer leads to the second point of this outline.

  2. Crowdsourcing human morality: this is a data-driven approach. By crowdsourcing solutions to moral dilemmas from millions of humans, scientists and engineers hope to train machines more effectively (e.g. MIT's Moral Machine project).

  3. Making AI systems more transparent: this point addresses policymakers. There needs to be more governmental control and clearer guidelines regarding such systems. This is not only a question of liability, but also of how ethical values are implemented, widely discussed, enforced, and regulated. While I agree that the law has to adapt to such questions (and please also globally, instead of only nationally), one critical aspect seems to get dropped once more: the reason why data is so often biased (which makes total sense given human history - just think of the glaring, centuries-old inequality between wide strata of society and between the sexes) could perpetuate itself in law-making. Who will decide in the end what such a legal text should look like? My gut feeling tells me it is probably not going to be a diverse panel of experts and politicians (let alone people in power coming together globally to work on these topics).
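To make the contrast between the first two approaches concrete, here is a minimal, purely hypothetical Python sketch. It is not how any real system works: the rule, the option names, and the vote data are all invented for illustration. One function follows a single hard-coded rule (approach 1), the other follows the majority label collected from many respondents (approach 2).

```python
from collections import Counter

# Approach 1: explicitly defined ethical behavior.
# A single hard-coded rule, here: protect as many lives as possible,
# without weighing humans against one another by any other trait.
def rule_based_choice(options):
    return max(options, key=lambda o: o["lives_protected"])

# Approach 2: crowdsourced morality.
# Simply pick whichever option most human respondents chose.
def crowdsourced_choice(options, votes):
    winner, _ = Counter(votes).most_common(1)[0]
    return next(o for o in options if o["name"] == winner)

# Invented example data for a toy driving dilemma.
options = [
    {"name": "swerve", "lives_protected": 1},
    {"name": "brake", "lives_protected": 2},
]
votes = ["swerve", "brake", "brake", "swerve", "brake"]

print(rule_based_choice(options)["name"])           # brake
print(crowdsourced_choice(options, votes)["name"])  # brake
```

Here both methods happen to agree, but nothing guarantees that: a crowd socialized differently might vote "swerve", which is exactly the culture-dependence problem described in point 1.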

All in all, it will come down to humans (all of us) agreeing on what is fair/right or unfair/wrong and finding a way to translate this newly achieved common human understanding into globally accepted algorithms. If we cannot achieve this, then maybe, just maybe, AI should outsmart us and ultimately teach us something - and when/if it does, it might not only be teaching globally but universally.


What is your take on this? Let's discuss!


As always, I have a list of sources for you to go through. Have a great time reading them.

PS: I am not always this pessimistic about humanity, but lately it has been very hard for me to be optimistic. Let us hope that the power goes back to where it belongs: to the people.
