
Can Machines Be Moral?

As artificial intelligence (AI) takes control over more and more of our lives, we have to ask ourselves: have we reached a point in the development of technology where we slowly have to give machines morals, so that they can't cause damage?


Yes, we need to think about this. There are, however, some obstacles. Some are technical; the crucial one is our inability to pinpoint how intelligence works. I have already discussed the technological boundaries AI faces today and have looked at different AI scenarios, some frightening and some already unfolding. In this blog post, therefore, I'd like to discuss one specific question and the consequences that have to be drawn from it.


What is intelligence?


While we all have some vague idea of how to define intelligence, it is tough to grasp scientifically. In psychology, it is treated as a broad term for human intellectual capabilities, manifested in sophisticated cognitive accomplishments and high levels of motivation and self-awareness. But how exactly does the world's most advanced learning system, the mind of a baby, really work?

We simply don't know. As a matter of fact, we don't even know how adults learn (and I have argued before that machines do not learn). So if we have only a broad term that somewhat describes intelligence, and we don't understand how humans learn, aren't we in a mess here? We can build programs such as AlphaZero, which plays board games with superhuman skill, beats the best human players, and even invents new and ingenious gameplay. Yet it can never match a baby's ability to understand how the physical world works, or to adapt to unfamiliar situations.


Not knowing how intelligence works is unsettling. How, then, can we figure out how intelligence relates to morality? What's more, both are phenomena that decisively shape our self-image: we are intelligent and moral beings, and it is precisely in this respect that we differ from the rest of life on the planet.

At the heart of AI: we're not sure what to aim for


It feels like fishing in a murky pond: it would be nice to catch something alive, but the expectations are not high. Nevertheless, if you do get lucky, you need some idea of how to act. Because, let's face it: we can't afford not to teach AI systems and robots right from wrong.


The question to be answered here has to be: is it conceivable that robots will someday be "good" decision-makers, meaning they can not only act according to ethical principles, but also from them? If the answer is yes, then we have to look at the different types of ethical agents described by James H. Moor:

  • Ethical impact agents (weakest form) are agents whose actions have ethical consequences, whether intended or not. Any robot can be one, depending on its effect: can it harm or benefit humans?

  • Implicit ethical agents are agents that have ethical considerations built into (i.e., inherent in) their design. For James H. Moor, these are typically safety or security considerations. "These agents have designed reflexes for situations which require monitoring to ensure security. Implicit ethical agents have a kind of built-in virtue—not built-in by habit but by specific hardware or programming".

  • Explicit ethical agents "are agents that can identify and process ethical information about" various situations and make sensitive determinations about how to act. When ethical principles are conflicting, these robots can work out reasonable resolutions (see the sketch after this list).

  • Full ethical agents can act like explicit ethical agents, but in addition they have "central metaphysical features that we usually attribute to ethical agents like us – features such as consciousness, intentionality, and free will. Adult humans are our prime example of full ethical agents".
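To make the middle two categories more tangible, here is a minimal Python sketch. It is purely illustrative: the class names, the safety threshold, and the principle weights are my own assumptions, not Moor's. The point is only where the line sits: the implicit agent has a single safety reflex wired into its code, while the explicit agent represents ethical principles as data it can weigh against each other when they conflict.

```python
# Hypothetical sketch of Moor's middle two agent types.
# All rules, thresholds, and weights are invented for illustration.

class ImplicitEthicalAgent:
    """Safety is wired in as a reflex; the agent never reasons about it."""

    SAFE_DISTANCE_M = 0.5  # hard-coded "built-in virtue"

    def act(self, distance_to_human_m: float) -> str:
        # The ethical consideration lives in the code itself.
        if distance_to_human_m < self.SAFE_DISTANCE_M:
            return "stop"
        return "proceed"


class ExplicitEthicalAgent:
    """Ethical principles are explicit data the agent can weigh and trade off."""

    def __init__(self, principles: dict):
        # e.g. {"avoid_harm": 1.0, "fulfil_task": 0.4}
        self.principles = principles

    def act(self, options: dict) -> str:
        # Score each option by how well it satisfies each weighted principle,
        # resolving conflicts by picking the best overall trade-off.
        def score(effects: dict) -> float:
            return sum(weight * effects.get(name, 0.0)
                       for name, weight in self.principles.items())
        return max(options, key=lambda option: score(options[option]))


if __name__ == "__main__":
    reflex = ImplicitEthicalAgent()
    print(reflex.act(distance_to_human_m=0.3))  # -> stop

    deliberator = ExplicitEthicalAgent({"avoid_harm": 1.0, "fulfil_task": 0.4})
    options = {
        "deliver_medicine_fast": {"fulfil_task": 1.0, "avoid_harm": -0.5},
        "deliver_medicine_slow": {"fulfil_task": 0.6, "avoid_harm": 0.0},
    }
    print(deliberator.act(options))  # -> deliver_medicine_slow
```

Of course, reducing ethics to a weighted sum is a caricature, and a full ethical agent in Moor's sense would need far more than either class offers; the sketch only marks where "built into the design" ends and "identifying and processing ethical information" begins.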

So even though science is nowhere close to artificial general intelligence (the highest, most humanlike form of AI), we can't just answer the question of what AI is allowed to do with "whatever we allow it to do" and thereby shut down the moral-ethical discussion as redundant. Is it that simple, though? No: what we allow or forbid is itself shaped by moral (human) considerations.


Morality as a phenomenon of consciousness


According to Christoph Bopp, there is no way to avoid addressing morality as a phenomenon of consciousness; he goes as far as to say that it has to be viewed exclusively from this point of view. Just as we eradicate errors in our consciousness's representation of the world by gaining knowledge, moral criticism will always try to correct our reasons and justifications for action. Only in this way can we understand the claim to obligation that morality places on us.


In my next blog post, we will try to take consciousness as a starting point and turn to the question of how to teach machines morality. Sound good? Until next time, and as always: stay curious!


For this blog post, I used the following sources:


