The underlying concept of Machine Learning (ML) is that systems learn from data, identify patterns, and make decisions. Depending on the application, this decision-making occurs with little or no human intervention. Since data is produced continuously, machine learning solutions adapt autonomously, learning from new information and previous operations. But does this learning correspond to the way humans learn? No, it does not.
Why? In the context of ML, learning is statistical. While Léon Bottou argues that this statistical nature is well understood (e.g., Vapnik, 1995) and that statistical machine learning methods are now commonplace, others argue that this cannot be labeled "learning" at all.
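To make "learning is statistical" concrete, here is a toy sketch (all numbers made up for illustration): fitting a line to noisy points by minimizing squared error. The "learning" is nothing more than computing the parameters that minimize a statistical loss — no understanding involved.

```python
# Toy illustration: ML "learning" as statistical estimation.
# We fit a line y = w*x + b to noisy data using the closed-form
# least-squares solution. The data below are invented for the example.

def fit_line(xs, ys):
    """Return (w, b) minimizing sum((w*x + b - y)**2)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var          # slope that minimizes squared error
    b = mean_y - w * mean_x  # intercept follows from the means
    return w, b

# Points scattered around y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # → 1.99 1.04
```

The system "learns" that the slope is roughly 2 — but only in the sense that 1.99 is the number minimizing an error function over the given data.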
But why should we care about terminology? I share David Watson's view that "such rhetoric is, at best, misleading and, at worst, downright dangerous. The impulse to humanize algorithms is an obstacle to accurately conceptualizing the ethical challenges posed by emerging technologies" (Watson, 2019).
Artificial Intelligence: What Does This Imply?
Humans have always wanted to create machines that can think, learn, and reason. Research in the field of artificial intelligence pushes non-techies and techies alike to look at specific algorithms and believe they are comparable to our human ways of thinking and, subsequently, reasoning. And as outsiders, how can we tell whether to take these descriptions literally or metaphorically?
Here is where it gets tricky: on the one hand, describing AI phenomena in anthropomorphic terms can benefit future research in the field. On the other hand, it can be a hindrance, if not outright dangerous, in socially sensitive applications. Why? Because the anthropomorphic tendency in AI is not ethically neutral.
Anthropomorphism In AI
Anthropomorphism is the psychological tendency to ascribe human characteristics to non-human objects (take Bambi, for example). Because of this phenomenon and the rhetoric that comes with it, we expect intelligent androids to appear any day. And quite frankly, research has shown that it is not only the general public that is torn between science fiction, make-believe, and what can actually be accomplished.
Algorithms Don't Function Like Human Brains
In machine learning, algorithms are deployed to solve a specific problem. Every guide about implementing machine learning applications will tell you that you need a clear vision of the problem the system is meant to solve. In many cases, machine learning applications are faster and more accurate than manual approaches, shortening time-to-market among other benefits. However, such a system will only address that specific problem, and only with the data it is given.
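A minimal sketch of this task-specificity (toy data and labels, invented for illustration): a 1-nearest-neighbour "model" trained on one narrow problem will always return one of the labels it was trained on — even for inputs far outside anything it has seen — because it has no understanding, only stored examples.

```python
# A 1-nearest-neighbour classifier trained on one narrow task.
# It confidently labels *any* input, including nonsensical ones,
# because it only compares against the examples it was given.

def nearest_label(x, examples):
    """Return the label of the training example closest to x."""
    return min(examples, key=lambda pair: abs(pair[0] - x))[1]

# Training data for one specific problem: classifying temperatures (°C)
training = [(18, "comfortable"), (21, "comfortable"),
            (35, "hot"), (38, "hot")]

print(nearest_label(22, training))   # "comfortable" — within the data it saw
print(nearest_label(-40, training))  # still "comfortable" — nonsensical, but
                                     # the model has no notion of "I don't know"
```

The second call is the point: the system cannot step outside the problem it was built for; it can only map new inputs onto old examples.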
Can We Hold Algorithms Accountable?
What happens when we let algorithms decide in socially sensitive applications? For starters, depending on the data fed into the system, we are faced with racist, sexist, and otherwise discriminatory outcomes. Secondly, how can we sustain our ability to hold influential individuals and groups accountable for their technologically mediated actions?
It is of paramount importance to understand that the notion of machine learning technologies being humanlike in their ability to fully understand data (meaning finding patterns and exploiting them) is not correct. While these applications are powerful (e.g., the Optometrist Algorithm), they merely mimic human intelligence. And that is what is essential here: such systems are powerful tools for good or for harm.
I want to conclude this blog post with the final sentence of David Watson's paper: "The choice, as ever, is ours."
For more, here are the resources I have used for this blog post:
The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence by David Watson, 2019.
Stay tuned for more, and as always: stay curious!