What Is Data Feminism?

Data feminism looks at data from an intersectional feminist point of view. It acknowledges the good done by applying big data and large-scale scientific data sets to expose injustice, improve health outcomes, and topple governments. However, as data science systems become more common, interpreting, governing, and managing them becomes a challenge. How do we reliably assess the trustworthiness of data, algorithms, and models?


In this blog post, we have to start asking ourselves the following: Data science by whom? Data science for whom? Data science with whose interests in mind? While it is undisputed that the field's perspective has been overwhelmingly white, male, and (maybe a little too) tech-enthusiastic, there is a need to challenge hierarchical (and empirically wrong) classification systems in data science. Why? Well, the overwhelming majority of humanity is actually not white and male.


Seeing the World Through the Lens of Intersectionality


The theoretical framework of intersectionality takes many aspects of a person's social and political identities into account for a more nuanced understanding of the advantages and disadvantages people experience due to the combination of these identities. It examines overlapping systems of oppression and discrimination, uncovering unique modes of discrimination and privilege. This qualitative analytic framework identifies how interlocking systems of power affect those who are marginalized in society (aka social exclusion), promoting the idea that oppressive factors cannot be viewed in isolation. Coined by Kimberlé Williams Crenshaw in 1989, the term is now at the forefront of conversations about racial justice, identity politics, and policing. And this is also why data feminism is needed: it opposes and discloses oppressive power structures.



The Importance of Data Feminism


Data feminism seeks to identify oppressive power structures and flawed data by posing questions such as: How do these structural inequalities permeate the data science process? How do they influence data visualization? And ultimately: How can data scientists trust algorithms they frequently don't understand or can't explain?


Recent studies have shown that trust in data science is as collaborative as it is calculative. While algorithms are rule-based, they are not rule-bound, leaving room for interpretation, creativity, and discovery. While Steven Jackson and Samir Passi highlight the importance of collaboration in building trustworthy data science projects, I argue that data scientists need to take it one step further and address the critical problems society faces today. That is also why we need to keep asking the following questions:


Data science by whom? Data science for whom? Data science with whose interests in mind?


Stay tuned, creative, and as always, stay curious!

©2020 by The Unlikely Techie.