Gendering personal intelligent assistants

In a previous post I asked ‘Why are we copy-pasting the shitty-ness of IRL structures into digital systems?’, drawing a pyramid that represents the hierarchy of shit that sees 8% of the population owning 86% of the world’s wealth and controlling a structure of deep-rooted inequality, oppression, violence and discrimination. This shitty-ness is being translated into digital systems that build on and perpetuate in-real-life (IRL) inequalities – see the work of Virginia Eubanks.

The digital systems we’re creating reflect the worldview of the non-diverse, privileged groups of people who create and control these systems: mostly white men. These digital systems are also layered with fantasies about a perfect world, a perfect woman, a perfect future.

The patterns of interaction we are creating in digital worlds are not only building on and perpetuating IRL dominant notions of violence and oppression, but are also reinforcing them and reshaping them back IRL. It’s the IRL-Digital loop of shitty-ness.

By looking at how women are portrayed as robots on screen, in science fiction, and now as personal intelligent assistants, we can make the connections between how women are viewed and treated IRL and how we are gendering our technology.


AI is a mirror of IRL inequalities

This blog post is a collage of relevant articles, papers and other sources that critically look at a couple of dominant notions around social inequality embodied in Artificial Intelligence (AI):

  • Bias
  • Decision making
    • Outsourcing decision making to algorithms via scientists and engineers
    • Black-boxing AI