Why AI is sexist

Image by Comfreak from Pixabay

As digital transformation reshapes our lives, our smartphones, smart homes, and smart cities now influence how we live. Much of this is driven by artificial intelligence (AI), which automates tasks ranging from medical diagnoses to judicial verdicts and recruitment decisions.

While these systems save us time and reduce human error, relying on AI can be risky because of its inherent sexism.

How does AI become sexist?

Artificial intelligence is a product of machine learning – a case of monkey see, monkey do. One of the ways AI learns is through Natural Language Processing (NLP), which combs through linguistic data and interprets it using mathematical models.

The problems come when an NLP algorithm gives us gender-biased outcomes simply because of the language it’s learning from; yes, the English language is inherently sexist.

Firstly, masculine nouns and pronouns have long been used to refer to both men and women (e.g. man-kind, king-dom), and it’s only relatively recently that we started using gender-neutral words, like firefighters instead of firemen. Secondly, ingrained sexism has conditioned society to hold onto outmoded gender roles. For example, we associate “doctor” with men and “nurse” with women.

Machine learning algorithms currently aren’t sophisticated enough to pick up nuance. Take the words king and queen: in most of the text a machine learns from, one is a male royal and the other is the woman married to him. As human beings, we also use queen to describe a woman ruling a kingdom in her own right, but a machine can’t wrap its head around the idea of a queen reigning over a masculine king-dom.
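To see how these word associations look in practice, here’s a minimal sketch using pretrained word vectors via the gensim library. The Google News word2vec model is simply our choice for illustration (the article doesn’t refer to any specific embedding), and the exact results depend on the model used:

```python
# Illustrative sketch: querying pretrained word vectors for the king/queen relation.
# "word2vec-google-news-300" is one widely available embedding, chosen for the example.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # large download (~1.6 GB) on first use

# Vector arithmetic: king - man + woman ≈ ?
# The model can only answer from word associations in its training text.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```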

Machines also have a habit of attaching gender to certain professions; the word programmer is gender-neutral, but a machine tends to associate it with “male” because of the social perception humans have of the job.
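The same pretrained vectors can be probed for profession bias. Again, this is only a sketch – the job words below are our own picks – comparing how close each profession sits to “he” versus “she”; with commonly used embeddings, “programmer” tends to lean male while “nurse” leans female, though the exact numbers vary by model:

```python
# Illustrative sketch: how strongly do job words lean towards "he" or "she"
# in the same pretrained vectors? The word list below is chosen for illustration.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

for job in ["programmer", "doctor", "nurse", "teacher"]:
    he = vectors.similarity(job, "he")
    she = vectors.similarity(job, "she")
    lean = "male-leaning" if he > she else "female-leaning"
    print(f"{job}: 'he'={he:.2f}, 'she'={she:.2f} -> {lean}")
```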

It’s not just English

In Romance languages like French and Spanish, nouns carry grammatical gender. In Spanish, for instance, a kitchen (cocina) is feminine and money (dinero) is masculine. This makes it virtually impossible to ensure that a data set is unbiased before training. A classic example of the masculine default can be seen in machine translation systems like Google Translate when translating into French.

Even when the context clearly refers to women – for instance, a sentence describing how efficiently they work – the French output defaults to “Ils”, the masculine plural subject pronoun.

It’s the same the other way around: when the gender-neutral Hungarian sentences “Ő egy orvos. Ő egy nővér” are translated into English, the system assumes the subjects are “He’s a doctor. She’s a nurse”.
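You can reproduce this kind of probe with an open-source translation model. The sketch below uses the Hugging Face transformers library with a Hungarian-to-English checkpoint (Helsinki-NLP/opus-mt-hu-en – our choice for the example, not the system the article tested, and its exact output may differ):

```python
# Probe how a translation model resolves gender-neutral Hungarian pronouns.
# The checkpoint below is one open Hungarian-to-English model; outputs may vary.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-hu-en")

# Hungarian "ő" is gender-neutral; English forces the model to pick a pronoun.
for sentence in ["Ő egy orvos.", "Ő egy nővér."]:
    result = translator(sentence)[0]["translation_text"]
    print(f"{sentence} -> {result}")
```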

These issues are common to most languages, highlighting how widespread gender bias is around the world – and therefore how easily machine learning built on human language can absorb it.

How does it affect you?

As we move further into an automated world where we increasingly treat AI as an incorruptible, dedicated worker, we’re handing everyday tasks like recruiting candidates over to systems that can be discriminatory because they copy human behaviours.

According to a Reuters report, Amazon spent years working on an AI system to review resumes and recommend the best candidates. Because the tech industry is male-dominated, the majority of the resumes came from men, so the AI discriminated against women (e.g. down-scoring resumes that included the word “women”). Despite multiple attempts to correct the algorithm, Amazon scrapped the AI because it could not “unlearn” this bias.
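To see how this kind of bias creeps in, here’s a deliberately tiny, made-up illustration – not Amazon’s actual system, and the resumes and hiring labels below are invented – of a classifier trained on historically skewed decisions. Because past rejections happened to contain the word “women’s” more often, the model learns a negative weight for it:

```python
# Hypothetical toy example (NOT Amazon's system): a resume classifier trained on
# historically biased hiring decisions picks up a penalty for the token "women".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented resumes and invented historical outcomes (1 = hired, 0 = rejected).
resumes = [
    "captain of the men's chess club, java developer",
    "led the robotics team, python developer",
    "hackathon winner, c++ developer",
    "captain of the women's chess club, java developer",
    "president of the women's coding society, python developer",
    "women's debate team lead, c++ developer",
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for "women" comes out negative: the model has simply
# memorised the bias already present in the historical decisions.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

Scrubbing the obvious word wouldn’t necessarily fix things either, because a model can latch onto other features that happen to correlate with gender – one reason this kind of bias is so hard to “unlearn”.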

Skewing the hiring process is just one example – bias in algorithms can also lead to discrimination in loan applications, medical diagnoses, and even the criminal justice system.

With gender inequality still deeply rooted in our society, machine learning algorithms run the risk of propagating and amplifying all our biases. This could have alarming consequences, especially when we place blind trust in AI across so many decision-making scenarios.