How not to: Bias in AI

Bias in AI

Or: what is going on with bias in AI, and what can we do to correct it?

When people first hear of AI and its use, they fear intelligent robots that will take over humanity. At least, that is what Hollywood suggests in its blockbusters. But is this what we should really fear?

As we have already explained in prior articles, an artificial intelligence is trained on “historic” data. Whatever patterns this data contains, the AI will carry into its future decision-making.

Examples of bias in AI gone wrong

The most prominent example is probably Amazon and the AI it used to screen job applicants. The system regularly chose men over women, regardless of an individual’s qualifications, which is a nightmare for diversity and simply unfair.

Fortunately, Amazon noticed this quickly and stopped using the system right away. But why did the AI prefer middle-aged white men over every other group of people?

There are other examples with a similar outcome: facial recognition works best on middle-aged white men and worst on dark-skinned women. Voice recognition handles lower, typically male pitches better than the higher pitches associated with women’s voices.

We notice a pattern here: in general, middle-aged white men are better off using AI-based technology than any other group. But why?

Historic data

As already mentioned, an AI is trained on historic data, which it uses as a foundation for further learning. Even today, most management positions are held by the above-mentioned middle-aged white men. The historic data therefore suggests that a man will be better suited for the position than a woman or a person of another ethnicity.

It is immediately obvious that this conclusion is simply wrong: qualification needs to be the key factor in (hiring) decisions.

Here is what Amazon did: it removed the gender marker from the data fed to the AI, so that the system could not base its decisions on gender. The hiring recommendations did not improve, however, and women were still less likely to be selected for management positions. Why? Because the AI still picked up on keywords in the CV, such as “women’s college”, that act as proxies for gender.
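To make the proxy effect concrete, here is a minimal sketch with made-up data. It is not Amazon’s system, just a toy logistic-regression screener: gender is deliberately left out of the features, yet a proxy column that only women trigger (the “women’s college” keyword) lets the model reproduce part of the historic bias anyway.

```python
# Minimal sketch with synthetic data: dropping the gender column is not enough
# when another feature correlates with gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)                 # 0 = man, 1 = woman (NOT a training feature)
experience = rng.normal(5, 2, n)               # equally distributed across both groups
womens_college = ((gender == 1) & (rng.random(n) < 0.4)).astype(float)  # proxy feature

# Historic labels reflect biased past decisions, not actual qualification
hired = (experience - 1.5 * gender + rng.normal(0, 1, n) > 4).astype(int)

X = np.column_stack([experience, womens_college])   # gender itself is excluded
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print("mean score, men:  ", scores[gender == 0].mean())
print("mean score, women:", scores[gender == 1].mean())
# The proxy column re-introduces part of the gap that removing the gender column
# was supposed to eliminate.
```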

Who is at fault here, and why is the bias so hard to detect?

How do we get rid of the bias?

The key phrase here is “unknown unknowns”: we cannot predict in advance which keywords will trigger the decision-making process. The AI lacks the social context and empathy that we as human beings bring to such judgments.

At this point we are blaming “AI” for the wrongdoing, but is that accurate? The bias is built into the data, and so the system learns the bias.

There is no cure-all for removing the bias, but we have to be careful about how questions are worded and posed.

The definition of fairness needs to be adjusted and statements need to be put into social context. Data needs to be reviewed and reprocessed; one simple example of such a review is sketched below.
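As a hedged illustration of what “reviewing” can mean in practice, the snippet below compares selection rates per group, one of the simplest fairness checks (often called demographic parity). The data and the `selection_rates` helper are hypothetical, not part of any particular tool.

```python
# Minimal sketch: compare how often each group receives a positive decision.
import numpy as np

def selection_rates(decisions: np.ndarray, group: np.ndarray) -> dict:
    """Share of positive decisions per group value (demographic parity check)."""
    return {int(g): float(decisions[group == g].mean()) for g in np.unique(group)}

# Made-up decisions: 1 = invited to interview, 0 = rejected
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
gender    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # 0 = men, 1 = women

rates = selection_rates(decisions, gender)
print(rates)                              # selection rate per group
print("ratio:", rates[1] / rates[0])      # a ratio far below 1.0 flags possible bias
```

A check like this does not fix anything by itself, but it makes the “unknown unknowns” visible enough to be discussed and acted on.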

The core question

Even if we re-evaluate the data and train AI systems to decide differently, are we working at the core of the problem? The problem being that, still today, middle-aged white men are preferred over women in many cases, and that facial recognition works best on white men and worst on women of colour?

We need to bring diversity into real life, and we need to do it now.

Diversity and “Diversity”

When we speak about diversity, we usually mean a fair mixture of men and women, ethnicities and ages, all working together in a heterogeneous group. This mix is a must for a successful company in order to gain as many perspectives as possible.

But there is more to diversity. We also need to see different qualifications and backgrounds as part of it: imagine the engineer working next to the marketing student, or the tech start-up working alongside the traditional business. These are the relationships that offer the greatest benefits and plenty of creative, new input.

Where it all comes together

And this is where it all connects: diversity, networking, multidisciplinary exchanges, co-working spaces, new work and many other buzzwords of today.

We need to open up and be willing to try new things, to learn from other people and to let ourselves be inspired. This will give us a different view of the “unknown unknowns” and will also diminish bias in AI in the long run.

(FS)