Racist Technology?

As if racism in the real world wasn't enough... now we've got racist robots? Oliver Cohen reports on racial bias in the technology sector and the underrepresentation of minority groups in training data.

Race isn’t usually a topic discussed in a calm, technical manner devoid of emotion, so I was all the more surprised to learn that the perpetual flame war had moved into the realm of the tech sector. You’d be forgiven for thinking that an AI, a machine that doesn’t share the emotions and foibles of its human creators, would be above such things. Instead, it seems to have inherited the sins of its parents and now stands accused of the very same prejudices.

The algorithms were better at correctly identifying a white male face than that of a dark-skinned woman.

What I’m referring to is a study recently published by Joy Buolamwini of the MIT Media Lab as part of her thesis. It is a long, detailed document, too complex to explain fully in a short article, but well worth a read if you have some spare time on your hands. The crux of it is an examination of facial recognition algorithms used for gender classification, and of how their performance varies with the skin colour and gender of the people being identified.

The results are “interesting”. Across the classification algorithms there was a large discrepancy in error rates between lighter-skinned and darker-skinned faces, and likewise between males and females. The algorithms were better at correctly identifying a white male face than that of a dark-skinned woman, and not by an irrelevant margin either: the error discrepancy between males and females ranged from nine to twenty percent across the various algorithms, and for skin shade, from ten to twenty-one percent. Nor were these algorithms confined to the lab. Those surveyed included commercial systems from IBM, Microsoft and Face++, so chances are most people reading this have used one at some point in the recent past.


Photo by Soroush Karimi / Unsplash

The obvious question might be: while this is an issue, is it really a big deal in the whole scheme of the racial problems persisting today? The answer, in my not-so-important opinion, is a resounding yes. The problems behind such issues are certainly not limited to facial recognition. From inappropriate chatbots to a court risk-assessment algorithm that was biased against black prisoners, this is an issue that pervades a lot of past, current and, worryingly, possible future AI. Artificial intelligence in its current form starts as an empty vessel, programmed by its creators; whatever it becomes is nothing more than a reflection of the environment that created it.

This is not to say I categorically believe that all of Silicon Valley and the wider tech sector are a huge bunch of white-cloak-wearing, 1960s racists. Like most issues of the present day, the situation at hand is a lot more nuanced, and I think much of it can be understood through the facial recognition example.

It hammers home the point that an algorithm of this sort is only as good as its training data.

Firstly, however, I need to explain, very briefly, how such an algorithm works. The exact mathematical process by which you teach a computer to recognise a woman’s face versus a man’s is not just complicated but well beyond this author’s understanding. One key detail, though, is quite simple: the starting point, the training data. It’s an important concept and one I’ll come back to later. Essentially, to “train” the algorithm you show it examples of data along with the correct classification. If you were training it to recognise shapes, you would show it many squares while telling it that what it is looking at is indeed a square. This is a gross oversimplification, and other algorithms work in varied ways, but it hammers home the point that an algorithm of this sort is only as good as its training data.
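To make that show-and-tell process concrete, here is a minimal sketch in Python using the scikit-learn library. The “shape” data is invented purely for illustration; this is not how the facial recognition systems in the study are built, just the same train-then-classify idea in miniature.

```python
# A minimal sketch of supervised training: show the algorithm labelled
# examples and it can then classify inputs it has never seen.
# The "shape" data here is invented purely for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Each training example is [width, height]; the label is the correct answer.
examples = [
    [4, 4], [7, 7], [2, 2],   # squares
    [4, 9], [7, 3], [2, 5],   # non-square rectangles
]
labels = ["square", "square", "square",
          "rectangle", "rectangle", "rectangle"]

# "Training" here means fitting the model to the labelled examples.
model = KNeighborsClassifier(n_neighbors=3).fit(examples, labels)

# The model classifies a new shape by comparing it to what it was shown.
print(model.predict([[5, 5]]))   # -> ['square']
```

The important part is the last line: the model can only generalise from what it was shown. Starve it of examples of a particular kind of face and it has very little to go on.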


Photo by Alex Knight / Unsplash

What was interesting was that one of the reasons the MIT study hypothesised for the differential accuracy was the training data itself. Both the training data and the benchmarks used to test such algorithms were found to underrepresent minority groups. Part of the issue could be that although black Americans are a minority in the US (around 12.6% of the population according to the 2010 census), training data that merely mirrors that percentage is never going to produce algorithms whose accuracy on darker-skinned faces matches their accuracy on white faces.
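As a hedged illustration of that hypothesis (synthetic data only, nothing from the study itself), the sketch below trains a single classifier on a dataset where one group supplies 90% of the examples, then measures the error rate for each group separately, much like the study’s disaggregated analysis. The groups, features and numbers are all invented.

```python
# Illustration only: when one group dominates the training data, a single
# model can end up far less accurate on the underrepresented group.
# All data is randomly generated; groups and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)

def make_group(n, centre_x):
    """n samples with two features; the true label depends on the first
    feature relative to the group's own centre."""
    X = np.column_stack([rng.normal(centre_x, 1.0, n),
                         rng.normal(0.0, 1.0, n)])
    y = (X[:, 0] > centre_x).astype(int)
    return X, y

# Group A supplies 90% of the training data; group B only 10%.
X_a, y_a = make_group(900, centre_x=0.0)
X_b, y_b = make_group(100, centre_x=3.0)

model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate each group separately, as a disaggregated audit would.
for name, centre_x in [("group A", 0.0), ("group B", 3.0)]:
    X_test, y_test = make_group(2000, centre_x)
    print(f"{name} error rate: {1 - model.score(X_test, y_test):.1%}")
```

A single aggregate accuracy figure would hide the gap; only the per-group breakdown reveals it, which is precisely why the study’s disaggregated evaluation mattered.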

In a future that will no doubt amplify the present, where decisions about mortgages, loans and insurance are all made by AI like that discussed here, we as students must be mindful. Ours is a generation that could end up both creating and falling victim to the inherent prejudices that will pervade our machine counterparts.

Featured Image: Unsplash / Markus Spiske


Keen to find out more? Get in touch!

Facebook/Epigram/Twitter