Thinkers who go against the grain: Acquired errors

To err is human. At least, that is what people think. When an algorithm makes a mistake, however, we consider it a very different story. Research from the University of Pennsylvania has shown that even a small, insignificant mistake by an algorithm can cause us to lose confidence in it completely. Researchers call this phenomenon algorithm aversion.

We are therefore far stricter with technology than we are with ourselves. For decades, for example, we have accepted that cars cause the deaths of more than 3,000 people worldwide each day. Yet a single death caused by an error of a self-driving car is enough to turn the world on its head. We expect AI systems to be completely transparent. But when a human driver causes an accident, we likewise have to rely on a rehearsed statement after the fact. In that case, too, we cannot determine the exact reason for an action, since we cannot look inside a person's head.

Furthermore, many system errors arise from our own mistakes. We accuse AI systems of being biased and thereby giving rise to inequality. Yet that inequality stems mainly from human prejudices in the data used as input: bias in AI is a matter not of nature but of nurture. Instead of pointing the finger at ourselves, we point it at the technology. That is not only unjustified; it also hinders the development of AI's potential.

AI could, in fact, give us more control over inequality in areas such as gender and race. AI can control the conditions that give rise to bias far better than people can. We can, for example, ensure sufficient diversity both in the datasets and in the backgrounds of the programmers who develop the algorithms. If we accept that prejudices exist, we can build systems that distribute this inequality fairly. AI may not be able to fully remedy our mistakes, but it can distribute them more evenly.

Rudy van Belkom
On behalf of the Netherlands Study Centre for Technology Trends (STT), he investigates the future of artificial intelligence.

Photography: Michiel Laurens