‘I find it quite exciting’, says Peter Bosman, AI researcher at the Centrum Wiskunde & Informatica, over the telephone. ‘Will a moment come when we trust AI too much?’ This spring, the first patient was given radiation therapy at Amsterdam UMC according to a treatment plan that had been calculated with the help of artificial intelligence. The computer carries out the calculations in a minute, whereas previously that took an hour. The radiotherapist who bears final responsibility therefore has more time to weigh up the different treatment plans that the computer proposes. Bosman: ‘It changes their work in a positive manner: they gain more time to think about the treatment decision.’ The patient probably experiences fewer side effects thanks to a more precise, patient-specific radiation treatment.
People continue to bear the final responsibility
Up until now, Bosman could practise with AI on the data of patients who had already been treated. This revealed that in 98 percent of cases, doctors preferred the AI plan to the plan they had previously made themselves. But now that AI is going “live”, isn't there a risk that eventually the doctors will blindly trust it? Will they still think for themselves? Just as we have become used to blindly trusting the navigation on our smartphones, will a doctor eventually accept the outcomes from the computer without a second thought? Bosman: ‘That’s why we always want to emphasise that our system supports the decision-making process. People continue to bear the final responsibility.’ In a nutshell, this example reveals a moral dilemma of AI: won't such systems stealthily take on too many of our tasks? How do we maintain our freedom, creativity and autonomy if a computer can do everything better? And: is letting go of tasks such a bad thing anyway?
According to Frank van Harmelen, Professor of AI at VU Amsterdam, we really do need to get rid of the image that machines will replace us. ‘That would harm society’, he says. ‘Furthermore, it is scientific nonsense, because artificial intelligence cannot be compared with human intelligence, so why would you replace one with the other?’ However, that does not take away the fact that a considerable wave of automation is taking place in which computers can take on simple tasks. ‘But that is the standard automation story of the past two hundred years. That does not yet mean that we will be governed by machines.’
Computer as co-author
Instead, Van Harmelen foresees an alternative development in which computers collaborate with people as a sort of colleague. Together they can deliver better work than if they work in isolation from each other. Such a colleague could be very useful in healthcare, science and education, he thinks. ‘The computer is able to find correlations in an enormous pile of literature. In doing so, it can discover, for example, that the same proteins play a role in two completely different diseases. That could lead to an alternative application for a medicine, such as Viagra, which was once developed to treat high blood pressure and is now also used to treat erectile dysfunction.’ However, to work well with a person, such a computer must have a theory of mind: it must be able to correctly interpret the knowledge level of its human “colleague”. Over the next ten years, Van Harmelen and many of his colleagues within the NWO Gravitation programme Hybrid Intelligence will investigate how people and machines can strengthen each other in this manner, instead of replacing each other. The aim is that in several years’ time, the computer will be a co-author of a scientific publication.
Nerd solves problems
How can humans retain their autonomy alongside such a fast, multitalented and accurate colleague? Bosman thinks that AI researchers have the important task of emphasising that people are always in control: they continue to think and make the final decision. ‘However, I don’t know how we can maintain that awareness in the longer term either.’ He notes that these ethical questions are becoming relevant now that AI will actually be used in clinical practice. ‘We are simply a group of nerds who solved a problem for radiotherapists, but we must never forget to keep asking ourselves whether we should be trying to solve this problem at all.’
Privacy, autonomy and justice
According to Fleur Jongepier, assistant professor of Digital Ethics at Radboud University, there is a high chance that a certain laziness will nevertheless creep in the longer and more often a computer system is used. It would then be very tempting to mindlessly click “okay”. Jongepier investigates the self-knowledge and autonomy of people in the digital world. According to her, it is possible to build human intervention into the design of a new system. ‘For example, with a pop-up that forces you to pause and consciously reflect. However, it must not lead to too much frustration either, as is the case with cookie notifications on websites.
Technology is not neutral; it is susceptible to bias and prejudice
That remains an interesting challenge for designers and engineers.’ Jongepier has noticed that, in recent years, she and her fellow ethicists have received many requests to help think about new technologies. ‘Ethicists are now being snapped up by companies – sometimes with sincere intentions, but sometimes as a form of ethics washing. In that case they are invited without any real say, or merely to head off legislative regulation.’ Ethics first is therefore her maxim. The design of a new technology must be based on values that you first formulate together with a broad group of stakeholders. ‘Examples are privacy, autonomy, solidarity and justice. Efficiency should most certainly not be the only starting point.’
Two different types of computers
Jongepier is allergic to both the techno-utopians and the techno-pessimists. ‘AI can be incredibly successful and will perhaps eventually become a right too. It is not inconceivable that patients will soon be able to demand a medical treatment or advice based on AI.’ Nevertheless, she does see risks in an AI system acting as a scientific co-author. ‘Technology is not neutral; it is susceptible to bias and prejudice. An AI system might suggest that research is only worth doing if it can be successfully completed within two years, which could be at the expense of fundamental research.’ Van Harmelen agrees. ‘I do not yet have a solution for this either. Moral values, such as responsibility and explainability, need to be included in the development project right from the start.’
Even though AI is causing quite a stir in society, human creativity and inventiveness are definitely not under threat, concludes Van Harmelen. ‘Ultimately, thinking is a form of calculation too. However, people and machines are two completely different types of computers with different types of creativity. There is no reason why these two cannot complement each other really well.’