Why the risks of artificial intelligence are limited and how to address them
Understanding the benefits of AI while managing the risks is necessary for society to progress.
November 18, 2021
The 'existential threats' of AI are very limited. The apocalyptic scenarios depicted in some science fiction novels are far from being realised. Machines are light years away from dominating man.
One algorithm can play chess but not poker (and another can play poker but not chess); another can assess the risk of default on a loan but cannot translate a text from English to Mandarin; yet another can recognise faces but not compose music.
As one of the leading pioneers in the field, Andrew Ng, says: "Worrying about AI dominating humans is like worrying about overpopulation on Mars".
However, the development of this technology, which has demonstrated a great impact on effectiveness (it improves process performance) and efficiency (it does so at a lower cost), is not without risks. There are at least five ethical and social factors that need to be addressed as soon as possible in order not to hinder the benefits that these advances can bring to society.
1. Sudden job displacement
The development of AI lowers the cost of certain tasks, shifting the value - and the role - from humans to other links in the value chain. As recently as 50 years ago, most complex engineering calculations were performed by armies of 'human computers' armed with manual slide rules (delightfully depicted in the film 'Hidden Figures', 2016), who were displaced by the arrival of the first electronic computers. Other jobs have all but disappeared (developing photographic paper), while new technologies have created jobs nobody anticipated (who would have guessed the roles of cybersecurity expert, data engineer, creator of virtual universes, 'community manager'...). Although overall some jobs will disappear and others will be created, the risk lies in the speed at which this happens and in whether the people performing the displaced tasks can retrain fast enough. Anticipation is the key.
2. Big Brother
The ability to collect, cross-reference, aggregate and analyse personal information is developing at such speed that we are often unaware of how much companies and governments 'know about us' - frequently with our own consent.
Today, most information is used for relatively benign (though often annoying) commercial purposes (advertising, trade). The temptation to use such 'absolute power' to surveil any citizen can be irresistible to any totalitarian power. The impact of any tool in the wrong hands can be overwhelming.
3. Perpetuation of biases
In matters that affect people's lives, there is a risk that AI algorithms will deepen, rather than alleviate, certain biases. These algorithms are trained on thousands of 'real' historical cases, so that the machine learns to detect and apply the most important determinants of the likelihood of a crime, the value of a court award or the risk of non-payment. If those historical decisions were themselves biased, the model learns and reproduces that bias.
And such biases are not easy to eliminate completely, even deliberately: even if there is no 'race' field, the algorithm may inadvertently infer it from data such as address, education, or first and last names overrepresented in particular ethnic or social groups.
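This proxy effect can be illustrated with a toy sketch. The data and the scoring rule below are entirely hypothetical, invented for illustration: a 'blind' rule that never sees the group label still produces different average scores per group, simply because neighbourhood is correlated with group membership.

```python
import random

random.seed(0)

# Hypothetical synthetic data: each applicant has a neighbourhood code and a
# group label. The group label is NEVER given to the scoring rule, but
# neighbourhood is strongly correlated with it (the proxy variable).
applicants = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Group A lives mostly in neighbourhood 1, group B mostly in neighbourhood 2.
    in_typical_area = random.random() < 0.9
    neighbourhood = 1 if (group == "A") == in_typical_area else 2
    applicants.append({"group": group, "neighbourhood": neighbourhood})

def score(applicant):
    # A 'blind' scoring rule: it only looks at neighbourhood, never at group.
    return 0.8 if applicant["neighbourhood"] == 1 else 0.3

def mean_score(group):
    scores = [score(a) for a in applicants if a["group"] == group]
    return sum(scores) / len(scores)

# Despite never seeing the group label, the rule gives group A a much
# higher average score than group B.
print(round(mean_score("A"), 2), round(mean_score("B"), 2))
```

Removing the sensitive field is therefore not enough; auditing the model's outcomes across groups is what reveals the bias.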
4. Complex ethical dilemmas
In the face of an unavoidable accident, should an autonomous car prioritise the lives of its passengers or human life in general? Is it better to drive off a cliff, killing the two passengers in the car, or to swerve into a bus stop where many pedestrians are waiting?
This set of dilemmas also applies to fields such as medicine or defence, and requires a universal, consensus-based response.
5. Increases in inequality
The cumulative nature of data and improving algorithms generates 'winner-take-all' dynamics that can lead to increases in inequality at different scales. For example, it is very difficult to build a better search engine than Google because it has a sustainable advantage in the amount of information about the web and users that it has accumulated.
How can these threats be resolved?
The speed of technological development is such, and its impact on people's lives so profound - and growing - that it is impossible for law and regulation to anticipate problems we cannot yet foresee, just as we failed to foresee many of today's problems only 10 or 20 years ago.
The law takes a long and negotiated development time, and must be reasonably objective, albeit interpretable. Therefore, the law cannot be the only solution (although it must be part of it) to the dilemmas of the - undoubtedly socially positive - development of AI in different areas of business and administrations.
The alternative of a law that could cover all possible cases 'ex ante' is not only impossible (given the speed of technological development) but not even desirable, because of the heavy toll it would take on innovation - bad in itself, and even worse in a context of strong technological competition with the United States and China.
The best solution
The best solution is to develop protocols and appoint ethics officers in companies and administrations who, with a flexibility matched to the maturity of the technology and guided by shared values about what is and is not acceptable, effectively oversee potential dilemmas - with real veto power, access to information, and a say in key decisions.
Think of a medical ethics committee discussing the appropriateness or otherwise of applying certain treatments in ethically charged medical cases. Let us hope that good governance rules - or perhaps regulation itself - will lead to the creation of these self-monitoring roles, allowing time for law and jurisprudence to develop at their own speed, protecting society and fostering innovation.