Transparency in Artificial Intelligence: Are we getting it right?
Updated: Oct 25, 2021
It would be appropriate to demand clarity on the what and the where, i.e. what is and what is not AI within a solution.
María Jesús González-Espejo
October 13, 2021
In the field of artificial intelligence, organisations (companies, associations, public administrations) are aware of the need to incorporate transparency as a principle in their strategies and policies. Indeed, most ethical codes of companies and institutions now adopt transparency as a principle, although the way it is cited and used varies widely. The natural-language research carried out by the Linking Artificial Intelligence Principles project, for example, shows that transparency is linked to concepts as varied as explainability, predictability, intelligibility, auditability and traceability.
The European legislator has done the same: the draft EU Regulation also includes transparency as a principle and requires it for certain types of artificial intelligence (AI)-based solutions. The draft sets harmonised transparency rules for AI systems intended to interact with natural persons, for emotion recognition and biometric categorisation systems, and for AI systems used to generate or manipulate image, audio or video content.
In general, when talking about AI and the need for it to be transparent, the debate focuses on algorithms and data and, more specifically, on their power to influence our decision-making on matters that may affect our fundamental rights (financial, health, legal, educational and so on). It also focuses on their power over our will, limiting our ability to choose freely (to inform ourselves, vote, buy, etc.) without our being aware of it.
The reality is that, for now, what politicians, legislators and legal scholars seem most concerned about is how AI uses data (where it comes from, how it is labelled, etc.) and how the algorithm works: what it does and how it does it. But the debate on transparency and AI should cover many more aspects, in particular everything that makes up or influences the development, commercialisation, deployment and use of AI systems. Among others, it would seem appropriate to demand transparency about the what and the where, i.e. what is and what is not AI within a solution; about the business model that justifies it; about the degree to which an AI system influences or shapes an organisation's decision-making; and about the capabilities and limitations of the AI system. Transparency should also extend to the parties that play a role of any kind in relation to that AI, such as its owners, developers, supervisors, certifiers, users, and stakeholders, affected parties or beneficiaries, as well as to all processes of use, supervision and certification.
In conclusion, we should be aware that AI is an increasingly relevant technology whose potential may affect our fundamental rights. Transparency is a polysemous concept in the legal field, and understanding its different meanings is worthwhile, because from them we can perhaps draw useful conclusions for defining what transparency should mean when applied to artificial intelligence.
Transparency is being incorporated as a principle in most AI laws and in the ethical codes of companies and institutions, although how it is defined varies widely. Today, when we talk about AI transparency, it is usually claimed in relation to data and algorithms, but not in relation to other issues that may be equally relevant. That is why it is important to reflect, as soon as possible, on everything that influences the development, commercialisation, deployment and use of artificial intelligence systems: transparency is not just a nice word that sells well, and it is not enough to apply it only to data and algorithms.
Transparency is a complex, multifaceted right that needs to be properly defined, based on a thorough understanding of this category of technologies, before it is too late and we find ourselves living in a world that is artificially intelligent but perhaps too dark.