
Why ethics is essential in the creation of Artificial Intelligence

Artificial intelligence (AI) has long been a feature of modern technology and is increasingly common in technologies used in the workplace.

Revista Byte

December 10, 2021

This can be seen in Spain with the creation of the Secretary of State for Digitalisation and Artificial Intelligence (SEDIA) in January 2020, or the appointment of the first director of the data office within the secretariat itself. The Spanish government is backing AI with substantial investment.

However, according to an ONTSI (National Observatory of Technology and Society) dossier on indicators of artificial intelligence use in companies in Spain and the European Union during 2020, this growth remains very slow.

The data are nonetheless positive: with corporate AI uptake at 7%, Spain outperforms the EU27 average of 6%. The leaders are English-speaking Ireland (20%) and Malta (15%), followed by Finland (10%) and Denmark (9%).

One possible reason for the general lack of trust in AI is that unethical biases can be introduced during the development of AI technologies. While no one sets out to build an unethical AI model, it only takes a few instances of disproportionate or accidental weighting applied to certain types of data, to the detriment of others, to create unintended biases.

Demographics, names, years of experience, known anomalies and other personally identifiable information are the kinds of data that can skew an AI model and lead to biased decisions. In essence, if the AI is not well designed to work with data, or the data provided is not clean, the model can generate predictions that raise ethical issues.
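
One common, if partial, mitigation is to strip personally identifiable fields from records before they ever reach a model. The sketch below is illustrative only; the field names (`name`, `gender`, `age`, `postcode`) are hypothetical examples, not a definitive list of what counts as sensitive.

```python
# Illustrative sketch: removing sensitive fields from a record before
# it is used to train or query a model. The field names are assumed
# examples, not an exhaustive definition of sensitive data.
SENSITIVE_FIELDS = {"name", "gender", "age", "postcode"}

def strip_sensitive(record: dict) -> dict:
    """Return a copy of the record without the sensitive fields."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

applicant = {"name": "A. Pérez", "gender": "F", "age": 34,
             "years_experience": 8, "skills_score": 0.91}
print(strip_sensitive(applicant))
# only 'years_experience' and 'skills_score' remain
```

Removing sensitive columns alone does not guarantee fairness, since other fields can act as proxies for them, but it is a frequent first step.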

The increasing use of AI across all sectors increases the need for AI models that are not subject to unintentional bias, even if this occurs as a by-product of how the models are developed. Fortunately, developers can ensure that their AI models are designed as fairly and equally as possible to reduce the possibility of unintended bias and, consequently, help increase user confidence in AI. To improve the process, the following steps should be taken:

1. Adopt a mindset that prioritises equity

Integrating fairness into all stages of AI design and development is a crucial step in developing ethical AI models. However, fairness principles are not always applied uniformly and may differ depending on the intended use of AI models, posing a challenge for developers. All AI models should have the same fairness principles at their core. Therefore, educating data scientists on the need to build AI models with an equity mindset will have a significant effect on the way models are designed.
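
To make "fairness principles" concrete, one widely used criterion is demographic parity: the rate of favourable decisions should be similar across groups. The sketch below, with illustrative data, computes the gap between two groups; this is just one of several fairness criteria, and which one applies depends on the model's intended use.

```python
# Hedged sketch of one fairness check (demographic parity): compare
# the favourable-outcome rate between two groups. The decision lists
# below are illustrative, not real data.
def positive_rate(outcomes):
    """Fraction of favourable decisions (1 = favourable)."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [0, 1, 0, 0, 1, 0, 0, 0]

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"demographic parity gap: {gap:.2f}")  # a large gap signals potential bias
```

A gap near zero suggests the model treats the groups similarly on this criterion; a large gap is a prompt to investigate, not proof of intent.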

2. Clean data clears the way for AI

With such a significant investment in AI technologies, it is essential that the tools being used and produced deliver a strong return on investment. The skills, experience and expertise of the IT professionals and data scientists involved in AI projects can determine the success of these projects and therefore whether they will deliver a strong return. Being able to ensure that the data used for AI models is clean and relevant is a key skill required before starting an AI project.

IT professionals and data scientists must be able not only to identify and provide clean data, but also to recognise results that have been skewed by biased data. They can then retrain the model on more appropriate data, continually improving the results of AI projects.
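
Recognising skewed results can be as simple as auditing error rates per group. The sketch below, with assumed example predictions and labels, flags a model for retraining when the error-rate disparity between groups exceeds a tolerance; the 0.2 threshold is an arbitrary illustration, not a standard.

```python
# Sketch: audit a model's error rate per group and flag it for
# retraining if the disparity is too large. All values, and the 0.2
# tolerance, are illustrative assumptions.
def error_rate(preds, labels):
    """Fraction of predictions that disagree with the true labels."""
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

audit = {
    "group_a": ([1, 0, 1, 1], [1, 0, 1, 0]),  # (predictions, truth)
    "group_b": ([0, 0, 1, 0], [1, 1, 1, 0]),
}
rates = {g: error_rate(p, y) for g, (p, y) in audit.items()}
needs_retraining = max(rates.values()) - min(rates.values()) > 0.2
print(rates, needs_retraining)
```

In practice the retraining step would rebalance or clean the data for the disadvantaged group before fitting the model again.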

3. Keep humans involved

While one advantage of AI is that it relieves human workers of menial and repetitive tasks, and many models are designed to make their own predictions, it is essential that humans remain involved to at least some extent. This must be taken into account throughout the development of an AI model and in its application in the workplace. In many cases this may involve shadow AI, where humans and the AI model work on the same task and their results are compared to assess the model's effectiveness.
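
The shadow-AI idea can be sketched in a few lines: the model runs alongside human reviewers on the same cases, and their decisions are compared before the model is trusted on its own. The decision values below are illustrative.

```python
# Sketch of shadow AI: compare the model's decisions against human
# reviewers on the same cases. The decisions are illustrative data.
human_decisions = ["approve", "reject", "approve", "approve", "reject"]
model_decisions = ["approve", "reject", "reject", "approve", "reject"]

agreement = sum(h == m for h, m in zip(human_decisions, model_decisions))
agreement_rate = agreement / len(human_decisions)
print(f"agreement with human reviewers: {agreement_rate:.0%}")
```

Cases where the two disagree are the interesting ones: they show either a model error to fix or a human inconsistency worth examining.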

Alternatively, developers may choose to keep human workers within the operational model of the AI technology, especially where an AI model lacks sufficient expertise, allowing them to guide the AI or override its errors so the system achieves optimal results.
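
One common way to keep a human in the operational loop is confidence-based routing: the model acts automatically only when it is confident, and otherwise escalates to a person who can decide or override. The threshold below is an assumed operational cut-off, not a recommendation.

```python
# Sketch of human-in-the-loop routing: low-confidence predictions are
# escalated to a human reviewer instead of being acted on directly.
# The 0.9 threshold is an illustrative assumption.
CONFIDENCE_THRESHOLD = 0.9

def route(prediction: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction             # act automatically
    return "escalate_to_human"        # a person decides or overrides

print(route("approve", 0.97))  # approve
print(route("approve", 0.62))  # escalate_to_human
```

The threshold itself becomes a tunable trade-off between automation and oversight, and can be tightened for higher-stakes decisions.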

The use of AI is likely to keep increasing as organisations around the world continue to digitally transform. As such, it is increasingly clear that AI systems will need to be even more reliable than they are today to reduce the possibility of unintended bias and increase user confidence in the technology.
