Cristina Ferrero Castaño
Ethical AI is easier said than done
The non-profit group AlgorithmWatch created a global inventory of guidelines and frameworks that governments, businesses and societies could follow to create ethical AI.
IT User
August 23, 2021
The list has more than 160 items, but enforcement mechanisms are hard to find among them. Last year, a group of researchers examined 22 ethical guidelines, frameworks and principles to see whether they affected human decision-making. The disheartening answer they came up with? "No, most of the time they don't."
At a meta-level, the problem is bigger than finding a framework that suits a particular company or comes with a workable compliance mechanism. Although many frameworks exist, there is no common, globally agreed standard for assessing and implementing ethical and accountable AI.
Whatever guidelines and frameworks for ethical AI an organisation chooses, they have one element in common: they emphasise the use of high-quality data to train AI. Poor, incomplete, biased and skewed data is the root cause of AI's poor reputation. If we reduce data bias and build the technology on an ethical backbone, we can create a new, interesting and reliable future for AI, one in which AI does not confuse us, harm businesses or cause social distress.
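Reducing data bias starts with measuring it. As a minimal sketch (the attribute names, data and 20% tolerance below are illustrative assumptions, not from the article), one simple check is whether each group in a training set holds roughly its expected share of the records:

```python
from collections import Counter

def representation_report(records, group_key, tolerance=0.2):
    """Flag groups whose share of the training data deviates from a
    uniform share by more than `tolerance` (as a relative fraction)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    expected = 1 / len(counts)  # uniform baseline: equal share per group
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            "skewed": abs(share - expected) > tolerance * expected,
        }
    return report

# Hypothetical training records with a sensitive attribute.
data = [{"region": "north"}] * 80 + [{"region": "south"}] * 20
print(representation_report(data, "region"))
```

A real audit would use task-appropriate baselines (e.g. population shares rather than a uniform split) and look at label balance per group as well, but even this crude report surfaces skew before a model is trained on it.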
The need for a common global language and unified standards is urgent. Without them, no amount of guidance will match the problems presented by AI. But smart companies know they must forge ahead, trusting that fortune favours the brave, and make the best use of what is available.
As investment in AI grows, so does the need for ethical AI.
Companies are betting big on AI. Investments in artificial intelligence are forecast to grow from $27.23 billion in 2019 to $266.92 billion in 2027. As investments increase, the need to give the technology a "moral compass" will become more urgent. The National Institute of Standards and Technology (NIST) is trying to do this by establishing US federal artificial intelligence guidelines that improve confidence in artificial intelligence technologies. NIST could also identify metrics to measure and monitor ethical AI, setting the pace for common standards.
Every company should start by investing in legal resources, data sources and data science; analysing industry- and geography-specific regulatory requirements; conducting risk assessments; understanding societal concerns; and leveraging external technology expertise. These investments will yield auditable processes for accountability, documentation of why and how each dataset is used, and an inventory of decision-making models. These mechanisms may not always prevent ethical failures, but they will let companies quickly identify failures and take corrective action.
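One lightweight way to make such an inventory concrete (every name, field and log entry below is a hypothetical illustration, not a prescribed schema) is a record per model that documents why and how its dataset is used, plus an append-only audit log so failures can be traced and corrected:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """One entry in a decision-model inventory: documents the dataset,
    its purpose and an accountable owner for each deployed model."""
    model_name: str
    dataset: str
    purpose: str               # why and how the dataset is used
    owner: str                 # who answers for the model's decisions
    audit_log: list = field(default_factory=list)

    def log(self, event: str) -> None:
        # Append-only, timestamped entries keep the process auditable.
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), event)
        )

record = ModelRecord(
    model_name="credit-scoring-v2",
    dataset="loans_2020.csv",
    purpose="estimate default risk for consumer loans",
    owner="risk-analytics team",
)
record.log("risk assessment completed")
record.log("bias review: regional skew detected, retraining scheduled")
```

In practice such records would live in a shared registry rather than in code, but even this sketch captures the article's point: when something goes wrong, the inventory tells you which model, which dataset and which owner to go to.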