
When AI becomes harmful to humans

By Sibahle Malinga, ITWeb senior news journalist.
Johannesburg, 15 Sep 2022

While the development of artificial intelligence (AI) is on track to propel societies and industries to unprecedented levels of progress, its unintended negative consequences have caused great harm to some sectors of society.

This was the word from Elizabeth M Adams, CEO of EMA Advisory Services, giving a presentation on human-centred design and AI bias at the second Global AI Summit in Riyadh, Saudi Arabia.

Adams is an AI ethics and organisational culture advisor who researches AI bias. She was awarded the inaugural 2020 Race & Technology Practitioner Fellowship by Stanford University's Center for Comparative Studies in Race and Ethnicity.

Discussing the social layers and negative implications of AI, Adams highlighted the importance of AI developers engaging with people who have been harmed by AI bias, and involving them as strategic partners in the development of responsible AI, to build societal trust as the technology advances.

Modern advances in AI have made it possible for businesses to develop systems in an expanding range of fields, creating new products and services and driving efficiencies across operations.

However, when the physical world collides with the virtual world, there can be dire consequences, Adams warned.

“The adoption of artificial intelligence is not entirely beneficial to all sectors of society, and some are fearful of what AI has already become. For those persons who have been negatively impacted, AI biases exist in multiple ways – from inaccurate healthcare diagnostic outcomes, to hiring algorithms gone wrong and inaccurate credit scores.

“AI biases interrupt the lives of some members of society, and in some cases there is no accountability or recourse,” she explained.

AI systems often display bias or prejudice as a result of the way they have been programmed, or of the data sources used in their training. For example, a machine learning tool could be trained on a dataset that misrepresents a particular gender or ethnic group.
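To make that mechanism concrete, here is a minimal, hypothetical Python sketch (not drawn from Adams's talk) using synthetic data: a classifier is trained on data dominated by one group, and it performs measurably worse on the under-represented group, even though nothing in the code is deliberately discriminatory.

    # Hypothetical illustration: a skewed training set yields group-dependent
    # error rates. Group B is under-represented, so the model mostly learns
    # group A's patterns.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Two features; the true label depends on their sum plus a
        # group-specific shift the model never observes directly.
        X = rng.normal(size=(n, 2))
        y = (X.sum(axis=1) + shift > 0).astype(int)
        return X, y

    # Group A dominates the training data (95%); group B is misrepresented.
    Xa, ya = make_group(950, shift=0.0)
    Xb, yb = make_group(50, shift=1.0)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                     np.concatenate([ya, yb]))

    # Evaluate on balanced held-out samples from each group.
    for name, shift in [("A", 0.0), ("B", 1.0)]:
        Xt, yt = make_group(2000, shift)
        print(f"group {name}: accuracy {model.score(Xt, yt):.2%}")

Running this prints a noticeably lower accuracy for group B, because the learned decision boundary reflects group A's distribution. The same dynamic, at scale, is what produces the harms Adams describes.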

Fair and just society

When the virtual and physical worlds collide in this way, all the time, money and resources organisations spend developing AI solutions become wasteful expenditure, she noted.

Referencing an example of AI bias, Adams showed a slide displaying two hands holding an infrared thermometer: one hand of a light skin tone, the other of a dark skin tone.

“When analysing the light-skinned hand, the AI model predicts a 68% chance that the image contains a piece of technology [a thermometer].

“And for the image with a darker-skinned hand, the AI model predicts an 88% probability that the image contains a gun. When we lighten the dark hand, the prediction changes. So algorithmic errors in predictive labelling can be harmful to individuals and large groups of citizens who are unaware they are subjects of AI bias.”

Research has shown that representation matters in AI development, and those who are part of groups that have been negatively impacted by AI biases are rarely represented in the design and development of AI, she continued.

To move toward a fairer and more just society with AI, the industry must start with the companies that influence the design, development and use of AI, she asserted.

“By actively involving impacted individuals as stakeholders in AI development, emerging systems will be more responsible, and will gain tangible and symbolic support from those most affected by them. These individuals can help developers foster tech advancements that benefit everyone, and not only a particular group of people.”
