Building transparent ethics into artificial intelligence

AI models can be complex and have thousands of variables − each of which may introduce bias in imperceptible ways − so checks and balances are vital.
By Tian Horn, Hyland account manager, Southern Africa.
Johannesburg, 01 Dec 2022

The 1982 classic science-fiction film, Blade Runner, depicts a dystopian version of Los Angeles in 2019, in which synthetic humans are bio-engineered by a powerful corporation to work in space colonies. The film predicts what a future state and age of machines might hold for us and delivers warnings of the potentially negative effects of artificial intelligence (AI).

So, here we are in 2022 and AI is mainstream − ubiquitous, in fact, with tech giants confirming in resounding fashion that it has entered everyday business. That shift calls for regulation and standards so that organisations build AI into their operations with ethics and transparency.

A PwC survey found that more than half of respondents accelerated their AI efforts due to COVID-19, with nearly 90% indicating they view AI as a mainstream technology. Similarly, an IDC report forecasts that spending on AI systems will grow by 140% by 2025 − on top of the substantial growth the technology has already experienced.

Essentially, there can be no debate: AI is mainstream. But with this development there is, as they say in the classics, good news and bad news. Let's start with the bad news: AI introduces many potential risks to enterprises, from security issues to bias.


Gartner reports that public concerns about AI dangers are warranted. It cites widespread protests against racism in the US, following which tech giants Microsoft, Amazon and IBM publicly announced they would no longer allow police departments access to their facial recognition technology.

Their concern was that AI can be prone to bias, particularly in recognising people from underrepresented groups. Gartner emphasises that such ethical concerns don't end at facial recognition software, saying organisations developing or using AI solutions need to be proactive in ensuring AI dangers don't jeopardise their brand, draw regulatory action, lead to boycotts, or destroy business value.

The good news is that, as organisations develop comfort with and embrace AI, it can lead to new products and services, while freeing employees to focus on more strategic projects and ridding their daily schedules of mundane, repetitive tasks.

Companies can build their AI with transparent ethics, eliminating many of those risks that have given so many pause when implementing AI within their organisations. Here's how:

Ensure transparency and proper privacy measures are in place: Data security is a topic that's front and centre of business strategies right now − from the C-suite and cyber security teams to marketing and sales teams.

However, when capturing data that informs AI models, security and privacy should be at the forefront too. But how? Start by designing and deploying solutions that have encryption and access control features, then move on to enabling consumers to choose how their personal data is collected, stored and used, through settings that are clear and accessible.
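To make the first step concrete, here is a minimal Python sketch of encryption at rest combined with a simple role-based access check, using the third-party cryptography package. The role names and record handling are illustrative assumptions, not a prescribed design:

    from cryptography.fernet import Fernet

    ALLOWED_ROLES = {"data_steward", "ml_engineer"}  # hypothetical roles

    key = Fernet.generate_key()  # in practice, load from a key-management service
    cipher = Fernet(key)

    def store_record(personal_data: str) -> bytes:
        """Encrypt personal data before it is persisted for model training."""
        return cipher.encrypt(personal_data.encode("utf-8"))

    def read_record(token: bytes, role: str) -> str:
        """Decrypt only for roles explicitly authorised to handle personal data."""
        if role not in ALLOWED_ROLES:
            raise PermissionError(f"role '{role}' may not access personal data")
        return cipher.decrypt(token).decode("utf-8")

    token = store_record("jane@example.com")
    print(read_record(token, "data_steward"))  # decrypts; an unlisted role raises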

For transparency, companies can adopt and communicate policies that are clear about who is training and accessing models, how data will be used and for what purpose.
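In code, such a policy can be backed by an audit trail recording who trained or queried a model and for what stated purpose. A minimal sketch, assuming a simple JSON-lines log file (the file name and field names are illustrative):

    import json, time

    AUDIT_LOG = "model_audit.jsonl"  # hypothetical log location

    def audit(user: str, model: str, action: str, purpose: str) -> None:
        """Append one entry per model interaction: who, which model, why."""
        entry = {
            "ts": time.time(),
            "user": user,
            "model": model,
            "action": action,    # e.g. "train" or "predict"
            "purpose": purpose,  # the documented business purpose for the data use
        }
        with open(AUDIT_LOG, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    audit("t.horn", "churn-model-v2", "train", "customer retention analysis")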

It's important to avoid data bias: AI models often rely on vast amounts of data, and humans play a critical role in training those models − setting parameters, filtering and curating the data.

No matter how neutral people attempt to be, preferences can and do come into play. It's therefore important to assess parameters to ensure the technologists building the AI algorithm are not introducing bias into the process.

Those humans create AI models, feed them, train them and ultimately interpret the ensuing data − actions that may unwittingly be influenced by their beliefs, backgrounds or other factors.
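Even a seemingly neutral data-preparation choice can tilt a dataset. As an illustrative sketch (the data here is made up), dropping incomplete rows − a routine cleaning step − can quietly shift the demographic mix of a training set:

    import pandas as pd

    df = pd.DataFrame({
        "group":  ["A", "A", "A", "B", "B", "B"],
        "income": [55000, 61000, 58000, None, None, 47000],  # group B has more gaps
    })

    print(df["group"].value_counts(normalize=True))           # 50/50 before cleaning
    print(df.dropna()["group"].value_counts(normalize=True))  # 75/25 after dropna()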

There are several strategies that can be implemented to avoid bias in models. For supervised models − where humans have a strong influence on the data − ensure the stakeholders preparing the dataset form a diverse, representative group and have received bias-awareness training.

It's also important to use the right training dataset. Machine learning is only as good as its training data, which should replicate real-world scenarios with proper demographic distributions and be free of human predispositions. Models should also be monitored to ensure they reflect real-world performance, so they can be tweaked if bias is detected.
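One way to operationalise the demographic check is to compare the training set's group proportions against a reference, real-world distribution and flag large gaps. A minimal sketch − the reference split, group labels and five-point tolerance are all assumptions for illustration:

    from collections import Counter

    REFERENCE = {"A": 0.51, "B": 0.49}  # assumed real-world demographic split
    TOLERANCE = 0.05

    def check_representation(labels):
        """Flag groups whose share of the training data strays from the reference."""
        counts = Counter(labels)
        total = sum(counts.values())
        warnings = []
        for group, expected in REFERENCE.items():
            observed = counts.get(group, 0) / total
            if abs(observed - expected) > TOLERANCE:
                warnings.append(f"group {group}: {observed:.0%} observed vs {expected:.0%} expected")
        return warnings

    print(check_representation(["A"] * 80 + ["B"] * 20))  # flags both groups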

Remember: models can be complex and have hundreds or thousands of variables − each of which may introduce bias in imperceptible ways − so checks and balances are vital when designing models.
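One widely used check of this kind is the disparate-impact ratio, which compares favourable-outcome rates between groups; the common four-fifths rule of thumb flags ratios below 0.8. A minimal sketch with made-up outcomes:

    def disparate_impact(outcomes_a, outcomes_b):
        """Ratio of favourable-outcome rates (1 = favourable) between two groups."""
        rate_a = sum(outcomes_a) / len(outcomes_a)
        rate_b = sum(outcomes_b) / len(outcomes_b)
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    ratio = disparate_impact([1, 1, 0, 1, 1], [1, 0, 0, 0, 1])  # rates 0.8 vs 0.4
    if ratio < 0.8:
        print(f"possible bias: disparate impact ratio {ratio:.2f}")  # prints 0.50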

Increasingly, AI use cases are dealing with more than just marketing intelligence, driving the need for scrutiny of the AI model.

For example, it's great that Facebook and other social media platforms can use AI-based learning to target you with retail advertising. However, healthcare providers and government officials are developing machine learning models that affect daily life, sometimes literally in life-and-death situations.

To fully achieve the potential of AI in healthcare, four major ethical issues must be addressed: 

  • Informed consent to use data − as dictated by POPIA in South Africa (see the sketch after this list).
  • Safety and transparency.
  • Algorithmic fairness and biases.
  • Data privacy.
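To ground the first item, here is a minimal sketch of a consent gate that keeps records without documented consent out of any training set. The record fields are illustrative, and actual POPIA compliance of course involves far more than this one check:

    records = [
        {"patient_id": 1, "consent_given": True,  "data": "..."},
        {"patient_id": 2, "consent_given": False, "data": "..."},
    ]

    # Only records whose subjects gave informed consent may feed the model.
    training_set = [r for r in records if r["consent_given"]]
    print(len(training_set), "of", len(records), "records usable for training")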

Gartner advises that an external AI ethics board can help embed representation, transparency and accountability into AI development decisions.

My next article will look at necessary checks and balances on data, and expand on what can be done to ensure ethical AI practices are alive and well in your organisation.
