
National AI policy lacks consequences for transgressors

Sibahle Malinga
By Sibahle Malinga, ITWeb senior news journalist.
Johannesburg, 24 Mar 2025
In cases where AI laws are too stringent, this often has a negative impact on innovation, say experts.

While South Africa’s national artificial intelligence (AI) policy is seen as a step in the right direction, industry experts have raised concerns around its lack of harsh consequences for organisations that transgress the ethical practices of AI.

This emerged during a panel discussion held at the recent ITWeb BI Summit 2025, where panellists discussed the imminent AI law under the topic: “Understanding the latest developments in AI legislation and governance”.

The discussion was chaired by AI and automation consultant Johan Steyn, with the panel featuring Dr Rejoice Malisa-van der Walt, CEO and co-founder of AI Nexus Research, Training & Consultancy; and Bruce Bassett, professor and chief AI advisor at the University of the Witwatersrand.

The panellists agreed that SA’s AI policy is premised on good objectives – to balance innovation with ethical responsibility, ensuring AI technologies contribute positively to the country’s socio-economic development.

However, they warned that the lack of clarity on consequences and accountability for organisations that fail to comply with its guiding ethical principles could have dire ramifications for local firms that are slack in adopting legally binding instruments to safeguard users of the emerging technology.

Bassett explained: “SA's AI policy framework very much follows the European Union (EU) AI Act, but without the teeth. The EU legislation has clear warnings of what happens if people don't comply, and sanctions apply for companies.

“In the South African proposal, this is just a start, I would say – there is no teeth to it. Although it's got a visionary statement about the goals for equity and the things we would all agree with – it's more of a guideline, rather than providing clear penalties for those that don't follow it. But it’s early days.”

The policy framework will be the foundation for creating AI regulations and potentially an AI Act in SA.

Following the release of the draft national AI plan document in April 2024, the Department of Communications and Digital Technologies published the national policy framework for AI in August, and requested feedback from the ICT industry and other stakeholders.

Steyn highlighted that regulation only works if there are clear consequences for not following it.

Providing an example of the lack of harsh consequences for non-compliance with current laws, he pointed out that the Protection of Personal Information Act (POPIA) is touted as a world-class piece of legislation that is on par with similar laws internationally; however, SA’s Information Regulator has been lenient on transgressors of the law.

“I don't even think we are getting POPIA, our privacy legislation, right in terms of accountability. It’s a world-class legislation, but I have not seen anyone go to prison for not following it. There are probably 10 000 breaches we don't know about and already they want to add AI on top of that, so how do we enforce it? I have read that the rate of innovation in Europe is slowing down due to the legislation. I don't know if that is correct or not.”

According to Steyn, by July, SA should have made some progress in paving the way for the introduction of the national AI policy.

“In the meantime, businesses need to get ready for it, even if we don't have a framework for the next few years. This is not just an exercise of ticking the box; corporates need to be encouraged to use AI in a responsible manner.”

From left: Dr Rejoice Malisa-van der Walt, CEO and co-founder of AI Nexus Research, Training & Consultancy; AI and automation consultant Johan Steyn; and Bruce Bassett, professor and chief AI advisor at the University of the Witwatersrand.

The EU AI Act, the first legal framework on AI, entered into force on 1 August 2024, with some provisions applying later, such as prohibitions on certain AI systems starting on 2 February 2025. Full application of the Act is expected by 2 August 2026.

The panellists highlighted several ethical, legal and economic concerns, relating primarily to the risks facing human rights and fundamental freedoms, as pointed out in the EU AI Act. For instance, AI poses risks to the right to personal data protection and privacy, as well as a risk of discrimination when algorithms are used for purposes such as profiling people, or resolving situations in criminal justice.

Consequences for non-compliance with the EU AI Act vary based on the nature of the transgression. Non-compliance with banned AI practices is subject to fines up to €35 million, or up to 7% of the company’s total global annual turnover, whichever is higher, according to the Act.

Non-compliance with other provisions of the Act is subject to fines of up to €15 million, or up to 3% of total global annual turnover, whichever is higher.

Malisa-van der Walt highlighted that the EU AI Act sets out a clear set of risk-based rules for AI developers and users, tied to specific uses of AI, making it easier to assess risk and regulate accordingly.

“The EU has categorised AI into four categories: unacceptable risk – where all AI systems considered a clear threat to the safety and rights of people are banned. Then there is the high risk, limited risk and transparency risk/acceptable risk categories. The EU’s AI office is constantly studying various ways to advance their laws and ensure safety,” she stated.

Bassett warned of future capabilities of large language models, such as ChatGPT, and their dire consequences if left ungoverned.

“One of the biggest risks that are not often spoken about is the fact that one day, ChatGPT 5 and ChatGPT 6 will be released. At some point, AI will have an IQ of 170, or even 200. And there are possibilities that these may one day become national security risks.

“So, it’s important for anyone in this business to plan for a possibility that someone like [US president] Donald Trump may want access to these systems switched off, as their risk progresses,” he commented.

Innovation and investment

While rigorous laws are necessary, Malisa-van der Walt concurred with Steyn that in cases where the laws are too stringent, this often has a negative impact on innovation.

“The EU's AI legislation is very stringent and this can stifle innovation, but with the EU scenario, there are many other dynamics that come into play. They need to look at what the hindrances are that stop this legislation from being as competitive as regions such as China and the US. One of the things that I look at is their approach to investment. You find that in Europe there isn't much investment going into the AI space, and also, they don't have a roadmap to encourage talent.

“Whereas if you look at the US, technology professionals receive a lot of support, as stipulated in the National AI Initiative Act. If you go bankrupt, you cannot continue doing business in the EU, but in the US, you can still continue to pitch for funding in Silicon Valley and you can raise $100 million,” noted Malisa-van der Walt.

Steyn added that once finalised, SA’s AI policy will likely follow in Europe’s footsteps.

“The EU AI Act is safe and sensible. If we want to be aggressive, then the US approach is the one to follow, since it also drives innovation. I think for larger organisations, following the European Act is probably a very safe way. It will take a long time for things to actually happen in SA, but I can imagine we won’t end up too far away from the EU approach.”
