The European Union’s (EU’s) new Artificial Intelligence Act (AI Act) is poised to have a wide-ranging impact on South African firms that offer AI products, services, or systems that are used within the EU.
This was the word from David Luyt, associate at Michalsons, speaking during a webinar organised by the privacy and data protection-focused law firm, titled: “The EU AI Act is here – what now?”
According to Luyt, the EU AI Act will set a new standard for AI regulation and enforcement worldwide, transforming AI governance for companies using AI in products and/or services targeting EU residents – and SA is no exception.
This landmark legislation – the first to specifically regulate AI technologies – not only introduces stringent compliance requirements, but also opens vast opportunities for innovation across various sectors globally, he noted.
It was published in the Official Journal of the European Union on 12 July and comes into effect tomorrow, 1 August, with implementation following a phased approach.
Luyt stated the AI Act establishes a comprehensive legislative framework to ensure the safe and ethical use of AI technology. By setting standards for transparency, accountability and oversight, the Act protects EU citizens’ fundamental rights, while fostering an environment conducive to innovation in AI.
“Some businesses may assume that because they are based outside of the EU, it will not affect them, but the reality is that there is a territorial scope that includes anyone developing, deploying or using AI systems within the EU, regardless of their location,” explained Luyt.
“In the same way that the General Data Protection Regulation (GDPR) applies to the processing of EU residents’ personal data, no matter where in the world it takes place, the EU AI Act applies beyond the borders of the EU.
“For example, a South Africa-based tech company might be using AI systems in its EU customer services, and it could be something as simple as a chatbot, so the Act has a far-reaching scope.”
This framework positions the EU at the forefront of global AI regulation, setting a benchmark for other regions and likely influencing policies beyond the EU, he pointed out.
Compliance deadlines begin with the prohibition of certain AI practices from 2 February 2025, and extend to specific requirements for high-risk AI systems by 2 August 2027.
South African companies that fall under the Act can prepare by investing in compliance and risk management training, and by ensuring employees involved in the deployment and management of AI systems are well-versed in the compliance requirements, he added.
Organisations are advised to appoint a compliance team or officer who will play a crucial role in implementing the Act’s policies and ensuring AI deployments adhere to legal standards.
“The person or team responsible for AI compliance must have a multi-disciplinary background. In our view, they could come from IT or information security, legal, risk and compliance, or privacy and data protection.”
SA has witnessed significant growth in the adoption of AI, including generative AI; however, there are rising concerns among South Africans about its ethical use and privacy risks, reveals PwC’s Voice of the Consumer Survey 2024.
According to Michalsons, the new Act represents a dynamic model designed to adapt alongside the evolution of AI, with the aim to foster the development of trustworthy AI.
Although the EU has received much criticism for what some label “legislative colonialism” in extending GDPR rules across the world, the AI Act is expected to shape the future of AI regulation and governance across many regions, noted Luyt.
On its website, the European Parliament writes that the Act’s priority is to make sure AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.
“AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes,” it says.
The Act is based on six principles for responsible processing, including accountability, transparency, and privacy and data governance. It introduces rules for different risk levels, protects fundamental human rights and supports business innovation.
“The EU AI Act is the latest in a long line of principle-based legislation, and this is a trend we are seeing more and more, particularly when we are dealing with emerging technologies, where the goal posts are moving all the time,” said Luyt.
“The pressure is on businesses to show how they comply with these principles, and we see this in the Act’s obligation for businesses to conduct a robust risk assessment exercise. Businesses are going to have to show they did their homework when a regulator comes and asks what steps they took to comply with these principles.”
The reality is that in future, every organisation is going to have more AI touchpoints, and there will be a continuing need to introduce new rules to regulate this emerging tech, he asserted.
Much like with GDPR compliance requirements, businesses are not always going to be 100% compliant with every rule, but they should always be striving towards full compliance, Luyt said.
“We are far away from an era where the drafting of an Act was wordy and tailored to deal with a specific use case. The use case here is changing the whole time, so with that in mind, even the definition of AI in the Act is broad and designed to grow, depending on what new technologies we develop in future.”
The Act classifies AI systems into four risk categories: unacceptable risk, high risk, limited risk and minimal risk. Each category carries its own regulations and requirements, and organisations are advised to ensure their AI systems comply with the rules for the applicable risk level.
“Financial entities and organisations handling significant volumes of data face increased scrutiny under the Act. They must establish robust data governance frameworks to ensure accuracy and security in their AI applications.
“In creative sectors such as digital arts and publishing, the Act raises important questions about copyright and intellectual property rights for AI-generated content. Specific usage rights agreements may be necessary, impacting the management of royalties and digital content distribution.”