The South African Artificial Intelligence Association (SAAIA) has made a submission to the Information Regulator of South Africa regarding LinkedIn's use of user data to train AI models without acquiring users' prior consent.
This comes after LinkedIn, the world's biggest business- and employment-focused social media platform, recently updated its data usage policy, igniting discussions about the use of user data to train AI models.
This update raised concerns regarding user privacy and data security, particularly in relation to personal information.
Launched in June last year, SAAIA is a body that focuses on promoting the advancement of responsible AI in the country.
It also aims to unite practitioners across commercial, government, academic, start-up and non-governmental organisation sectors.
The Information Regulator, which enforces South Africa’s data protection law − the Protection of Personal Information Act (POPIA) − confirmed to ITWeb via e-mail that it has received the SAAIA complaint about the Microsoft-controlled LinkedIn.
In a statement, SAAIA says it seeks to encourage stakeholders to adopt responsible AI for the commercial and societal benefit of South African citizens, with a primary focus on economic growth, trade, investment, equality and inclusivity.
Dr Nick Bradshaw, founder of SAAIA, says: “The race to build new AI products and services is a global one but its impacts can also be local. We have been monitoring the breakneck speed of AI innovation, as vendors and investors are spending huge sums of money to bring these new offerings to market, and while doing so, we are assessing if this is being done in a responsible manner.
“To this end, we feel it’s important that individuals and nation states must not be disadvantaged in both the short- and long-term, especially when it comes to how our personal data is being used to train the next generation of AI-powered platforms and applications.”
Broader issues at stake
SAAIA advisory board member Nathan-Ross Adams, who heads up regulatory affairs and was principally involved in drafting this submission, states: “Our letter of complaint to the Information Regulator is focused on LinkedIn’s use of South African users’ personal information to train its generative artificial intelligence models in that it does not meet the conditions for lawful processing under Chapter 3 of POPIA; their conduct likely constitutes interference with personal information, as outlined in section 73 of POPIA; and given the significant public interest, requires investigation from the Information Regulator.”
Adams adds: “This is more than just a legal matter; it’s about protecting the rights of individuals in an age where data is currency. SAAIA’s mission is to ensure that as AI grows more powerful, it also grows more accountable.”
Says Bradshaw: “The SAAIA’s mission is to engage society in this debate, be they citizens or governments, AI novices or AI experts.
“No one should be left behind in the race to embrace AI. It is of vital importance that the opportunities presented by artificial intelligence should have at their heart the principles of responsible AI and don’t just benefit a select few. We will await the feedback from the Information Regulator of South Africa on this important matter.”
Nomzamo Zondi, spokesperson for the Information Regulator, says the complaint is currently being processed and considered by the watchdog.
“Once the complaint has been assessed, a decision will be taken and necessary communication will be made with the complainant and the responsible party. Therefore, we are unable to make a determination while we are conducting a pre-investigation,” she says.
User disagreement?
Last month, LinkedIn updated its user agreement and clarified some practices covered by its privacy policy.
“In our user agreement, updates include more details on content recommendation and content moderation practices, new provisions relating to the generative AI features we offer, and licence updates designed to help creators expand their brand beyond LinkedIn,” says Blake Lawit, senior vice-president and general counsel at LinkedIn, in a blog post.
According to the company, these and other user agreement updates will go into effect on 20 November.
“In our privacy policy, we have added language to clarify how we use the information you share with us to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation and through security and safety measures,” Lawit adds.
“When it comes to using members’ data for generative AI training, we offer an opt-out setting. At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland and the United Kingdom, and will not provide the setting to members in those regions until further notice.
“As technology and our business evolves, and the world of work changes, we remain committed to providing clarity about our practices and keeping you in control of the information you entrust with us.”