A recent survey conducted by ITWeb in partnership with KnowBe4 Africa examined the extent of generative AI (GenAI) adoption by SA organisations.
The objective was to understand businesses’ concerns around the technology and how leaders mitigate risks.
A total of 156 valid responses were captured, with 73% of respondents at executive or middle management level. Some 42% of the survey respondents came from the IT sector and 12% from financial services, while the remaining 46% came from a wide range of other major industry sectors.
“The majority of South African organisations participating in this survey are embracing emerging technologies and embedding GenAI into their day-to-day operations,” says Anna Collard, SVP of content strategy and evangelist at KnowBe4 Africa.
She continues: “However, at the same time, not enough is being done to regulate its use or to educate staff. Security and privacy risks, along with ethical concerns such as bias, inaccuracies and the impact on critical thinking, are challenges that need to be addressed by a combination of regulation, guidelines and awareness training.”
Almost all (92%) of the survey respondents said they were familiar with ChatGPT and/or other GenAI technologies, and almost three quarters (72%) currently allow employees access to them. However, of those, 39% don’t have any restrictions in place, 14% have defined processes and policies, 13% have additional technical control measures, and 6% say access is part of one of the products or processes they use.
The top five current and future use cases for ChatGPT or other GenAI in respondents’ organisations are:
- Content generation (64%)
- Research purposes (62%)
- Customer support (56%)
- Creative tasks (49%)
- Writing code/scripts (43%).
Data privacy concerns around the use of ChatGPT or GenAI are addressed through employee training on responsible AI usage (36%), policy (29%) and encryption (28%). Just over a quarter (27%) of respondents say this is not addressed or regulated at all.
Half (50%) of respondents say employees are provided with guidelines on the appropriate use and limitations of ChatGPT and GenAI. Of those providing guidelines, 67% offer basic guidelines only, while 37% provide detailed ones.
Over a third (36%) of survey respondents don’t address or regulate the potential misuse of ChatGPT or GenAI within their organisation. For those that do, the top three ways in which misuse is mitigated are: employee training on responsible AI usage (33%), policies (32%) and AI ethics and usage policies (25%).
The majority of respondents (74%) say there haven’t been any incidents of misuse or security breaches related to ChatGPT or generative AI within their organisation. Just over a fifth (22%) of respondents aren’t sure, and 4% say there have been incidents.
Some 30% of respondents provide basic training to employees on identifying and countering AI-generated misinformation or deepfakes, 13% provide comprehensive training, and 58% say no specific training is provided in this area.
The majority (79%) of respondents are concerned about adversaries using generative AI like ChatGPT, or uncontrolled versions of it, in their attacks (e.g., more convincing social engineering, deepfakes).
Respondents’ main concerns about generative AI in general are:
- Security threats becoming more sophisticated (60%)
- Ethics and bias issues (49%)
- Impact on critical thinking (49%)
- Not being able to keep up with the pace of innovation and competitors (36%)
- Job replacements and unemployment (29%).
The AI services currently in use in respondents’ organisations are ChatGPT (68%), Copilot (27%) and Otter.ai (21%).
Just over a third (36%) of respondents’ organisations have integrated ChatGPT or other generative AI into their business processes, systems or applications, and more than half (53%) say they’re planning to use ChatGPT or other generative AI systems in the future.
Collard concludes by underlining the importance of awareness, education and training as measures to ensure the responsible use of AI. “This is followed by a need for more regulations and policy, as well as technology to help solve the challenges.”