We’ve heard the hype. Following OpenAI’s launch of the first readily accessible, easy-to-use generative AI (GenAI) software – ChatGPT – two years ago, GenAI has been characterised as everything from a silver bullet that will transform businesses in ways never dreamed of, to an insidious threat that will hasten the replacement of humans by machines.
Neither extreme is wholly accurate, but there is no question that GenAI – which is not to be confused with ‘traditional AI’ – has captured the imagination of the market in ways few other “new” technologies have, in as short a time. ChatGPT garnered 1 million users within five days of its launch.
Traditional AI has long been used in business to analyse data and provide insights, and it dominates areas like fraud detection. In contrast, GenAI is trained on large datasets to generate new data such as human-like text, code and images.
A McKinsey survey released in May 2024 revealed a surge in overall AI adoption, rising to 72% from the roughly 50% at which it had hovered over the previous six years. McKinsey attributes this to the rapid uptake of GenAI: nearly two-thirds (65%) of respondents in the 2024 survey reported that their organisations are regularly using GenAI in at least one business function. That is “nearly double” the percentage of GenAI users identified in the company’s previous survey just 10 months before. The largest increases, where reported adoption more than doubled, were in marketing and sales.
However, in an exclusive webinar for C-suite executives and management, Edward Müller, sales and solutions architect at Mint, warned that GenAI adopters should bear in mind the technology’s limitations and the need for ethical considerations.
“GenAI models can be biased based on the data on which they are trained. As a result, they may produce biased responses. In addition, the models are also known to hallucinate (create false information). Companies using these models must therefore consider ethical use and validate outputs to avoid errors, especially in sensitive applications,” he said.
Nevertheless, Müller firmly believes that GenAI has the potential to streamline operations, boost content creation and technical development, enhance customer experience and unlock new revenue streams.
“In the area of customer interaction and support, GenAI is improving chatbots, making them more human-like in understanding context,” he said, noting that it can also automate tasks like routing e-mails and summarising conversations and interactions, something traditional machine learning models find more challenging.
GenAI is also revolutionising content creation and communication not only because of its ability to quickly create marketing content, but also because it can translate languages, summarise text and even provide educational support by explaining complex concepts.
In business operations, it can boost productivity and aid in decision-making, personalise recommendations, automate HR queries, deliver financial report analyses and, in a hospital environment, for example, summarise patient files.
In the technical and development arena, tools like GitHub Copilot assist developers by generating code, explaining existing code and creating test cases, while related GenAI tools can review legal documents.
However, Müller said that if GenAI is to deliver on its potential, critical steps must be taken to develop and execute a GenAI roadmap that aligns with business goals and future-proofs operations.
“In addition, a successful GenAI deployment requires more than just technical skills. It also needs to address compliance, risk management, legal considerations and ethical implications,” he added.
Müller proposed a 10-step framework for rolling out a successful GenAI strategy.
- Assessment of current capabilities: Begin by assessing the quality and availability of your data, as well as your team’s skills in managing and utilising GenAI. This includes technical expertise and legal and ethical knowledge.
- Identify partners and available technologies: Researching available technologies and identifying strategic partners with industry expertise and AI experience is crucial. Also weigh off-the-shelf solutions against customised ones.
- Building the AI team: Identify the key roles needed, such as data engineers, cloud architects and compliance officers. Existing team members can be upskilled to fill many roles while vendor partners can also help fill gaps in AI skills and domain-specific knowledge.
- Process and policy: Ensure your security, compliance and infrastructure are aligned for AI deployment. This includes managing data access controls and adhering to privacy regulations such as GDPR and POPIA.
- Ethics and legal concerns: Develop processes to mitigate bias in AI outputs. Validate and test AI for fairness and accuracy. If necessary, upskill compliance and legal teams on AI.
- Risk management and stakeholder engagement: Implement risk assessments and keep stakeholders informed to ensure transparency and buy-in for the AI solution. Train staff where necessary.
- North star and objectives: Align the AI strategy with business goals, ensuring that the AI solutions are ethical, transparent, scalable and provide clear value to the business.
- Partnering and technology: Evaluate existing technologies and potential partners to streamline development and implementation. Include infrastructure specialists, product owners, data scientists and engineers.
- Iterative development: Start with small, focused use cases rather than a large-scale solution, gather feedback from users and iterate to improve.
- Measuring success: Define metrics such as user engagement or business impact, to assess the success of AI solutions and scale them across departments once pilots are proven effective.
“Following these key steps will help organisations to realise the benefits of GenAI and ensure the technology achieves its full potential,” Müller concluded.