As we advance into 2024, all eyes are on generative artificial intelligence (AI) as the key to revolutionising how businesses operate and create value.
While this transformative technology represents a leap forward for AI, with immense potential for problem-solving and innovation, is it the silver bullet that it is often made out to be?
In short: no, not yet. But it could drive significant change and opportunities in the year ahead.
Where GenAI fits in
Generative AI doesn’t replace predictive AI – it complements it, enhancing and elevating traditional AI approaches.
The term 'traditional' AI in itself reflects the rapid growth and dynamic nature of this field. Recent advancements focus on simplifying complex processes and unifying workflows, making AI more accessible and effective for businesses.
Incorporating state-of-the-art elements like large language models (LLMs), vector databases and advanced embedding models, these technologies empower businesses to innovate across diverse environments, driving tangible value.
To fully harness the potential of both predictive and generative AI, a combined approach is essential. This integration facilitates a cohesive and efficient AI-driven strategy, crucial for businesses seeking comprehensive solutions. While each AI type has its strengths, generative AI cannot do everything, despite what some may claim.
Where GenAI could add business value
Below, I explore several key applications of generative AI that are likely to add value in the coming year:
Knowledge retrieval and summarisation
Businesses often have vast stores of knowledge that are relevant and personal to them, whether it relates to customers, products or services, business dealings, or even internal rules and processes. This is a perfect gap for an LLM to fill.
Businesses could start training LLMs on their internal knowledge base, allowing employees and/or customers to query it conversationally, akin to a chatbot. Rather than spending hours searching through scores of documents for an answer, one could simply type a question and have the LLM generate what it determines to be the most likely response.
This could be integrated with an employee-facing application, such as Skype, Teams or Slack, where a user merely types a query as if sending a message to any other user, negating the need to jump between platforms. It could also be embedded into a customer-facing interface to maximise use and, most importantly, add value faster.
An application like this is well-suited to businesses that have vast amounts of data to learn from, and where employees and customers may not know every aspect of the business.
For example, a law firm can train an LLM with previous case law, as well as current legislation and context – this opens up a world of opportunities for employees, as they can query their custom chatbot and have a wealth of information, references and resources to point them in the right direction in a matter of minutes.
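In practice, this pattern is often built as retrieval plus generation: the most relevant internal documents are found first and then handed to the LLM as context. The sketch below is a deliberately minimal illustration, assuming a toy keyword-overlap scorer in place of real embeddings and a vector database, and printing the assembled prompt rather than calling an actual LLM; all document text and helper names are illustrative.

```python
# Minimal retrieval-augmented Q&A sketch (stdlib only). The scorer and
# knowledge base are illustrative stand-ins: a real system would use
# embedding models, a vector database and an LLM API.

def score(query: str, doc: str) -> int:
    """Count query words that appear in the document (toy relevance)."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in doc.lower())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the context-plus-question prompt an LLM would receive."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}")

knowledge_base = [
    "Leave requests must be submitted via the HR portal five days in advance.",
    "Expense claims require a receipt and manager approval.",
    "The office VPN is mandatory when working remotely.",
]

prompt = build_prompt("How do I submit a leave request?", knowledge_base)
print(prompt)
```

The same prompt-assembly step is what a Teams or Slack integration would run behind the scenes before forwarding the result to the model.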
Another example is financial services providers like banks and insurers, which offer multiple products across different industries. GenAI could be used internally to help employees give the best information and service to customers, and externally to help clients better understand the company’s offerings.
Custom communication and model interpretability
With predictive AI, we end up with a prediction without always understanding, and being able to communicate, why the model predicted the value that it did. Understanding the driving forces of a prediction is sometimes as important as, if not more important than, the prediction itself.
Some automated machine learning (AutoML) tools provide prediction explanations alongside the prediction, and a generative model can be used to interpret and relay that information back to a user, be it an employee or a customer.
Querying the model’s features and their respective impacts allows non-technical users to understand a model and the data (data scientists, this will save you countless hours of back and forth with your product owners).
A good example use case for this is automated loan pre-approval, where a user fills out a form and the values they input are given to a model to predict their likelihood of default. Users may then be greeted with the result, but with little to no explanation why – this is where a generative model could be very useful. It could compose a personalised message to the user, explaining why their application was denied.
The length and tone of the communication, and the type of language used, can all be specified in the automated prompt, along with the user’s information and suggestions for improving their chances of success.
This all leads to a more personal approach to customers without dedicating employees to the task.
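The loan example above can be sketched as a prompt-building step: take the signed feature impacts a prediction explanation provides and fold them into the instructions sent to the generative model. This is a minimal sketch, assuming illustrative feature names and impact values; the function and its signature are hypothetical, not any particular tool's API.

```python
# Sketch of turning prediction explanations into a customer-facing
# prompt. The impacts mimic the per-prediction explanations some
# AutoML tools provide; names and values here are illustrative.

def explanation_prompt(decision: str, impacts: dict[str, float],
                       tone: str = "empathetic", max_words: int = 120) -> str:
    """Build an LLM prompt asking for a personalised decision message.

    impacts maps feature name -> signed contribution to default risk
    (positive values pushed the prediction towards denial).
    """
    drivers = sorted(impacts.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: {value:+.2f}" for name, value in drivers[:3]]
    return (
        f"Write a {tone} message (under {max_words} words) telling a "
        f"customer their loan application was {decision}.\n"
        f"Top factors behind the decision:\n" + "\n".join(lines) + "\n"
        "Suggest concrete steps to improve a future application."
    )

prompt = explanation_prompt(
    "denied",
    {"debt_to_income_ratio": 0.41, "missed_payments": 0.27, "tenure_years": -0.05},
)
print(prompt)
```

Tone, length and the advice to include all live in the prompt, so the same pipeline can serve different audiences without code changes.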
Document interaction
Businesses deal with millions of documents daily, and GenAI offers a faster way to interact with the information they contain.
For example, an HR department dealing with applicants’ CVs can use an LLM to identify which candidates meet requirements. Some questions that may be asked for a data scientist position could be: “Does the applicant have Python or R skills?”, “Is the applicant familiar with DataRobot?”, “Does the applicant have more than three years of experience?”, etc.
If the answer to all of those is ‘yes’, then the CV progresses. The applications of this are almost limitless, extending to all types of documents.
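The screening flow above reduces to a simple rule: a CV progresses only if every question comes back ‘yes’. The sketch below shows that logic with the LLM call stubbed out as keyword predicates so it runs offline; `screen_cv`, `checks` and the sample CV text are all illustrative, and a production system would replace each predicate with an actual LLM query over the document.

```python
# Sketch of LLM-based CV screening. The predicates are stand-ins for
# real LLM yes/no calls (e.g. "Does the applicant have Python or R
# skills?") so the example runs without any API.

def screen_cv(cv_text: str, checks: dict) -> bool:
    """A CV progresses only if every screening question answers 'yes'."""
    answers = {question: predicate(cv_text.lower())
               for question, predicate in checks.items()}
    return all(answers.values())

checks = {
    "Does the applicant have Python or R skills?":
        lambda cv: "python" in cv or " r " in cv,
    "Is the applicant familiar with DataRobot?":
        lambda cv: "datarobot" in cv,
    "Does the applicant have more than three years of experience?":
        lambda cv: any(f"{n} years" in cv for n in range(4, 40)),
}

cv = "Data scientist with 5 years of experience in Python and DataRobot."
print(screen_cv(cv, checks))  # all three answers are 'yes' -> True
```

Swapping the predicates for LLM calls changes nothing in the screening logic itself, which is what makes the pattern easy to extend to other document types.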
GenAI meets predictive AI for milestone year
There are a host of other applications for generative models. With exponential growth in generative AI’s capabilities, businesses are already exploring new frontiers in using it to drive value, and 2024 is expected to be a milestone year for GenAI in South African business.
Combining generative AI with predictive AI will extract the most value. Integrating predictive AI with generative AI for model monitoring, particularly to ensure the generative model is producing accurate and reliable outputs, involves a few key strategies:
Performance metrics and predictive monitoring: Predictive AI can be used to continuously monitor the performance of a generative model. By analysing patterns in the model's output over time, predictive AI can forecast potential declines in performance or accuracy.
Anomaly detection: Predictive AI algorithms can be trained to detect anomalies in the output of generative models. If a generative model starts producing outputs that deviate significantly from expected patterns, it could indicate a problem like model drift, data corruption, or a shift in the underlying data distribution.
Feedback loops: A feedback loop can be implemented in which the outputs of the generative model are periodically assessed and fed back into the predictive model. This allows continuous refinement of the predictive model's ability to monitor and evaluate the generative model's outputs.
Predictive maintenance: Just as predictive AI is used for predictive maintenance in machinery and equipment, it can be applied to anticipate when a generative model might need retraining or maintenance. By predicting when a model's performance is likely to degrade, interventions can be scheduled proactively.
Quality assurance: Predictive AI can be used to forecast the quality of outputs based on various input parameters or environmental conditions. This can help in setting up quality control mechanisms where outputs are checked and validated against expected standards.
Simulation of edge cases: Predictive AI can help identify potential edge cases or rare scenarios that the generative model might not handle well. These cases can then be simulated to test and improve the robustness of the generative model.
Data drift and concept drift detection: Predictive AI can monitor for data drift (changes in input data over time) and concept drift (changes in the statistical properties of the target variable), which are critical for maintaining the accuracy of generative models.
By combining the foresight and pattern recognition capabilities of predictive AI with the content generation abilities of generative AI, organisations can create a more robust, self-improving system. This integration ensures generative models remain accurate and relevant over time, adapting to new data and evolving requirements.
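The anomaly- and drift-detection strategies above share one core idea: track a numeric property of the generative model's outputs and flag when recent values stray from an established baseline. Below is a minimal control-chart-style sketch of that idea, assuming answer length as the tracked property; real monitors would track embeddings, toxicity or accuracy scores, but the flagging logic is the same, and the sample numbers are invented for illustration.

```python
# Minimal drift-monitoring sketch: flag drift when the recent mean of
# a tracked output statistic sits more than `threshold` baseline
# standard deviations away from the baseline mean.

from statistics import mean, stdev

def drifted(baseline: list[float], recent: list[float],
            threshold: float = 3.0) -> bool:
    """Return True when the recent window has drifted from baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) > threshold * sigma

# Baseline: typical answer lengths (in tokens) observed at validation.
baseline_lengths = [118, 122, 120, 119, 121, 117, 123, 120]
# Recent production answers have become much shorter -> possible drift.
recent_lengths = [60, 58, 64, 61]

print(drifted(baseline_lengths, recent_lengths))  # True
```

A drift flag like this is what would trigger the retraining or maintenance interventions described under predictive maintenance.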
Enter LLMOps
To optimise the potential of LLMs, we also need functionality to monitor the entire lifecycle of LLMs, from development to deployment and ongoing maintenance, with a strong focus on quality, reliability and ethical considerations.
A term that will become more and more popular is LLMOps, which is to LLMs what MLOps is to predictive AI. LLMOps will encompass the development and training of LLMs, focusing on computational efficiency and model tuning.
Rigorous testing and validation are crucial, given the complexity of language processing and the need for unbiased, accurate outputs. Deployment in LLMOps involves scaling and resource management, ensuring responsive and efficient model performance.
Continuous monitoring and maintenance are essential, addressing challenges like data and model drift, and adapting to evolving language trends. Automation plays a key role in streamlining processes, and feedback loops are implemented for ongoing improvement of the models.
LLMOps thus represents a comprehensive approach to managing the lifecycle of LLMs, ensuring they are effective, relevant and ethically sound.
Any platform or tool that combines the potential of generative AI with the power of predictive AI, while integrating MLOps and LLMOps seamlessly into the workflow, will gain a competitive advantage over those focusing on one or the other, and will allow businesses to realise their full AI potential.