Large language models (LLMs), a subset of generative Artificial Intelligence (AI) focused on natural language processing, are garnering increasing attention as a technology that could potentially revolutionise the workplace and drive productivity to unprecedented heights.
Independent research company Autonomy recently released a report predicting that within 10 years, 71% of workers in the US could have their working hours reduced by at least 10% if LLMs were introduced into workplaces and the resulting time savings converted into free time. Autonomy also estimates that in at least 10 US states, a quarter of all employees could move to a four-day work week without any loss of productivity – or pay.
Not surprisingly, therefore, the market potential of LLMs is enormous. Independent research company MarketsandMarkets expects the global natural language processing market to grow from $18.9 billion in 2023 to $68.1 billion by 2028, at a CAGR of 29.3%. This, it says, will be driven in part by the increasing use of LLMs.
And everyone, it seems, is scrambling for a piece of the LLM action. LLMs are very large deep learning models pre-trained on vast amounts of data, which gives them the ability to answer specific questions from information held in digital archives and to produce content in natural-sounding human language.
While the best known of the LLMs available today is probably OpenAI’s ChatGPT, there are others, many from small start-ups (think OpenAI just eight years ago). Not to be outdone, most of the tech giants are moving quickly to integrate LLM offerings into their online services: Microsoft has Copilot; Google has launched PaLM (Pathways Language Model); Meta has introduced LLaMA (Large Language Model Meta AI); and even companies like Tesla are entering the fray.
However, Henri Fourie, Head of Clients at Mint Group, cautions that although the potential benefits of LLMs in the workplace are enormous, adoption and implementation of the technology should still be approached with care.
“The power of LLMs is that they are so versatile,” Fourie says. “The list of tasks an LLM could perform is huge – but they are not for everyone in every organisation. They have limited value on the factory floor, for example, where workers make limited use of computers and produce physical goods rather than content. The capability should therefore be targeted at those individuals and roles where it really will make a difference in terms of time savings and productivity improvement.”
According to Fourie, LLMs can be used for applications ranging from copywriting and knowledge-based question answering to text classification and code generation. They could summarise a recorded meeting and highlight the salient points; write a proposal based on that meeting; or create a graph from an Excel spreadsheet so that you can understand trends and data at a glance. They can also be used for repetitive clerical tasks and for customer service chatbots.
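By way of illustration, the meeting-summary task Fourie describes can amount to just a few lines of code against a hosted LLM. The sketch below assumes the OpenAI Python SDK; the model name and file path are placeholders rather than recommendations.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Placeholder file: a raw transcript of a recorded meeting
    transcript = open("meeting_transcript.txt").read()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You summarise meeting transcripts into salient points."},
            {"role": "user",
             "content": "Summarise the following transcript as bullet points:\n\n"
                        + transcript},
        ],
    )

    print(response.choices[0].message.content)

The same pattern, a system instruction plus the source material, covers most of the other tasks on Fourie's list; only the prompt changes.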
He also points out that getting the most from LLMs requires careful consideration of which roles could benefit most from the technology. Thereafter, users need to be trained to ask the right questions to ensure they receive the best possible answers.
“However, while LLMs can save an enormous amount of time when used correctly, the LLM is not accountable for what it produces. It will use whatever data it can access to complete its assigned task. That data could be on the Internet – and we all know there’s a lot of fake information out there, which is why there are increasing complaints that ‘ChatGPT lies’. It doesn’t – it just uses whatever information it accesses, regardless of whether that information is fake or biased,” Fourie says.
In addition, anything entered into an ‘open’ LLM like ChatGPT may be retained by the provider and used to train the model, and could therefore surface in responses to other users. Thus, if employees use an open LLM to generate reports, marketing materials or sales proposals based on confidential corporate data, that data effectively leaves the organisation’s control.
To avoid this, growing numbers of corporates are implementing ‘closed’ LLMs like Microsoft Copilot, which are restricted to data held on the corporate network and do not share it with anyone outside the organisation.
However, even this type of restriction is not foolproof. Unless access controls, data classification and protection policies are clearly defined, the LLM could draw on any content stored within the organisation's network, cloud storage and online services, including e-mail. It could inadvertently expose a confidential company memo, or base a sales proposal on out-of-date data, such as details of a discontinued product. It could also divulge sensitive corporate secrets, or access – and disseminate – private and personal employee information that was discussed behind closed doors but not adequately secured and stored.
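One common safeguard is to enforce classification levels and group permissions in the retrieval layer, so that a document the requesting user is not cleared to see never reaches the LLM's prompt in the first place. The Python sketch below is purely illustrative; every name in it is hypothetical and stands in for whatever document store and identity system an organisation actually runs.

    # Hypothetical permission-aware retrieval: the access check happens
    # before any document is placed in the LLM's prompt.
    from dataclasses import dataclass

    @dataclass
    class Document:
        title: str
        classification: str        # "public" | "internal" | "confidential"
        allowed_groups: frozenset  # groups permitted to read this document
        body: str

    LEVELS = {"public": 0, "internal": 1, "confidential": 2}

    def retrieve_for_user(candidates, user_groups, ceiling="internal"):
        """Keep only documents the requesting user is cleared to see."""
        return [
            d for d in candidates
            if LEVELS[d.classification] <= LEVELS[ceiling]
            and d.allowed_groups & user_groups
        ]

    docs = [
        Document("Price list 2024", "internal", frozenset({"sales"}), "..."),
        Document("Board memo", "confidential", frozenset({"exec"}), "..."),
    ]

    visible = retrieve_for_user(docs, user_groups={"sales"})
    # Only `visible` documents would be passed to the LLM; the board memo
    # is never exposed, regardless of what the model is asked.

The design choice matters: access control lives outside the model, so the LLM can only leak what the retrieval layer hands it.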
“It’s therefore vitally important that, before implementing an LLM, companies ensure that only the right content is available to the right users, and that it is secured so that only the intended audience has access to it,” Fourie adds.
“Ultimately, regardless of how good an LLM is, human intervention for checking and sign-off will still be required. Humans will have to review whatever the LLM has produced to ensure the content is relevant and factually correct. Microsoft named its LLM Copilot and not Autopilot to emphasise the fact that it’s an assistant and not a replacement for human employees. While an LLM is no substitute for human creativity and critical thinking, it can and will save time because users won’t constantly have to generate proposals, reports, analyses or other content from scratch.”