
ITWeb TV: Red Hat in AI push

By Matthew Burbidge
Johannesburg, 23 Sep 2024
ITWeb Brainstorm editor Matthew Burbidge speaks with Red Hat’s Bruce Busansky about how Red Hat and IBM’s InstructLab are driving innovation in AI by reducing the technical hurdles. #itwebtv #Redhat #AI

Firms looking to deploy GenAI models in their businesses face a number of challenges, not least that they have to rely on opaque models that may contain biases.

Bruce Busansky, platform specialist, Red Hat, spoke to ITWeb TV about how companies can go about building their own large language model (LLM) using their own data.

He says this is done using InstructLab, an AI project developed by IBM and Red Hat, which the companies open-sourced earlier this year.

Busansky says a firm could either consult a GenAI service directly, such as ChatGPT or Anthropic’s Claude, or integrate those services into its own applications via an API.
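In practice, the second approach usually amounts to a few lines of client code calling a hosted service. The sketch below is illustrative only: it assumes the openai Python package and an API key in the environment, and the model name and prompts are placeholders rather than part of Busansky’s example.

```python
# A minimal sketch of "pointing an API" at a hosted GenAI service.
# Assumes the openai package (pip install openai) and OPENAI_API_KEY
# set in the environment; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder for any hosted chat model
    messages=[
        {"role": "system", "content": "You answer questions about our products."},
        {"role": "user", "content": "Summarise our returns policy."},
    ],
)
print(response.choices[0].message.content)
```

The trade-off Busansky describes follows directly from this pattern: the prompts leave the business, and the model behind the endpoint is one the firm neither trained nor controls.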

One challenge is that the underlying data can be biased, because the firm hasn’t trained the model itself, and the model may also hallucinate.

The outputs of the models can also differ. He says: “If you asked a question and used ChatGPT 3.5 and 4, they can give you completely different answers even though they were effectively trained on the same data.”

He says Meta’s Llama LLM is open for use, but only within commercial limits, and companies can’t make any money out of the model.

“The problem that Red Hat saw was, although these models had been trained on this data which a lot of people didn’t have access to, if false stuff came back, or people wanted to contribute to the model, there was simply no way of doing that.”

Bruce Busansky, platform specialist, Red Hat.

He says InstructLab can be thought of as a front-end for building chatbots. “Chatbots need data. With InstructLab and the Granite series of either large or small language models, all of that can be cobbled together to create your own GPT-like service inside of your business,” he says.
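For a sense of what that cobbling together looks like, the outline below drives InstructLab’s ilab command-line tool from Python. It is a sketch under stated assumptions: InstructLab must be installed locally, and the exact subcommand names have shifted between releases, so they should be checked against the installed version rather than taken as definitive.

```python
# Rough outline of the InstructLab workflow: initialise a local config and
# taxonomy, pull a Granite base model, generate synthetic training data,
# fine-tune, then serve the result as a local, GPT-like service.
# Subcommand names are from recent ilab releases and may differ by version.
import subprocess

steps = [
    ["ilab", "config", "init"],     # set up local config and taxonomy (interactive)
    ["ilab", "model", "download"],  # fetch a Granite-family base model
    ["ilab", "data", "generate"],   # create synthetic training data from the taxonomy
    ["ilab", "model", "train"],     # fine-tune the model on that data
    ["ilab", "model", "serve"],     # expose the tuned model locally (runs until stopped)
]

for step in steps:
    print("Running:", " ".join(step))
    subprocess.run(step, check=True)
```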

“You can point it at data, let it ingest it and you’re always in control of the source of that data. It’s said that every time you use these publicly available GPTs, you’re effectively training them. For data-sensitive companies, InstructLab is the first place to go to start to build it out.

“We see companies playing with it, and we’ve had some early conversations. One thing we’re realising is that operationalising AI is hard. Inside an organisation, the same thing that’s choking them up around modernising core systems is choking them up as they try to operationalise AI. But there are some early movers, and we’re watching quite excitedly.”
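Busansky’s earlier point about staying in control of the data follows from where the model runs: once a tuned Granite model is served on the company’s own hardware, prompts and answers never leave the business. The sketch below assumes the local server exposes an OpenAI-compatible endpoint on localhost, as InstructLab’s serve command does; the port and model name are assumptions, not confirmed details from the interview.

```python
# Query a locally served, InstructLab-tuned model through its
# OpenAI-compatible endpoint. Nothing here touches a public GenAI service,
# so sensitive prompts stay inside the business. The port and model name
# are placeholders for whatever the local server reports.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local InstructLab/Granite server
    api_key="not-needed",                 # no hosted provider involved
)

response = client.chat.completions.create(
    model="granite-7b-lab",  # illustrative local model name
    messages=[
        {"role": "user", "content": "Summarise our internal leave policy."},
    ],
)
print(response.choices[0].message.content)
```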
