With the Copilot artificial intelligence (AI) assistant being made generally available in Microsoft 365 this week, ITWeb discussed some of the prominent issues around AI and skills on the sidelines of the recent Microsoft ‘AI: A new era’ event.
ITWeb spoke to Mark Chaban, CTO for Central and Eastern Europe, Middle East and Africa; and Lillian Barnard, president, Microsoft Africa.
“Every Windows 11 user will be able to have access to a Copilot-like experience on their device, so it will be easier to just chat with it using conversational AI,” said Chaban.
Generative AI hit the mainstream this year, thanks to ChatGPT’s wide-scale adoption. ChatGPT was developed by OpenAI, an organisation Microsoft partnered with in 2016. This partnership saw Microsoft develop computing infrastructure specifically built to cater to the needs of the large language models (LLMs) that underpin generative AI.
Chaban said: “Working with OpenAI helped us re-engineer the infrastructure, from the ground up, to suit the needs of large language models.
“Going back to 2019, the early days of [the partnership], the strong partnership with OpenAI allowed us to take it to the masses, and knowing what we’d need in the future allowed us to build our AI supercomputer. So it was not only how to build it, but how to scale it so everyone can have access to it.
“I feel like we built an AI supercomputer from the ground up; we were the first [AI supercomputer] and the fifth largest supercomputer. And we’ve done that with an understanding of what we’re solving for and of the future, from sustainability and scalability efforts to democratisation efforts, so that everyone can have access to this technology.”
Chaban explained that this meant the components − such as switches, networking and GPU configurations − were all designed specifically to scale to the billions and trillions of parameters used in large language models.
Asked if we would see a regionalised AI supercomputer in Africa in the future, Chaban said: “In January this year, when we launched OpenAI-as-a-Service on Azure, we launched it in our data centres worldwide, with South Africa [included].
“We have many regions worldwide; not all of them have the full GPU capability, but South Africa does. Customers have the ability to run LLMs, whether it’s hundreds of billions of parameters, whether it’s OpenAI’s models or ours − the Megatron-Turing model − or trillion-parameter models; they have the ability to run those in South Africa using our GPUs.”
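For context on what that regional availability looks like from a customer’s side, the sketch below shows how a model deployed to an Azure OpenAI resource hosted in a South African region might be called with the openai Python SDK. The endpoint, deployment name and API version are illustrative assumptions, not details provided by Microsoft.

# Illustrative sketch only: the resource endpoint, deployment name and API
# version below are hypothetical placeholders.
import os

from openai import AzureOpenAI  # pip install openai

# An Azure OpenAI resource provisioned in a South African region would expose
# an endpoint along the lines of the placeholder below.
client = AzureOpenAI(
    azure_endpoint=os.environ.get(
        "AZURE_OPENAI_ENDPOINT", "https://my-resource-san.openai.azure.com"
    ),
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# "model" refers to the customer's own deployment name for a hosted LLM.
response = client.chat.completions.create(
    model="my-gpt-deployment",
    messages=[{"role": "user", "content": "Summarise this meeting note in one line."}],
)

print(response.choices[0].message.content)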
On plans for hyperscale data centre rollouts in key African markets, Barnard said bringing the different regions of the continent together under one team meant best practices could be shared and solutions launched faster.
While Barnard declined to give details of any specific plans, she spoke of listening to customer feedback in certain markets, including a recent visit to Nigeria. “We will continue to prioritise digital infrastructure, not just in South Africa, but also beyond.”
With skills centres already in Nigeria, Kenya and Egypt, what are the expansion plans for training in South Africa? Barnard said South Africa is a “focus for us” and with data centres in the country “there’s an opportunity to do something different here”.
Barnard also spoke of the 300 000 opportunities for AI skills development as part of government’s Youth Employment Service programme.