
Moving from data stack curation to ever-smarter business

To advance, the data stack needs to learn, and input outcomes back into the stack in a feedback loop that continually evolves.
By Lawrence Smith, KID Group presales solution architect.
Johannesburg, 09 Feb 2021

In the quest for a ‘crystal ball’ to accurately predict future outcomes, organisations everywhere have recognised the value of data gathering, management, sorting and analysis.

But this approach to simply curating the modern data stack does not yet deliver on all the hype of recent years. To effectively use data to drive business improvement, organisations must start addressing the shortcomings of current data stacks, and position themselves to optimise these stacks to support not just ‘crystal ball gazing’, but also transformative action.

For most, the current data stack is essentially a one-way pipeline from source to warehouse to analysis to business. To advance, the stack needs to learn, and input outcomes back into the stack in a feedback loop that continually evolves. It needs to harness data from all origination sources, and export to all destinations. And business needs to take concrete action to understand and apply the learnings and outcomes – in an automated fashion.
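As a minimal sketch of what such a feedback loop could look like in practice – the pipeline, model and outcome store here are purely hypothetical, not a reference to any particular product – the key idea is that observed business outcomes flow back in and shape the next cycle:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class FeedbackLoop:
    """Hypothetical sketch: predictions flow out to the business,
    observed outcomes flow back in and refine the next cycle."""
    history: List[dict] = field(default_factory=list)

    def predict(self, record: dict) -> float:
        # Naive baseline: the average of previously observed outcomes.
        if not self.history:
            return 0.0
        return sum(h["outcome"] for h in self.history) / len(self.history)

    def record_outcome(self, record: dict, outcome: float) -> None:
        # The business feeds the actual result back into the stack,
        # so the next prediction reflects what really happened.
        self.history.append({**record, "outcome": outcome})


loop = FeedbackLoop()
print(loop.predict({"customer": "A"}))       # 0.0 – no outcomes fed back yet
loop.record_outcome({"customer": "A"}, 120)
loop.record_outcome({"customer": "B"}, 80)
print(loop.predict({"customer": "C"}))       # 100.0 – informed by the feedback
```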

Getting the basics right

Data quality and effective data governance are foundational to smarter businesses. Without these in place, any strategy or solution implemented to derive real value from data will not deliver the desired results, or will quite simply fail in the long run.

In recent years, most organisations have begun exploring the predictive value of data – often with disappointing results. This is usually because the data is of poor quality and not effectively governed: quality and governance are foundational elements that need to be in place before any organisation can look to predictive analytics.
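By way of illustration only, a basic quality gate might reject records before they ever reach the analytics layer. The field names and rules below are hypothetical:

```python
from datetime import datetime


def validate_record(record: dict) -> list:
    """Return a list of data-quality issues for a single record."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    if record.get("amount", 0) < 0:
        issues.append("negative amount")
    try:
        datetime.fromisoformat(record.get("created_at", ""))
    except ValueError:
        issues.append("invalid created_at timestamp")
    return issues


record = {"customer_id": "", "amount": -50, "created_at": "2021-02-31"}
print(validate_record(record))
# ['missing customer_id', 'negative amount', 'invalid created_at timestamp']
```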

Locked-in data

It goes without saying that security and end-to-end governance are critical: any data management stack that doesn’t include these capabilities will shortly become obsolete.

Concerns around data sovereignty and lock-in should also be addressed: data must be available and properly governed to be of value for analysis.

Lock-in is not a new concern and has been around for many years. However, the challenge has now moved from traditional on-premises ERP systems into the modern world of SaaS applications. Many IT veterans will have experienced these challenges with old-school monolithic ERP vendors, who would not allow customers to access their own data through any means other than the vendor's proprietary tools.

Getting to grips with hybrid multicloud environments

To achieve more flexibility, scalability and cost savings, most organisations have moved to cloud in some form. In South Africa, many organisations started by placing all their bets on a single cloud provider.


This is now changing as organisations start to see the risks and costs associated with putting all of their eggs in one basket. As organisations evolve and learn, they are beginning to engage with two or three cloud vendors so that vendor lock-in is no longer an issue.

However, multicloud environments introduce challenges of their own. Maintaining them requires skills across several different platforms, and integration is an issue: most integration is designed and built from the ground up on a particular platform, so organisations are also looking at cloud-agnostic integration solutions instead of relying on one provided by a particular cloud vendor.
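One way to picture a cloud-agnostic approach is a thin abstraction over provider-specific storage, so pipelines are written once against an interface rather than against a single vendor's SDK. The class and method names below are purely illustrative:

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Provider-agnostic interface the data pipeline codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in backend; real implementations would wrap the AWS S3,
    Azure Blob Storage or Google Cloud Storage SDKs."""

    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


def load_to_warehouse(store: ObjectStore, key: str) -> bytes:
    # The pipeline never references a specific cloud vendor.
    return store.get(key)


store = InMemoryStore()
store.put("sales/2021-02.csv", b"id,amount\n1,100\n")
print(load_to_warehouse(store, "sales/2021-02.csv"))
```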

Enhancing predictive analytics

Once the basics are in place, predictive models can be designed to include performance and economic trend analyses of backend operations as well – the systems and processes that form the 'engine'. However, it should be noted that predictive models can be expensive, mainly due to a shortage of skills and the cost of the technology needed to support them.

It is crucial that organisations follow predictive analytics immediately with prescriptive analytics – the actioning of the predictive insights found. Doing predictive analytics alone, without acting on the insights, is wasteful and senseless.
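As a toy illustration of pairing the two – the churn score and threshold are invented for the example – a prediction only becomes useful once it triggers a concrete, automatable action:

```python
def predict_churn_risk(customer: dict) -> float:
    """Predictive step: a hypothetical risk score between 0 and 1."""
    # Stand-in model: more support tickets implies higher risk.
    return min(customer.get("support_tickets", 0) / 10, 1.0)


def prescribe_action(risk: float) -> str:
    """Prescriptive step: turn the insight into an action."""
    if risk > 0.7:
        return "offer retention discount"
    if risk > 0.4:
        return "schedule account-manager call"
    return "no action"


customer = {"name": "Acme", "support_tickets": 8}
risk = predict_churn_risk(customer)
print(risk, prescribe_action(risk))   # 0.8 offer retention discount
```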

Artificial intelligence (AI) and machine learning (ML) are now being introduced into the data management stack to help humans manage the large volumes of data that now exist within organisations.

More importantly, AI and ML are now being used to assist with data cleansing, data discovery, data movement and data governance, and will become critical to effective data analysis.
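A trivial example of machine assistance in cleansing is automated outlier flagging, so that humans only review the records that look suspect. The threshold below is an arbitrary choice for illustration:

```python
from statistics import mean, stdev


def flag_outliers(values: list, threshold: float = 3.0) -> list:
    """Return indices of values more than `threshold` standard
    deviations from the mean – candidates for human review."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]


amounts = [102, 98, 105, 101, 99, 5000]       # one obviously suspect entry
print(flag_outliers(amounts, threshold=2.0))  # [5]
```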

Democratisation of data

Smart data utilisation demands the democratisation of data, in which any business user – whether or not they have data science skills – is able to base decisions on accurate data and predictive analytics. We are seeing this trend emerging in South Africa, but it is still in its infancy.

Business users are starting to ask more of IT and want greater access to data. The challenge IT now faces is provisioning this access to the business – as and when it is needed – while also providing access to good quality, trustworthy data. 'Data marketplaces' are now emerging, where business users can in essence shop for their data in a governed way.
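A highly simplified sketch of governed self-service access might look like the following, where every request is checked against a policy before data is released. The roles, datasets and policy are all hypothetical:

```python
# Hypothetical access policy: which roles may read which datasets.
ACCESS_POLICY = {
    "sales_summary": {"analyst", "sales_manager"},
    "customer_pii": {"data_steward"},
}


def request_dataset(dataset: str, role: str) -> str:
    """Grant or deny a self-service data request in a governed way."""
    allowed = ACCESS_POLICY.get(dataset, set())
    if role in allowed:
        return f"access granted to {dataset}"
    return f"access denied: {role} may not read {dataset}"


print(request_dataset("sales_summary", "analyst"))   # granted
print(request_dataset("customer_pii", "analyst"))    # denied
```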

With data clearly a ‘goldmine’ and the ‘new oil’ for organisations seeking to grow in future, any plan to harness it effectively has to start by setting in place the right foundations.

Data stacks must be strategically designed and governed to deliver the accurate, valuable outputs that predictive and prescriptive analytics will need in future.


