Data is widely described as the ‘new gold’ or the ‘new oil’, and the most valuable asset enterprises have. In many respects, this is true. But while the forward-looking enterprise depends heavily on its data, putting a rand value on this data remains a challenge.
By its very nature, data has informative, instructional and locational properties at its core, and these make it the ‘glue’ within any ontology, within the very existence of anything.
It is fundamental to the operation of all business, institutional, governmental, household and public processes, systems and exchanges, whether manual or automated. Data enables communication, learning, design, knowledge, information transfer, action and execution in all spheres of life.
As an enterprise asset, the type of data most frequently valued is franchised (that is, monetised) data: data that has been processed to produce outcomes such as metrics and insights that are important and valuable to the company and to downstream consumers in regulating and operating the business.
Pricing models exist to put a rand value on this data based on factors such as acquisition, integration and processing time, data volume, the number of inputs involved and the importance of the decisions it can inform.
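To make the idea concrete, here is a minimal sketch in Python of how such factors might be combined into a single relative score. The factor names, weights and normalisation are hypothetical illustrations, not a standard pricing formula; any real model would calibrate these against the organisation’s own criteria.

```python
# A minimal sketch of a weighted data-valuation score. All factor names
# and weights below are hypothetical assumptions for illustration only.

def data_value_score(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalised valuation factors (each 0-1) into one score."""
    return sum(weights[name] * factors.get(name, 0.0) for name in weights)

# Hypothetical weights reflecting the factors named in the article:
# processing effort, data volume, number of inputs, decision importance.
weights = {
    "processing_effort": 0.20,
    "volume": 0.15,
    "input_count": 0.15,
    "decision_importance": 0.50,
}

dataset = {
    "processing_effort": 0.8,    # heavily processed, 'franchised' data
    "volume": 0.4,
    "input_count": 0.6,
    "decision_importance": 0.9,  # informs board-level decisions
}

print(f"Relative value score: {data_value_score(dataset, weights):.2f}")
```

A score like this only ranks datasets against one another; converting it into an actual rand value still requires the subjective, sector-specific judgements the article goes on to describe.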
However, most data valuation models are subjective, based on criteria that are not always standardised across industry sectors and disciplines; in many cases, data is not even recognised as an asset for accounting, strategy and other purposes.
Where data is recognised as an asset with monetary value, valuations could be based on the cost of managing and provisioning the data, the volumes available for data discovery or analytical consumption, trigger-actioning dependencies, the data’s importance for decision-making, its capacity to enable learning and knowledge transfer, and even the distance the data is transmitted.
Even data ‘placeholders’, or capacity, can be priced and sold to enable communication ahead of usage, as evidenced by the telcos and data vendors of the world.
Although most of these existing data pricing models are regulated by communications and government authorities, they are by no means perfect: the perceived value and pricing criteria of these data ‘placeholders’ may differ from one community to another, and the entire model is subject to the risk of monopolisation.
The value assigned to the data may vary depending on the sector the organisation operates in. In the financial sector, for example, the most valuable data is likely centred on the bottom line, or profit; for others, the key focus might be data relating to sales (revenue) or expenses. Data such as this may well be valued as an asset during the sale of a business, and given its importance to the enterprise, it might even be insured against loss.
But what about the data relating to enterprise intellectual property (IP): its algorithms, models and methods? Or data currently in transit without context, or historic data that is not in use today but could become vital for trend modelling? It is far harder to put a rand value on IP, or on data that has not been used in years yet has the potential to improve the business at some point in the future.
Maintaining and increasing the value of data assets
All data has value, but without context and effective data management, it cannot contribute its full value to the outcomes of whoever uses it.
It could be argued that analytics models can use weightings to overcome inconsistencies and gaps in data; the ideal, however, is to have no inconsistencies or gaps at all. To achieve this, organisations need to retain and properly manage quality data that has been verified, validated, cleansed, integrated and reconciled.
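As a rough illustration of that weighting idea, the sketch below (in Python, with hypothetical field names and an assumed completeness rule) down-weights incomplete records rather than discarding them when computing an aggregate. It also hints at why clean data is preferable: the compensation adds complexity without repairing the underlying gaps.

```python
# A minimal sketch of weighting records by completeness. Field names
# and the completeness rule are hypothetical assumptions.

def completeness(record: dict, required: list[str]) -> float:
    """Fraction of required fields that are present and non-empty."""
    present = sum(1 for f in required if record.get(f) not in (None, ""))
    return present / len(required)

def weighted_mean(records: list[dict], field: str, required: list[str]) -> float:
    """Average a numeric field, weighting each record by its completeness."""
    pairs = [(r[field], completeness(r, required))
             for r in records if r.get(field) is not None]
    total_weight = sum(w for _, w in pairs)
    return sum(v * w for v, w in pairs) / total_weight

records = [
    {"revenue": 120.0, "region": "ZA", "segment": "retail"},
    {"revenue": 95.0,  "region": None, "segment": "retail"},  # gap: down-weighted
    {"revenue": 210.0, "region": "ZA", "segment": None},      # gap: down-weighted
]
print(weighted_mean(records, "revenue", ["revenue", "region", "segment"]))
```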
Effective governance should also be in place to direct and assure data management practices, with tried-and-tested rules, controls and architectures that ensure common processes, sustain data quality and avoid the dreaded data ‘spaghetti junction’.
Even when quality data is available and well managed, however, its value remains subjective and theoretical, particularly when its future importance is not yet known.
Historic data, for many simply volumes sitting in costly storage, is vital in the financial sector, for example, where it serves as irrefutable evidence when analysing the full lifecycle and long-term behaviour of customers, and informs the predictions built on that analysis.
Outside of the business world, data indicators going back millions of years are vital for fields such as astronomy, archaeology or geology.
Therefore, no matter what the currently accepted data valuation models are, the answer to the question ‘What is data worth?’ is that quality, well-managed data is potentially priceless.