As with most buzzwords and phrases, there are many interpretations of what the “intelligent edge” means.
Many of them are similar, and typically speak to the convergence of people, data and technology. For many, the focus is almost exclusively on digital devices and the Internet of Things (IoT).
But beyond these obvious examples, it can also speak to a more general principle: that data can be processed, analysed and even aggregated on remote or distributed infrastructure, typically where the data natively resides, rather than handing it over to central teams, infrastructure and systems for processing.
Now, if you will permit me, let’s briefly take a step back and talk about a concept that I believe is related to this. If you have read any of my previous Industry Insights, you will know that I, along with the organisation I am associated with, believe very strongly in data democratisation.
We believe in the importance of data, and that data needs to be treated as a key asset for any business.
With so much value in data, it should be placed at the centre of all core business activities. This applies to all development projects and implementations that occur in an organisation.
But…it’s hard. The systems we must build are complex and time-consuming to deliver, and many businesses are under-resourced and under-skilled for the work.
In the constantly changing, rapidly moving world we live in today, it is becoming ever more difficult to keep up with demand and deliver key enterprise solutions at the pace the business requires, while still adhering to the quality and standards that the business also demands of us.
It is surprising how many modern corporations are still dealing with legacy data integration solutions: manually processing and sending files, and laboriously building data extraction routines, one by one, for every single file and table. Why are we still doing this?
How can the intelligent edge help us?
When thinking of the intelligent edge, I like to focus on the keyword: convergence. Outside of the context of IoT, when I think of how the intelligent edge can help in the paradigm of data engineering, data analysis and business intelligence, the concept of convergence stands out quite clearly.
Let’s look at a traditional landscape for a second. A corporation has dozens of systems processing line-of-business logic at the point of origination. Large-scale, complex extraction, transformation and loading (ETL) routines are then written by central teams to build up the systems that deliver on key business needs such as financial reporting, marketing automation and credit risk.
This is a long, laborious process. We have spoken often about how critical it is to implement Agile principles to help optimise it. However, Agile on its own isn’t enough. We need to change the way we think.
To truly place data front and centre of the software development life cycle, we need to completely change the way we interact with this data.
We must move away from the slow, left-to-right concept of processing data and ask how we can apply convergence to this landscape.
I believe this is where the intelligent edge comes in. We need to remove the barriers of legacy, archaic ingestion patterns. We need to maximise the use of technology and people at the place where the data resides, to connect our systems.
None of this is new, not really. It’s just that technology has improved to the point where we can do it a whole lot better now.
But what does it mean?
We need to insert the data acquisition process into the heart of the system. When a system writes a change of some sort to its formal line-of-business database, it should, at the same time, asynchronously publish that data to the analytic data platforms.
This process should include as many of the standard data cleansing and preparation rules as are required, rules traditionally applied by the central enterprise data warehouse (EDW) teams. The result is a constant stream of data trickling into central, standard collection points from across the business.
By doing this, data becomes part of the heart of the business solution, and we can start achieving true convergence between operational source systems and downstream strategic data platforms. The technical mechanisms for publishing the data vary, and can include real-time streaming, message queuing or even simple REST APIs.
This will greatly shorten “the distance” that the central data engineering teams must travel in order to process this data.
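To make this concrete, here is a minimal sketch of what publishing a change at source could look like. It assumes Kafka via the kafka-python client purely as an example; a message queue or a plain REST call would serve equally well, and the broker address, topic name and event shape are illustrative assumptions rather than a prescription.

```python
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # kafka-python; streaming, queuing or a REST call all work here

# Broker address and topic name are illustrative assumptions.
producer = KafkaProducer(
    bootstrap_servers="broker.internal:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)


def publish_change(entity: str, key: str, payload: dict) -> None:
    """Publish a line-of-business change for the downstream analytic platforms."""
    event = {
        "entity": entity,
        "key": key,
        "changed_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    # send() is asynchronous in kafka-python, so the operational write is not held up.
    producer.send("lob-changes", event)


def update_customer_email(customer_id: str, new_email: str) -> None:
    # 1. Write to the formal line-of-business database as usual (stubbed here).
    # save_customer_email(customer_id, new_email)

    # 2. At the same time, publish the change to the central collection point.
    publish_change("customer", customer_id, {"email": new_email.strip().lower()})
```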
Remember your core principles
Having said all of this, we must still remember the core principles. The strategic, enterprise data models that large organisations require will still need a central team, because a single, unified data architecture and enterprise data model remain essential to delivering on enterprise-wide, strategic analytic requirements.
This does not mean, however, that all the work needs to be done by this central team. If the business can implement data acquisition strategies at source, it should consider what standard data preparation, cleansing and transformation can be pushed down to the team publishing the data, to further lighten the load on the central team. This will also help ensure the line-of-business systems are still speaking the same language as the enterprise.
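As a sketch of what that push-down could look like, the snippet below applies a set of hypothetical, enterprise-agreed cleansing and conformance rules at source before the record is published. The field names, formats and reference data are assumptions made for illustration, not a standard.

```python
from datetime import datetime

# Hypothetical reference data agreed with the central team.
COUNTRY_CODES = {"south africa": "ZA", "united kingdom": "GB"}


def standardise_customer(raw: dict) -> dict:
    """Apply the agreed cleansing and conformance rules before data leaves the source system."""
    return {
        "customer_key": str(raw["customer_id"]).strip(),
        "full_name": " ".join(raw["full_name"].split()).title(),
        "email": raw["email"].strip().lower() if raw.get("email") else None,
        "country_code": COUNTRY_CODES.get(raw.get("country", "").strip().lower()),
        # Conform dates to ISO 8601, assuming the source captures them as dd/mm/yyyy.
        "date_of_birth": (
            datetime.strptime(raw["dob"], "%d/%m/%Y").date().isoformat()
            if raw.get("dob")
            else None
        ),
    }


# The publishing call from the earlier sketch would then send the conformed record:
# conformed = standardise_customer(raw_record)
# publish_change("customer", conformed["customer_key"], conformed)
```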
In this way, a true partnership can be formed with the operational source systems; the central data teams will get access to data in real or near real time, and will finally have a real chance of delivering data solutions at the pace the business requires.