Data is an organisation's crown jewels, and consequently it is the target of increasingly persistent and well-resourced cyber criminals. It's also, of course, vulnerable to rogue employees − the insider threat remains a potent one.
It is worth reminding ourselves that South Africa appears to be especially vulnerable. Kaspersky says that ransomware attacks in the first four months of 2022 doubled compared with the same period in 2021, and the country is ranked fifth globally in cyber crime density, according to research by Surfshark.
Even more worrying, there is growing evidence that cyber attacks are targeting backups, which are the organisation's final redoubt. Backups are particularly important given the increase in ransomware attacks: best practice advises against encouraging these attacks by paying ransoms, but that is only practical if the organisation is quite certain it can recover its data.
Unsurprisingly, then, 93% of organisations reported that cyber attacks featured attempts to access backups, with 73% saying the attempts were partly successful.
Oh, and did I mention that paying ransoms is no guarantee either? Only 16% of organisations were able to recover their data after handing over the cash.
In short, backups have always been critical and now they are existential. In turn, a whole new set of challenges arises for the under-pressure CISO or equivalent.
The first of these challenges is that backups have traditionally been troublesome. From the days of tape backups laboriously taken off-premises at regular intervals and later streamed to disk subsystems, backing up has been one of those jobs that is extremely time-consuming and dull, but absolutely vital.
Backups need to be frequent and easy to manage if they are to be useful, but they also need to be rigorously tested to ensure they are actually usable. All of this makes them especially vulnerable to errors and omissions, with potentially catastrophic consequences when the attack comes, as it will.
Another sobering statistic: 93% of organisations said they experienced significant challenges with current backup solutions.
Hybrid environments and their discontents
All of these figures lead to one inescapable conclusion: getting backups right − by which I mean totally reliable and bulletproof − is critical in this age of aggressive cyber crime.
A complicating factor is the emergence of the cloud, which is now an element (or soon will be) of every corporate IT strategy. As always, there is no one-size-fits-all solution, and it's clear that organisations are typically opting for a hybrid environment.
Increasingly, the common-or-garden corporate IT estate will have some systems running on-premises, some in a co-located data centre, and some on a public cloud like Microsoft Azure or Amazon Web Services. The specifics of each individual recipe are immaterial; the point is that the all-important data is now stored in an extremely heterogeneous environment.
This complicated data landscape makes it imperative for organisations to identify where and what their important data actually is. It also makes backing up the data more complicated; there are just more things to go wrong.
In this context, how to ensure bulletproof, up-to-the-minute backups of corporate data is keeping CIOs, CTOs and security officers awake at night, and with good reason. To ensure a good night's rest for these individuals − and for the rest of us too − organisations need to be able to manage their complex data environments easily, and create backups that are immutable (unable to be altered once written), encrypted, and isolated from production servers and the internet.
Another key requirement is the ability to compare each backup against a known-clean, full baseline backup so that anomalies are detected immediately. This is important because of the phenomenon of "the patient hacker": the attacker who infiltrates malware into a corporate environment, where it sits undetected for many months.
In due course, the malware is swept into the backups themselves, rendering subsequent backups useless for recovery. The data needs to be under continuous observation, with any anomalies quarantined and investigated rapidly.
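To make the baseline-comparison idea concrete, here is a minimal, hypothetical sketch in Python: it hashes every file in a freshly taken backup and flags anything that has appeared, changed or vanished relative to a manifest of the known-clean baseline. The directory paths and the manifest format are assumptions for illustration, not a description of any particular backup product.

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to the backup root) to its content hash."""
    return {
        str(p.relative_to(root)): hash_file(p)
        for p in root.rglob("*") if p.is_file()
    }

def compare_to_baseline(backup_root: Path, baseline_manifest: Path) -> dict:
    """Flag files that appeared, changed or vanished since the known-clean baseline."""
    baseline = json.loads(baseline_manifest.read_text())
    current = build_manifest(backup_root)
    return {
        "new": sorted(set(current) - set(baseline)),
        "changed": sorted(k for k in current if k in baseline and current[k] != baseline[k]),
        "missing": sorted(set(baseline) - set(current)),
    }

if __name__ == "__main__":
    # Hypothetical paths: a freshly taken backup and a manifest of the clean baseline.
    report = compare_to_baseline(Path("/backups/latest"), Path("/backups/baseline_manifest.json"))
    print(json.dumps(report, indent=2))
```

In a real platform, a comparison of this kind would run automatically on every backup and feed any anomalies straight into the quarantine and investigation workflow described above.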
Traditional backups just aren't going to cut it.
One Australian health insurance company suffered a ransomware attack and decided not to pay. However, its backups were found to be infected with malware, and it ultimately had to go back a long way to find a clean, uninfected copy. Recovery dragged on over a lengthy period, and by many accounts the calculated cost of recovery has been higher than the original ransom demand.
The reputational damage of such a disaster just doesn't bear thinking about.
Outlining a solution architecture
So, what would a modernised backup solution look like?
Given the heterogeneous IT estate mentioned above, one important element of a modern backup solution is a single platform on which data can be managed. This platform has to enable the organisation to discover the location of all its data, actively search for threats and quarantine suspect data, assess risk, and test backups.
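As a simple illustration (the dataset names, locations and review interval below are invented), the inventory underpinning such a platform might amount to a structured record of where each dataset lives, how sensitive it is, and when its backup was last test-restored:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DatasetRecord:
    name: str            # e.g. "customer-db"
    location: str        # "on-prem", "colo" or a public cloud region
    classification: str  # e.g. "confidential", "public"
    last_backup_test: date

def overdue_tests(inventory: list[DatasetRecord], max_age_days: int = 30) -> list[DatasetRecord]:
    """Return datasets whose backups have not been test-restored recently."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [rec for rec in inventory if rec.last_backup_test < cutoff]

# Hypothetical inventory spanning on-premises, co-located and public cloud systems.
inventory = [
    DatasetRecord("customer-db", "on-prem", "confidential", date(2023, 1, 10)),
    DatasetRecord("web-assets", "aws-eu-west-1", "public", date(2023, 3, 1)),
    DatasetRecord("payroll", "colo", "confidential", date(2022, 11, 20)),
]

for rec in overdue_tests(inventory):
    print(f"Backup test overdue: {rec.name} ({rec.location})")
```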
The solution should be primed to leverage artificial intelligence and machine learning to make it both more powerful and more automated.
From an architectural point of view, the backup solution should encrypt the backups and keep them "air gapped" to protect them from attack.
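For illustration only, encrypting a backup archive before it is shipped to isolated, air-gapped storage could look like the following Python sketch, which uses the open source cryptography library's Fernet recipe. The file paths are invented, and in practice the key would be held in a key management service or hardware security module, never generated and stored on the same host as the data.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_backup(archive: Path, key: bytes) -> Path:
    """Encrypt a backup archive and write the ciphertext alongside it before shipping off-host."""
    token = Fernet(key).encrypt(archive.read_bytes())
    encrypted = archive.parent / (archive.name + ".enc")
    encrypted.write_bytes(token)
    return encrypted

if __name__ == "__main__":
    # Hypothetical path and simplified key handling, for illustration only.
    key = Fernet.generate_key()
    out = encrypt_backup(Path("/backups/nightly.tar.gz"), key)
    print(f"Encrypted backup written to {out}")
```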
The benefits of this design approach are manifold: much improved cost management, testable cyber resilience that reduces risk, and a reduction in time and effort thanks to the single, cyber-intelligent backup platform.
Most importantly, though, it delivers a guaranteed ability to recover data − and thus a good night's sleep for all concerned.