A spate of high-profile hacks and ransomware attacks over the past few years has shown just how much cyber risk has grown and how complex it’s become.
Many CIOs and CISOs feel somewhat overwhelmed when they consider the size and resourcefulness of the criminal syndicates circling them; most feel a security breach of some kind is a matter of when, rather than if.
The rush to move to digital ways of working in response to COVID-19 lockdowns and changing work styles exacerbated an already large challenge.
Research by PwC shows that half or more of the CISOs and CIOs surveyed say they haven't fully mitigated the risks associated with remote work (50%), digitisation (53%) or cloud adoption (54%).
Unsurprisingly, 64% of them expect a jump in reportable ransomware and software supply chain incidents in the second half of 2021.
Another factor: with everybody working online and the internet of things growing steadily in the background, the volume of data that security teams must work with is staggering. That data is valuable, since login information, workflows and the like can all be used to improve the organisation's security posture, but how to deal with it all?
One must remember that cyber criminals are well-resourced and aggressive. Attacks happen quickly, often employing thousands of machines to overcome defences through that old standby, brute force, so responses have to be equally quick and effective.
In short, there are two parts to all this: identifying the threat and making the right response, both in double-quick time.
When addressing the question of identifying threats from a vast mass of data, the emerging answer is artificial intelligence (AI) and machine learning (ML). Combined, AI and ML enable even a modest-sized company to identify patterns in its data and flag anything anomalous and potentially threatening.
For example, like many people, I use Office 365 from a laptop. I generally log in from somewhere in South Africa, unless I'm away on a business trip, always on a Windows device and typically within certain time parameters. Any variation from that pattern, for example an attempted login from Russia at 3am, would immediately be flagged as suspicious and requiring a response.
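To make that concrete, here is a minimal sketch of what such pattern-based detection might look like, using an off-the-shelf isolation forest from scikit-learn. The features and encodings (hour of login, a country flag, a device flag) are my own illustrative assumptions rather than any particular product's design.

```python
# Illustrative sketch only: a simple anomaly detector over login events,
# built with scikit-learn's IsolationForest. The feature encoding is assumed
# for this example: [hour_of_day, country_code, device_code], where
# country_code 0 = South Africa and device_code 0 = Windows.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical logins that represent my normal pattern.
normal_logins = np.array([
    [8, 0, 0], [9, 0, 0], [10, 0, 0], [14, 0, 0],
    [9, 0, 0], [11, 0, 0], [16, 0, 0], [8, 0, 0],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_logins)

# predict() returns -1 for events the model considers anomalous, 1 otherwise.
suspicious_login = [[3, 1, 1]]   # 3am, from abroad, non-Windows device
usual_login = [[9, 0, 0]]        # mid-morning, South Africa, Windows
print(model.predict(suspicious_login))
print(model.predict(usual_login))
```

A real deployment would train on months of telemetry and far more features, but the principle is the same: learn what normal looks like, then score every new event against it.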
A critical enabler that should be acknowledged is the cloud, which makes the storage for all this data, and the massive processing power needed to analyse it, available cost-effectively.
Organisations can also build specific parameters into their algorithms. For example, a certain company might only use Windows computers, or only operate in certain geographic areas. The algorithm could then specify that any attempted login from a non-Windows machine, or from outside those areas, is automatically blocked.
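As a rough sketch of what such hard rules could look like in code, the following illustrative policy check blocks any login that falls outside an assumed allow-list of operating systems and countries; the field names and lists are hypothetical.

```python
# Minimal sketch of hard policy rules layered in front of any ML scoring.
# The allow-lists and field names are hypothetical, for illustration only.
ALLOWED_OS = {"Windows"}
ALLOWED_COUNTRIES = {"ZA"}   # e.g. a company operating only in South Africa

def should_block(login_event: dict) -> bool:
    """Return True if the login violates one of the hard rules."""
    if login_event.get("os") not in ALLOWED_OS:
        return True
    if login_event.get("country") not in ALLOWED_COUNTRIES:
        return True
    return False

print(should_block({"os": "Windows", "country": "ZA"}))  # False: permitted
print(should_block({"os": "macOS", "country": "RU"}))    # True: blocked
```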
All of this effectively reduces the attack surface available to hackers, and by making their lives harder, could even discourage them from targeting such a well-protected company.
The good thing about ML is that it is dynamic, and over time will keep on refining a picture of my behavioural patterns, the better to identify anomalous actions.
Dynamic solution
At this point, it’s worth noting what I have found to be a common misconception about AI: many people believe the algorithms that underlie it are self-generated. On the contrary, while the programs do build up their knowledge dynamically, as noted above, the criteria they use are governed by a man-made algorithm.
In other words, AI and ML only collect and process the data specified in the algorithm, for example login times, location, type of device and password, and then mine that data for patterns.
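A simple way to picture this: the model only ever sees the fields a human chose to extract. The sketch below, using hypothetical field names, reduces a raw login record to a handful of hand-picked features; anything not listed simply never reaches the model.

```python
# Sketch of human-specified feature selection: the model only sees what
# this function returns. All field names here are hypothetical.
from datetime import datetime

def extract_features(raw_event: dict) -> dict:
    """Reduce a raw login record to the hand-picked fields the model uses."""
    login_time = datetime.fromisoformat(raw_event["timestamp"])
    return {
        "hour_of_day": login_time.hour,
        "country": raw_event["geo"]["country"],
        "device_os": raw_event["device"]["os"],
        # Anything not listed here never reaches the model.
    }

raw_event = {
    "timestamp": "2021-08-12T03:07:00",
    "geo": {"country": "RU"},
    "device": {"os": "Linux"},
    "user_agent": "curl/7.68.0",   # collected, but not used as a feature
}
print(extract_features(raw_event))
# {'hour_of_day': 3, 'country': 'RU', 'device_os': 'Linux'}
```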
AI is thus not, as frequently supposed, all about replacing people, but rather it is about taking on the jobs that people aren’t best suited for, either because they are too demanding or just too boring.
Aside from behavioural analytics, AI and ML are also being used in other areas of security; for example, to improve biometric authentication and to detect phishing attempts more quickly.
AI can also help in optimising network topologies to make them more secure, and it is enabling security companies to analyse big data from millions of cyber incidents to identify new trends and novel types of malware; in other words, to keep up with the notoriously inventive bad guys.
Now let’s turn to the second component: rapid response. The threat needs to be identified quickly, but the response must be equally swift or all will be lost. The key here is to automate responses, again based on algorithms and benefitting from dynamic improvements built up over time by AI and ML.
Very often, cyber attacks use sheer volume to overwhelm defences − obviously, responding to millions of hacking attempts on the network at once is not possible if you are relying on humans. Automation has a role in allowing the organisation to respond rapidly and at scale.
And, of course, if an attack should happen to be successful, AI will have helped create a response to mitigate the effects; for example, by quarantining an infected machine as certain thresholds are crossed.
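What follows is a hedged sketch of that kind of threshold-driven, automated response. The score scale, thresholds and actions are illustrative assumptions; in a real environment the actions would be calls into the organisation's EDR or network tooling.

```python
# Illustrative sketch of automated, threshold-driven response. Risk scores
# are assumed to be normalised to 0..1; thresholds and actions are hypothetical.
def respond(host: str, risk_score: float) -> str:
    """Escalate the response as the risk score crosses higher thresholds."""
    if risk_score >= 0.9:
        # In practice this would call the EDR or network tooling to isolate
        # the machine; here we simply describe the action.
        return f"quarantine {host}"
    if risk_score >= 0.7:
        return f"force re-authentication for sessions on {host}"
    if risk_score >= 0.5:
        return f"raise an alert for the SOC about {host}"
    return "no action"

print(respond("laptop-042", 0.95))   # quarantine laptop-042
print(respond("laptop-042", 0.55))   # raise an alert for the SOC about laptop-042
```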
AI and ML are not silver bullets, but are emerging as critical weapons in the CIO’s or CISO’s armoury in an increasingly bitter war.