Separating fact from fiction: How organisations can combat the new era of deepfake fraud

Brett Beranek, Vice-President & General Manager, Security & Biometrics Line of Business at Nuance Communications

Deepfake technology has been hitting the headlines like never before over the last couple of years. Whether it’s the footage of US House Speaker Nancy Pelosi, altered to make her appear intoxicated, or the Queen’s 2020 alternative Christmas message created for entertainment purposes by Channel 4, there are now several high-profile examples of individuals appearing to say or do something that, in reality, they never did.

As the quality and quantity of deepfakes increase, so too does the likelihood that they will be used for more sinister purposes. Able to make individuals appear to say anything in any way, deepfakes have already helped perpetuate our era of fake news. The most believable will one day have the power to manipulate public perception and sway personal decisions.

While the creators of deepfake content have traditionally targeted those in the spotlight – or individuals who have a large amount of visual and audio data already in the public domain – it is just a matter of time before they look towards the corporate world. Businesses therefore need to act today and put the tools and strategies in place to defend themselves and their customers against this next chapter in fraud.

When hearing and seeing isn’t believing

The deepfake phenomenon is not set to slow down anytime soon. In fact, research found that from 2019 to 2020, the number of deepfake videos on the internet grew from 14 678 to 100 million – a roughly 6 800-fold increase. While this year’s numbers aren’t yet in, we can only assume they have continued to balloon, especially given the increase in digital interaction driven by the pandemic.

Although the majority of the deepfake content currently circulating on the internet is not sophisticated enough to fool its audiences, some videos and recordings are very realistic. The technology behind them is also improving at a breathtaking pace and could be dangerous, especially if – or indeed when – it falls into the wrong hands. After all, organised crime groups and even lone malicious actors are constantly evolving to incorporate new technologies. Deepfakes could give them the means to target public figures, individuals or corporations.

Most deepfake content will have both a visual and an audio aspect. However, when it comes to targeting businesses and their employees, attackers in the current landscape are likely to focus more on audio and use voice cloning. Whether it’s criminals posing as a senior board member to gain access to confidential information, or pretending to be a customer withdrawing a significant amount of money, attacks that involve voice cloning could have serious repercussions for businesses, both financially and reputationally.

The first reported and best-known example of an audio deepfake scam took place in 2019, when the chief executive of a UK energy firm was conned into sending €220 000 ($240 000) to cyber criminals who used artificial intelligence to mimic the voice of his parent company’s CEO. The executive was told the transfer was urgent and the funds had to be sent within the hour. He complied, and the attackers were never caught.

This case undoubtedly gave us a snapshot of the future of fraud and the power of deepfake techniques such as voice cloning. So, when criminals can use technology to convincingly mimic an individual’s accent and style of speaking, how can we separate the real from the fake?

Detecting deepfakes

When the human ear is unable to tell the difference, businesses can turn to biometric technologies to analyse voices and detect anomalies.

Conversational biometrics, for example, can analyse vocabulary, grammar and sentence structure. Many companies already use it alongside voice biometrics as a successful authentication tool, because these technologies cannot be compromised in the same way as knowledge-based security methods such as passwords and PINs – each human voice is as unique as a fingerprint. By using sophisticated algorithms to analyse more than 1 000 voice characteristics, biometric technology can validate a caller’s identity within the first few seconds of an interaction.
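
As a rough illustration of the principle – not of any vendor’s product – the sketch below reduces voice verification to its simplest form: a caller’s audio is converted into a compact numerical “voiceprint” and compared against one captured at enrolment. The extract_embedding function, the 16 spectral bands and the 0.95 threshold are all illustrative assumptions; a production system would rely on a trained speaker-encoder model and carefully tuned decision thresholds.

```python
import numpy as np

def extract_embedding(audio: np.ndarray) -> np.ndarray:
    """Toy stand-in for a trained speaker-encoder model: summarise the
    signal's spectrum into a fixed-length vector. A real system would
    analyse far richer characteristics (pitch, cadence, timbre)."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, 16)  # 16 coarse frequency bands
    return np.array([band.mean() for band in bands])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how closely two voiceprints match (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_caller(call_audio: np.ndarray,
                  enrolled_voiceprint: np.ndarray,
                  threshold: float = 0.95) -> bool:
    """Accept the caller only if their voiceprint is close enough to the
    one captured at enrolment; the threshold is purely illustrative and
    would in practice be tuned to balance false accepts and rejects."""
    return cosine_similarity(extract_embedding(call_audio),
                             enrolled_voiceprint) >= threshold
```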

Another protective layer on top of voice biometrics is behavioural biometrics. This technology measures how an individual interacts with a device – how they type, how they tap and how they swipe or even hold the phone – in order to find out whether they are who they say they are.
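
To make one such behavioural signal concrete, here is a minimal sketch of keystroke dynamics, assuming that how long each key is held (“dwell”) and the gap between consecutive keys (“flight”) form a typing rhythm worth comparing against a stored profile. The KeyEvent structure and the 25% tolerance are hypothetical choices for illustration, not a description of any commercial product.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class KeyEvent:
    key: str
    press_ms: float    # timestamp when the key was pressed
    release_ms: float  # timestamp when the key was released

def typing_profile(events: list[KeyEvent]) -> tuple[float, float]:
    """Reduce a typing sample (two or more keystrokes) to two rhythm
    features: average dwell time and average flight time."""
    dwells = [e.release_ms - e.press_ms for e in events]
    flights = [b.press_ms - a.release_ms for a, b in zip(events, events[1:])]
    return mean(dwells), mean(flights)

def matches_profile(sample: list[KeyEvent],
                    enrolled: tuple[float, float],
                    tolerance: float = 0.25) -> bool:
    """Accept the session only if dwell and flight times fall within a
    relative tolerance of the enrolled profile (illustrative value)."""
    dwell, flight = typing_profile(sample)
    ref_dwell, ref_flight = enrolled
    return (abs(dwell - ref_dwell) <= tolerance * ref_dwell and
            abs(flight - ref_flight) <= tolerance * ref_flight)
```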

When it comes to fraud attacks that use voice cloning, these technologies can quickly identify whether a person is who they claim to be. The best solutions on the market even include algorithms for synthetic speech, liveness and playback detection, which expose fake voices. So, just as these tools can identify a person through their voice, they can also discover whether a voice is real, helping to shine a light on true identities and prevent fraudsters from conning both customers and employees.
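
One way to picture such a layered defence is as a chain of independent checks, each scoring the same audio for a different tell and each able to veto the interaction on its own. The three detector functions below are empty stubs standing in for real models, and the risk threshold is an assumed value; real products weigh and combine these signals in far more sophisticated ways.

```python
from typing import Callable

import numpy as np

# Each detector returns a risk score in [0, 1]; higher means more
# likely fraudulent. These are illustrative stubs, not real models.
Detector = Callable[[np.ndarray], float]

def synthetic_speech_score(audio: np.ndarray) -> float:
    """Stub: would flag artefacts typical of text-to-speech output."""
    return 0.0

def playback_score(audio: np.ndarray) -> float:
    """Stub: would flag signs of a re-recorded loudspeaker signal."""
    return 0.0

def liveness_score(audio: np.ndarray) -> float:
    """Stub: would check the speaker is responding live, for example
    to a randomly chosen prompt."""
    return 0.0

def is_voice_genuine(audio: np.ndarray, risk_threshold: float = 0.5) -> bool:
    """Treat the voice as genuine only if every layer stays below the
    risk threshold; any single detector can veto the interaction."""
    layers: list[Detector] = [synthetic_speech_score,
                              playback_score,
                              liveness_score]
    return all(layer(audio) < risk_threshold for layer in layers)
```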

As deepfakes continue to grow in prominence and accuracy, it’s only a matter of time before cyber criminals adopt them more widely. Businesses across all industries need to act now to combat this next wave of fraud and ensure that they are protecting both their customers and their employees from whatever comes next.
