
Authorised payment fraud, scam centres on the rise

By Christopher Tredger, Portals editor
Johannesburg, 08 Aug 2024
Jason Lane-Sellers, director, identity and fraud, LexisNexis Risk Solutions EMEA.

Authorised payment fraud, including romance and investment scams, is increasing globally, facilitated through scam centres – well organised and highly technical resources that generate billions in revenue each year.

This is according to global data and analytics company LexisNexis Risk Solutions, which provides insights from its 2023 Cybercrime Report.

“Fraud thrives in changing circumstances, such as the widespread and ongoing adoption of new technologies. The initial application of generative AI to fraud attacks in 2023 gives cause for concern now but will likely be regarded as unsophisticated with the hindsight of a few more years’ innovation,” reads the report.

For example, fraudsters are increasingly exploiting instant payment systems to target consumers through authorised payment fraud, known internationally as authorised push payment (APP) fraud and as authorised transfer scams in the US.

The report continues: “As more consumers adopt instant payments, and as more instant payments platforms enable international transfers, fraudsters will likely continue to exploit the payments channel.”

Money mules

Jason Lane-Sellers, director, identity and fraud, at LexisNexis Risk Solutions EMEA, says APP fraud via ‘money mules’ is an increasing global concern, with South Africa being no exception.

“Criminals manipulate people to gain funds from their accounts and immediately distribute these funds through mule account networks. A money mule – a person who, willingly or unwittingly, transfers money acquired through theft or fraud – uses the financial system to funnel and hide criminal profits. This enables fraudsters to move and access targeted funds through multiple accounts, attempting to ensure the money moves before detection or action by official agencies,” said Lane-Sellers.

The level of impact depends on the scam attack methodology employed, which determines the typology and nature of the attack. Lane-Sellers highlights romance scams as an example.

“Romance scams target individuals experiencing loneliness. A fraudster pretends to be a love interest, gaining the victim's trust over time, typically fostering a relationship purely online. At some point, the fraudster feigns a desperate need for money or presents a non-existent, urgent and once-in-a-lifetime investment opportunity.”

According to LexisNexis, in the UK, London police reported that these schemes allowed scammers to accumulate £92.8 million from 2022 to 2023. But these numbers only reflect the cases that have been officially reported, Lane-Sellers notes.

Scam centres

The increase in APP fraud goes hand in hand with the growth of scam centres in certain border regions of South-East Asia, notes LexisNexis. These centres are sophisticated operations, creating phishing sites and mobile malware, and running call centres with multilingual staff.

“Scam centres can be established in any location, often serving as a hub for coercing individuals into working against their will,” Lane-Sellers says.

“Fraudsters exploit factors including border locations, transient or immigrant populations, low-wage and low-credit communities in both rural and urban areas. Some of South Africa's population have these characteristics. These scam centres can also be concealed in areas where other organised crimes are prevalent.”


From the outside, these centres may appear indistinguishable from a legitimate business, resembling a call centre or marketing operation.

“In many countries, these centres may even be registered as genuine businesses and sponsor work visas for immigrant populations or promise job opportunities in destitute areas… Workers receive set processes, detailed role instructions and targets or key performance indicators to achieve. Consequently, these centres may not be immediately discernible from genuine operations,” Lane-Sellers explains.

Human trafficking

‘Workers’ can be coerced into operating in a scam centre through manipulation, including confiscation of passports, visas or other official documentation necessary for residency or employment.

LexisNexis cites a UN report which states: “Hundreds of thousands of people from around the world have been trafficked to work in these call centres.”

Scams can target any individual or business, and relying solely on traditional methods of fraud defence is insufficient. Since these are consumer-authorised transactions, basic account security or authentication measures are no longer adequate, warns Lane-Sellers.

Instead, organisations need to develop a comprehensive understanding of their customers, observing how they interact and behave during every transaction. The situation also calls for a holistic approach to identify high-risk situations.

“Employing behavioural intelligence, biometrics, digital persona profiling and advanced AI and ML technologies is crucial for effective scam prevention,” concludes Lane-Sellers.

Bot sophistication

In 2023, LexisNexis identified over 3.5 billion bot attacks in its Digital Identity Network, which collects data from globally contributed transactions across various industries.

Financial services organisations bore the brunt of these attacks, accounting for half of them, with e-commerce sustaining most of the rest. Gaming and gambling operators saw a 103% increase in bot attacks.

While bots are not new, their automated capabilities and sophistication continue to increase. LexisNexis warns that traditional bot prevention solutions often focus only on large-scale bot attacks, but fail to detect more advanced bots that exhibit more human-like behaviour.

According to LexisNexis, advanced bot detection should monitor for bot traffic using IP proxies to mimic legitimate customer locations; abnormal timing of events and unusual on-page or in-app behaviours; and evidence of virtual machines mimicking real customer devices.
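As an illustration only, the detection signals described above could feed a simple rule-based risk scorer. The following sketch is hypothetical – the signal names, weights and threshold are assumptions for the purpose of the example and are not drawn from any LexisNexis product:

```python
# Hypothetical sketch: a rule-based scorer for the bot signals named in
# the article. Signal names, weights and the threshold are illustrative
# assumptions, not any vendor's actual detection logic.
from dataclasses import dataclass


@dataclass
class SessionSignals:
    uses_ip_proxy: bool            # IP proxy masking the true customer location
    abnormal_event_timing: bool    # e.g. forms completed faster than a human could
    unusual_page_behaviour: bool   # odd on-page or in-app interaction patterns
    virtual_machine_detected: bool # device fingerprint suggests a virtual machine


def bot_risk_score(signals: SessionSignals) -> int:
    """Sum weighted signals; a higher score means more bot-like behaviour."""
    score = 0
    if signals.uses_ip_proxy:
        score += 2
    if signals.abnormal_event_timing:
        score += 3
    if signals.unusual_page_behaviour:
        score += 2
    if signals.virtual_machine_detected:
        score += 3
    return score


def is_suspected_bot(signals: SessionSignals, threshold: int = 5) -> bool:
    """Flag the session for review when the combined score crosses the threshold."""
    return bot_risk_score(signals) >= threshold
```

In practice, production systems replace fixed weights like these with machine-learning models trained on contributed transaction data, but the scoring idea – combining several weak signals into one risk decision – is the same.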
