Sandy Lavorel
Head of Fraud Intelligence
NetGuardians
Pallavi Kapale
Senior Financial Crime Officer – Financial Intelligence Unit
Bank of China
Before delving into the world of scams, it’s crucial to grasp exactly what we’re looking to identify. To illustrate, let’s paint a simplified picture of the financial crime cycle. Imagine a scenario where scammers execute a love scam targeting Customer A. Thoroughly convinced and psychologically manipulated, this customer unwittingly makes a transaction, not to a charming individual, but to a money mule within the same country. Criminals recruit money mules domestically so that these transactions look more legitimate: the fraudulent transfer stays within the borders. The money mule then helps the criminal launder the unlawfully acquired funds, by various means, allowing them to revel in their illicit gains.
In this streamlined perspective, it is necessary to discern both the fraud/scam aspect and the money laundering dimension. Today, it’s evident that these two crimes are intricately connected and experiencing a significant surge.
In 2023, consumers suffered a staggering loss of 1 trillion USD due to scams. This figure, unfortunately, only scratches the surface, as an estimated 86% of fraud cases go unreported.
In parallel, an estimated (or rather, “guesstimated”) 4 trillion USD is funneled through money laundering annually. Once the victim transfers funds, they are rarely sent to the criminal directly. Instead, money mules are often recruited via social media or job scams to launder the proceeds of crime. Money mules play a pivotal role in the fraud landscape, facilitating the transfer of stolen funds to criminals.
It’s important to acknowledge that we are all potential targets, either as victims or unwitting accomplices (mules).
This trend is poised to intensify, driven by various factors, with Gen AI being a significant contributor.
According to the Federal Reserve’s (“Fed”) definition, a scam is the use of deception or manipulation, most often through social engineering, with the intent of achieving financial gain.
In the UK, this type of threat is commonly referred to as Authorized Push Payment (APP) fraud rather than a scam.
Scammers today generally fall into two categories: lone operators and large, organized transnational crime rings.
We are living in the midst of a global “Scamdemic”, a period marked by an explosion of scam activity that touches every geography, channel, and demographic. At the heart of this phenomenon are large, organized crime rings operating at scale across borders, running scams as structured enterprises with defined hierarchies, operational infrastructure, and aggressive revenue targets.
These groups are not only transnational, but they are also specialized. Their scam typologies often align with regional capabilities and vulnerabilities. For example, Chinese-led networks in Southeast Asia (notably Cambodia, Myanmar, and Laos) operate industrial-scale scam compounds, where thousands of trafficked workers are coerced into executing romance and crypto-investment scams, often referred to as “Pig Butchering” or “Romance Baiting”. In West Africa, groups like Black Axe are infamous for large-scale romance fraud and business email compromise (BEC) schemes. Meanwhile, in Eastern Europe, organizations such as the Milton Group run sophisticated fraud call centers, posing as legitimate investment firms to defraud victims across Europe and North America.
All of these operations often exploit vulnerable individuals as labor, use multilingual scam agents, and rely on digital laundering networks, making them not only effective but incredibly hard to dismantle. According to Inflection Point 2025, these scam ecosystems thrive in jurisdictions with poor regulation, low AML oversight, and inadequate enforcement, often within special economic zones (SEZs), online gambling platforms, and crypto hubs.
Crucially, scams are now the most profitable form of organized crime, surpassing even drug trafficking in global revenue. If the proceeds of scam operations were measured as a national economy, they would rank as the third-largest GDP in the world, behind only the U.S. and China. And this trend shows no sign of slowing down. The scale, automation, and cross-border nature of scams, combined with increasing digitalization and limited global cooperation, ensure that the “Scamdemic” is far from over.
Technology has become a significant enabler for committing crime on a larger scale, reshaping the landscape of online crime. Advancements in artificial intelligence (AI), large language models (LLMs), and cryptocurrencies have allowed for larger-scale fraud with minimal investment. AI-generated deepfakes are increasingly used in scams to manipulate victims and bypass security measures. As digital crime grows more complex, many criminal enterprises now offer services that simplify the process for less experienced offenders.
On one hand, we see the criminal underworld becoming increasingly industrialised. Sophisticated fraud kits, phishing platforms, and ransomware delivery services are all available for purchase, complete with customer support, dashboards, and subscription models. The barriers to entry have vanished. Organized crime no longer needs to invest in building its own infrastructure; it can simply rent it like software-as-a-service (SaaS).
On the other hand, the compliance industry is also evolving, turning itself into a commodity by offering rapid deployment of AML (Anti-Money Laundering) and KYC (Know Your Customer) controls “as a service.” These solutions are often pre-packaged, one-size-fits-all, and seamlessly API-integrated and cloud-based. The selling point is tempting: ‘Just plug in AML, and you’re all set’. The asymmetry is profound: criminals are scaling with agility and coordination, while many financial institutions are outsourcing complexity, not solving it.
To reach every geography, channel, and demographic, criminals are both innovative and antifragile: they adapt and evolve under pressure.
The prevalence of specific scams varies depending on the region, method, and target. Some scams may be more common than others at any given time. For this reason, we have chosen to highlight three particularly damaging scams that are currently widespread. Of course, this landscape is constantly evolving.
One thing is certain: scammers will continue to create new, more effective tactics.
As mentioned earlier, the global “Scamdemic” is far from over.
ONLINE PURCHASE SCAMS
Online purchase scams typically involve fake online stores or fraudulent listings designed to either steal payment information or trick victims into paying for goods or services that never arrive.
Scammers use increasingly sophisticated techniques, often powered by generative AI to build authentic-looking websites that impersonate real businesses. They also employ SEO poisoning to manipulate search engine rankings and drive traffic to these fraudulent sites.
These scams are especially prevalent on online marketplaces and auction platforms, where anonymity makes it easier for fraudsters to reach a wide audience. Criminals commonly advertise fake holiday rentals, concert tickets, or electronics, often at prices that seem too good to be true, to lure buyers. While many platforms offer secure payment methods, scammers frequently persuade victims to pay via direct bank transfer, bypassing protections and making it difficult to recover the funds.
IMPERSONATION SCAMS
Impersonation scams involve fraudsters posing as trusted authorities—such as banks, financial institutions, tax agencies like HMRC, or law enforcement officers—to deceive victims into transferring money or revealing sensitive information. Using increasingly convincing communication techniques, scammers exploit urgency and fear to manipulate targets. According to UK Finance, the average loss per impersonation scam incident was £7,448 in 2024.
This type of fraud is among the most well-established and dangerous. Criminals typically initiate contact via phone calls, emails, or text messages, claiming there is an urgent issue—such as a security breach or ongoing investigation. Under pressure, victims may be persuaded to share account details, download malicious software, or transfer funds to accounts supposedly “more secure.”
A common variation is the bank security team scam, where the fraudster claims to be from the victim’s bank, warning of a threat to their account. The target is instructed to urgently transfer funds to a “safe” account, often controlled by the scammer. To build credibility, they may send fake confirmation messages or reference fabricated breaches. Victims are sometimes asked to make several smaller transfers to avoid detection, making the trail harder to trace.
Interpol reports that impersonation scams are especially prevalent across the Americas and Asia. While some regions have made progress in reducing such fraud, the threat remains global. In the UK, impersonation fraud cases dropped by 37% in 2023, and the total amount stolen decreased by 28%, thanks to awareness campaigns and stronger bank protocols. However, in countries like Australia, the trend is rising. The Australian Financial Complaints Authority received over 9,000 complaints about bank impersonation scams in 2023—nearly double the number from the previous year.
Physical impersonation is also an emerging and deeply concerning tactic. In some cases, criminals pose as police officers and knock on doors to gather information about victims’ bank accounts and credit cards, which are later emptied. In response, buildings in affected areas have begun displaying public notices to raise awareness and warn residents.
INVESTMENT FRAUD
Get-rich-quick schemes, or investment scams, are possibly among the oldest in the criminal playbook, but today the internet has supercharged fraudsters’ reach and made them a far bigger problem.
Online investment forums on cryptocurrencies, growth trends, and special opportunities, as well as the ability to send emails and make phone calls using addresses and numbers stolen in data breaches, give fraudsters almost unlimited reach. AI analytics programs allow them to scrape the internet for personal information that helps them craft highly personalized approaches to potential victims, while Gen AI gives them the tools to create convincing false personas, websites, LinkedIn profiles, and more, making it harder to tell fake from real. These scams are about getting targets to “invest” in spurious schemes, with the target authorizing the payment transfers themselves.
Typically, an investment scam target is groomed slowly, and the technique is often combined with a romance scam. Payments tend to increase over time. Fraud watchers expect 2025 to bring a big increase in this activity, particularly in the cryptocurrency area.
Already viewed by Interpol as one of the more prevalent scams in Europe, its return on effort for the scammers is high. According to UK bank Barclays, investment scams last year netted the highest average sum per scam – £15,564 – but represented just 4 percent of all such activity. This makes it highly attractive to scammers and therefore likely to become more common.
Despite the multitude of scams, they often share common characteristics. At the start, scams rely on social engineering to manipulate the victim’s emotions, trust, or sense of urgency to override rational thinking. Scammers create a false sense of connection, authority, or opportunity, then guide the victim step by step toward a financial loss.
All scams exploit asymmetry: the scammer knows it’s a con, but the victim doesn’t, and by the time they realize, it’s often too late.
Whether run by a solo scammer or a transnational crime ring, scams thrive on deception, pressure, and isolation, often cutting the victim off from others who might warn them. And finally, most scams are designed for scale, built to be easily replicated, automated, or adapted to different targets across borders and platforms.
From a transaction perspective, many scams show consistent behavioral and technical commonalities. There is often a new beneficiary introduced into the victim’s banking environment, not necessarily located in a high-risk or unusual country, sometimes with foreign currency involved. The transaction amount might appear unusual at first, but as the scam progresses and the victim becomes conditioned, the pattern may start to seem normal. Indeed, most of the time there is a velocity aspect: multiple payments in quick succession.
Interestingly, scammers often “coach” victims to execute the payment themselves using their usual device and session context, making the activity harder to flag as anomalous. In more advanced cases, scammers deploy remote desktop access tools (like AnyDesk or TeamViewer) to monitor or control the victim’s activity.
The beneficiary accounts receiving scam funds are frequently reused across scams and across banks, often linked to mule networks. Finally, while tactics are shared, the target profiles can vary widely from vulnerable seniors in romance scams to crypto-savvy investors, or even business finance teams in BEC frauds.
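As a rough illustration of the transactional signals described above, a new-beneficiary check combined with a payment-velocity window could be sketched as follows. The thresholds, field names, and the `score_payment` helper are hypothetical assumptions for illustration, not an actual product rule:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative thresholds (assumptions, not tuned values).
WINDOW = timedelta(hours=24)
MAX_PAYMENTS_IN_WINDOW = 3

known_beneficiaries = defaultdict(set)   # customer_id -> known beneficiary accounts
recent_payments = defaultdict(list)      # customer_id -> timestamps of recent transfers

def score_payment(customer_id, beneficiary_account, timestamp):
    """Return risk signals ("new_beneficiary", "high_velocity") for one payment."""
    signals = []
    if beneficiary_account not in known_beneficiaries[customer_id]:
        signals.append("new_beneficiary")
        known_beneficiaries[customer_id].add(beneficiary_account)
    # Keep only payments inside the rolling window, then check velocity.
    recent = [t for t in recent_payments[customer_id] if timestamp - t <= WINDOW]
    recent.append(timestamp)
    recent_payments[customer_id] = recent
    if len(recent) > MAX_PAYMENTS_IN_WINDOW:
        signals.append("high_velocity")
    return signals
```

In practice, such signals would feed a broader scoring engine rather than trigger blocks on their own, since a new beneficiary alone is common in legitimate activity.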
Certain scam narratives tend to resurface depending on the type of fraud such as fake Airbnb rentals, marketplace listings for popular items like Thermomix, PlayStations, or iPhones, and too-good-to-be-true investment opportunities in cryptocurrencies like Bitcoin.
Despite decades of investment in security infrastructure, traditional banks in the UK remain surprisingly vulnerable to fraud, accounting for a significant portion of the country’s £1.17 billion in annual fraud losses, as reported by UK Finance in 2023. As fintechs and criminals evolve rapidly, legacy institutions appear to lag behind, often reacting to fraud instead of preventing it. The reasons for this systemic failure are complex, rooted in institutional inertia, outdated technologies, and fragmented strategies.
1. Legacy Systems: The Weight of Technical Debt
Traditional banks are burdened by aging core systems, some dating back to the 1970s. These mainframe-based architectures make it challenging to integrate modern fraud detection tools, such as real-time behavioural analytics, biometric verification, and machine learning-based anomaly detection. Attempts to modernize are often hindered by the risk of downtime, compliance implications, and the sheer complexity of interdependent systems. Consequently, fraud prevention mechanisms tend to be layered inefficiently, leading to siloed alerts, high false positives, and poor customer experiences.
2. Reactive, Not Predictive, Risk Models
Traditional banks have long relied on rules-based systems that flag known fraud patterns. While these systems can be useful for identifying historical fraud typologies, they are ineffective against emerging threats, such as synthetic identity fraud, first-party fraud, and Authorized Push Payment (APP) scams involving social engineering. Modern fraud vectors are dynamic, adaptive, and often psychological in nature. Yet, many banks still fail to integrate real-time customer behaviour analytics or predictive modelling that could intercept fraud before it occurs, resulting in a dependence on after-the-fact mitigation rather than true prevention.
3. Siloed Data and Lack of Cross-Channel Visibility
Fraud does not respect silos, yet many banks still maintain them. Data fragmentation across product lines (retail, SME, corporate), channels (online, mobile, branch), and jurisdictions hampers the comprehensive view needed to identify coordinated attacks or suspicious patterns. For example, a fraudster testing a stolen card in an e-commerce transaction might later exploit that same account through the bank’s mobile app. Without a unified fraud intelligence hub, multi-channel fraud often goes unnoticed.
4. Resource and Talent Constraints in Financial Crime Teams
While institutions allocate billions to cyber and fraud tech, many still underinvest in skilled fraud analysts, threat intelligence functions, and human-centric response teams. Those that do have such teams often overwhelm them with excessive manual reviews or disconnected tools, reducing both effectiveness and morale. The growing sophistication of fraud—exemplified by deepfakes, AI-generated phishing campaigns, and synthetic identities—demands interdisciplinary expertise that many banks struggle to attract or retain.
5. Limited Collaboration and Information Sharing
Despite banks being part of initiatives like the Joint Fraud Taskforce, CIFAS, and the National Economic Crime Centre (NECC), many of them still do not fully utilize these platforms. Concerns about reputational damage, data protection, and competitive sensitivities discourage complete transparency. This limitation hinders the industry’s ability to collectively identify and respond to fraud patterns in real time, which leaves them vulnerable to emerging threats. In contrast, fraudsters collaborate effortlessly across borders and platforms.
Effective fraud and scam detection available today relies on four foundational pillars: deep intelligence on both the customer and the beneficiary, a network-wide perspective (sharing and consuming), seamless integration with external scoring systems, and the intelligent application of AI.
Scam scoring models should be deployed, leveraging behavioral patterns, KYC data and transaction history, to assess the likelihood that an individual is being manipulated. This is especially important in light of the high revictimization rate seen among scam victims.
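A scam-likelihood score of this kind can be sketched, for illustration only, as a logistic combination of binary risk signals drawn from behaviour, KYC data, and transaction history. The features, weights, and bias below are invented assumptions; a production model would be trained on labelled scam cases:

```python
import math

# Invented feature weights (assumptions for illustration only).
WEIGHTS = {
    "new_beneficiary": 1.2,
    "coached_session": 2.0,       # e.g. unusually long session with pauses
    "prior_scam_victim": 1.5,     # revictimization is common among scam victims
    "amount_above_profile": 0.8,  # amount deviates from the customer's norm
}
BIAS = -3.0

def scam_likelihood(signals):
    """Map a set of binary risk signals to a probability via a logistic function."""
    z = BIAS + sum(WEIGHTS[s] for s in signals)
    return 1.0 / (1.0 + math.exp(-z))
```

The "prior_scam_victim" feature reflects the point above: because revictimization rates are high, a customer’s scam history is itself a strong predictor.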
Real-time detection is a cornerstone of modern scam defense. It should draw from a combination of transactional data, digital banking session behavior, device fingerprinting, and biometric signals, all analyzed through advanced AI models. These models must have the capacity to seamlessly integrate with external risk signals or “shields”.
Additionally, detection systems should support custom business logic, such as analyzing payment narratives to flag hidden risk even when metadata appears otherwise benign.
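As a minimal sketch of such payment-narrative analysis, a keyword screen over the free-text payment reference might look like the following. The term list and the `narrative_flags` helper are illustrative assumptions; real systems would use richer language models rather than fixed keywords:

```python
import re

# Terms commonly associated with scam-coached transfers (assumed list).
RISK_TERMS = [
    r"\bsafe account\b",
    r"\binvestment\b",
    r"\bcrypto\b",
    r"\burgent\b",
]
PATTERN = re.compile("|".join(RISK_TERMS), re.IGNORECASE)

def narrative_flags(reference: str):
    """Return the risky terms found in a free-text payment reference."""
    return [m.group(0).lower() for m in PATTERN.finditer(reference)]
```

Even this crude screen shows the principle: the narrative can reveal risk (“transfer to safe account”) when amount, beneficiary, and channel metadata all look benign.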
Equally important is the sharing of scam intelligence across a trusted network of financial institutions, regulators, law enforcement, and technology providers. When aggregated and applied effectively, this collective intelligence becomes a powerful detection engine, significantly improving the detection mechanisms.
When a potential scam is detected, the remediation process must go beyond simply blocking a transaction. In many cases, freezing the entire account is necessary to prevent scammers from rapidly extracting remaining funds through alternate channels.
To respond swiftly and effectively, banks must be equipped with intuitive case management tools that enable fast investigations and informed decisions. Once a scam is flagged, teams need immediate access to contextual insights and psychological response frameworks that can help “unhook” victims, not just warning them of the threat, but explaining how manipulation occurred and how to regain control.
Finally, prevention, education, and what we refer to as “digital hygiene” or a strong “digital conscience” are essential and must not be overlooked. Empowering users to build safer online habits, recognize early warning signs, and understand the anatomy of a scam is just as critical as any technical measure. In the end, a well-informed user remains the first and strongest line of defense. On top of that, choosing the right fraud operating model—whether centralized, decentralized, or hybrid—is equally important.
To support the implementation of these measures, we’ve developed a Fraud and Scam Risk Prevention Checklist for Banks. It offers practical guidance for assessing existing controls, identifying gaps, and reinforcing your institution’s approach to scam and fraud risk.