Cyber-enabled fraud (CEF), defined as fraud schemes facilitated or significantly amplified by digital technology, has accelerated beyond the capacity of traditional fraud programs. Recent data illustrates how quickly the threat is expanding and why existing approaches are struggling to keep pace.
In February 2026, the Financial Action Task Force (FATF) released “Cyber-Enabled Fraud: Digitalization and Money Laundering, Terrorist Financing and Proliferation Financing Risks.” This report calls on governments, regulators, financial intelligence units, supervisors, and financial institutions to strengthen their capabilities to detect, prevent, and disrupt CEF.
The scale of the problem is stark. In its 2025 annual report released in April 2026, the FBI's Internet Crime Complaint Center (IC3) reported $20.8 billion in total losses from internet crime—a 26% increase year over year. Fraud schemes facilitated or significantly amplified by digital technology accounted for 85% of those losses, totaling $17.6 billion across nearly 453,000 complaints.
Even these figures underestimate the true impact. Many incidents never reach IC3, and financial institutions often absorb direct losses before they appear in public statistics. Rising dispute volumes, customer trust challenges, and operational strain are already daily realities across the sector.
Many financial institutions are running fraud programs that were never designed to detect CEF. These programs were built for an earlier threat environment where fraud was typically transactional, discrete, and visible only within a single institution’s systems. CEF breaks those assumptions. As a result, programs optimized for yesterday’s threats are poorly equipped to identify today’s most significant sources of loss.
The challenge with CEF isn’t the absence of actionable intelligence; it’s that many organizations aren’t designed to capture and operationalize it. CEF manifests differently from the fraud types many detection systems were built to address. By the time a traditional red flag appears, funds may already be in transit or irreversibly settled. Faster payments rails further compress intervention windows and fragment visibility across institutions, allowing organized CEF activity to advance before any one firm can detect the full pattern and respond with traditional controls.
The outcome is predictable. Alert engines fire on the patterns they were built to find, not the patterns that matter now. As alert volumes rise and investigation teams spend more time on low-value cases, genuine CEF activity moves through undetected. This is a program design and calibration issue, and incremental operational improvements alone will not fix it. What's required is a broader assessment of design, governance, and calibration to ensure that programs reflect how CEF actually operates, not how fraud used to look.
CEF is systemic across all types of financial institutions. What varies isn't the nature of the threat but where and how it enters each institution, and therefore where and how controls should be prioritized. Each institution's business model, customer base, and operational design shape the CEF typologies it's most likely to encounter.
This isn't to suggest that CEF appears only at isolated points; it surfaces across the full customer lifecycle in every institution. But understanding where exposure concentrates is essential to prioritizing controls. CEF detection is most effective when fraud program design reflects an institution's specific exposure rather than relying on a one-size-fits-all approach.
For depository institutions, CEF exposure tends to concentrate at account opening and payment authorization. At onboarding, AI-generated synthetic identities are bypassing standard identity verification, creating accounts that are then used to receive and send illicit funds. At the payment layer, account takeover allows criminals to initiate unauthorized transactions from legitimate customer accounts after stealing credentials or manipulating customers through social engineering.
In business banking, Business Email Compromise (BEC) drives similar risk by manipulating employees into authorizing fraudulent payments. Across both typologies, mule account networks are the backbone: they move proceeds, accelerate velocity, and create simultaneous fraud losses across multiple institutions.
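The mule-account behavior described above (proceeds in, proceeds quickly back out) lends itself to a simple velocity heuristic. The sketch below is illustrative only; the ledger schema, field names, and thresholds are assumptions, not a production rule.

```python
# Illustrative single-account ledger: (hours since first activity, signed amount).
# Positive amounts are credits (funds in), negative amounts are debits (funds out).
ledger = [(0, 2000), (1, -1950), (5, 3000), (6, -2900)]

def looks_like_mule(entries, passthrough=0.9, max_dwell_hours=4):
    """Heuristic mule-account check: nearly all inbound value leaves again,
    and it leaves quickly (high pass-through ratio, low average dwell time).
    Thresholds here are placeholders to be tuned per institution."""
    inflow = sum(a for _, a in entries if a > 0)
    out_hours = [h for h, a in entries if a < 0]
    if inflow == 0 or not out_hours:
        return False
    outflow = -sum(a for _, a in entries if a < 0)
    in_hours = [h for h, a in entries if a > 0]
    # Average gap between money arriving and money leaving.
    dwell = sum(out_hours) / len(out_hours) - sum(in_hours) / len(in_hours)
    return outflow >= passthrough * inflow and dwell <= max_dwell_hours
```

A real implementation would pair flows into ordered in/out chains rather than averaging timestamps, but even this coarse version separates pass-through accounts from ordinary customer activity.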
FinTechs and digital payment platforms tend to face concentrated CEF exposure at the point of onboarding, where streamlined, frictionless account opening becomes a target for identity-based fraud at scale. These institutions often collect rich signals at onboarding, including device fingerprinting, IP geolocation, and behavioral biometrics, that can distinguish organized account opening from legitimate customer behavior.
But these signals are frequently discarded instead of being carried forward into ongoing monitoring logic. That can result in an institution possessing the data needed to detect organized fraudulent activity but lacking the downstream mechanisms to operationalize it.
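One way to carry an onboarding signal forward is to cluster new accounts on a shared device fingerprint, a pattern strongly associated with organized account opening. The record schema and threshold below are hypothetical, intended only to show the shape of the logic.

```python
from collections import defaultdict

# Hypothetical onboarding records: account ID plus signals captured at
# account opening. Field names are illustrative, not a real vendor schema.
onboarding_records = [
    {"account": "A1", "device_id": "dev-9f3", "ip_country": "US"},
    {"account": "A2", "device_id": "dev-9f3", "ip_country": "US"},
    {"account": "A3", "device_id": "dev-9f3", "ip_country": "US"},
    {"account": "A4", "device_id": "dev-11c", "ip_country": "GB"},
]

def flag_device_clusters(records, max_accounts_per_device=2):
    """Flag devices that opened more accounts than a legitimate customer
    plausibly would -- an onboarding signal reused in ongoing monitoring."""
    by_device = defaultdict(list)
    for r in records:
        by_device[r["device_id"]].append(r["account"])
    return {dev: accts for dev, accts in by_device.items()
            if len(accts) > max_accounts_per_device}

flagged = flag_device_clusters(onboarding_records)
```

The same join works retrospectively: when one account in a device cluster is confirmed fraudulent, the cluster gives investigators the rest of the network.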
For Money Services Businesses (MSBs), CEF exposure sits in the core transaction product. Rapid, often cross-border transfers make MSB networks a natural channel for moving fraud proceeds before they can be traced or recovered. That can result in a very short detection window between the fraudulent event and the irreversible movement of funds. In this environment, detection logic that isn't tuned to the velocity and destination patterns associated with CEF proceeds movement will miss the threat almost entirely.
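A velocity-and-destination rule of the kind described above can be sketched as follows. The transfer schema, the one-hour window, and the 80% forwarding ratio are illustrative assumptions to be calibrated against an institution's own loss data.

```python
from datetime import datetime, timedelta

# Illustrative transfer history for one MSB customer; schema is assumed.
transfers = [
    {"ts": datetime(2026, 5, 1, 10, 0),  "direction": "in",  "amount": 4800, "country": "US"},
    {"ts": datetime(2026, 5, 1, 10, 22), "direction": "out", "amount": 4700, "country": "NG"},
]

def rapid_cross_border_forward(txns, window=timedelta(hours=1), ratio=0.8):
    """Flag an inbound transfer that is largely forwarded cross-border
    within a short window -- a common proceeds-movement pattern."""
    inbound = [t for t in txns if t["direction"] == "in"]
    outbound = [t for t in txns if t["direction"] == "out"]
    for i in inbound:
        for o in outbound:
            if (timedelta(0) <= o["ts"] - i["ts"] <= window
                    and o["country"] != i["country"]
                    and o["amount"] >= ratio * i["amount"]):
                return True
    return False
```

Because the detection window is so short, a rule like this is only useful if it runs in-line with payment authorization rather than in next-day batch review.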
The most common gaps in effective CEF detection fall into two categories: governance and detection architecture. Many fraud programs perform well at the transactional level but lack clear ownership of the strategic question: Is the overall program designed and calibrated to detect CEF?
Ownership of key responsibilities is frequently fragmented across business units. Proper governance establishes clear, end-to-end accountability for how the program is designed and calibrated to detect CEF, rather than leaving that question split across fraud, compliance, and technology functions.
A related and often overlooked gap is the disconnect between threat intelligence and fraud operations. Where threat intelligence functions exist, they’re typically housed within cybersecurity or information security teams and focus on network intrusion, data breaches, and system-level threats. These teams can often identify emerging CEF methods such as new social engineering playbooks, mule recruitment tactics, and fraud tools circulating in criminal marketplaces. But this intelligence rarely makes its way into fraud detection logic in a timely or structured way. Bridging that gap requires a deliberate governance mechanism, not just goodwill between teams.
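The intel-to-operations handoff described above can be made structural rather than ad hoc. This hypothetical sketch routes structured intel items to the detection controls that can consume them, and, importantly, surfaces the items no control can absorb instead of silently dropping them. All field names and control names are invented for illustration.

```python
# Hypothetical structured threat-intel feed items.
intel_items = [
    {"type": "mule_recruitment_domain", "value": "fast-cash-jobs.example"},
    {"type": "fraud_tool", "value": "otp-bot-v2"},
]

# Which intel types the fraud engine currently has a control for (assumed).
SUPPORTED = {"mule_recruitment_domain": "referral_domain_watchlist"}

def to_detection_updates(items):
    """Translate intel items into detection-control updates. Items with no
    matching control are returned separately so the coverage gap is visible
    to governance, not lost between teams."""
    updates, unrouted = [], []
    for item in items:
        target = SUPPORTED.get(item["type"])
        if target:
            updates.append({"control": target, "value": item["value"]})
        else:
            unrouted.append(item)
    return updates, unrouted

updates, unrouted = to_detection_updates(intel_items)
```

The `unrouted` list is the governance artifact: a recurring queue of intelligence the program cannot yet operationalize is exactly the evidence needed to prioritize new detection capabilities.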
CEF schemes generate specific, well-documented behavioral signals.
The central question for any fraud program is whether those signals are built into alert generation logic with detection thresholds calibrated at levels that will identify CEF activity without overwhelming investigation teams. In many institutions, alert logic has been tuned against historical loss patterns that predate the current CEF environment. That results in models triggering on patterns that no longer represent primary loss drivers, while excessive alert volumes consume investigation capacity that should be focused on genuine CEF activity.
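Calibrating thresholds against investigation capacity, rather than against historical loss patterns alone, can be framed as a simple optimization: choose the lowest score threshold whose resulting alert volume the team can actually work. The sketch below assumes per-day model risk scores; names and values are illustrative.

```python
def calibrate_threshold(daily_scores, daily_capacity):
    """Return the lowest alert threshold whose volume fits analyst capacity,
    so investigation effort concentrates on the highest-risk activity.
    Assumes a non-empty list of per-day risk scores and alerting on
    score >= threshold."""
    ranked = sorted(daily_scores, reverse=True)
    if daily_capacity >= len(ranked):
        return min(ranked)  # Capacity covers everything; alert on all.
    # Threshold equals the lowest score still inside capacity.
    return ranked[daily_capacity - 1]
```

In practice this runs periodically so the threshold tracks shifts in score distribution, which is the calibration discipline the surrounding text argues most programs lack.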
Addressing these gaps requires action on both fronts: governance and detection architecture.
Cyber‑enabled fraud has changed the economics, speed, and structure of fraud prevention. Many institutions have the data needed to combat it but are relying on programs designed for a simpler environment. Closing the gap requires stepping back from incremental fixes and re‑examining how fraud programs are governed, calibrated, and connected to the realities of today’s threat landscape.
Guidehouse is a global AI-led professional services firm delivering advisory, technology, and managed services to the commercial and government sectors. With an integrated business technology approach, Guidehouse drives efficiency and resilience in the healthcare, financial services, energy, infrastructure, and national security markets.