
Cyber Threat Radar – Deepfake fraud is rapidly becoming the dominant driver of biometric identity attacks across Africa, according to a new report from identity verification firm Smile ID.
The company’s 2026 Digital Identity Fraud Report reveals that artificial intelligence is reshaping the economics of identity crime.
The findings show that nearly 90 percent of rejected biometric verification attempts in Southern Africa are linked to AI-assisted impersonation and spoofing techniques. These attacks rely heavily on deepfake technology, face-swap tools and other synthetic identity methods designed to deceive identity verification systems.
Impersonation attacks accounted for 47 percent of rejected verifications; in these cases, the selfie provided does not match the claimed identity. Spoofing attempts accounted for another 40 percent and included deepfake videos and manipulated facial images designed to bypass liveness detection.
Together, these figures confirm that deepfake fraud is now the primary threat facing biometric identity systems in the region, while traditional document-based fraud represents only a small share of attempts.
Deepfake Fraud Targets the Identity Verification Pipeline
The report shows that fraudsters are no longer focusing solely on identity documents or fake photos. Instead they are attacking the infrastructure that supports digital identity verification.
Fraud networks now manipulate operating systems, mobile devices and verification sessions to bypass detection. These techniques allow attackers to interfere with the identity capture process itself rather than simply falsifying the image presented.
Smile ID recorded more than 100,000 injection-style attacks each month during 2025. These attacks relied on emulators, virtual cameras and modified device environments that inject manipulated video streams directly into verification systems.
This represents a major shift from traditional fraud: attackers are now exploiting weaknesses across the entire digital identity pipeline rather than mounting isolated impersonation attempts.
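A common first line of defense against injection attacks is checking the reported capture device against known virtual-camera and emulator signatures. The sketch below illustrates the idea only; the device names, model strings and function are invented for the example, and production verification SDKs layer on far stronger signals such as hardware attestation and challenge-response liveness.

```python
# Illustrative sketch: flag injection attempts by inspecting the capture
# environment. The signature lists here are examples, not exhaustive.
KNOWN_VIRTUAL_CAMERAS = {"obs virtual camera", "manycam", "v4l2loopback"}
KNOWN_EMULATOR_MODELS = {"generic_x86", "google_sdk", "android sdk built for x86"}

def assess_capture_session(device_name: str, device_model: str) -> str:
    """Return a coarse risk label for a verification session."""
    name = device_name.strip().lower()
    model = device_model.strip().lower()
    if name in KNOWN_VIRTUAL_CAMERAS:
        return "reject: virtual camera detected"
    if model in KNOWN_EMULATOR_MODELS:
        return "reject: emulator environment detected"
    return "pass: no injection indicators"
```

For instance, a session reporting "OBS Virtual Camera" as its capture source would be rejected before the selfie is ever compared against the claimed identity.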
Deepfake Fraud Expands Beyond Onboarding
Another key finding is that authentication fraud is now more than five times as common as onboarding fraud.
In practical terms, this means attackers increasingly target existing accounts rather than attempting to create new ones. Once an account is verified, criminals attempt to take control during logins, password resets or device changes.
The use of automated AI tools allows fraud networks to reuse stolen biometric identities across multiple platforms. These tools enable attackers to take over accounts during active sessions and transfer funds across financial platforms at scale.
Smile ID identified extreme examples of identity reuse. In one case the same face appeared more than 12,000 times across multiple services. In another incident attackers attempted over 1,000 account registrations using a single identity within just 30 minutes.
These patterns show that deepfake fraud has moved beyond individual scammers. Organized networks now run large scale automated identity operations.
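Reuse patterns like the 12,000-appearance case are typically caught by comparing each new face embedding against prior enrollments. The sketch below shows the underlying similarity count with plain cosine similarity; the threshold and toy two-dimensional embeddings are assumptions for illustration, since real systems use high-dimensional embeddings and indexed search.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def count_reuse(new_embedding, known_embeddings, threshold=0.9):
    """Count prior enrollments whose face embedding matches the new one."""
    return sum(
        1 for e in known_embeddings
        if cosine_similarity(new_embedding, e) >= threshold
    )
```

A count far above zero, especially across unrelated accounts, is the signal that a single stolen face is being run through an automated registration pipeline.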
Rapid Digital Growth Creates New Attack Surfaces
Africa’s digital economy is expanding at a remarkable pace. Over the past decade the percentage of adults holding financial accounts has grown from 34 percent to nearly 60 percent.
This expansion created more than 200 million new accounts across the continent. While this growth has unlocked economic opportunity, it has also created new vulnerabilities.
Many identity systems still operate on a one-time verification model. Fraud networks now exploit the gaps that appear later in the customer lifecycle.
The report draws on over 200 million identity verification checks across 37 industries and more than 35 countries. The data reveals coordinated fraud networks operating across multiple platforms simultaneously.
Why Deepfake Fraud Is Becoming Industrialised
Deepfake fraud has become easier and cheaper due to rapid advances in artificial intelligence.
High quality face swap tools, synthetic video generators and voice cloning software are now widely accessible. These tools allow criminals to scale identity attacks with minimal cost or technical expertise.
The result is a new era of fraud operations that resemble industrial production lines rather than isolated criminal attempts.
Networks reuse stolen biometric data repeatedly, automate attack workflows and target the moments where financial value concentrates. Login flows, device changes and high-value withdrawals are now the preferred attack points.
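The usual countermeasure to these lifecycle attacks is step-up verification: demanding fresh biometric proof at the high-value moments rather than relying on the one-time onboarding check. The event names and the seven-day window below are hypothetical choices for the sketch, not anything prescribed by the report.

```python
# Hypothetical event taxonomy; a real platform would define its own.
STEP_UP_EVENTS = {"password_reset", "new_device_login", "high_value_withdrawal"}

def requires_reverification(event: str, account_age_days: int) -> bool:
    """Decide whether an event should trigger a fresh biometric check
    instead of trusting the original onboarding verification."""
    if event in STEP_UP_EVENTS:
        return True
    # Re-check even routine logins on very new accounts, which are the
    # likeliest products of automated bulk registration.
    return event == "login" and account_age_days < 7
```

The design choice is to concentrate friction where attackers concentrate value, so that most legitimate sessions never see an extra check.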
Protecting Identity Systems from AI-Powered Data Theft
While the Smile ID report focuses on deepfake fraud within biometric systems, it highlights a broader challenge facing organizations worldwide.
Modern attackers increasingly rely on AI-powered malware capable of capturing screen content and extracting sensitive data using optical character recognition (OCR) and structured JSON extraction.
Even when secure systems prevent direct database access, attackers can still harvest the information visible on screen.
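To make the threat concrete, the toy sketch below shows why screen-scraped text is so valuable: a trivial parsing step turns raw OCR output into structured JSON that can be harvested at scale. The field labels and patterns are invented for the illustration and do not come from the report or any specific malware.

```python
import json
import re

def structure_ocr_text(ocr_text: str) -> str:
    """Toy illustration of turning raw OCR output into structured JSON,
    the step that makes screen-scraped data immediately machine-usable.
    The field patterns are invented for this example."""
    fields = {
        "account": re.search(r"Account:\s*(\d+)", ocr_text),
        "name": re.search(r"Name:\s*([A-Za-z ]+)", ocr_text),
    }
    return json.dumps({k: m.group(1).strip() if m else None
                       for k, m in fields.items()})
```

Because the extraction happens on whatever the display renders, database encryption and access controls are irrelevant at this layer, which is why the defensive focus shifts to the endpoint and the screen itself.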
SentryBay’s Armored Client addresses this threat by protecting the endpoint itself. The platform blocks screen-capture attempts and prevents malware from extracting visible data through OCR-based surveillance techniques.
By ensuring sensitive information cannot be captured from the display layer, Armored Client prevents attackers from building the datasets required to train deepfake systems or automate identity fraud.
Tim Royston-Webb, CEO of SentryBay, warns that organizations must rethink how identity systems are protected. “Deepfake fraud is only possible when attackers can gather enough visual and behavioral data to train their models,” he explains. “If you protect what appears on screen, you remove the raw material these attacks depend on.”
As identity verification continues to move online, protecting the endpoint and the screen itself will become a critical part of defending against the next generation of digital fraud.

