
The latest research from Sumsub shows that deepfake fraud is scaling fast across the Asia Pacific region. Attacks are no longer isolated scams but part of a coordinated and increasingly professional criminal ecosystem.
According to the study, deepfake activity surged 2,100 percent in the Maldives, 408 percent in Malaysia and nearly 200 percent across several other APAC markets. Hong Kong saw a 147 percent rise, despite an overall drop in general fraud levels due to stronger regulation.
While global fraud volume is stabilising, high-quality and hard-to-detect scams rose 180 percent year on year. APAC is now the primary region for advanced identity manipulation using synthetic data and AI-driven techniques.
The findings draw on millions of verification checks and more than four million recorded fraud attempts. Researchers highlight a significant trend: fraud is evolving from manual impersonation to industrial-scale deception powered by deepfake video and voice synthesis.
Synthetic Identities, Deepfake Videos and AI Fraud Agents
One in four people in APAC has been targeted for mule recruitment, and 69 percent of businesses in the region say they have been affected by fraud.
New tactics include telemetry tampering and infrastructure manipulation. Fraudsters now interfere with software development kits, device signals and APIs to bypass checks.
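One common defence against this kind of SDK and telemetry tampering is to cryptographically sign device payloads so that any modification is detectable server-side. The sketch below is a minimal illustration of that general technique, not Sumsub's or any vendor's actual implementation; the key handling and field names are hypothetical.

```python
import hmac
import hashlib
import json

# Hypothetical shared secret; real systems provision keys per device or
# per app install, often backed by hardware attestation.
SHARED_KEY = b"demo-key"

def sign_telemetry(payload: dict) -> str:
    """Produce an HMAC-SHA256 signature over a canonical JSON encoding."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify_telemetry(payload: dict, signature: str) -> bool:
    """Recompute the signature; constant-time compare defeats timing leaks."""
    return hmac.compare_digest(sign_telemetry(payload), signature)

# A payload tampered with in transit no longer matches its signature.
original = {"device_id": "abc123", "camera_live": True}
sig = sign_telemetry(original)
tampered = {"device_id": "abc123", "camera_live": False}
```

Here `verify_telemetry(original, sig)` returns `True` while `verify_telemetry(tampered, sig)` returns `False`, so a check that only accepts signed, unmodified telemetry would reject the interference described above.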
Emerging AI agents can complete entire verification processes, create synthetic documents, submit deepfake videos and mimic real users with convincing accuracy. These fraud agents are expected to increase in sophistication throughout 2026.
Malaysia, Pakistan, Indonesia and the Philippines remain hotspots due to rising digital adoption and insufficient regulation. Meanwhile, Hong Kong, Singapore and Australia have curbed general fraud levels, though deepfake techniques are becoming harder to detect.
The report confirms that static verification is no longer effective. Companies must move to adaptive, multi-layered systems that assess signals in real time.
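As a toy illustration of what "adaptive, multi-layered" assessment means in practice, the sketch below combines several independent verification signals into a single decision, so that one compromised layer (say, a convincing deepfake video paired with tampered device telemetry) is not enough to pass. All names, weights, and thresholds are illustrative assumptions, not the report's or any vendor's method.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    liveness_score: float     # 0.0-1.0 from a liveness/deepfake check
    device_integrity: float   # 0.0-1.0, lower if telemetry looks tampered
    document_score: float     # 0.0-1.0 from document authenticity checks
    behaviour_score: float    # 0.0-1.0 from interaction patterns

def assess(signals: VerificationSignals) -> str:
    """Layered decision: any single failing layer forces rejection,
    and a mediocre overall picture is escalated to manual review."""
    scores = [signals.liveness_score, signals.device_integrity,
              signals.document_score, signals.behaviour_score]
    if min(scores) < 0.3:              # one clearly failed layer is fatal
        return "reject"
    if sum(scores) / len(scores) < 0.7:  # weak overall evidence
        return "manual_review"
    return "approve"
```

For example, a session with a strong deepfake video (`liveness_score=0.9`) but tampered telemetry (`device_integrity=0.2`) is rejected, which is the point of layering: a static, single-check flow would have approved it.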
SentryBay Stops Deepfake Fraud Before It Starts
While the report does not confirm how individual frauds were executed, it aligns with what cybersecurity teams are seeing globally. Threat actors increasingly steal on-screen video and voice content to build deepfake models capable of bypassing verification and defrauding users.
SentryBay’s Armored Client prevents this kind of data theft at the endpoint. It blocks video and audio capture at the OS level, stopping malware from gathering material used to train deepfake systems.
“We now know that over 50 percent of finance professionals in the US and UK have been targeted by a deepfake scam,” says SentryBay CEO Tim Royston-Webb, citing recent research findings from Medius. “Even more alarming is that 43 percent of those admitted they fell for it. These aren’t just embarrassing moments, they are costly, damaging breaches that weaken trust across the industry.”
Armored Client ensures that nothing displayed on screen or captured by microphone can be used to build fake personas or bypass verification. As deepfake fraud continues to scale, protecting that source material is the most important step.

