By the time you finish reading this paragraph, a new synthetic identity may have just been created and approved by a financial institution somewhere in the world. Powered by generative AI, identity fraud is no longer a slow-moving threat. It’s fast, scalable, and dangerously convincing.
Fraud losses reported by consumers and companies topped $12.5 billion in 2024, a 25% increase over 2023, according to KPMG.
Much of this surge is driven by deepfakes and AI-generated documents that can mimic real people with alarming accuracy. Criminals can now create entirely fake personas within minutes, complete with doctored IDs, AI-generated selfies, and scripted responses designed to pass live authentication checks.
And consumers are taking notice. Over three-quarters (78%) of U.S. adults are worried about deepfakes in financial fraud, according to a recent survey by IDScan.net. Yet fewer than half feel confident that today’s ID verification systems can stop them.
This technological shift is exposing flaws in traditional KYC (Know Your Customer) and AML (Anti-Money Laundering) practices. Static document checks and outdated onboarding protocols are no match for AI systems that can simulate blinking or facial expressions on demand. If fraud is evolving, the tools to stop it must evolve too.