“Your voice is your password.” That was the promise.
What was long seen as a simple and foolproof authentication method is now being challenged. As synthetic voice and video attacks grow more convincing, banks are rethinking how far they can rely on biometric authentication. That concern was raised by OpenAI CEO Sam Altman, who, speaking at a Federal Reserve conference last week, warned of an impending fraud crisis in financial services due to AI impersonations.
Analysts and bankers agree: biometrics, including voice and facial recognition, are vulnerable as AI-generated voice and video clones become good enough to fool the systems that rely on them. The solution lies in multilayered authentication and better detection tools. But it is a constant race against fraudsters, whose tool kits keep improving.
“We’ve known for a long time that voice ID is spoofable,” said Jim Mortensen, strategic advisor in the fraud and AML practice at Datos Insights. “The fraudsters will match the capabilities that solution providers develop and get better as well. It’s going to be a constant push and pull.”
Reporters have shown for years that voice ID can be successfully impersonated. In 2023, for instance, a Vice reporter used an AI-generated clone of his own voice to get past his bank’s voice ID system.
Managing the risk of biometric identification systems
Both voice- and video-based biometrics can be used for authentication, whether for signing into accounts, identifying customers who contact call centers or onboarding new customers. Users may be required to match a selfie or complete a video liveness test to verify the ID documents they’ve submitted or have on file, Mortensen added.
Bankers acknowledge that AI-based voice and video cloning pose threats, and they’re working on solutions that involve stepped-up verification, such as combining multiple forms of ID verification (voice plus a PIN, or voice plus a security question) along with liveness and deepfake detection technologies, Mortensen said.
A multifactor approach that combines these modalities with behavioral biometrics (that is, signals of how users type or swipe) is a best practice.
“A layered strategy is so important. Combine voice or video with behavioral and other components — it ups the level of difficulty for the fraudster,” he said.
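To make the layered approach concrete, here is a minimal Python sketch of a step-up authentication decision that combines a voice-match score, a liveness/deepfake-detection score and a knowledge-factor fallback. The thresholds, field names and helper functions are illustrative assumptions, not any vendor’s actual API.

```python
# Illustrative sketch of layered authentication: a voice-match score is
# combined with a liveness/deepfake-detection score, and marginal results
# step up to a knowledge factor ("something you know"). All thresholds
# and names here are hypothetical.
from dataclasses import dataclass

@dataclass
class AuthSignals:
    voice_match: float   # 0.0-1.0 similarity to the enrolled voiceprint
    liveness: float      # 0.0-1.0 confidence the audio is live, not synthetic
    pin_verified: bool   # outcome of a PIN or security-question challenge

def authenticate(s: AuthSignals) -> str:
    if s.voice_match >= 0.90 and s.liveness >= 0.90:
        return "allow"   # strong biometrics: no extra friction
    if s.voice_match >= 0.70 and s.liveness >= 0.70:
        # Marginal biometrics: require a second factor before allowing.
        return "allow" if s.pin_verified else "step_up_pin"
    return "deny"        # weak or suspicious signals: block and review

print(authenticate(AuthSignals(0.95, 0.97, False)))  # allow
print(authenticate(AuthSignals(0.80, 0.75, False)))  # step_up_pin
```

The point of the sketch is that a convincing voice clone alone is not enough; an attacker must also defeat the liveness check and, in the marginal band, a second factor.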
Banks should also review their vendor choices, including whether those vendors can update their toolsets to keep pace with the evolving threat environment.
“It’s a combination of getting more innovative vendors that might have better capabilities … a lot of these existing vendors have threat analysis functions in their organizations that look at failures of the solution in the past, and then try to understand it so they can provide feedback and patch those failures,” he said.
A growing concern among bankers
According to American Banker’s 2025 research, the concern is widespread.
Predictably, larger institutions are more likely to have seen or responded to a deepfake incident. Nearly half of national banks (47%) and more than a third of midsize banks have adjusted their processes in response to deepfake or AI-driven social engineering, while 80% say they are proactively preparing for or responding to artificial intelligence-driven social engineering tactics.
For their part, banks say they are taking the threat seriously, and are evolving their defenses accordingly. Stearns Bank said it uses biometric identification at login, layered with multifactor authentication and real-time risk signals like device changes.
“Despite the increased challenges AI-generated spoofing has created, the guiding principles of verification and authentication have not changed,” said Adam Gill, director of digital banking and product at Stearns Bank.
He emphasized the importance of continuing to layer authentication factors, including “something you know,” “something you have” and “something you are,” rather than abandoning biometrics altogether.
“Integrating biometric authentication solutions is not a one-off or siloed digital transformation exercise,” Gill said. “It is the continued testing of new solutions, layering existing solutions, refining rulesets and continuously learning from use cases that help banks work smarter.”
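Stearns Bank’s description of real-time risk signals layered over a biometric login can be sketched roughly as follows; the device-fingerprint and location checks, field names and step-up rule are assumptions for illustration, not the bank’s actual implementation.

```python
# Hypothetical sketch of real-time risk signals at login: even when the
# biometric check passes, a changed device or unusual context escalates
# the session to an additional authentication factor.
def risk_score(session: dict, profile: dict) -> int:
    score = 0
    if session["device_id"] not in profile["known_devices"]:
        score += 2   # new or changed device fingerprint
    if session["ip_country"] != profile["home_country"]:
        score += 1   # unfamiliar network location
    if session["hour"] < 6:
        score += 1   # atypical time of day for this user
    return score

def login_decision(biometric_ok: bool, session: dict, profile: dict) -> str:
    if not biometric_ok:
        return "deny"
    # A passing biometric check still steps up when risk signals accumulate.
    return "step_up" if risk_score(session, profile) >= 2 else "allow"

profile = {"known_devices": {"dev-a1"}, "home_country": "US"}
session = {"device_id": "dev-zz", "ip_country": "US", "hour": 14}
print(login_decision(True, session, profile))  # step_up: unknown device
```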
Meanwhile, at Bankwell, Executive Vice President and Chief Risk Officer Steven Brunner said the bank is “actively evaluating biometric ID solutions that align with our existing infrastructure.”
Beyond authentication, consumers are generally willing to provide their biometric information to initiate payments, but an uptick in deepfake incidents could challenge that trust, suggests Christopher Miller, lead analyst for emerging payments at Javelin Strategy & Research.
“Consumers, broadly speaking, are not irrevocably opposed to signing up for biometric authentication … this positive attitude is one that could conceivably be turned negative,” amid a series of news stories about deepfake authentication risks, he said.
In response to emerging deepfake threats, banks should adopt multimodal authentication and real-time verification, while also exploring information-sharing across institutions, said Tiffani Montez, principal analyst at eMarketer.
“Banks must move beyond one-off fingerprint or facial scans to embrace continuous, multimodal authentication, monitoring behavioral signals like typing cadence and voice and stepping up verification in real time,” she said. “By sharing fraud intelligence across institutions and embedding privacy-by-design, they can block synthetic IDs and deepfake attacks while earning lasting customer trust.”
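As a rough illustration of the typing-cadence signal Montez describes, the sketch below compares a session’s inter-keystroke timings against a stored user profile. The z-score test and threshold are simplifying assumptions, not a production algorithm.

```python
# Toy behavioral-biometric check on typing cadence: flag a session whose
# average inter-keystroke interval (in ms) deviates sharply from the user's
# enrolled profile. The statistics and threshold are illustrative only.
from statistics import mean, stdev

def cadence_anomalous(profile_ms: list[float],
                      session_ms: list[float],
                      z_threshold: float = 2.0) -> bool:
    mu, sigma = mean(profile_ms), stdev(profile_ms)
    # Flag the session if its average cadence sits far outside the profile.
    return abs(mean(session_ms) - mu) / sigma > z_threshold

enrolled = [180, 175, 190, 185, 170, 195, 182, 178]  # user's typical timings
session = [95, 100, 90, 98, 92, 97]                  # far faster: possible imposter or bot
print(cadence_anomalous(enrolled, session))          # True
```

In practice such signals would feed a continuous risk engine rather than a single pass/fail check, which is what “continuous, multimodal authentication” implies.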