We spoke with Michael Engle, CSO at 1Kosmos, to understand how synthetic identities are reshaping the first step of the kill chain and how attackers exploit weak identity proofing in high-risk onboarding flows.
Engle also discussed how organizations should respond when credentials leak outside the perimeter, and the quick operational wins that reduce account takeover risk.
He previously led corporate and information security strategy at Lehman Brothers, and held leadership roles at Bastille Networks, 1414 Ventures, and Kantara.
Here is our conversation with Engle on the identity threat landscape, impersonation threats, and what leadership must be asking to stand up to modern attack techniques.
Vishwa: Identity has become the first step in the kill chain, with attackers increasingly weaponizing impersonation. Can you walk us through how adversaries exploit weak identity proofing today, and what blind spots most organizations still underestimate in defending that initial breach vector?
Michael: What’s changed is that attackers don’t just steal credentials anymore; they manufacture entire identities. Weak identity proofing at the front door is the fuel for that.
We’re seeing synthetic identities built from breached data, AI-generated documents, and deepfake video used to pass remote hiring screens, KYC flows, and high-value onboarding.
In a hiring scenario, a fake candidate with a polished resume, synthetic background data, and deepfake video can slide through a remote interview, clear basic checks, and end up with admin access to critical systems.
The blind spot is that many organizations still treat identity proofing as a one-time, mostly administrative step: scan a document, run a background check, check a box. They underestimate how easy it is for AI to fake those artifacts and overestimate the strength of static checks.
Once that fake identity is enrolled, every downstream control (MFA, VPN, SSO) faithfully authenticates the wrong person. That’s why we keep saying: if you don’t get identity right at the beginning, your entire kill chain starts with an impersonator.
Vishwa: With an emphasis on the “identity-first” approach, can you share three specific operational changes organizations should make to bring identity proofing into the earliest stages of access? How does each change help reduce impersonation-based breaches?
Michael: If you want to be identity-first, you can’t bolt proofing on at the end. You have to pull it forward and make it part of how relationships start and evolve.
Each of those changes raises the bar for an impersonator: they can’t just get past one weak checkpoint; they have to be consistently convincing across multiple high-assurance touchpoints.
Vishwa: Passwordless authentication continues to divide teams weighing security gains against usability and deployment costs. Based on what you’re hearing from the community, what are its main advantages and drawbacks? How can enterprises decide which authentication mix best suits their risk posture?
Michael: The big advantage of passwordless is that it finally lets us stop playing whack-a-mole with shared secrets. A properly implemented passwordless flow (device-bound keys, biometrics, liveness, cryptography) can meet NIST AAL2 or even AAL3 and is inherently phishing-resistant.
You’re not just adding another factor; you’re upgrading the quality of the factor to something that can’t be replayed or handed to a phisher in a fake login page.
From the user side, it’s also simpler. No one wakes up excited to remember more passwords or juggle more SMS codes. When you can tap a face or fingerprint on a trusted device and you’re in, that’s a rare case where security and usability actually align.
The drawbacks are mostly operational. There’s integration complexity if you’re sitting on a lot of legacy apps. There’s fear of vendor lock-in if you don’t anchor the program in open standards like FIDO2/WebAuthn. And there’s change management: convincing leadership that counting factors is less important than meeting an assurance level.
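To make the phishing-resistance point concrete, here is a minimal sketch of device-bound passkey enrollment using the browser WebAuthn API. The relying party name and ID are placeholders, and in a real deployment the challenge and user handle would be issued by the server; they are generated locally here only to keep the example self-contained.

```typescript
// Sketch: enrolling a device-bound passkey with WebAuthn (browser side).
// "example.com" and the locally generated challenge/user handle are
// illustrative placeholders, not a specific vendor's implementation.
async function enrollPasskey(username: string): Promise<PublicKeyCredential | null> {
  const challenge = crypto.getRandomValues(new Uint8Array(32));  // server-issued in practice
  const userHandle = crypto.getRandomValues(new Uint8Array(16)); // stable per-user ID in practice

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Corp", id: "example.com" }, // RP ID binds the key to this origin
      user: { id: userHandle, name: username, displayName: username },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        authenticatorAttachment: "platform", // device-bound platform authenticator
        residentKey: "required",             // discoverable credential (passkey)
        userVerification: "required",        // biometric or device PIN
      },
      timeout: 60_000,
    },
  });

  // The private key never leaves the authenticator, and signatures are bound
  // to the origin, so there is no shared secret to replay on a fake login page.
  return credential as PublicKeyCredential | null;
}
```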
So how do you decide? Map your use cases to risk. For high-risk workflows (privileged admin access, high-value financial actions, access to sensitive data), aim for phishing-resistant, AAL2/AAL3-grade authenticators such as device-bound passkeys with biometrics and strong liveness detection.
For lower-risk scenarios, you might tolerate weaker factors temporarily, with a plan to phase them out. The key is to stop asking “Do we have MFA?” and start asking “Does this authenticator actually stand up to modern attack techniques?”
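One way to operationalize that question is a policy table that ties each workflow to a required assurance level rather than a factor count. This is only an illustrative sketch; the workflow names, tiers, and flags are assumptions, not a NIST-mandated or 1Kosmos-specific scheme.

```typescript
// Sketch: mapping workflows to required assurance instead of counting factors.
// Workflow names, tiers, and flags are illustrative assumptions.
type AssuranceLevel = "AAL1" | "AAL2" | "AAL3";

interface AuthRequirement {
  minAssurance: AssuranceLevel;
  phishingResistant: boolean; // device-bound passkey / FIDO2 required
  livenessRequired: boolean;  // biometric liveness check at login
}

interface AuthContext {
  assurance: AssuranceLevel;
  phishingResistant: boolean;
  livenessVerified: boolean;
}

const policies: Record<string, AuthRequirement> = {
  privilegedAdminAccess: { minAssurance: "AAL3", phishingResistant: true,  livenessRequired: true },
  highValueTransaction:  { minAssurance: "AAL2", phishingResistant: true,  livenessRequired: true },
  sensitiveDataAccess:   { minAssurance: "AAL2", phishingResistant: true,  livenessRequired: false },
  lowRiskReadOnly:       { minAssurance: "AAL1", phishingResistant: false, livenessRequired: false }, // phase out over time
};

const aalOrder: AssuranceLevel[] = ["AAL1", "AAL2", "AAL3"];

// The gate asks "does this login meet the assurance this workflow needs?",
// not "did the user present a second factor?".
function meetsRequirement(workflow: string, ctx: AuthContext): boolean {
  const required = policies[workflow];
  if (!required) return false; // unknown workflows default to deny
  return (
    aalOrder.indexOf(ctx.assurance) >= aalOrder.indexOf(required.minAssurance) &&
    (!required.phishingResistant || ctx.phishingResistant) &&
    (!required.livenessRequired || ctx.livenessVerified)
  );
}
```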
Vishwa: As biometric authentication becomes mainstream, adversaries are shifting to spoofing and synthetic-identity attacks. What emerging biometric attack vectors concern you most? How can defenders use AI or signal analysis to counter those manipulations effectively?
Michael: What worries me most isn’t a selfie with a printed photo; that’s old news. It’s AI-driven presentation and injection attacks that can feed a perfectly crafted, synthetic face or voice into your sensor or even into the camera pipeline itself.
The good news is that AI can work on our side too. Modern liveness and presentation-attack detection look at things humans can’t see: micro-texture on skin, light reflection patterns, depth, tiny movements in the eyes, even subtle blood-flow signatures.
On the backend, AI models can correlate those physical signals with behavioral and environmental context: does this login match the user’s normal device, location, and history, or is something off? But that only works if you pick vendors who can prove it, not just claim it.
I tell CISOs: insist on independently certified PAD and liveness, test against deepfake and injection scenarios, and treat “face login” as a serious security control, not a convenience feature.
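As a rough illustration of the backend correlation Engle describes, here is a hedged sketch of scoring a login against the user’s normal device, location, and behavior, then stepping up verification when something is off. The signal names, weights, and threshold are illustrative assumptions, not any vendor’s actual model.

```typescript
// Sketch: correlating contextual signals into a login risk score.
// Signals, weights, and the step-up threshold are illustrative assumptions.
interface LoginContext {
  deviceKnown: boolean;       // device previously bound to this identity
  geoMatchesHistory: boolean; // location consistent with recent activity
  livenessScore: number;      // 0..1 from the liveness / PAD engine
  behaviorAnomaly: number;    // 0..1 anomaly score from a behavioral model
}

function loginRisk(ctx: LoginContext): number {
  let risk = 0;
  if (!ctx.deviceKnown) risk += 0.35;
  if (!ctx.geoMatchesHistory) risk += 0.2;
  risk += (1 - ctx.livenessScore) * 0.3; // weak liveness evidence raises risk
  risk += ctx.behaviorAnomaly * 0.15;
  return Math.min(risk, 1);
}

// Above the threshold, require a fresh, high-assurance verification instead of
// silently accepting that the "right" factor was presented.
function requiresStepUp(ctx: LoginContext, threshold = 0.5): boolean {
  return loginRisk(ctx) >= threshold;
}
```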
Vishwa: Can you outline how identity and authentication systems should adapt when stolen credentials and tokens are harvested outside the corporate perimeter, such as via mobile malware or supply-chain breaches? What operational measures can minimize the fallout from such external exposures?
Michael: The reality today is that a lot of your compromise happens off your turf. Credentials get harvested on personal devices, in third-party SaaS, or through supply-chain attacks long before they touch your SSO. If your model assumes everything is fine as long as users present the right factor, you’re already behind.
Whether it's verifying a third-party contractor at a retail location or confirming patient identity at a healthcare facility, physical identity verification often represents a blind spot that most digital-first solutions struggle to address.
Vishwa: For security leaders building a program to prevent impersonation attacks, can you name operational changes that deliver the fastest reduction in account takeover risk?
Michael: To achieve quick wins against impersonation, start where attackers are currently succeeding.
Those three moves alone dramatically reduce the success rate of impersonation attacks without forcing every user through maximum friction every time.
Vishwa: As identity-based attacks increasingly intersect with AI-generated deepfakes and social engineering, how do you see the verification landscape evolving over the next 12 months? What shifts should enterprises anticipate in adversary behavior?
Michael: In the next year, you’re going to see deepfakes and AI-assisted social engineering move from “interesting edge case” to “standard tool in the kit.”
Attackers will use AI to generate more convincing candidate profiles, spoof voices on help desk calls, and pass low-assurance video checks at scale. We’re already seeing fraud-as-a-service offerings that bundle deepfakes with ready-made synthetic identities.
On the defender side, verification will shift from static, binary checks to continuous, AI-assisted assurance. Instead of asking “Did this person show me a document and a selfie once?”, systems will ask “Does everything we’re seeing (documents, biometrics, device signals, behavior) consistently support the claim of who this person is over time?”
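A minimal sketch of what that continuous assurance could look like, modeled as a score that each verified signal boosts and that decays until the next consistent check. The event types, weights, and half-life below are illustrative assumptions, not a prescribed model.

```typescript
// Sketch: continuous assurance as a decaying score over verified events.
// Event kinds, confidences, and the half-life are illustrative assumptions.
interface AssuranceEvent {
  kind: "document" | "biometric" | "device" | "behavior";
  confidence: number;  // 0..1 confidence reported by the verifying subsystem
  timestampMs: number; // when the check happened
}

const HALF_LIFE_MS = 1000 * 60 * 60 * 24 * 30; // evidence halves in weight every ~30 days

function currentAssurance(events: AssuranceEvent[], nowMs: number): number {
  // Each event contributes its confidence, discounted by how long ago it happened,
  // so a single stale document check never looks like strong, ongoing evidence.
  let evidence = 0;
  for (const e of events) {
    const age = nowMs - e.timestampMs;
    evidence += e.confidence * Math.pow(0.5, age / HALF_LIFE_MS);
  }
  return 1 - Math.exp(-evidence); // squash into 0..1
}

// When assurance drops below the bar for a given workflow, trigger re-verification
// (fresh biometric with liveness, document recheck) instead of trusting the old result.
function needsReverification(events: AssuranceEvent[], nowMs: number, bar = 0.6): boolean {
  return currentAssurance(events, nowMs) < bar;
}
```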
That’s a big mindset change. Enterprises should also expect regulators and large relying parties to raise the bar on identity proofing, especially in high-risk sectors.
You’ll see more emphasis on certified liveness, PAD, standards-based assurance levels, and stronger linkage between verification and authentication.
The organizations that start building that foundation now will be in much better shape when the wave of AI-driven impersonation really crests.