Deepfakes are not a future risk; they are a present-day capability that fundamentally breaks today’s identity assumptions.
Most identity strategies are still built on static attributes, reusable credentials, and point-in-time verification, typically at onboarding or origination. These approaches were designed to stop humans pretending to be other humans, not AI systems that can convincingly impersonate any human, at scale, in real time.
In this session, we invert the problem.
Instead of asking “How do we make identity proofing stronger?”, we ask:
“What if the attacker always wins the impersonation game, and identity systems must assume compromise by default?”
We will explore why deepfakes defeat biometrics, document verification, knowledge-based authentication, and MFA and step-up flows, and why incremental controls only increase cost and friction without restoring trust.
The session then introduces a new control plane for trust: one that shifts from who you look like to what you can cryptographically prove, across people, organisations, devices, and AI agents.
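To make the shift concrete, here is a minimal sketch of the idea of proving possession rather than appearance: a challenge-response exchange in which a verifier checks a cryptographic proof of key possession instead of inspecting what the holder looks like. This is an illustrative example, not the session’s actual protocol; all function names are hypothetical, and a real deployment would use asymmetric keys and bound credentials rather than a shared HMAC secret.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: trust derived from what a party can cryptographically
# prove, not from how it looks or sounds. A deepfake can mimic a face or a
# voice, but it cannot answer a fresh challenge without the key.

def issue_challenge() -> bytes:
    """Verifier side: a fresh random nonce, so old responses cannot be replayed."""
    return secrets.token_bytes(32)

def respond(secret_key: bytes, challenge: bytes) -> bytes:
    """Holder side: prove possession of the key by MACing the challenge."""
    return hmac.new(secret_key, challenge, hashlib.sha256).digest()

def verify(secret_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier side: recompute the expected MAC and compare in constant time."""
    expected = hmac.new(secret_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)
challenge = issue_challenge()

print(verify(key, challenge, respond(key, challenge)))       # genuine holder: True
print(verify(key, challenge, respond(b"impostor", challenge)))  # impersonator: False
```

The design point is that the verifier never evaluates any human-observable trait, so perfect audiovisual impersonation gains the attacker nothing; the same pattern generalises from people to organisations, devices, and AI agents, each holding their own keys.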
Attendees will leave with a practical, implementable model for surviving the deepfake era without breaking user experience.