Identity proofing and fraud detection are increasingly driven by AI: scoring risk, determining legitimacy, and deciding who gains access to services, accounts, and opportunities. Many of these systems are trained on data that is incomplete, historically biased, or representative of only a narrow population.
When identity AI fails, it does not fail evenly.
This session examines how bias in identity proofing and fraud models manifests as a systems-level accuracy problem, not simply an ethical concern. False positives, exclusionary thresholds, and feedback loops can quietly train identity systems to disproportionately flag or block certain populations, creating measurable business, regulatory, and reputational risk. Rather than treating this as a diversity conversation alone, this talk reframes the issue as a core identity engineering challenge: model quality, data integrity, and accountability in automated decision-making.
Through real-world identity scenarios, attendees will learn where bias enters identity AI pipelines, why traditional controls often miss it, and what identity leaders can do to detect and mitigate harm before it becomes embedded at scale.