Agentic AI doesn’t just answer questions; it decides, acts, and adapts. Yet we’re still trying to control it with static roles and brittle permissions designed for human users and service accounts. That mismatch is becoming a systemic risk.
This talk challenges a core assumption in access control: that who you are and what you request are enough. For autonomous agents, what matters most is why an action is taken. Intent becomes the missing control plane.
In this session, we’ll explore how intent-based access control reframes authorization for agentic systems, shifting from pre-approved access to purpose-driven decisions evaluated in real time. We’ll examine how intent can be inferred, constrained, and audited without peeking into model internals or hard-coding policies for every tool and API.
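To make the shift concrete, here is a minimal, hypothetical sketch of a purpose-driven check. All names (the intent labels, tool identifiers, and `authorize` function) are illustrative, not a real API: the point is that the same agent requesting the same tool is allowed or denied based on the declared purpose, not identity alone.

```python
from dataclasses import dataclass, field

@dataclass
class ActionRequest:
    agent_id: str       # who is acting
    tool: str           # what is requested
    stated_intent: str  # why -- the declared purpose of the action
    context: dict = field(default_factory=dict)

# Hypothetical policy table: each purpose maps to the tools
# consistent with it. In a real system this could be inferred
# and constrained dynamically rather than hard-coded.
ALLOWED_INTENTS = {
    "refund_processing": {"payments.refund", "orders.lookup"},
    "customer_support": {"orders.lookup", "tickets.update"},
}

def authorize(req: ActionRequest) -> bool:
    """Purpose-driven decision: the requested tool must be
    consistent with the stated intent, evaluated per request."""
    permitted_tools = ALLOWED_INTENTS.get(req.stated_intent, set())
    return req.tool in permitted_tools

# Same agent, same tool -- the outcome hinges on the declared purpose.
refund = ActionRequest("agent-7", "payments.refund", "refund_processing")
support = ActionRequest("agent-7", "payments.refund", "customer_support")
print(authorize(refund))   # True: tool matches the declared purpose
print(authorize(support))  # False: tool is outside this purpose
```

The decision is made at request time against the declared purpose, so it can also be logged and audited per action, without inspecting model internals.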
This is a mental model shift: designing guardrails that reason about goals, context, and outcomes rather than identities and endpoints. Attendees will leave with a new lens for thinking about safety, trust, and governance in AI systems that increasingly act on our behalf, often faster than we can intervene.