Agentic AI systems promise efficiency by acting on behalf of humans — scheduling meetings, provisioning access, sharing data, and coordinating tasks across systems. But as autonomy increases, a fundamental question emerges: what does it mean for a human to consent when actions are delegated, propagated, and amplified by agents?
Traditional consent models assume a human understands the scope, purpose, and consequences of an action at the moment consent is given. In agentic systems, consent is often granted once and then implicitly reused as agents act at machine speed, invoke downstream agents, and move data across domains. This raises a critical ambiguity: when a human consents to an agent, do they also consent to the downstream agents it invokes, and should they have visibility into when, where, and how their data travels beyond the scope of that original grant?
This session explores how consent breaks down in agentic AI through real-world enterprise and consumer scenarios, highlighting the gap between human meaning (“what I thought I agreed to”) and machine interpretation (“what the system is allowed to do”). We examine why existing identity and authorization models are insufficient and discuss emerging governance and technical approaches for enforcing explicit, auditable consent constraints, including limits on delegation, scope, purpose, and downstream data movement.
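By way of illustration, a minimal sketch of what such a machine-readable consent constraint might look like appears below; the `ConsentGrant` structure, its field names, and the `permits` check are hypothetical examples for discussion, not an existing standard or API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class ConsentGrant:
    """A hypothetical machine-readable consent record.

    Captures the limits discussed above: how far delegation may
    propagate, which actions and purposes are covered, and where
    data may travel downstream.
    """
    subject: str                     # the human who granted consent
    scopes: frozenset[str]           # actions the agent may perform
    purpose: str                     # the purpose the human agreed to
    max_delegation_depth: int        # allowed agent-to-agent hops
    allowed_domains: frozenset[str]  # where data may move downstream
    expires_at: datetime             # consent is time-bounded, not perpetual

    def permits(self, scope: str, purpose: str,
                delegation_depth: int, target_domain: str) -> bool:
        """Check a proposed agent action against the original grant."""
        return (
            scope in self.scopes
            and purpose == self.purpose
            and delegation_depth <= self.max_delegation_depth
            and target_domain in self.allowed_domains
            and datetime.now(timezone.utc) < self.expires_at
        )


# Example: a downstream agent, two hops from the human, tries to
# export calendar data to an analytics domain for a new purpose.
grant = ConsentGrant(
    subject="alice@example.com",
    scopes=frozenset({"calendar:read", "calendar:schedule"}),
    purpose="meeting-scheduling",
    max_delegation_depth=1,
    allowed_domains=frozenset({"calendar.example.com"}),
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
)

assert not grant.permits(
    scope="calendar:read",
    purpose="analytics",            # purpose drift: not what Alice agreed to
    delegation_depth=2,             # one hop beyond the consented depth
    target_domain="analytics.example.com",  # outside the allowed domains
)
```

The design point in this sketch is that every downstream action is re-checked against the human's original grant at the moment it is attempted, so violations become detectable and auditable rather than silently inherited from the first consent.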
Attendees will leave with a set of critical questions and a shared mental model for reasoning about consent in autonomous systems, enabling organizations to better preserve human intent, accountability, and trust as AI agents operate at scale.