Organizations are rushing to adopt AI at scale — but a sweeping new global study reveals they are dangerously unprepared for the identity security risks it entails.
The Delinea 2026 Identity Security Report, based on a survey of 2,001 IT decision-makers across seven countries, including the US, UK, Germany, and India, paints a troubling picture: the faster companies race to deploy agentic AI, the wider their security blind spots grow — particularly around who, or what, has access to their systems.
Confidence that doesn’t hold up
The report’s most striking finding is what it calls the “AI security confidence paradox.” A full 87% of organizations say their identity security posture is ready to support AI-driven automation at scale, yet 46% admit their identity governance around AI systems is deficient. Respondents were twice as likely to rate their ability to manage identities poorly in AI environments as in legacy systems. The gap signals overconfidence built on incomplete information: while 82% say they are very confident in discovering non-human identities with access to production systems, fewer than 1 in 3 actually validate those discoveries in real time.
Speed is winning over security
Under relentless pressure from executives to keep up with competitors, security teams are routinely asked to loosen access controls. Nearly 90% of organizations report such pressure, with 1 in 5 describing it as strong. When security requirements conflict with business speed, fewer than 1 in 3 say those requirements are consistently enforced; the rest grant exceptions, temporarily disable controls, or look the other way. And as one expert quoted in the report notes, controls that are “temporarily disabled” rarely get re-enabled.
Shadow AI: The threat already inside
The report shines a sharp light on the surge of “shadow AI” — employees deploying unapproved AI tools and agents on company systems without IT’s knowledge. A significant 53% of organizations say they regularly encounter unsanctioned AI accessing their systems, and that is just what they are detecting; only 28% can detect shadow AI in real time. Industry analysts at Gartner estimate that by 2030, 40% of organizations will suffer a security incident due to shadow AI risks. The problem is grassroots and fast-moving: everyday employees, not just developers, are spinning up AI agents under their own credentials, making it nearly impossible for security teams to tell legitimate activity from unauthorized activity.
The non-human identity explosion
AI agents, bots, and service accounts — collectively called non-human identities (NHIs) — now outnumber human accounts by an estimated 82 to 1, nearly double the ratio from just two years ago. Unlike traditional automation, these AI agents make contextual decisions, request additional access, and can escalate their own privileges in ways that were never anticipated. Most organizations are granting them long-lived, standing access to keep pace with deployment — even though 73% of respondents acknowledge that doing so significantly increases their risk. A staggering 90% of organizations acknowledge having at least some form of identity visibility gap.
What needs to change — now
The report’s panel of independent security experts is unambiguous: the first step is visibility. As one expert puts it, if you don’t know who is accessing your systems, you cannot stop it. From there, organizations must invest in machine-speed monitoring, move toward zero standing privilege by granting access only when needed, apply zero-trust principles to AI agents, and constrain not just what systems an agent can access but what decisions it can make independently — a concept the report calls “least permissive autonomy.”
The bottom line from Delinea’s findings is stark: organizations are knowingly trading identity control for speed, and most have not yet witnessed the consequences firsthand. As the report warns, by the time they do, the damage will already be done.
