A new Proofpoint report surveying 1,400+ security professionals paints a sobering picture of how dangerously far organizations have fallen behind in securing their AI deployments.
AI Has Taken Over the Office. Security Hasn’t.
Artificial intelligence is no longer a tech experiment; it's the backbone of how modern businesses operate. According to Proofpoint's 2026 AI and Human Risk Landscape report, a staggering 87% of organizations have AI assistants deployed well beyond the pilot stage, and 76% are actively rolling out autonomous agents: AI systems that can independently plan and execute tasks without human approval at each step.
The problem? Security didn’t get the memo. Only 48% of organizations say security was embedded in their AI strategy from the beginning. The remaining 52% admit security is either playing catch-up, inconsistent, or entirely reactive. AI was deployed to production before the guardrails were put in place.
Controls Exist. They Don’t Work.
Here’s where things get alarming. A confident 63% of organizations say they have AI security controls in place. That sounds reassuring — until you dig into what those controls are actually doing.
More than half (52%) of organizations are not fully confident that their controls would detect a compromised AI. And the numbers back up that fear: among organizations that have security controls deployed, 50% still reported a suspicious or confirmed AI-related incident. That’s not a minor gap in coverage. That’s a coin flip.
The report’s conclusion is blunt — coverage is being mistaken for control.
Threats Are Everywhere, Not Just in Email
When organizations that experienced AI-related incidents were asked where threats appeared, the results showed no safe harbor. Email led the list at 67%, followed by SaaS and cloud apps at 57%, AI assistants and agents at 53%, and collaboration tools, file-sharing, and social platforms at 49%.
This matters because AI is deeply embedded in exactly these channels. Companies are using AI for customer support (69%), internal chat summarization in Slack or Teams (67%), and email drafting (63%). Attackers, the report notes, follow the operating model. Where AI works, threats follow.
When Something Goes Wrong, Nobody Can Reconstruct What Happened
The most chilling finding in the report concerns incident response. Only one in three organizations says it is fully prepared to investigate an AI- or agent-related incident. Nearly 95% report that managing multiple, disconnected security tools is at least moderately challenging, and 53% call it very challenging.
The consequence is structural: 41% of organizations cannot correlate threats across multiple channels. When an attacker enters via email, escalates through a collaboration tool, and exfiltrates data via an AI integration, siloed tools cannot reconstruct the chain of events.
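To make the correlation gap concrete, here is a minimal sketch (not from the report; all tool names, event fields, and thresholds are hypothetical) of what cross-channel correlation does: it joins events from separate sources on a shared identity and time window, surfacing a chain that no single siloed tool would flag on its own.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical event records from three siloed tools: an email gateway,
# a collaboration-tool monitor, and an AI-integration audit log.
@dataclass
class Event:
    source: str       # which tool emitted the event
    user: str         # identity involved
    time: datetime    # when it happened
    detail: str       # free-text description

def correlate(events, window=timedelta(hours=24)):
    """Group events by user and keep only chains that span more than one
    channel within the time window -- the cross-channel pattern a siloed
    tool cannot see on its own."""
    by_user = {}
    for e in sorted(events, key=lambda e: e.time):
        by_user.setdefault(e.user, []).append(e)
    chains = []
    for user, evts in by_user.items():
        sources = {e.source for e in evts}
        if len(sources) > 1 and evts[-1].time - evts[0].time <= window:
            chains.append((user, evts))
    return chains

events = [
    Event("email", "alice", datetime(2026, 1, 5, 9, 0), "phishing link clicked"),
    Event("collab", "alice", datetime(2026, 1, 5, 11, 30), "unusual file share"),
    Event("ai_integration", "alice", datetime(2026, 1, 5, 14, 0), "bulk export via assistant"),
    Event("email", "bob", datetime(2026, 1, 5, 10, 0), "spam quarantined"),
]

for user, chain in correlate(events):
    print(user, "->", [e.source for e in chain])
```

In this toy example, alice's three events across email, collaboration, and AI channels line up as one attack chain, while bob's single email event does not. Real platforms do this at scale with identity resolution and behavioral scoring, but the underlying join is the capability the 41% of organizations cited above are missing.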
The Industry Is Waking Up — Slowly
Organizations aren't standing still: 61% plan to expand AI protections, 56% aim to extend collaboration coverage, and 53% intend to move to a unified security platform. Only 3.9% plan to keep the status quo.
The Proofpoint report’s message is clear: the organizations that will successfully scale AI aren’t those deploying the most tools — they’re those that build security around how modern work actually happens, across people, platforms, suppliers, and AI systems alike. Anything less is a gamble most organizations are already losing.
The 2026 AI and Human Risk Landscape report is based on a survey of 1,453 security professionals conducted in January 2026, spanning 20 industries across 12 countries.
