The rapid spread of artificial intelligence across workplaces is reshaping how organizations operate—and how they are attacked. According to the Zscaler ThreatLabz 2026 AI Security Report, AI is no longer a set of tools on the side. It has become part of everyday enterprise infrastructure, embedded across productivity apps, developer workflows, and customer systems.
AI Has Become Enterprise Infrastructure
The report analyzed nearly 989 billion AI/ML transactions observed across enterprise environments in 2025—an increase of over 91% year-on-year. Data sent to AI tools also surged, crossing 18,000 terabytes, as employees used AI for writing, coding, translation, research, and operations. AI now behaves like “always-on” infrastructure, continuously moving and transforming business data.
Productivity Tools Are the Biggest Risk Surface
Widely used tools such as Grammarly, ChatGPT, Codeium, Microsoft Copilot, and DeepL dominate enterprise AI usage. Because these tools sit directly inside everyday workflows, they also handle highly sensitive information—contracts, customer data, source code, and internal communications. The report notes that 39% of AI transactions were blocked by organizations, reflecting growing concern about privacy, compliance, and unintended data exposure.
Data Leakage Is Rising—Fast
AI-related data loss prevention (DLP) violations nearly doubled year over year. ChatGPT alone accounted for hundreds of millions of blocked attempts involving sensitive data such as personal identifiers, financial records, medical information, and source code. The convenience of AI assistants makes them especially risky: employees often share sensitive content as it is created, sometimes without realizing the security implications.
Embedded AI: The Quiet Risk
Not all AI use is visible. The report highlights the rapid rise of embedded AI—AI features built into SaaS tools that summarize emails, recommend actions, or automate workflows. These features often inherit existing permissions and data access. If permissions are too broad, embedded AI can unintentionally surface sensitive information. This “quiet AI” is becoming one of the fastest-growing and least visible sources of risk for enterprises.
AI Is Being Used by Attackers, Too
Threat actors are adopting AI to scale social engineering, phishing, and malware development. The report documents campaigns in which attackers used generative AI to create fake identities, deepfake voices, and AI-assisted malware. In some cases, attackers automated large parts of the intrusion chain, signaling an early shift toward more autonomous, AI-enabled attacks.
No AI System Is Safe by Default
Red-teaming exercises revealed a sobering reality: every enterprise AI system tested had critical vulnerabilities, often exposed within minutes of testing. Common failures included bias, manipulation, privacy leaks, hallucinations, and prompt injection. The takeaway is clear—AI systems break under real-world pressure unless they are continuously tested and governed.
What Leaders Should Take Away
The ThreatLabz 2026 AI Security Report makes one message unmistakable: AI adoption without strong governance increases business risk at machine speed. Winning organizations will be those that treat AI security as core infrastructure—combining visibility into AI use, real-time data protection, and continuous testing—so they can scale innovation without scaling exposure.
