A new EY report reveals how artificial intelligence is simultaneously rewriting the rules of cyberattacks and cyber defense—and what business leaders must do right now to stay ahead.
The Double-Edged Sword
Artificial intelligence has officially moved from corporate buzzword to battlefield. EY’s February 2026 report, AI and Cybersecurity: The New Frontier of Business Resilience, makes one thing unmistakably clear: the same technology that helps companies detect threats faster is also giving criminals unprecedented firepower. Security teams can no longer afford to treat AI as a future consideration — it’s already shaping every attack and every defense happening today.
The Numbers That Should Keep CEOs Up at Night
The financial stakes laid out in the report are staggering. The average data breach now costs organizations $4.4 million. However, companies that have deeply integrated AI and automation into their security operations saved roughly $1.9 million per breach compared to those that haven’t. Microsoft’s own trials showed AI-assisted security teams cut incident resolution times by about 30%. Meanwhile, global AI infrastructure investment hit $200 billion in 2025 alone, with data center spending projected to surpass $1 trillion annually by 2030.
The Threat You Probably Aren’t Prepared For
The report devotes significant attention to deepfakes and AI-powered social engineering—and the examples are alarming. In January 2024, criminals used an AI-generated video to impersonate an engineering firm’s CFO during a live video call, successfully convincing an employee to wire $25 million to a fraudulent account. According to UNESCO data cited in the report, 46% of fraud experts have already encountered synthetic identity fraud, 37% have seen voice deepfakes used in attacks, and 29% have encountered video deepfakes. These aren’t experimental threats — they’re operational criminal tools being deployed right now.
AI Systems Themselves Are Vulnerable
One of the report’s more sobering insights is that AI models aren’t just weapons or shields — they’re targets too. Adversaries are actively attempting to manipulate AI systems through data poisoning (corrupting the data models that learn from), prompt injection (tricking AI into ignoring its own rules), and model theft (extracting proprietary AI through systematic querying). Organizations building on AI without securing the underlying models are essentially constructing fortresses on sand.
The Geopolitical Dimension
The report frames AI infrastructure as a matter of national security, not just corporate strategy. The US currently leads with computing power equivalent to 39.7 million high-end chips across 187 AI clusters. China, despite having more clusters, faces significant capability gaps due to US export restrictions on advanced semiconductors. Nations from Saudi Arabia to India are racing to build sovereign AI capacity — a dynamic directly influencing where data can flow and how companies must structure their cloud strategies.
What Leaders Must Do
The report’s recommendations for CISOs are practical and urgent: establish AI governance frameworks, build a comprehensive inventory of AI assets, invest in workforce training on adversarial threats, simulate AI-specific attacks through red teaming, and align data storage decisions with evolving sovereignty laws.
The core message is simple — organizations that treat AI security as a strategic priority today will define digital resilience tomorrow. Those who don’t may find themselves starring in the next cautionary case study.
