If you thought your bank was being careful with artificial intelligence, think again. The Netskope Threat Labs Financial Services Report April 2026 — covering data from February 2025 through February 2026 — paints a striking picture of an industry sprinting toward AI while struggling to keep its most sensitive data in check.
The AI shift is real — and fast
Just a year ago, more than three in four financial services employees were using personal AI accounts at work. Today, that figure has collapsed to barely one in three, while the share using company-approved AI tools has more than doubled. ChatGPT leads adoption, used by roughly three in four organizations, with Google Gemini and Microsoft’s Copilot products close behind. Newer entrants like AssemblyAI have gone from near-zero usage in mid-2025 to deployment by more than a third of firms, reflecting surging demand for specialized voice and transcription capabilities. This shift signals a genuine maturing of AI governance in finance — but the report is clear that the job is far from done. A growing share of users now toggle between personal and enterprise AI accounts, meaning shadow AI risks have not been fully eliminated.
The data leaking through AI’s cracks
Regulated financial and customer data is the category most often exposed when employees use AI tools in violation of company policy, accounting for the majority of all genAI-related violations. Intellectual property follows, then source code, and finally exposed passwords and API keys. The same ranking holds for personal cloud apps. The scale is what makes this especially concerning: while roughly seven in ten employees use generative AI directly, nearly all are touched by it indirectly through features embedded in everyday tools, and most use applications that feed their data into AI training systems without ever realizing it.
Attackers hiding in plain sight
The threat landscape has shifted in a particularly insidious direction. Attackers are now distributing malware through the very cloud platforms financial firms trust most. GitHub tops the list as the most abused vehicle for malware delivery, with Microsoft OneDrive close behind. Because malicious traffic increasingly mirrors legitimate cloud activity, real-time detection has become far harder for security teams.
Personal apps: the leakiest pipe
Despite strong corporate policies, personal cloud apps remain deeply embedded in financial workplaces. Personal accounts on LinkedIn, Google Drive, and ChatGPT are used by the overwhelming majority of employees. To push back, firms are deploying upload blocks and real-time employee coaching, with Google Drive, ChatGPT, and Gmail the most frequently controlled applications.
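The mix of controls described above can be sketched as a simple policy lookup. This is an illustrative toy only: the app names, account types, and actions are assumptions for the sketch, not Netskope's actual configuration schema or API.

```python
# Toy model of the controls the report describes: "block" stops an upload
# outright, "coach" shows a real-time warning but lets the user proceed,
# "allow" passes it through. All entries here are hypothetical examples.
POLICIES = {
    ("Google Drive", "personal"): "block",
    ("ChatGPT", "personal"): "coach",
    ("Gmail", "personal"): "block",
    ("ChatGPT", "enterprise"): "allow",
}

def evaluate_upload(app: str, account_type: str) -> str:
    """Return the action for an attempted upload: block, coach, or allow."""
    # Default-deny for unknown personal apps; trust sanctioned enterprise apps.
    default = "block" if account_type == "personal" else "allow"
    return POLICIES.get((app, account_type), default)
```

The default-deny fallback for personal accounts mirrors the category-level governance stance the report attributes to financial firms: unknown personal apps are blocked until explicitly reviewed.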
Blocked, not forgotten
ZeroGPT leads as the most banned AI tool across the sector, followed by DeepSeek and PolitePost. These choices signal that financial institutions are not merely reacting to individual app risks but building whole-category governance strategies to meet strict privacy, security, and compliance requirements.
The report’s message is pointed: as AI becomes the infrastructure of modern finance, protecting the data flowing through it is no longer optional. The firms treating AI governance as a compliance checkbox — rather than a living security discipline — are the ones most likely to appear in next year’s breach headlines.
(Source: Netskope Threat Labs Financial Services Report, April 2026. Data period: February 1, 2025 – February 28, 2026.)
