India’s BFSI sector is accelerating into an AI-driven future—leveraging GenAI for customer experience, fraud detection, and operational intelligence. But as AI embeds itself deep into banking workflows, it is also opening unprecedented exposure paths across data flows, APIs, identities, and third-party systems. Rajnish Gupta, MD & Country Manager, Tenable India, warns that the same innovations powering efficiency are simultaneously empowering attackers through Shadow AI, prompt injection, and emerging agentic AI capabilities. In this interview, he explains why compliance-first approaches are no longer enough, how AI is reshaping both offense and defense, and why BFSI leaders must adopt exposure management to move from reactive firefighting to proactive resilience. This is a critical moment for CISOs to redefine security for the AI age.

MD & Country Manager
Tenable India
CISO Forum: How is the Indian BFSI sector’s rapid adoption of Generative AI changing its threat landscape?
Rajnish Gupta: The very tools used to drive efficiency and enhance customer experience, from LLM-powered chatbots to AI-enabled fraud analytics and business intelligence, are also introducing new governance and security challenges. It’s not the models themselves but the way they are integrated into core banking systems, customer channels, and third-party services that creates risk. As GenAI becomes embedded in enterprise workflows, new exposure paths emerge through data flows, APIs, identity permissions, and model-to-system interactions, expanding the attack surface and increasing the likelihood of data leakage, misuse, or compromise.
CISO Forum: Are the same AI tools driving innovation, also empowering cybercriminals, and how?
Rajnish Gupta: The same AI tools that power innovation are also widening the threat landscape, contributing to India’s record-high average breach cost of USD $2.6 million. One major driver is Shadow AI, where employees use unapproved AI tools without proper controls. According to recent research, this unmanaged usage added an average of ₹1.8 crore to breach costs. This expanding threat landscape is no longer theoretical; it is already materialising inside organisations.
Attackers can manipulate LLMs through prompt injection, causing models to ignore original instructions and execute unintended actions. Indirect prompt injection is hazardous because the malicious instruction hides inside an external source, such as a webpage, document, or data feed that the model is asked to summarise. As employees increasingly use AI assistants that access the open web, this becomes a zero-click attack path. An attacker could embed malicious instructions inside a seemingly harmless blog or comment section, allowing the LLM to execute harmful actions while summarising the page. By tailoring the hidden prompt to specific topics or trends, attackers can selectively target users who rely on AI search or AI copilots.
As AI adoption accelerates, cybercriminals are also expected to incorporate agentic AI systems to automate reconnaissance, phishing, and exploitation at scale. Unapproved or misconfigured AI agents used by employees may become targets themselves, as compromising them can expose sensitive business data.
CISO Forum: Why is a compliance-first approach no longer enough to tackle AI-powered threats?
Rajnish Gupta: Compliance norms provide a foundation for responsible AI use, but they were never designed to keep pace with the fast-evolving threats enabled by AI. Many organisations are making progress by aligning with emerging frameworks such as the EU AI Act and the NIST AI RMF. Yet, research from Tenable shows that while 51% of organisations report following these frameworks, only 22% encrypt their AI training or inference data, and just 26% conduct AI-specific security testing, such as red-teaming. This gap shows that compliance alone does not translate into absolute protection.
Compliance and security overlap, but they serve fundamentally different goals. Compliance ensures organisations meet minimum obligations, while security must defend against adaptive adversaries. A compliance-first mindset often leads to checkbox controls, basic IAM protections, and reactive remediation—leaving critical AI models, data pipelines, and integrations exposed to attack.
CISO Forum: What does cybersecurity look like in the age of agentic AI?
Rajnish Gupta: We are moving past the novelty phase of Generative AI and into the early utility phase of Agentic AI. As this capability matures, we can expect a growing number of CISOs, particularly in more mature or highly regulated sectors, to shift from simply buying AI tools to selectively building their own AI agents tailored to their environments. When designed with strong governance, security, and data controls, these custom-built AI capabilities can meaningfully streamline security operations and help address chronic pain points that contribute to analyst burnout.
CISO Forum: How can exposure management help BFSI leaders shift from reactive defense to proactive resilience?
Rajnish Gupta: Exposure management helps BFSI leaders move from reactive defence to proactive resilience by unifying siloed security data and providing contextual insight into how vulnerabilities, misconfigurations, identities, and external threats intersect. Instead of manually stitching together findings from multiple point tools, exposure management maps the relationships between risks to reveal how attackers could realistically move through an environment. This delivers a complete, continuously updated view of the attack surface, enabling teams to prioritise what matters, anticipate emerging threats, and communicate risk clearly to leadership, shifting the organisation from firefighting incidents to preventing them.
CISO Forum: What immediate steps should CISOs take to prepare for the rise of “Shadow AI” within their organizations?
Rajnish Gupta: CISOs should begin by establishing a clear AI acceptable use policy that defines which AI tools are approved, what data may be shared with them, and how AI-generated content should be handled. This must be supported by a cross-functional AI governance council, spanning risk, compliance, security, IT, and business units, to maintain and enforce the policy. To control Shadow AI, organisations also need immediate visibility into which AI tools employees are using, along with technical controls to block unapproved services and educate staff on safe AI use. Together, these steps reduce accidental data leakage and create a foundation for secure, responsible AI adoption.
