When Agents Go Rogue 

Indian enterprises are deploying AI agents faster than they can govern them. Autonomous systems are making irreversible decisions that nobody authorized and that nobody can fully audit.

A Scenario That Could Happen Today 

It is 11:47 p.m. in a mid-sized Indian insurance company’s Mumbai headquarters. No human is in the office. But an AI voice agent is working without pause, handling inbound policy queries, cross-selling products, and processing consent-based debits for premium payments.  

A customer, frustrated after a claims dispute, calls in. Through a carefully worded complaint laced with prompt injection, the caller manipulates the agent’s instructions, convincing the system that the customer’s debit mandate covers a “policy upgrade.” Within minutes, the agent has debited four accounts, issued three duplicate policies, and sent confirmation emails to all parties. By the time the operations team notices the anomaly, the damage has already cascaded across multiple downstream systems. No human authorized any of it. No traditional firewall caught it. And the audit trail is nearly impossible to reconstruct.

This is not a speculative future. It is the kind of scenario that is already keeping India’s enterprise security leaders awake at night. This is the new reality of enterprise risk—one that fundamentally alters how cybersecurity must be understood. 

“Trust AI for speed, trust the human for judgment, and trust the CISO for guardrails.” – Abhishek Jha, Global CISO, Tata Technologies 

From Deterministic to Autonomous Threats 

For decades, cybersecurity has operated in a broadly predictable universe. Threats were rule-based, human-originated, and largely reactive. Firewalls, DLP tools, SIEM platforms—the entire security stack was engineered around the assumption that you could define what “normal” looked like and detect deviations from it. Agentic AI breaks that assumption entirely.

The security industry has traditionally been built around a deterministic threat landscape. With the advent of AI, that landscape is becoming non-deterministic, and the threats themselves increasingly autonomous.

What makes this shift so concerning is not just the scale or speed at which AI agents operate, but the nature of the actions they can take. Modern agentic systems do not merely respond to queries; they call APIs, trigger financial transactions, send communications, modify databases, and coordinate with other agents. According to the State of Agentic AI Security 2025 report, 69% of enterprises are already piloting or running agentic AI in production. Yet only 21% have the visibility needed to secure those deployments. 

For Indian enterprises, the challenge is particularly acute. A recent Delinea survey found that 68% of Indian respondents had discovered unsanctioned AI tools or agents accessing company systems in the past twelve months—well above the global average of 53%. Only 27.6% of respondents said they could detect shadow AI in real time.

“One incorrect autonomous injection will result in different following injections.” – Dr. Pawan Chawla, SVP, CISO & DPPO, Tata AIA Life Insurance 

The Risk Taxonomy of Agentic AI 

Dr. Pawan Chawla, SVP, CISO & DPPO at Tata AIA Life Insurance, articulates a clear taxonomy of the risks that enterprise CISOs must now contend with. 

The first is autonomy risk—systems acting before sufficient verification or validation is complete. The second is cascading errors, where one incorrect autonomous action triggers a chain of downstream consequences across multi-agent pipelines. “One incorrect autonomous injection will result in different following injections,” he notes, a dynamic that makes multi-agent coordination itself a critical risk.

Then there is prompt injection and control takeover—arguably the most insidious risk. Unlike a traditional cyberattack that exploits code vulnerabilities, prompt injection exploits the semantic layer of an AI agent, manipulating its understanding of what it should do rather than breaking the rules of what it can do. A 2024 real-world incident in financial services demonstrated this pattern: an attacker tricked a reconciliation agent into exporting an entire customer database by framing the request as a routine business task.
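
To make the defensive pattern concrete, here is a minimal, hypothetical sketch of a default-deny authorization layer that sits between an agent and its tools. The key idea is that high-risk actions are validated against system-of-record data (here, a mandate registry) rather than against anything said in the conversation. Every name in the snippet (ALLOWED_ACTIONS, authorize, and so on) is invented for illustration, not drawn from any product mentioned in this article.

    # Illustrative Python sketch: default-deny tool authorization for an agent.
    # All names are hypothetical; the point is that authorization consults the
    # system of record, never the (potentially injected) prompt.
    ALLOWED_ACTIONS = {"answer_query", "send_quote"}           # low-risk, auto-approved
    MANDATE_BOUND_ACTIONS = {"debit_account", "issue_policy"}  # need a signed mandate

    def authorize(action: str, params: dict, mandates: set[str]) -> bool:
        """Return True only if the agent's proposed action is within policy."""
        if action in ALLOWED_ACTIONS:
            return True
        if action in MANDATE_BOUND_ACTIONS:
            # A caller claiming "my mandate covers an upgrade" changes nothing:
            # only a mandate_id present in the registry authorizes the debit.
            return params.get("mandate_id") in mandates
        return False  # default deny: unknown actions are never executed

    # The injected prompt convinces the agent to propose a debit...
    proposed = ("debit_account", {"account": "XX1234", "mandate_id": None})
    # ...but the policy layer checks the mandate registry, not the conversation.
    print(authorize(*proposed, mandates={"MND-2024-001"}))  # False: blocked

In the midnight scenario above, a layer like this would have held the “policy upgrade” debit no matter how persuasive the injected complaint was.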

Irreversibility is another critical dimension. When an AI agent executes a transaction, sends an email, or deletes data, that action may not be undoable. And finally, there is the problem of traceability—once a prompt cascades through multiple agents, reconstructing what happened, and proving who or what was responsible, becomes extraordinarily complex.
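
Reversibility and traceability can be designed for together. The sketch below, again purely illustrative and built on hypothetical names like execute and record, stages irreversible actions for human approval instead of executing them, and writes every step to an append-only audit log so the chain of events can be reconstructed afterwards.

    # Illustrative sketch: stage irreversible actions, log everything append-only.
    import json, time, uuid

    AUDIT_LOG = "agent_audit.log"
    IRREVERSIBLE = {"debit_account", "delete_record", "send_email"}

    def record(event: dict) -> None:
        event |= {"ts": time.time(), "event_id": str(uuid.uuid4())}
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(event) + "\n")  # entries are appended, never rewritten

    def execute(agent_id: str, action: str, params: dict) -> str:
        if action in IRREVERSIBLE:
            # Held for a human decision rather than executed autonomously.
            record({"agent": agent_id, "action": action,
                    "params": params, "status": "pending_approval"})
            return "held"
        record({"agent": agent_id, "action": action,
                "params": params, "status": "executed"})
        return "done"

    print(execute("voice-agent-7", "debit_account", {"amount": 4999}))  # "held"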

This last point carries profound legal implications that the industry has not yet resolved. A pressing question has emerged around agentic AI: if an agent makes a wrong decision while responding to a customer’s question, who bears the contractual obligation? It is an open question—and regulators around the world are only beginning to formulate answers.

“We understand what they want, why they want it, and what dataset they are going to use. That is the first and foremost control.” – Himachal Jothinarasimhan, Head of Cybersecurity, Ashok Leyland 

Shadow AI and Vibe Coding Time Bombs 

One of the most underappreciated risks in Indian enterprises today is shadow AI—the proliferation of AI tools adopted by employees and business units without governance oversight or security review. 

Abhishek Jha, Global CISO at Tata Technologies, describes a scenario that will resonate with anyone who has worked in a large organization. An Azure AI agent, built by a well-meaning developer to measure workforce productivity, inadvertently exposed compensation benchmarks, PII, and salary structures—information that would have taken months to extract using conventional methods. “Gone are the days of focusing only on network segmentation,” he observes. “Now it is the age of data segmentation.” 

Jha also raises the specter of “vibe coding”—a term for AI-generated code that bypasses every conventional security control in the software development lifecycle: no SAST, no DAST, and no three-tier architecture review. Code written by an AI at 1 a.m. and pushed into production by 5 a.m., with no human judgment applied between the two events. “There is no ISO or compliance standard for vibe coding,” he notes. “Your CIO or CTO asks for a piece of code, and it is productionized before the security team even knows it exists.” 
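
One plausible countermeasure is a hard deployment gate: no build reaches production without scan evidence and a named human reviewer, however the code was written. The sketch below is an assumption-laden illustration; the Attestation structure and may_deploy check are invented for this article, not a real CI system’s API.

    # Illustrative sketch: a deploy gate that refuses unscanned, unreviewed code.
    from dataclasses import dataclass

    @dataclass
    class Attestation:
        sast_passed: bool = False          # static analysis ran and passed
        dast_passed: bool = False          # dynamic analysis ran and passed
        human_reviewer: str | None = None  # a person signed off between writing and shipping

    def may_deploy(att: Attestation) -> bool:
        # AI-generated or not, code ships only with evidence attached.
        return att.sast_passed and att.dast_passed and att.human_reviewer is not None

    print(may_deploy(Attestation(sast_passed=True, dast_passed=True)))  # False: no reviewer
    print(may_deploy(Attestation(True, True, "lead.dev@example.com")))  # True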

His prescription is simple but profound: “Trust AI for speed, trust the human for judgment, and trust the CISO for guardrails.” 

A Governance Framework That Works 

The emerging consensus among India’s security leadership is that AI risk cannot be owned solely by the CISO. Himachal Jothinarasimhan, Head of Cybersecurity at Ashok Leyland, describes the AI governance committee his organization has established—a cross-functional body that interrogates every new AI tool request: What data will it access? What is the business need? What is the risk of exposure?

“We understand what they want, why they want it, and what dataset they are going to use,” he explains. “That is the first and foremost control.” 

But governance must extend well beyond the security team. Legal counsel must be involved—not as an afterthought, but as a core participant—given the rapidly evolving regulatory landscape around AI liability in India and globally. HR must engage, as the workforce transformation driven by AI creates its own class of human-machine interface risks. Business leaders must own their AI deployments, not delegate the risk to the CISO and walk away. “CISO’s role is governance and compliance,” says Jha. “Let it stick to that. The owner of the AI risk is the organization.”

Operationally, the emerging best practices for agentic AI security converge around several principles: maintaining a live inventory of every AI agent deployed in the enterprise (since you cannot control what you cannot see); implementing strict privilege boundaries and least-privilege access for non-human identities; establishing real-time behavioral monitoring with the ability to isolate and quarantine anomalous agents; designing for reversibility wherever possible; and building explainability into agent workflows so that every autonomous action can be traced, audited, and attributed. 
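
Two of those principles, the live inventory and least-privilege non-human identities, compose naturally, as the following hypothetical sketch suggests (AgentRecord, permit, and the scope strings are all invented for illustration): an unregistered agent gets no access at all, a registered one gets only its declared scopes, and monitoring can flip a quarantine flag to isolate it.

    # Illustrative sketch: agent inventory plus least-privilege scope checks.
    from dataclasses import dataclass, field

    @dataclass
    class AgentRecord:
        agent_id: str
        owner: str                       # the accountable business owner
        scopes: frozenset = field(default_factory=frozenset)  # least privilege
        quarantined: bool = False        # flipped by behavioral monitoring

    REGISTRY: dict[str, AgentRecord] = {}  # you cannot control what you cannot see

    def register(agent: AgentRecord) -> None:
        REGISTRY[agent.agent_id] = agent

    def permit(agent_id: str, scope: str) -> bool:
        agent = REGISTRY.get(agent_id)
        if agent is None or agent.quarantined:
            return False  # shadow or isolated agents get nothing
        return scope in agent.scopes

    register(AgentRecord("claims-bot", owner="Claims Ops",
                         scopes=frozenset({"read:claims"})))
    print(permit("claims-bot", "read:claims"))    # True
    print(permit("claims-bot", "write:payouts"))  # False: outside its scopes
    print(permit("rogue-bot", "read:claims"))     # False: not in the inventory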

Security by Design, Not Repair 

Perhaps the most important mindset shift that is emerging in India’s security community is moving from treating security as an afterthought to treating it as a future thought—something designed into AI systems at their inception, not retrofitted after an incident. 

“Security is not the afterthought like yesterday,” says Jha. “Now, security has to be a future thought, and you have to be ahead of the race to cope with AI.” 

Traditional DLP tools cannot intercept a voice query to ChatGPT. Traditional SIEMs cannot flag a cascading multi-agent prompt injection. Traditional compliance frameworks have not yet caught up with the realities of autonomous code generation or AI-driven financial transactions. The tooling gap is real, and the organizations that acknowledge it honestly—rather than retrofitting old controls onto new risks—will be the ones that govern AI responsibly. 

India is adopting AI faster than almost any other economy. The enterprises that will lead are not those that restrict innovation in the name of security, nor those that deploy autonomy without guardrails. They are the ones building the cross-functional trust, governance maturity, and security imagination needed to ensure that when the agent acts, it does so within bounds humans have deliberately chosen.

The agent is already in your company. The real question is whether you are prepared to govern it. 

This article draws on a panel discussion featuring Abhishek Jha (Tata Technologies), Dr. Pawan Chawla (Tata AIA Life Insurance), and Himachal Jothinarasimhan (Ashok Leyland), as well as findings from the State of Agentic AI Security 2025, Delinea’s India AI Security Survey 2026, and Acuvity’s 2025 State of AI Security report.
