In an era where cyber threats evolve at machine speed, traditional security approaches are rapidly becoming obsolete. India's reported cybersecurity incidents doubled to 22.68 lakh in 2024, and attackers now leverage AI to automate phishing, identity abuse, and lateral movement at unprecedented scale. For Chief Information Security Officers navigating this volatile landscape, the challenge is stark: human-in-the-loop defences cannot keep pace with attacker velocity.

In an interaction with CISO Forum, Diwakar Dayal, Managing Director and Area Vice President at SentinelOne India & SAARC, shares critical insights on operationalizing autonomous AI defenses, tackling shadow AI proliferation, and executing Zero Trust at scale. As regulatory frameworks like CERT-In directives and the DPDP Act reshape compliance expectations, Dayal outlines how CISOs can balance automation with accountability while building truly cyber-resilient organizations for 2026 and beyond.
CISO Forum: AI-driven attacks are now operating at machine speed. How should CISOs rethink detection and response when human-in-the-loop is no longer fast enough?
Diwakar Dayal: CISOs must pivot to fully autonomous, agentic AI platforms that operate at attacker velocity. That means three shifts:
• Autonomous AI defences: transition to agentic platforms like SentinelOne’s Singularity Platform for instant triage, investigation, and remediation of routine threats, removing human delays from the loop.
• Machine-speed reasoning: use real-time exploit prediction and correlation to match attacker velocity, especially as India’s cybersecurity incidents doubled to 22.68 lakh in 2024.
• Streamlined SOC operations: automate the handling of alert overload, often thousands of alerts per minute, freeing analysts for strategic work amid escalating AI threats.
CISO Forum: Shadow AI and unsanctioned models are proliferating inside enterprises. What new risk vectors does this create for CISOs, and how can they regain visibility and control?
Diwakar Dayal: Shadow AI creates serious governance and data protection risks. Employees using unsanctioned generative AI tools may unknowingly push sensitive data beyond enterprise controls, increasing the risk of IP leakage and non-compliance with the DPDP Act. For CISOs, the core challenge is the loss of visibility into where data is going, how it is processed, and which AI models are involved. Regaining control requires continuous discovery, stronger governance, and security controls that work at machine speed without slowing innovation.
Key focus areas for CISOs:
• New risk vectors: Unapproved AI tools may store prompts and files externally with little logging, increasing the risk of sensitive data and IP exposure.
• AI-powered discovery: Use agent-based visibility to continuously monitor network traffic, APIs, and SaaS environments and detect shadow AI usage in real time (a simplified sketch follows this list).
• Integrated controls: Combine discovery with DLP and CASB to enforce data protection across hybrid and cloud environments.
• Clear governance: Define approved AI tools, usage guidelines, and regular reviews to turn shadow AI from a hidden risk into governed innovation.
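To make the discovery idea concrete, below is a minimal, vendor-neutral sketch of flagging shadow AI usage from egress proxy logs. The domain watchlist, log columns, and file name are illustrative assumptions, not any product’s actual detection logic.

```python
import csv
from collections import Counter

# Hypothetical watchlist of public generative AI endpoints (illustrative only).
GENAI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def find_shadow_ai(proxy_log_path: str) -> Counter:
    """Count per-user requests to unsanctioned generative AI services."""
    hits: Counter = Counter()
    with open(proxy_log_path, newline="") as fh:
        # Assumed log columns: timestamp, user, destination_host, bytes_out
        for row in csv.DictReader(fh):
            host = row["destination_host"].lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("egress_proxy.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

In practice, findings like these would feed DLP and CASB policies rather than stand alone, so that discovery translates into enforcement.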
CISO Forum: Attackers are using AI to automate phishing, identity abuse, and lateral movement. How does this shift the economics of cyber defence for Indian organisations with limited security talent?
Diwakar Dayal: AI has commoditised cyberattacks, making phishing, impersonation, and credential abuse cheap, fast, and highly scalable. This directly challenges Indian organisations already facing a chronic cybersecurity skills shortage. The economics of defence now favour automation. Autonomous SOC platforms can handle the majority of investigations without constant human intervention, reducing dependence on scarce talent while significantly improving response speed. With CERT-In consistently flagging phishing and credential abuse as dominant attack vectors, early detection and automated containment are essential to limit business disruption, financial impact, and regulatory exposure.
CISO Forum: Zero Trust is a regulatory priority in India, but it is hard to execute at scale. How can autonomous, agentic AI operationalize Zero Trust in the SOC?
Diwakar Dayal: Zero Trust often fails in practice due to legacy systems and outdated assumptions that internal environments are inherently safe. That model no longer holds.
Autonomous, agentic AI allows Zero Trust to move from theory to daily operation. Platforms like Singularity continuously validate identities, devices, and behaviours across endpoints, cloud workloads, and SaaS environments, using live risk signals and device posture rather than static rules.
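As a minimal, vendor-neutral sketch of what "live risk signals and device posture rather than static rules" can look like, the example below combines an identity risk score, device compliance, and resource sensitivity into an access decision. The signals, weights, and thresholds are illustrative assumptions, not the Singularity Platform’s logic.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_risk: float       # 0.0 (clean) .. 1.0 (likely compromised)
    device_compliant: bool     # e.g. security agent healthy, disk encrypted
    geo_anomaly: bool          # sign-in from an unusual location
    resource_sensitivity: int  # 1 (low) .. 3 (high)

def decide(req: AccessRequest) -> str:
    """Return 'allow', 'step_up' (force re-authentication), or 'deny'."""
    # Non-compliant devices are never trusted, regardless of identity signals.
    if not req.device_compliant:
        return "deny"
    # Combine live signals and weigh them against resource sensitivity.
    score = req.identity_risk + (0.3 if req.geo_anomaly else 0.0)
    threshold = 0.8 - 0.2 * req.resource_sensitivity
    if score >= threshold + 0.3:
        return "deny"
    if score >= threshold:
        return "step_up"
    return "allow"

if __name__ == "__main__":
    print(decide(AccessRequest(0.2, True, False, 1)))  # allow
    print(decide(AccessRequest(0.5, True, True, 3)))   # deny
```

The design point is that the decision is recomputed on every request from current telemetry, not granted once and cached.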
By automating investigations and responses, these platforms reduce reliance on manual processes, which is critical given India’s cybersecurity talent gap. More importantly, this approach brings enterprise-grade Zero Trust capabilities to SMBs at a time when India’s rapid AI adoption is placing unprecedented stress on existing security models.
CISO Forum: Identity has become the primary attack surface. How should CISOs evolve identity security as AI agents, bots, and non-human identities multiply across the enterprise?
Diwakar Dayal: As AI agents, bots, APIs, and service accounts multiply, identity security must extend beyond humans. Non-human identities often operate with excessive privileges and limited oversight, making them attractive targets for attackers. CISOs should adopt AI-driven platforms that continuously monitor identity behaviour, automatically detect compromise, enforce credential rotation, and limit blast radius. Built-in audit trails and explainable AI actions are critical to support CERT-In reporting and compliance with the DPDP Act. Given that phishing and credential abuse remain leading attack vectors in India, CISOs should assume identity compromise is inevitable and design controls accordingly.
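A minimal sketch of the non-human identity hygiene described above: flag service accounts whose credentials have not rotated within policy or that hold overly broad privileges. The data model, the 90-day rotation window, and the scope names are assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

MAX_KEY_AGE = timedelta(days=90)            # assumed rotation policy
BROAD_SCOPES = {"admin", "*", "org:write"}  # assumed wide-blast-radius scopes

@dataclass
class NonHumanIdentity:
    name: str
    last_rotated: datetime
    scopes: set[str]

def audit(identities: list[NonHumanIdentity], now: datetime) -> list[str]:
    """Return findings for stale or over-privileged non-human identities."""
    findings = []
    for nhi in identities:
        if now - nhi.last_rotated > MAX_KEY_AGE:
            findings.append(f"{nhi.name}: credential older than {MAX_KEY_AGE.days} days, rotate")
        broad = nhi.scopes & BROAD_SCOPES
        if broad:
            findings.append(f"{nhi.name}: broad scopes {sorted(broad)}, reduce blast radius")
    return findings

if __name__ == "__main__":
    now = datetime(2025, 1, 1)
    fleet = [
        NonHumanIdentity("ci-bot", datetime(2024, 6, 1), {"repo:read"}),
        NonHumanIdentity("report-agent", datetime(2024, 12, 1), {"admin", "repo:read"}),
    ]
    for finding in audit(fleet, now):
        print(finding)
```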
CISO Forum: With CERT-In directives and the DPDP Act in play, how can CISOs balance autonomous security controls with compliance, auditability, and accountability?
Diwakar Dayal: Indian regulations do not restrict automation; they demand explainable automation. CISOs should deploy autonomous controls within clearly defined policies, thresholds, and escalation paths so every AI-driven action is traceable and auditable. Strong data discovery and classification are foundational under the DPDP Act. Without knowing what data exists and who can access it, compliance breaks down. Consolidated visibility across endpoint, cloud, identity, and network layers ensures faster response and cleaner regulatory reporting.
The organisations that modernise early won’t just meet compliance requirements; they’ll build long-term resilience, stronger cyber hygiene, and trust in an increasingly AI-driven digital economy.
CISO Forum: Looking ahead to 2026, what will distinguish truly cyber-resilient organisations in India, and how will AI reshape the CISO’s role over the next two years?
Diwakar Dayal: By 2026, cyber-resilient organisations will operate at machine speed with human accountability. Success will be measured not by alert volumes, but by how quickly threats are detected, validated, and contained. AI will execute defence actions autonomously, while humans retain oversight, policy control, and judgment.
SOCs will shift from reactive, log-based models to continuous telemetry and AI-driven decision engines that stop attacks as they unfold. Identity and SaaS sprawl will define the primary attack surface, with CISOs increasingly judged on how well they secure access and privilege across human and non-human identities.
Culture will matter as much as technology. Security embedded into everyday workflows will outperform policy-heavy approaches. Resilience will also extend beyond the enterprise, with intelligence sharing through CERT-In and ISACs becoming essential. The CISO role will evolve from tool management to resilience architecture, governance, and business alignment.
By 2026, cyber resilience in India will be about responding faster than attackers can adapt, with AI doing the heavy lifting and humans taking accountability.
***
