Scaling AI safely: Why India needs an AI-first approach to security

India’s artificial intelligence journey is entering a decisive new phase. What was once experimentation is quickly becoming execution at scale. AI is now deeply integrated into the financial services sector, digital public platforms, recruitment platforms, healthcare, and consumer apps. This is a reflection of India’s aspirations to use AI as a catalyst for productivity, inclusion, and growth.

Yet as AI adoption accelerates, the uncomfortable truth is that security readiness is not keeping pace with AI ambition.

The majority of current security models were designed for a slower, perimeter-focused digital world. In that world, applications were static, data flows were predictable, and a human was always in the monitoring loop. This is no longer the case with AI. Today’s AI systems operate at machine speed, consume data continuously, and communicate through complex networks of APIs, platforms, and automated agents, often without human intervention.

This leads to a clash between the non-deterministic nature of AI and the static nature of traditional security.

Security in the age of machine speed

The traditional approach to cybersecurity has centered mainly on breach prevention: keeping attackers out and sensitive data safe. This is still important but no longer adequate for AI-driven systems. AI systems are inherently dynamic and increasingly autonomous. They learn, adapt, and evolve constantly, which means the attack surface is constantly changing too.

APIs have become the foundation of AI ecosystems, connecting models to data sources, applications, and users, all of which is essential to agentic AI operations. They are also being exploited by malicious actors through logic flaws, input manipulation, and malicious automation. Beyond API abuse, threats such as data poisoning and model manipulation pose risks that are subtle, fast-moving, and hard to detect with traditional methods.

In this regard, being “security-ready” must mean more than adherence to standards or incident-response plans. It must mean the capability to safeguard AI systems in real time, across data flows, decision pipelines, and automated actions, as they operate at scale.

Data integrity is the new security frontier

Protecting data is important, but protecting its integrity is just as critical, because an AI system is only as reliable as the data it consumes. If the training, inference, or feedback data is tainted, the results can be biased or even harmful, even if no breach has occurred.
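One simple integrity control implied by the point above is a hash manifest: records are fingerprinted when the dataset is approved, and re-verified before training or inference so that silent tampering is caught even when no breach alarm has fired. This is a minimal sketch; the function names are illustrative, not a standard API.

```python
import hashlib
import json

# Minimal data-integrity sketch (illustrative names, not a standard API).

def _record_hash(record: dict) -> str:
    # Canonical JSON so the same record always hashes the same way.
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def build_manifest(records: list[dict]) -> list[str]:
    """Fingerprint every record at the moment the dataset is approved."""
    return [_record_hash(r) for r in records]

def verify_records(records: list[dict], manifest: list[str]) -> list[int]:
    """Return indices of records that no longer match the manifest,
    indicating possible tampering or silent corruption."""
    return [
        i for i, (record, expected) in enumerate(zip(records, manifest))
        if _record_hash(record) != expected
    ]
```

A production pipeline would store the manifest separately from the data (and ideally sign it), so an attacker who can poison records cannot also rewrite the fingerprints.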

This becomes especially important where AI outcomes directly affect citizens, customers, and institutions. Whether it is credit approval, fraud analysis, healthcare diagnosis, or public service delivery, trust in the outcome of AI rests on the integrity of the underlying data.

Security, therefore, needs to evolve from merely blocking threats into an enabler of innovation: one that ensures AI systems function as expected even when they are abused, manipulated, or scaled in unexpected ways.

Operationalizing trust, rather than just regulating it

India has taken significant steps in laying the policy and regulatory foundations for its digital and AI future. However, regulation by itself cannot keep pace with threats that operate at machine speed. Trust in AI cannot be created through frameworks and guidelines alone; it has to be implemented through technology and architecture.

What this means is that security needs to be baked into AI systems rather than bolted on as an add-on. It also means shifting from reactive controls to continuous protection that can detect and respond to anomalous behavior in real time.
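The idea of continuous protection can be sketched with a simple statistical baseline: watch a behavioral signal (say, request volume from one automated agent) and flag values that deviate sharply from recent history. The rolling z-score approach, window size, and threshold below are illustrative assumptions; real deployments use far richer behavioral models.

```python
import math
from collections import deque

# Illustrative sketch of continuous anomaly detection; window and
# threshold values are assumptions, not recommended settings.

class RollingAnomalyDetector:
    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.values = deque(maxlen=window)  # recent history only
        self.threshold = threshold          # z-score cutoff

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.values) >= 10:  # wait for a minimal baseline
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.values.append(value)  # keep learning the baseline
        return anomalous
```

The point of the sketch is architectural, not algorithmic: detection runs inline with every observation, at machine speed, rather than waiting for a human review cycle or a post-incident investigation.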

Crucially, security teams must work alongside AI developers and business leaders rather than being brought in as an afterthought. The measure of security readiness should be how confidently organizations can scale AI without creating systemic risk, not how quickly they can respond after something has gone wrong.

The next phase of India’s AI journey

The potential of AI for India is vast, but its long-term impact will depend on trust. If AI is seen as opaque, unreliable, or susceptible to misuse, adoption may continue on the surface but remain tenuous in practice.

What it means to be “security-ready” has to be rethought: it is not about slowing innovation down but about allowing it to accelerate securely. By turning security into a core capability that safeguards data integrity, governs automation, and operates at machine speed, India can ensure that AI adoption flourishes responsibly. The future of AI in India will be shaped not by how fast models are deployed, but by how securely and robustly they can be trusted to function. That is the real measure of AI readiness in a digital-first country like India.

Authored by Reuben Koh, Director of Security Technology and Strategy, Akamai Technologies
