As Agentic AI evolves from experimental to enterprise-ready, the need for robust security, governance, and operational clarity has become paramount. In an exclusive interview with CISO Forum, the CTO of Onix unveils how enterprises can responsibly scale autonomous AI systems without compromising control. From real-time observability in AI agents to privacy-preserving synthetic data and cloud-native compliance, Onix is pioneering a trust-first approach. With the integration of UJET's professional services and innovations like the Wingspan platform, the company is shaping the next chapter of intelligent enterprise transformation, where AI doesn't just act, but acts accountably, securely, and in alignment with business goals.

Niraj Kumar
Chief Technology Officer
Onix
CISO Forum: Agentic AI is emerging as the next significant advancement in AI. However, several security concerns make its implementation challenging within the enterprise ecosystem. What are your thoughts on this, and how do you see Agentic AI evolving in the coming years?
Niraj Kumar: Agentic AI represents a significant leap in capability, but it also introduces a new class of complexity, particularly around control, compliance, and operational risk. Unlike traditional models that follow static logic, agentic systems are designed to plan, decide, and act with autonomy. This autonomy brings immense potential, but it also demands precision, especially in regulated sectors such as healthcare, finance, and public services, where actions must align with clearly defined policies and accountability frameworks.
At Onix, we see Agentic AI not as a black box but as a high-performance enterprise tool that must function within well-defined boundaries. Wingspan, our enterprise AI platform, is built with trust at its foundation. Every agent operates with granular access controls, policy-based decision frameworks, and real-time observability to ensure all actions remain auditable, compliant, and aligned with organisational goals.
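The pattern described above, in which every agent action passes through granular access controls and leaves an auditable trail, can be sketched in a few lines. This is an illustrative toy, not the Wingspan API: the `AgentPolicy` class, its field names, and the handler convention are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Illustrative policy gate: an agent may only run whitelisted actions,
    and every attempt (permitted or not) is recorded for audit."""
    agent_id: str
    allowed_actions: set
    audit_log: list = field(default_factory=list)

    def execute(self, action: str, handler):
        permitted = action in self.allowed_actions
        # Log before acting, so even denied attempts remain traceable.
        self.audit_log.append(
            {"agent": self.agent_id, "action": action, "permitted": permitted}
        )
        if not permitted:
            raise PermissionError(f"{self.agent_id} may not perform {action!r}")
        return handler()

policy = AgentPolicy("billing-agent", {"read_invoice", "summarize"})
result = policy.execute("summarize", lambda: "ok")
```

The key design point is that the policy check and the audit entry live in the same code path, so an agent cannot act without being observed.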
Responsible autonomy will define the next phase of enterprise AI. Transparency, continuous policy validation, and contextual explainability will be essential. The question is not just what agents are capable of, but whether they can act reliably, safely, and in complete alignment with enterprise intent. With Wingspan, the answer is yes.
Also, we have recently acquired the professional services business unit of UJET, a cloud-native Contact Center as a Service (CCaaS) software provider. This strategic acquisition enhances our position as a premier Google Cloud partner, accelerating the delivery of next-generation, AI-driven customer engagement solutions and advancing the modernization of enterprise contact centers.
This integration will bring together UJET’s CCaaS capabilities with Onix’s unique IP and Agentic AI solutions across more than 10 industries.
CISO Forum: What future trends do you foresee in the cloud security space, and which areas are currently the most significant focus for CISOs?
Niraj Kumar: As cloud environments continue to grow in complexity, CISOs are increasingly focused on integrating security earlier in the development process. We see a strong future for embedding security directly into DevSecOps practices, particularly with automated infrastructure as code (IaC) scanning. This proactive approach enables organisations to detect and resolve security risks before they escalate.
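To make the IaC-scanning idea concrete, here is a deliberately minimal sketch of the kind of rule such a scanner applies: flag storage resources whose parsed configuration permits public access before anything is deployed. The rule and the config shape are illustrative and not tied to any particular scanning tool.

```python
def scan_iac(resources):
    """Return findings for storage buckets that allow public access.
    `resources` is assumed to be an already-parsed IaC config (list of dicts)."""
    findings = []
    for res in resources:
        if res.get("type") == "storage_bucket" and res.get("public_access", False):
            findings.append(f"{res['name']}: public access enabled")
    return findings

# Hypothetical parsed config for illustration.
config = [
    {"type": "storage_bucket", "name": "logs", "public_access": True},
    {"type": "storage_bucket", "name": "archive", "public_access": False},
]
issues = scan_iac(config)
```

Real scanners apply hundreds of such rules, but the proactive principle is the same: the misconfiguration is caught in the pipeline, not in production.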
Another key focus area is identity security. With Zero Trust frameworks and passwordless authentication gaining traction, it’s clear that continuous verification of users and workloads is non-negotiable. We’re also seeing greater reliance on Cloud Security Posture Management tools, which provide unified visibility and automated remediation across multi-cloud environments.
In addition, supply chain security is top of mind, with more enterprises adopting Software Bills of Materials (SBOMs) to track third-party vulnerabilities. As AI-driven threats and evolving data privacy laws, such as India's DPDP Act, create new challenges, advanced threat intelligence and compliance automation have become essential.
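The SBOM use case mentioned above amounts to matching a component inventory against known advisories. The sketch below uses a CycloneDX-style `components` list; the advisory data is invented purely for illustration.

```python
import json

# A minimal CycloneDX-style SBOM fragment (components only).
sbom = json.loads("""{
  "components": [
    {"name": "libfoo", "version": "1.2.0"},
    {"name": "libbar", "version": "2.0.1"}
  ]
}""")

# Illustrative advisory data, keyed by (name, version).
advisories = {("libfoo", "1.2.0"): "example advisory (illustrative)"}

def affected(sbom, advisories):
    """Return (name, version, advisory) for each component with a known issue."""
    return [
        (c["name"], c["version"], advisories[(c["name"], c["version"])])
        for c in sbom["components"]
        if (c["name"], c["version"]) in advisories
    ]

hits = affected(sbom, advisories)
```

In practice the advisory side comes from a vulnerability feed rather than a hand-written dict, but the lookup logic is the core of SBOM-driven tracking.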
Looking ahead, standards like the Model Context Protocol (MCP) are expected to play a key role in enhancing cloud security by enabling better visibility into how AI models operate within cloud-native architectures. By providing a structured framework for documenting model behavior, lineage, and access patterns, it helps ensure transparency and control—critical for regulated industries and zero-trust implementations alike.
CISO Forum: How do you address the security concerns that typically arise when integrating diverse data streams and knowledge graphs within enterprise environments?
Niraj Kumar: Integrating data from multiple systems is a necessary step in enabling enterprise-wide AI, but it also introduces risks that need to be managed with precision. We address this challenge by embedding security into every layer of the integration process. When organisations bring together structured, semi-structured, and unstructured data from across departments and platforms, the most significant concerns often relate to unauthorised access, loss of data fidelity, and gaps in regulatory compliance.
To manage this, our agentic AI relies on an intelligent context engine, powered by our Eagle agent. It creates a knowledge graph that is not only intelligent but also security-aware, identifying ownership, access rules, and dependencies from the beginning. Instead of pushing all data into a single pool, Wingspan maintains data boundaries where necessary. By combining automated oversight with policy-driven controls, we help enterprises unify their data without increasing exposure.
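A "security-aware" knowledge graph of the kind described can be pictured as nodes that carry ownership and access metadata, with traversal filtered by the requester's role. The structure below is hypothetical, not the actual Eagle or Wingspan representation.

```python
# Each node records its owner, which roles may see it, and its outgoing links.
graph = {
    "customers": {"owner": "sales", "allowed_roles": {"sales", "support"},
                  "links": ["orders"]},
    "orders":    {"owner": "finance", "allowed_roles": {"finance", "sales"},
                  "links": ["payments"]},
    "payments":  {"owner": "finance", "allowed_roles": {"finance"},
                  "links": []},
}

def visible_neighbors(graph, node, role):
    """Traverse one hop, returning only nodes the given role may access."""
    return [n for n in graph[node]["links"]
            if role in graph[n]["allowed_roles"]]

sales_view = visible_neighbors(graph, "customers", "sales")
support_view = visible_neighbors(graph, "orders", "support")
```

Because the access rules live on the graph itself, unifying the data does not mean flattening its boundaries: a support role can reach orders-adjacent context without ever seeing payment nodes.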
CISO Forum: What security governance frameworks have you implemented to ensure that autonomous agents operate within defined enterprise security boundaries and meet compliance requirements?
Niraj Kumar: Autonomous AI can only succeed in the enterprise if it operates within strict governance boundaries. We’ve made this a priority from the start. Our autonomous agents are governed by a framework that enforces role-based access, tracks every decision through audit logs, and includes fallback mechanisms to handle unexpected scenarios. Every action taken by an agent is logged and traceable, which allows teams to review, understand, and explain how outcomes were reached. That’s especially important in regulated sectors, where compliance isn’t optional. Whether in healthcare, finance, or public services, organisations need to ensure their AI systems don’t act outside approved parameters. This level of control allows enterprises to scale their AI operations with confidence, knowing that the system is working within the limits they’ve defined.
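The audit-and-fallback pattern described here, where every agent decision is logged and unexpected failures route to a fallback rather than failing silently, can be sketched as follows. All names are illustrative, not part of any Onix framework.

```python
audit_log = []

def run_with_fallback(agent_id, action, primary, fallback):
    """Run the primary handler; on any exception, invoke the fallback.
    Either way, append a traceable entry to the audit log."""
    try:
        outcome = primary()
        status = "ok"
    except Exception as exc:
        outcome = fallback()
        status = f"fallback ({exc})"
    audit_log.append({"agent": agent_id, "action": action, "status": status})
    return outcome

def flaky():
    # Simulated unexpected scenario, e.g. an upstream service failure.
    raise RuntimeError("upstream timeout")

result = run_with_fallback("claims-agent", "fetch_policy", flaky,
                           lambda: "queued for human review")
```

The point of the pattern is that reviewers can later reconstruct exactly which path was taken and why, which is what makes outcomes explainable in regulated settings.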
CISO Forum: Data privacy is paramount for CISOs considering AI implementations. How do you help organizations maintain data security while still providing meaningful training datasets for AI models?
Niraj Kumar: Every enterprise faces the challenge of balancing data privacy with AI performance. We help organisations navigate this challenge by enabling privacy-preserving AI development through synthetic data. Our proprietary tool, Kingfisher, creates synthetic datasets that mirror the structure and statistical properties of real data without exposing any sensitive or identifiable information. This approach is especially effective in regulated industries, where using live data for training or testing is often not an option. In these cases, we've built static, production-like environments that replicate real-world conditions, allowing teams to test and validate AI models safely.
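As a toy illustration of the synthetic-data idea: generate values that match the statistical shape of a "real" column (here, just its mean and spread) without copying any actual record. Kingfisher's real methods are of course far richer; this only demonstrates the principle.

```python
import random
import statistics

random.seed(0)  # deterministic for the example

# Stand-in for a sensitive numeric column (e.g. transaction amounts).
real = [42.0, 51.5, 47.2, 49.9, 45.3, 50.1]

# Fit simple summary statistics to the real data...
mu, sigma = statistics.mean(real), statistics.stdev(real)

# ...then sample fresh values with the same distributional shape.
# No real record is reused, but aggregate properties are preserved.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]
```

Production-grade synthetic data must also preserve correlations, categorical distributions, and referential structure across tables, but the privacy property is the same: models train on data that behaves like the original without containing it.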
Data privacy for us isn’t just about compliance — it’s about maintaining control over how data is accessed, used, and shared. By integrating privacy into the data preparation and model training process, we help organisations move forward with AI without taking unnecessary risks.