Enterprise GenAI demands governance, strategy, and continuous human-in-the-loop validation.
Our journey with AI and GenAI has taught us that expectation management is as critical as the technology itself. In boardroom conversations, there is often a perception that AI is a plug-and-play tool—especially because public GenAI tools like ChatGPT make it appear so effortless. But in a highly regulated environment like ours, the reality is far more complex. As a CISO, I often find myself explaining why enterprise AI adoption is not as straightforward as it seems.
One of the first things we had to clarify—both internally and with stakeholders—was that GenAI is not the same as traditional AI. Classical AI has long existed in banking, powering rule-based systems and analytics. GenAI, however, is non-deterministic, context-driven, and inherently less predictable. It requires a completely different strategic lens.
We quickly realized GenAI isn’t plug-and-play—it demands strategy, guardrails, and human oversight.
Before implementation, we established an Innovation Hub to evaluate emerging technologies. An early GenAI experiment revealed unreliable and inconsistent outputs—even in controlled settings—highlighting the need for a more strategic, enterprise-grade approach. We responded by building a structured framework across three parallel tracks:
• A dedicated Center of Excellence (CoE)
• A use case–driven model
• A platform-based approach
Given our regulatory obligations, we intentionally limited GenAI use to internal-facing applications.
A key principle of our GenAI adoption is the human-in-the-loop approach, embedded across all workflows to ensure accuracy and accountability. Our field staff use GenAI to address queries related to products, processes, HR, and in some cases, customer interactions. However, every GenAI response is reviewed by a human before action is taken.
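The review gate described above can be reduced to a simple pattern: a model answer is held in a pending state and nothing is released until a human approves it. The sketch below is a minimal illustration of that idea, not Union Bank's actual system; the `DraftResponse` class and the `review`/`release` functions are hypothetical names introduced here.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PENDING_REVIEW = "pending_review"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class DraftResponse:
    """A GenAI answer that must pass a human gate before any action is taken."""
    query: str
    model_answer: str
    status: Status = Status.PENDING_REVIEW
    reviewer_note: str = ""


def review(draft: DraftResponse, approve: bool, note: str = "") -> DraftResponse:
    """Record the human reviewer's decision on a pending draft."""
    draft.status = Status.APPROVED if approve else Status.REJECTED
    draft.reviewer_note = note
    return draft


def release(draft: DraftResponse) -> str:
    """Only approved answers ever reach the requester; anything else raises."""
    if draft.status is not Status.APPROVED:
        raise PermissionError("response has not been approved by a human reviewer")
    return draft.model_answer
```

The key design choice is that release is impossible by construction, not by convention: the unreviewed path raises an error rather than silently passing the model's output through.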
We’ve also embedded an explainability framework, which has helped identify gaps and refine response quality. In the early stages—even with Retrieval-Augmented Generation (RAG)—we saw only around 50% accuracy. But through iterative feedback and improvement, we’ve increased this to approximately 70–75%.
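The iterative loop behind that improvement can be sketched as: retrieve context, draft an answer, collect the reviewer's verdict, and feed corrections back into the knowledge base so the next retrieval is better grounded. The naive keyword retriever and the `answer_with_feedback` helper below are simplified stand-ins, assumed for illustration only, for a production RAG pipeline.

```python
def retrieve(corpus: dict[str, str], query: str) -> str:
    """Toy retrieval: return the document sharing the most words with the query."""
    words = set(query.lower().split())
    best = max(corpus, key=lambda key: len(words & set(corpus[key].lower().split())))
    return corpus[best]


def answer_with_feedback(corpus: dict[str, str], query: str, reviewer) -> str:
    """Draft an answer from retrieved context, then apply human feedback.

    `reviewer` is any callable returning (approved, correction). When an
    answer is rejected, the correction is written back into the corpus so
    subsequent retrievals are grounded in the corrected text.
    """
    context = retrieve(corpus, query)
    draft = f"Based on policy: {context}"
    approved, correction = reviewer(draft)
    if not approved and correction:
        corpus[f"correction::{query}"] = correction
    return draft if approved else correction
```

Each rejected answer enlarges the corpus with vetted text, which is the mechanism by which accuracy compounds across iterations rather than resetting with every query.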
One of the biggest challenges we face is data governance and model auditability. Most organizations still lack robust frameworks to identify and mitigate vulnerabilities in GenAI systems. While the promise is immense, enterprise GenAI is not the same as personal use. It requires quality data, explainability, human oversight, and a governance structure that aligns not only with business needs, but also with ethical and regulatory standards.
–Authored by Anil Kuril, Chief Information Security Officer & Head – Data Protection Office, Union Bank of India