
CISO
HCL Technologies
Rishi Mehta on redefining the CISO role, embracing AI, and embedding cybersecurity as a way of life across enterprises.
The cybersecurity landscape has undergone a seismic transformation, moving far beyond the traditional boundaries of firewalls and incident response. In an era where cyber threats evolve daily, and the line between digital and physical realms continues to blur, the role of Chief Information Security Officers has fundamentally shifted—from compliance enforcers to strategic business enablers.
As geopolitical tensions intensify, cyber risks escalate, and AI reshapes the threat landscape, the modern CISO must navigate an intricate web of challenges spanning boardroom conversations to everyday employee behaviors. The question is no longer whether organizations will face cyber threats but how prepared their culture is to respond, adapt, and thrive in an increasingly complex digital ecosystem.
In an exclusive conversation with CISO Forum magazine, Rishi Mehta, CISO at HCL Technologies, shares his insights on this transformative journey and the strategic imperatives shaping the future of cybersecurity leadership.
Rishi Mehta, Chief Information Security Officer at HCLTech, is a seasoned cybersecurity leader who brings a “keep-it-simple” mindset to drive secure, user-friendly innovation. With over two decades of experience across global enterprises, he has led large-scale security transformations, balancing risk, technology, and usability. His leadership is grounded in continuous learning, embracing change, and fostering meaningful human connections.
CISO Forum: What are the most significant cybersecurity threats you foresee in 2025? And what strategies should CISOs adopt to mitigate them effectively?
Rishi Mehta: It sounds clichéd because we say it every year, but the cybersecurity ecosystem genuinely keeps changing—not just every year, but every day. It is not that one seismic shift is happening; it is continuous change. However, certain fundamental learnings emerge over the course of a year, and those essentially translate into priorities and areas of focus for the following year.
When we look back and think about the areas where we would like to invest more time this year, I’ll name three. There could be many more, but I’ll go with three. One, the big elephant in the room is AI. You cannot list a top three, five, or ten without talking about AI, so I’ll put it right at the center as the first one. It matters from two perspectives. The first, of course, is understanding how we can better secure AI: what does that entail across the data, the application, the model, and the usage? Bringing a larger security perspective to AI is one of the significant focus areas this year.
On the same note, the second perspective is using AI in security itself, and there is a substantial element of FOMO here. As security professionals, we don’t want to be in a position where we miss out on using AI to improve security. If we are not using AI today—say, for better correlation, better detection of events, or better phishing-recognition patterns—that will set us back significantly as an industry, as an organization, and as a capability. So AI will be a super important priority.
The second priority, which has been an area of focus over the past couple of years and is maturing in organizations every year, is cyber resilience; this year we’ll see more significant changes in that whole capability build-out. Cyber resilience is viewed differently from the traditional BCP-DR setup. While there may be lessons and areas of leverage between the two, it has its own place under the sun. It is a critical parameter: if history has taught us anything, it is that things will go wrong and incidents will happen. What you practice and get better at is the ability to get back on your feet so that the business can do what it needs to do—and that’s not an easy task.
Understanding what your crown-jewel processes are—the things that matter most in an organization—is not easy. In a large organization like ours, with 200,000-plus people, many geographies, different business lines, and a very diverse, complex set of ecosystems, it is not simple to say, “Here are the top five things.” But that is the crux of the matter. If you can determine that, you can build the cyber resilience cover on top of it and ensure the proper mechanisms are in place across the three phases of a cyber resilience strategy: pre-breach, during the breach, and post-breach. Testing it out maturely and more realistically will be a significant area of focus.
The third area, which remains as relevant as ever, is building the culture of cybersecurity. What does a culture of cybersecurity mean? Look, the workforce is changing—you’ve got younger folks, Gen Zs, in the workforce. What cybersecurity means to you may mean something different to someone else. But the risk, or the perceived risk, to the organization, to our customers, and to our larger stakeholders does not depend on whether you’re a Gen Z or a millennial or anyone else. Building that trust is core to the business itself: we will continue to build trust with our customers.
Thus, the overarching layer of that cybersecurity culture also needs to evolve. What are the right nudges? What are the right reinforcements for the correct behaviors, so that employees look at risk, report risk, and help remediate risk? Because it has always been, and will continue to be, a shared responsibility.
Cybersecurity is not, of course, one person’s or one team’s responsibility. It is a shared responsibility. The culture of cybersecurity in an organization like ours is therefore all-pervasive. It encompasses our customers, employees, partners, and broader stakeholders—think of our leadership and board—as well as our partnerships with vendors and the industry; all of that comprises the ecosystem of cybersecurity. Building that culture, and being better aware of the context in which it is built, will be critical.
While those are my top three, I’ll add one more, which is topical and contemporary given today’s geopolitical situation. We’ve all seen that fairly closely over the past year, in various scenarios, and it is a stark reminder that conflict—geopolitical or otherwise—has its bearing on cybersecurity as well. How do we, as cybersecurity professionals and as an industry, come together with a newer lens on what cyber threat intelligence even means? The distinction between cyber intelligence and geopolitical intelligence is blurring; at a certain point, it is simply intelligence. The way we create and consume intelligence is bound to change.
Some of these distinctions between how we’ve siloed ourselves—to say, this is only cyber intelligence, this is only geopolitical, or another category—I think those boundaries will merge. Thus, our ability to generate and utilize cyber intelligence for enhanced security will also undergo significant changes this year.
CISO Forum: How do you envision the evolution of the Zero Trust model in the coming years, and how can AI and automation enhance its implementation?
Rishi Mehta: The Zero Trust model has been around for some time. It’s not new. But most of us now have the maturity to understand that this is not a tool or a set of technology controls you throw at someone and say, “Hey, listen, I have gotten into Zero Trust.” If you look at the Zero Trust continuum—the various pillars of Zero Trust: think identity, think data, and so on—you will see that organizations are all at different points of maturity on their Zero Trust journey.
During that journey, a few pillars keep reminding us that they are equally important, if not more so, than the others. In my mind, two are front and center today. One, of course, is identity. It has come a long way from what it was in the past. As an industry, we are far better at understanding identity risk. However, the identity ecosystem is changing so rapidly and drastically that the need to focus on identity as a vector in the Zero Trust journey has never been greater.
Think about all the machine identities, agentic AI, and the bots we are talking about. All of these translate into identities. And then, of course, you’ve got the more traditional set of identities—users, service accounts, and all of that privileged access. The ability to have visibility into that ecosystem and correlate it with the other factors—whether the network, the endpoint, or something else—is even more relevant today.
How will we shine a light on those identities and how they are consumed, and how do you find that needle in the haystack? With identities, you’re talking millions and millions. Finding that anomalous behavior is the concern today, and it brings me back to my earlier point: the fear of missing out on AI applies most to areas like this.
When you’ve got such a massive, disparate set of data to deal with, creating those kinds of rules, alerts, and intelligence—figuring out the differential between good and not-so-good and acting on it immediately—is where I think AI will play a very significant role.
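[Editor’s note: As an illustration of the kind of identity anomaly detection described above, the sketch below scores hypothetical per-identity activity with an unsupervised model. The feature names, sample values, and choice of scikit-learn’s IsolationForest are assumptions made for the example, not a description of HCLTech’s tooling.]

```python
# Minimal sketch: unsupervised anomaly scoring over identity activity.
# Feature names and values are illustrative assumptions, not a real deployment.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-identity, per-day activity (human and machine identities alike).
events = pd.DataFrame({
    "identity":        ["svc-backup", "alice", "bot-crawler", "bob", "svc-backup"],
    "logins":          [4, 12, 900, 9, 350],
    "failed_attempts": [0, 1, 2, 0, 45],
    "distinct_ips":    [1, 2, 3, 1, 27],
    "offhours_ratio":  [0.0, 0.1, 0.9, 0.0, 0.8],
})
features = events[["logins", "failed_attempts", "distinct_ips", "offhours_ratio"]]

# Isolation Forest looks for the "needle in the haystack" without hand-written rules.
model = IsolationForest(contamination=0.2, random_state=42).fit(features)
events["anomaly_score"] = model.decision_function(features)  # lower = more anomalous
events["flagged"] = model.predict(features) == -1            # -1 marks outliers

print(events.sort_values("anomaly_score")[["identity", "anomaly_score", "flagged"]])
```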
The data ecosystem is closely correlated to the identity story. Today, the ability to understand where all my data is, who is consuming it, and how it is being used has become core to an effective cybersecurity posture management solution. You have to understand how people consume data, because it is the single most important asset we all deal with. We need to get better at figuring out the presence of data in its various shapes, forms, and locations, and at its usage—looking for the delta that tells us, “Hey, listen, this is the risk associated with this data.” That risk could come from:
- The inherent nature of that data,
- How people are consuming it, or
- How people are not consuming it—think of stale data that has not been used for a long time.
In a massive enterprise, today’s maturity journey is about taking a systemic approach to doing this better, and automation and AI are helping to shine a light on it, because it cannot be done manually. Doing it manually would be counterproductive, time-consuming, and prone to errors.
Usage patterns are also changing so fast that you cannot hand-craft rules for the game, because the game itself is changing; the rules—the correlation and the intelligence—have to be dynamic. To create those dynamic rules, you need AI-enabled systems that can generate them and take action in a contextualized world, because the context changes daily, whether we like it or not. Enabling these vectors of identity and data with AI will be super helpful in that context.
CISO Forum: Regarding AI, what is your current approach—are you leveraging it primarily for operational efficiency or focusing more on mitigating its potential misuse in cybersecurity?
Rishi Mehta: It’s a combination of both, because we want to get better at security, and that’s a continuous journey. Every day, we consider what more we can do and how to incorporate it, and we believe that leveraging AI capabilities in certain areas will enable us to improve.
This could be inbuilt models, the ability to correlate, or copilots on various technologies that help the analyst interface with the system. If, in a regular scenario, the analyst has to hop across three or four different panes of glass to understand a threat, and by leveraging AI we can bring that together and help the analyst generate the correct output faster—then 100% we should!
There are other productivity plays as well. Think about incident summarization and incident learning, where analysts spend much time and effort. There is an efficiency play there, and that is the bottom of the pyramid. Slightly higher up the pyramid is the data play.
In my mind, it’s: “Listen, I have disparate data sources—can I create the right correlation rules on the fly as I see the context changing, because I am leveraging technologies such as AI?” A new batch of threat intel comes in, or something changes in the environment—externally or internally—based on various triggers, and the rules adapt.
The second layer in AI’s evolution is the ability to generate better detection capabilities via rules, which is an effectiveness play: the base layer is about efficiency, while this layer helps us do our work more effectively. And as you move toward the top of the pyramid, we want to ensure that our teams are equipped, trained, and skilled in handling cybersecurity threats.
Thus, there is a human element that brings efficiency and effectiveness together. It all converges on the analyst—on the cybersecurity professional—and helps them become better at what they do. It gives them the right nudges to ask the right questions, undergo the right training, and acquire the necessary skills to become better cybersecurity professionals.
Because there is also a general shortage of skills, it isn’t easy to acquire the proper skill set at the right time. Thus, we build effectiveness on top of efficiency, and the human element on top of both. This convergence of efficiency, effectiveness, and the human element is a massive opportunity for us.
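[Editor’s note: As an illustration of the effectiveness layer described above—detection logic expressed as rules that can be generated or updated on the fly rather than hard-coded—the sketch below evaluates rules defined as plain data against a small set of events. The rule shapes, field names, and thresholds are hypothetical, not a description of HCLTech’s platform.]

```python
# Minimal sketch: detection rules as data, so new rules can be generated (by humans
# or by AI from fresh threat intel) without changing the engine. Values are illustrative.
from collections import Counter

rules = [
    {"name": "impossible_travel", "field": "country_changes",        "op": "gte", "value": 2},
    {"name": "brute_force",       "field": "failed_logins",          "op": "gte", "value": 20},
    {"name": "offhours_admin",    "field": "offhours_admin_actions", "op": "gte", "value": 1},
]

events = [
    {"identity": "alice",   "country_changes": 0, "failed_logins": 3,  "offhours_admin_actions": 0},
    {"identity": "svc-etl", "country_changes": 3, "failed_logins": 0,  "offhours_admin_actions": 0},
    {"identity": "bob",     "country_changes": 0, "failed_logins": 45, "offhours_admin_actions": 2},
]

OPS = {"gte": lambda a, b: a >= b, "eq": lambda a, b: a == b}

def evaluate(event, rules):
    """Return the names of all rules this event triggers."""
    return [r["name"] for r in rules if OPS[r["op"]](event.get(r["field"], 0), r["value"])]

hits = Counter()
for event in events:
    for rule_name in evaluate(event, rules):
        hits[rule_name] += 1
        print(f"{event['identity']} triggered {rule_name}")
print("rule hit counts:", dict(hits))
```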
And why do we do all of this? Because businesses want to experiment. While this may be security-specific, there are many things people are experimenting with in the broader business. We are experimenting internally, and we are working on solving many of our customers’ challenges with AI solutions as well. We endeavor to do this as securely as possible, which means being involved from use-case identification through design, implementation, and regular monitoring of those capabilities. That is where security is embedded into the overall AI development ecosystem—so we are in lockstep with the business, not just building AI solutions but also securing them.
Remember, there’s a larger aspect to it: all of this dovetails into a Responsible AI framework, which we have today. We are plugging all these components into that framework, which then helps our customers and ourselves internally—not just to generate the right AI capabilities but to adopt them with a sense of assurance that we’re doing the right thing.
CISO Forum: With the introduction of the DPDP Act (India’s Digital Personal Data Protection Act), how is your cybersecurity strategy evolving to meet compliance requirements, and what challenges do you anticipate?
Rishi Mehta: The DPDP Act, like any other regulation, is meant to create better trust with consumers. It is intended to give consumers of various services more power, authority, and control over the data that organizations handle about them. Essentially, that is the intent.
The regulators have put forward a welcome intent. Much work has been done to ensure that we understand its implications, and we work closely with the government and with industry representation bodies, such as NASSCOM and DSCI, on shaping this regulation. We’ve adopted that approach because it benefits all of us to be part of the process rather than come in later.
One thing we’ve seen, having worked through that journey with the industry and the regulator, is that it’s not—in my mind—an on-off switch. What I mean is that over the years, organizations like ours have been following best practices geared towards multiple regulations—GDPR, CCPA, and so on—but also with the lens that we are doing the right thing: what is right for consumers, what is right for our customers, and so on.
Thus, when a new regulation like the DPDP Act comes into effect, because the intent is the same, it is not a complete shift from what you’ve been doing. Some tweaks and enhancements are required, and specific processes need to be established, but a whole body of work was set in motion long in advance to handle it, and we are well along that path.
So it’s a welcome change, and the industry has participated in helping build it. I’m sure there will be challenges—teething issues, gaps in understanding, and all of that. However, I’m confident that with the approach we have taken, this will be a significant step forward, and we will continue to work with the regulator to strengthen it.
CISO Forum: How is the industry adapting its data governance frameworks to align with the DPDP Act, particularly when handling sensitive personal data across multiple jurisdictions?
Rishi Mehta: When we think of data governance—and to my earlier point—data is really at the heart of it, and data-centric approaches are only going to become more relevant. We consider data governance a full lifecycle. Think of the ability to see my data: where is it stored? Knowing where my data is and where it is stored is the first part—visibility. The second part concerns access. Do we know how people are accessing it, and who is accessing it? The level of assurance we have around that access is equally important.
The third element is usage patterns. Yes, you’ve got access—call it authorization, if you like—but look at the different modes in which people are using the data. Is that telling us something? Is it telling us whether people are handling it the right way or not, and how we can make it better? Part of that is also determining whether the access is internal or external. Remember, data is shared with many of your vendors and across your supply chain, so you need to be cognizant of when it moves beyond your perceived perimeter.
Lastly, we tend not to spend enough time on the following: what are your processes for eliminating data when it is no longer needed? An effective data retention policy and schedule, and the ability to delete data, are equally important. This lifecycle approach gives us better clarity and visibility into all of these aspects.
What am I masking? How am I segregating data between my production and non-production environments? What am I encrypting? What rules am I enforcing for data moving from segment A to segment B in my environment? What am I sharing with third parties? Once we understand this lifecycle, the approach to data governance becomes much more grounded.
Because at the heart of it, acts such as the DPDP Act and its avatars in other geographies are about giving consumers the assurance that, “Listen, your data is being collected and used for the right purpose—a purpose you know about and are aware of—and only for that purpose.”
Taking that lifecycle approach will help us better determine this and then establish the right level of control internally and in how we engage with the larger ecosystem for that data.
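[Editor’s note: To make that lifecycle concrete, the sketch below runs a hypothetical policy check over a small data inventory, flagging stale data past its retention period, unencrypted personal data, and personal data shared beyond the perimeter. The field names, dates, and rules are illustrative assumptions, not HCLTech’s actual governance tooling.]

```python
# Minimal sketch of a data-lifecycle policy check; every field and rule is illustrative.
from datetime import date

# Hypothetical data inventory: where data lives, how sensitive it is, how it is used.
inventory = [
    {"dataset": "hr_records",  "sensitivity": "personal", "encrypted": True,
     "last_accessed": date(2023, 1, 10), "retention_days": 365, "shared_externally": False},
    {"dataset": "web_logs",    "sensitivity": "internal", "encrypted": False,
     "last_accessed": date(2025, 5, 2),  "retention_days": 90,  "shared_externally": True},
    {"dataset": "crm_exports", "sensitivity": "personal", "encrypted": False,
     "last_accessed": date(2025, 4, 20), "retention_days": 730, "shared_externally": True},
]

def lifecycle_findings(record, today=date(2025, 6, 1)):
    """Return policy findings for one inventory record."""
    findings = []
    idle_days = (today - record["last_accessed"]).days
    if idle_days > record["retention_days"]:
        findings.append(f"stale: unused for {idle_days} days, past {record['retention_days']}-day retention")
    if record["sensitivity"] == "personal" and not record["encrypted"]:
        findings.append("personal data stored unencrypted")
    if record["sensitivity"] == "personal" and record["shared_externally"]:
        findings.append("personal data shared beyond the perimeter; verify contractual controls")
    return findings

for record in inventory:
    for finding in lifecycle_findings(record):
        print(f"{record['dataset']}: {finding}")
```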
CISO Forum: What best practices are emerging to address the challenges of data localization and cross-border transfer under the DPDP Act, and how are organizations ensuring compliance?
Rishi Mehta: Cross-border transfers are closely related to this, and the ability of organizations to develop a strategy around them is equally important. It is not a project-based or user-based kind of thing. You have to take a more strategic approach to the choices you make—in your partners, the choices you make contractually, the choices you make technically—and work in conjunction with your privacy and legal teams so that there is a complete, comprehensive view of how compliance with these regulations can be ensured.
Aspects such as localization are not new; they have been part of the conversation around various other regulations. Thus, they are part of the contracting conversation with the partner ecosystem, which today includes multiple cloud service providers and other vendors.
These aspects are discussed and agreed upon constructively to ensure transparency about what is happening today—and if something were to change tomorrow, such as a new data center coming online or whatever else, there is a process by which people would know.
People can then correlate the change back to the minimum requirements and the contractual commitments organizations have made. It’s a continuous process, but it’s something that is being resolved in the design phase itself today.
CISO Forum: As a veteran in cybersecurity who has seen its evolution, how do you see the culture of cybersecurity evolving, particularly within organizations?
Rishi Mehta: This is very close to my heart. Way back, I wrote a paper called The Moral Science of Cybersecurity, or something like that—and here is what I was referring to.
When I was in school, we used to have a subject—I think once a week—called Moral Science, where they told good stories about how to be a good human being: stealing is bad, greed is bad, don’t do this, and so on. The idea was that inculcating the right way of living would make you a good human being.
The subject of moral science was constructed around how humans lived and the challenges they faced over millennia—thousands of years. But in the last 20, 30, 40, 50 years, how we live has changed.
Yet our way of learning what good moral science means in this changed world has not kept pace, even though it remains just as important. We’re still learning, “Hey, listen, greed is bad, don’t do this, don’t do that,” which is all good.
But the context in which we live has changed drastically. We need to treat this as a lifestyle issue—not just a way of working but a way of living. I talk about this a lot these days: when you think of the term cybercrime, there is no cybercrime. It’s a crime. The mode of perpetrating that crime could be cyber, or a mix of cyber and physical tools—who knows! But is the mode what matters, or is it the fact that there is a crime? You have to see what you’re getting into.
Thus, building the culture of cybersecurity is not about saying, “Hey, listen, I’ve got some controls on your endpoints, so you’re okay,” or, “Hey, listen, just don’t click on the link that promises you a free lunch at the Karnataka Golf Association course, and then we’re all sorted.” It is about the way of working. You have to win that mind space with the employee—the last person standing—who has to make the choice: “Should I choose an easy password or a difficult-to-guess one? Is it okay to share it with someone?” It’s a choice.
Eventually, people will make that choice, no matter what level of controls you put in place. It’s a choice people will make: “Do I raise my hand or not when I see something wrong?” That comes with trust and with knowing the ecosystem. Building a cybersecurity culture means embedding it into how people work and live. Of course, you’ll run training sessions, email campaigns, the annual refresher course, and other such activities. But the message individuals actually internalize comes from their teams, their businesses, and their homes.
Today, the boundary between work and non-work has blurred. As individuals, we should be equally concerned with cybersecurity for our kids and our parents, who are highly vulnerable, right? Once we inculcate the right behaviors in our kids, those are the same kids coming into the workforce—so why not make it an ongoing thing? That’s how I see this shaping up. I hope colleges today focus on cybersecurity, but this is really a primary-education issue; it has to start there. Just as Moral Science was once in that category, cybersecurity is in that category now. It becomes a way of living more securely in a cyber world, rather than something you think of as cybersecurity training. And to me, that’s cybersecurity culture.
CISO Forum: How have you seen the role of the CISO evolve? Where do you think it’s going? Is it going to fragment or consolidate?
Rishi Mehta: The role of a CISO has always been envisioned as a partnership role. It was never an audit role. At least, that’s how we thought this role would grow—but it went through its maturity cycle. It did start by saying, “Hey, listen, I’ll tell you what to do and what not to do. If you don’t do it, I’ll come and audit you. Then I’ll give you a red scorecard. Then you give me something, I’ll work on it, we’ll make it green, and then I’ll come and test it every six months.”
However, in that early stage of the role’s maturity, the onus of responsibility sat with the cybersecurity team and the CISO rather than with the business delivering the work. Thus, it was always somebody else’s problem—never the business’s problem. The business was in the business of doing business, and cybersecurity was relegated to somebody in technology, or wherever else that person happened to sit.
From there, the landscape has become very different. Today, when we think of cybersecurity professionals and the role of a CISO, it is no longer about saying, “Hey, this is your problem; let me just go do the business of business.” It’s about saying, “Listen, I want to do better business. I want to foster stronger customer trust and better customer engagement. Let’s work together to ensure the customer continues to build that trust with us.”
And what does it take to do that? How do we respond? How do we communicate our posture? How do we communicate the assurance that you are fine doing business with us? That has changed the dynamic, making it a shared responsibility rather than, “Hey, listen, not my worry—you fix this, I’ll focus on business.” That paradigm has changed. Of course, the function’s growing maturity and capabilities—and the incidents we have seen—have contributed to that evolution.
We’ve seen that when incidents happen, it’s not just a tech impact. It’s a brand impact, a reputational impact, a business impact. An incident could happen with customer A, but hundreds of other customers could also get rattled and ask, “Hey, what’s going on?” The impact has become more broad-based.
Thankfully, today, this is a board-level conversation. The board is asking the right questions to help build that assurance—to make sure you have the right resources and capabilities to do the right thing for the organization.
The level of conversation has also been elevated, and that is the direction in which I see this progressing. To know where you’re headed, you must reflect on where you were. Looking back on that journey over the past few decades, we are headed in the right direction: the CISO role is increasingly that of a business enabler. Thankfully, we’ve left behind all those, in my mind, very petty issues of the past, like “Where does the CISO report?” All of that is irrelevant now. You work as an enabler of the business, which is what the job is.