As artificial intelligence reshapes the threat landscape faster than most enterprises can respond, the question of who — or what — gets access has never been more consequential. Mathew Graham, Chief Security Officer for Asia Pacific at Okta, sits at the intersection of these pressures daily. With Okta’s Bengaluru Innovation Center on track to become the company’s largest hub outside the United States, and AI agents increasingly acting autonomously within enterprise systems, Graham brings rare clarity to the thorniest challenges facing modern CISOs. From governing non-human identities to elevating security from an IT function to a board-level doctrine, his perspectives cut through the noise with both technical depth and strategic urgency.

CISO Forum: Okta is growing its India headcount by 50% in 2026 — what specific security and R&D capabilities is the Bengaluru Innovation Center being built to own globally, not just support?
Mathew Graham: For us, when it comes to our development and growth in India, it isn’t just about a support organization. It is a genuine hub for us, and we’ve got an innovation center there.
There are products of ours in our governance space, our logging and security monitoring space, and what we're doing with MCP, the Model Context Protocol. A lot of these are being developed out of our Indian center, and we're seeing tremendous growth there, too.
It is on target this year to be our largest hub outside of the US. And as you mentioned, headcount in India is growing by 50%.
CISO Forum: As Bengaluru takes on a larger share of global product development, how do you architect security governance across distributed engineering teams without creating friction that slows innovation?
Mathew Graham: We have a global model when it comes to how we handle security within our organization.
Within Okta, we've got a very robust set of rules that we apply across the globe. Whether you're in the US, India, Australia, or Europe, we have a consistent framework for how you treat security. We have security staff in India as well.
And we've got a number of security staff across the region. I don't have the exact numbers on hand, but we do have a large security organization that spans Asia Pacific. So for us, it's not just development in the US; we're adding sites around the world.
We actually have a genuine talent community across the Asia Pacific, particularly in India.
CISO Forum: AI agents can now request access, make decisions, and act autonomously — how does Okta’s identity framework distinguish between a trusted human, a trusted agent, and a compromised one?
Mathew Graham: It comes down to the configuration of the organization and what the agent is trying to do. It's not necessarily a case of hunting for the compromised agent or the compromised person, though we do have several technologies for that. For us, it's more about getting it right at the start: how are you making sure that your non-human identities and AI agents are appropriately managed?
When you are running AI agents within the organization, we have discovery tooling to ensure they're all captured and that you know exactly what they're doing. We also have cross-app access as part of our product suite, and that applies to both human and non-human identities.
What that means is it gives you principled, controlled access between different applications within your organization. That way you can limit and mitigate any issues in terms of what we call the blast radius: what the agent can access and what it can see. By limiting that before you get to the application, and doing it via Okta, you've got a gateway within your organization before anything actually reaches the endpoints.
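The idea above, default-deny access between applications to shrink an agent's blast radius, can be sketched in a few lines. This is a minimal illustration, not Okta's cross-app access product; the identity model, app names, and scope strings are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical policy model: each identity (human or AI agent) carries an
# explicit allow-list of (app, scope) pairs, and every cross-app request is
# checked at a central gateway before it ever reaches the endpoint.
@dataclass
class Identity:
    name: str
    is_agent: bool
    grants: set = field(default_factory=set)  # set of (app, scope) tuples

def authorize(identity: Identity, app: str, scope: str) -> bool:
    """Deny by default; only explicitly granted app/scope pairs pass."""
    return (app, scope) in identity.grants

agent = Identity("invoice-bot", is_agent=True,
                 grants={("erp", "invoices:read")})

print(authorize(agent, "erp", "invoices:read"))  # True: inside its grant
print(authorize(agent, "hr", "salaries:read"))   # False: outside the blast radius
```

The key design choice is that the check happens at the gateway, before any application is touched, so a compromised agent can only reach what it was explicitly granted.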
CISO Forum: When an AI agent inherits a user’s permissions to complete a task, where does accountability sit if that agent is exploited mid-workflow?
Mathew Graham: Yeah, and that’s, that’s a good governance question too, is that it’s a lot of these things are organizational specific. So you could have a company with a system owner who’s responsible for the agents. So they could have, you know, the system owner may have 10 humans that work for them, 10 people.
And then they’ve also got several AI agents, and they could be the ones responsible for those AI agents within that team and system. Or the other pattern that I see is that it comes down to the individual. So, who’s that one person who owns this little slice of what this agent is trying to do?
And that applies not just through a development lens but across the organization. Talking about our customers here, you could have an HR operator who is responsible for the AI agent. It's also important to remember that you could have the owner of the agent, who actually architects and designs it, and then separately the operators of the agent. You need these identities all sewn up so that the right people can access the right AI agents.
And then those AI agents need to be monitored. What can they do? How do you follow the identity of that AI agent? Take the example I gave, where a business owner is in charge of an AI agent: if the agent is assigned to that individual, you're going to have a problem when they move on.
You must look at job roles and how they map to personas within an organization, rather than the individuals themselves.
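The point about keying agent ownership to a role rather than a person can be made concrete with a tiny sketch. This is purely illustrative; the registry, role names, and employee names are hypothetical, not part of any real identity product.

```python
# Hypothetical in-memory registry: agents are owned by a job role (persona),
# and the role is separately mapped to whoever currently holds it. When the
# person changes, agent ownership resolves correctly without reassignment.
role_holders = {"hr-operations-lead": "priya"}                 # role -> current employee
agent_owner_role = {"onboarding-agent": "hr-operations-lead"}  # agent -> owning role

def owner_of(agent: str) -> str:
    """Resolve an agent's accountable human via its owning role."""
    return role_holders[agent_owner_role[agent]]

print(owner_of("onboarding-agent"))           # priya
role_holders["hr-operations-lead"] = "arjun"  # the role changes hands
print(owner_of("onboarding-agent"))           # arjun: the agent is never orphaned
```

The design choice mirrors the interview: if the agent pointed directly at "priya", her departure would orphan it; indirection through the role keeps accountability intact.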
CISO Forum: What does a security engineer’s job actually look like in two years inside Okta — which skills become obsolete and which become non-negotiable?
Mathew Graham: A security engineer’s role has always been one that’s evolved.
My background is technical, and I was a security engineer for many years. And throughout my career, what I was doing and what I would do day to day would be changing, and you’d always be upskilling. And that’s no different today.
Security engineers will use AI agents. They already are, and they're building AI agents of their own.
And what it’s really allowed now is for the typical security engineer to be able to 10x themselves, to 10x their capability. A security engineer’s job is often looking at things like log files, and it could be pages and pages, gigabytes or terabytes of log files, poring through them, doing regular expression searches across the log files, and really trying hard to find things. But now with the use of AI, generative AI, and large language models, what I see now is that a lot of these log files are getting applied to those models.
You can simply ask, where did this IP address access this environment? And that information comes back as a report. So over the next couple of years, a lot of it for the security engineer is going to be upskilling in how to use AI, being AI-first and AI-forward in how they think about their work. And a security engineer is a broad term as well.
AI is going to be used in different ways. What I was describing there is more the detection and response capability, but it's going to be used a lot more for penetration testing as well.
It can just consume so much more data and ensure that any blind spot is recognized immediately.
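The traditional workflow Graham describes, sweeping large access logs with regular expressions for a single IP address, looks roughly like this. The log lines and function are hypothetical examples (addresses are from documentation ranges), shown only to illustrate the manual baseline that LLM-assisted analysis now compresses.

```python
import re

# Hypothetical access-log lines in common log format; none of this is real traffic.
logs = [
    '203.0.113.7 - - [12/Jan/2025:10:01:44] "GET /admin HTTP/1.1" 401',
    '198.51.100.2 - - [12/Jan/2025:10:02:10] "GET /login HTTP/1.1" 200',
    '203.0.113.7 - - [12/Jan/2025:10:02:31] "POST /login HTTP/1.1" 200',
]

def hits_for_ip(lines, ip):
    """Return every log line whose request originated from `ip`."""
    # Anchor at line start and escape the dots so "203.0.113.7" can't
    # accidentally match "203.0.113.77" or an IP embedded mid-line.
    pattern = re.compile(rf"^{re.escape(ip)}\b")
    return [line for line in lines if pattern.search(line)]

print(len(hits_for_ip(logs, "203.0.113.7")))  # 2
```

At terabyte scale this becomes hours of iterative pattern-tweaking, which is exactly the toil the interview says a model can now answer in a single natural-language question.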
CISO Forum: Beyond training programs, how do you structurally embed AI literacy into security roles so people are designing AI systems rather than just reacting to their outputs?
Mathew Graham: At Okta we have many AI-first capabilities and extensive AI training, and I can speak to how many of our customers are approaching this culturally as well. Every employee had mandatory training earlier in the year on the use of AI and how to use it effectively.
And then within each team, there are initiatives on how we’re using AI. We certainly have that within the security organization. What are we doing internally to harness AI? And it’s one of our top priorities. And that’s the same across the whole organization.
And I see that, too, when I talk to CISOs in India and across Asia Pacific: they want to know how we're using AI so they can learn how to use it in their own organizations. The feedback I get is that so often they're now leading with curiosity, which I think is quite nice. Technology these days has really opened a window of curiosity: people can do things they haven't been able to do before. You can come up with an idea on the bus on the way home, and the next morning it's something you can actually put in motion. That's a really wonderful thing.
And it’s really good for the industry.
CISO Forum: Social engineering and credential attacks still account for the majority of breaches — with AI-generated phishing now near-indistinguishable from real communication, is the “human firewall” concept effectively dead?
Mathew Graham: No, I wouldn’t say it’s dead.
And in fact, in a lot of ways, it's more important than ever, because now you see more carefully crafted phishing attacks, including attacks that use voice and video. AI has enabled so many of these services.
You see AI that can access so much data, identify people within an organization, find their phone numbers, and then target that individual with a tailored text message. So the culture piece is probably more important than ever: people need to be aware of these sorts of attacks.
It's so important now that your training and the organization's processes are up to date. But then you also back that up with the technical aspects, like making sure you're using passkeys rather than passwords, so that a compromised password can no longer grant access to the environment. That's key.
More than ever, it's a combination of culture, technology, and governance, because legislators around the world, particularly in Asia Pacific, are catching up to this.
They want organizations to have robust policies in place, and they want organizations reporting back to government on what they're doing. It's more important than ever that the human element is under control.
CISO Forum: What does it take to move an enterprise from treating identity as an IT function to treating it as a board-level security doctrine?
Mathew Graham: That's probably one of the main topics I get asked about, because it is so important now. Identity is the perimeter when it comes to IT security these days, and it is so much more than an IT conversation.
It's so much more than a technology conversation. It needs to be elevated to the board, and framed in the board's terms: what's the return on security investment? Why is this so important? But also, how does this enable work, productivity, and growth? That's where you need to be, really talking the language of the board.
The board has always been interested in profitability and in how the organization runs: is it running effectively? Now more than ever, what needs to be articulated is the business impact. That's absolutely key in these board conversations.

