Expert Insights with Teleport, featuring Ev Kontsevoy, Co-Founder and CEO at Teleport, examines how infrastructure identity is evolving as organizations secure access for humans, machines, workloads, and AI agents.
Kontsevoy co-founded Mailgun, led product development at Rackspace Technology, and held engineering roles at GE Security and National Instruments, building cloud and infrastructure systems.
Agentic AI shifted in 2025 from experimental deployments to production systems that take actions, often without direct human involvement. SaaS providers will strengthen API authentication and access controls as AI agents interact directly with platforms beyond traditional user interfaces.
Kontsevoy argues that the “AI replaces jobs” narrative misses the deeper issue of a growing skill gap, especially for AI-native security engineering talent. He urges CEOs to rethink recruiting and training so security engineers can design guardrails, understand model behavior, and govern non-deterministic systems as identity becomes the core control layer.
Vishwa: What is your view about the role of agentic AI in business today, and what does it actually involve in terms of capabilities and behavior? What are your thoughts on whether the industry needs more granular classifications for different types of AI agents?
Ev: The role of agentic AI in business changed significantly in 2025. Before, it was more experimental, but now it’s moving rapidly into production. These systems aren’t just making recommendations - they’re taking actions, coordinating across systems, and operating continuously without direct human involvement.
That autonomy is what makes agentic AI powerful, but it’s also what makes it fundamentally different from other software. In my view, the industry absolutely needs more granular classifications.
Last year, the term ‘AI agent’ became a catch-all for anything using an LLM to make decisions, but that glosses over major differences. Some agents run centrally in data centers, others are local, some act explicitly on behalf of a human owner, and some operate autonomously across environments - it’s a huge range!
Ultimately, you can’t secure what you can’t identify, so without clearer distinctions, organizations will be unable to apply consistent governance and security controls to agentic behavior at scale.
Vishwa: Do you see SaaS companies taking steps to restrict access to their APIs in response to emerging AI threats? If so, what kinds of restrictions are most likely, and how might they reshape enterprise workflows or system integrations?
Ev: Yes - we’re already seeing early signs of this. As AI agents become more capable, traditional user interfaces matter less, because agents can interact directly with SaaS platforms via APIs. For a lot of vendors, that’s an existential threat, as it risks reducing them to little more than a data backend.
To counter that, I expect that SaaS providers will restrict their APIs through stronger authentication, tighter rate limits, and more granular access controls. Over time, this will reshape enterprise workflows by forcing organizations to think more carefully about how AI agents are identified, authenticated, and governed when interacting with third-party systems.
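To make that concrete, here is a minimal sketch of the kind of agent-aware API controls Kontsevoy describes: a distinct credential for the agent, short-lived and scoped, combined with a rate limit. All identifiers, scopes, and limits are hypothetical and do not reflect any particular vendor’s API.

```python
import time
from dataclasses import dataclass, field

# Hypothetical illustration only: names, scopes, and limits are invented.

@dataclass
class AgentCredential:
    agent_id: str        # distinct identity for the AI agent, not a shared human account
    owner: str           # the human or team the agent acts on behalf of
    scopes: set[str]     # granular permissions, e.g. {"tickets:read"}
    expires_at: float    # short-lived credentials force re-authentication

@dataclass
class RateLimiter:
    max_calls: int
    window_seconds: float
    calls: list[float] = field(default_factory=list)

    def allow(self) -> bool:
        now = time.monotonic()
        self.calls = [t for t in self.calls if now - t < self.window_seconds]
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True

def authorize(cred: AgentCredential, required_scope: str, limiter: RateLimiter) -> bool:
    """Deny by default: an expired credential, a missing scope, or an exceeded rate limit all fail."""
    if time.time() > cred.expires_at:
        return False
    if required_scope not in cred.scopes:
        return False
    return limiter.allow()

# Example: an agent holding a read-only scope is refused a write operation.
cred = AgentCredential("agent-42", "alice@example.com", {"tickets:read"}, time.time() + 900)
limiter = RateLimiter(max_calls=5, window_seconds=60)
print(authorize(cred, "tickets:read", limiter))   # True
print(authorize(cred, "tickets:write", limiter))  # False
```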
Vishwa: Several organizations have reduced security operations roles, including SOC teams. How do you see highly skilled employees being reabsorbed into AI-native positions that require more specialized expertise?
Ev: In my view, the ongoing narrative that “AI replaces jobs” is missing the real issue.
What’s actually happening is a shortage of AI-native talent, particularly in security and security engineering. And as AI becomes embedded directly into production infrastructure, organizations will need people who understand how models behave, where automation should stop, and how to design effective guardrails.
To address this, every CEO should be thinking hard about how to recruit and train AI-native security engineers with exactly that skill set.
The new engineering role will combine knowledge of identity, infrastructure, and automation with a deep understanding of non-deterministic systems.
Vishwa: What forms of consolidation in cybersecurity technology do you expect as computing environments grow more complex and threat volumes increase? How do you see this consolidation changing how organizations structure identity, access, and control models as AI accelerates these trends?
Ev: We’re heading toward significant consolidation, particularly around identity security. Treating every new identity type (human, machine, AI, or other) as a separate problem has created silos that we’ve come to realize simply don’t scale. AI has exposed just how broken that approach is.
As environments grow more complex, organizations will demand a unified identity layer that treats all identities as variations of the same fundamental concept.
Vishwa: How should organizations approach their security strategies as human, machine, and AI identities begin to blur? What is your view on tackling identity types when machines behave increasingly like humans?
Ev: The distinction between human and non-human identity is becoming obsolete. When AI systems behave in non-deterministic ways and interact across systems just like humans do, separating them into different identity silos doesn’t make sense anymore.
Organizations should stop thinking in terms of identity types and start thinking in terms of identity behaviors, risk, and context. A unified approach doesn’t mean treating everything the same; it just means governing everything from a single source of truth, with policies that adapt to how identities actually behave in the real world.
Vishwa: Based on your observations, how do you expect the role of engineering in cybersecurity to expand in the future? What factors are driving the shift you’re seeing now, and what changes may be required to support it?
Ev: From my experience, security isn’t something that can sit neatly in the IT corner of the org chart anymore. The rise of AI identities has added layers of complexity to infrastructure, so much so that identity and access control have become engineering problems.
We’re already seeing IT and engineering functions converge, with engineers taking on greater responsibility for securing systems by design. Supporting this shift will require changes in tooling, culture, and education, so that security controls can be programmable, composable, and understandable to the people building the systems, not just bolted on after deployment.
Vishwa: When AI tools interact with infrastructure at privileged levels, what operational risks are organizations running into, particularly around unintended actions? What controls do you believe matter most to prevent AI from escalating risk?
Ev: One of the biggest risks is that AI automates mistakes at scale. With AI making decisions around access, remediation, and response, misconfigurations and privilege creep become far more dangerous and often harder to detect.
The most important controls are centered around identity. Organizations need strong visibility into what an AI can access, why it has that access, and how its privileges change over time. Guardrails should be designed with the understanding that AI is non-deterministic, like a human, but at the end of the day, it’s still software.
Without a unified identity foundation, AI can increase - and accelerate - risk.
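As a rough illustration of the identity-centric visibility Kontsevoy describes, the sketch below tracks what an AI identity can access, why each grant was made, and how its privileges change over time. All names and resources are hypothetical.

```python
import datetime
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: a minimal audit trail answering "what can this AI access,
# why does it have that access, and how have its privileges changed over time".

@dataclass
class Grant:
    resource: str
    reason: str
    granted_at: datetime.datetime
    revoked_at: Optional[datetime.datetime] = None

@dataclass
class AIIdentity:
    name: str
    grants: list[Grant] = field(default_factory=list)

    def grant(self, resource: str, reason: str) -> None:
        self.grants.append(Grant(resource, reason, datetime.datetime.now(datetime.timezone.utc)))

    def revoke(self, resource: str) -> None:
        for g in self.grants:
            if g.resource == resource and g.revoked_at is None:
                g.revoked_at = datetime.datetime.now(datetime.timezone.utc)

    def active_access(self) -> list[str]:
        """Current visibility: every resource the identity can still reach."""
        return [g.resource for g in self.grants if g.revoked_at is None]

    def history(self) -> list[Grant]:
        """Full record of how privileges changed over time, including revocations."""
        return list(self.grants)

# Example: a remediation agent gets temporary write access, which is later revoked.
agent = AIIdentity("remediation-bot")
agent.grant("prod-db:read", "incident triage")
agent.grant("prod-db:write", "temporary remediation task")
agent.revoke("prod-db:write")
print(agent.active_access())  # ['prod-db:read']
```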