The Risks of AI Agents as High-Privilege Users That Never Pause

Written by: Vishwa Pandagle, Cybersecurity Staff Editor

Question: As AI agents become more autonomous, what new security risks do you foresee emerging, and how should engineers begin preparing for them?


Vincent Danen, Vice President, Product Security at Red Hat

The shift from passive LLMs to autonomous agents is fundamentally changing the security landscape because it merges the control and the data planes in ways we’ve never had to defend before. 

We are moving past simple prompt injection into a world of unintended agency. When an agent can interact with databases or cloud infrastructure, it becomes a high-privilege user that never sleeps.
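One way to blunt that "high-privilege user" risk is to hand the agent deliberately scoped credentials rather than an administrative account. The sketch below is illustrative only: it uses SQLite's read-only URI mode as a stand-in for scoped credentials on a production database, and the table and file names are invented for the example.

```python
# Sketch: give an agent a read-only database handle instead of a
# high-privilege account. SQLite's mode=ro URI stands in for scoped
# credentials on a real database; names here are hypothetical.
import os
import sqlite3
import tempfile

# Set up a throwaway database that the "agent" will query.
path = os.path.join(tempfile.mkdtemp(), "app.db")
admin = sqlite3.connect(path)
admin.execute("CREATE TABLE users (id INTEGER, name TEXT)")
admin.execute("INSERT INTO users VALUES (1, 'alice')")
admin.commit()
admin.close()

# The agent only ever receives a read-only connection: SELECTs work,
# but any attempt to write is rejected by the database itself.
agent = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
rows = agent.execute("SELECT name FROM users").fetchall()

try:
    agent.execute("DELETE FROM users")
    write_succeeded = True
except sqlite3.OperationalError:
    # "attempt to write a readonly database"
    write_succeeded = False
```

The point is that the privilege boundary lives in the infrastructure, not in the prompt: even a fully compromised agent cannot exceed what its credentials allow.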

However, the most immediate change is the speed of discovery. We’ve already seen a major turning point with the release of Mythos Preview. These AI models are draining the reservoir of undiscovered bugs by surfacing flaws in core software.

This technology accelerates the discovery of issues at a scale and frequency that no traditional human team has ever had to manage or match. This collapses the gap between a bug being found and a bug being exploited, making agency a double-edged sword.

To prepare, engineers should treat AI agents as untrusted third-party software. This means adopting a risk-based approach to the agent's architecture.
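Treating the agent as untrusted third-party software can be made concrete with a default-deny policy gate in front of its tool calls. This is a minimal sketch, not any particular framework's API; the tool names and policy tiers are assumptions made for the example.

```python
# Sketch: a default-deny authorization gate for agent tool calls.
# Tool names and policy tiers are hypothetical illustrations.
from dataclasses import dataclass, field

# Low-impact tools the agent may call autonomously.
READ_ONLY_TOOLS = {"search_docs", "read_metrics"}
# High-impact tools that pause for human approval before executing.
REVIEW_REQUIRED_TOOLS = {"run_sql", "modify_infra"}

@dataclass
class ToolCall:
    name: str
    args: dict = field(default_factory=dict)

def authorize(call: ToolCall) -> str:
    """Return 'allow', 'review', or 'deny' for an agent-issued tool call."""
    if call.name in READ_ONLY_TOOLS:
        return "allow"
    if call.name in REVIEW_REQUIRED_TOOLS:
        # The agent never executes destructive actions directly;
        # a human reviews the proposed call first.
        return "review"
    # Default-deny: an unrecognized tool is treated exactly like
    # untrusted third-party code and refused.
    return "deny"
```

Used this way, the gate restores the pause that an autonomous agent otherwise lacks: routine reads flow through, while anything touching databases or infrastructure waits for a person.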

Ultimately, this is about reducing risk and maintaining integrity through defense in depth. These shouldn’t be unfamiliar concepts, but in an agent-powered world, we must act much faster. 

While AI finds bugs, human expertise is what prioritizes and resolves the risks that actually impact the business. Limit access, trust no one, defend in layers; those principles are as true today as they will be tomorrow.
