Threats Redefine Security Context: AI-Ready Operations Will Define Next-Gen SOC AI
- Most enterprise SOC teams don’t have time to review 40 percent or more of the alerts they receive.
- Autonomous AI agents may work across SOC, threat hunting, and pentesting functions to detect attack patterns.
- Context, including operational knowledge and organizational signals, will play a central role in improving AI-driven detection.
- Kumar says AI presents an opportunity to fundamentally change how organizations approach security.
- Simbian uses AI models to address emerging AI-armed threats in security operations.
Ambuj Kumar, CEO and Co-Founder of Simbian, says that while reducing the number of tools remains important, the more immediate priority is closing security gaps in current operations and addressing emerging threats. Kumar previously held engineering roles at NVIDIA and Cryptography Research (Rambus) and co-founded the cloud security firm Fortanix.
Security operations centers face a growing imbalance between the volume of alerts generated by modern security tools and the limited capacity of analysts to investigate them.
As AI-driven attacks increase and enterprise environments grow more complex, organizations are exploring how automation and AI-based reasoning can help reduce alert fatigue and improve response times.
Most customers begin by applying AI to high-volume, low-priority alerts, such as DLP alerts, to reduce the noise reaching SOC teams. As confidence in AI grows, they extend its use to high-priority alerts to enable deeper investigations and stronger correlations.
In this Expert Insights discussion, Kumar examines how autonomous AI agents could change SOC workflows, and the need for context in identifying threats.
Vishwa: You’ve engineered across NVIDIA and Fortanix before Simbian. How did those experiences shape your view of the role of AI in detection and response?
Ambuj: It has been a fascinating journey to get to this point. At NVIDIA, I built the foundations of security directly into the chips. At Fortanix, I built security into the enterprise infrastructure. Both companies were responses to the classes of security risks emerging at the time.
Now, as new AI-armed attacks become real, I get to combine those technologies with the capabilities of the new AI models to solve the problems of security operations here at Simbian. AI really is an opportunity to fundamentally change how we approach security. Hackers will always find a new way to attack, and my job is to keep figuring out how to stay ahead of them.
Vishwa: Simbian tackles the AI-driven alert fatigue problem. What AI improvement would make SOC automation more effective?
Ambuj: Alert fatigue is a very real problem. Most enterprise SOC teams routinely don’t have time to review 40% or more of the security alerts that they receive.
Part of the problem is that investigating and responding to alerts requires a lot of manual, time-consuming research to figure out what to do, even when the alert turns out to be a false positive.
Where AI can help is by automating this data collection, triage, investigation, and even assessment of alerts, letting the SOC team focus on analysing what happened and what they want to do about it.
Customers tell me that the SOC can automate at least 50% of the process, making it possible to reduce alert fatigue and improve alert coverage.
Vishwa: You mentioned autonomous AI agents for SOC, threat hunt, and pentest. What’s one capability that could be further exploited to make these agents more effective in real-time operations?
Ambuj: I’m excited about what is possible when these agents work together to identify, investigate, and block security threats. Human security teams regularly work across functions, and SOC analysts work with pen testers and threat hunters to solve complex threats. AI agents need to do the same to identify the emerging classes of automated, AI-powered attacks, and do so much faster than any human team can respond.
Vishwa: Many mid-sized enterprises struggle with tool fatigue from vendor sprawl. Which capabilities do you think are most essential: autonomous investigation, contextual correlation, or AI-powered triage? How should they be prioritised?
Ambuj: In my opinion, it does not make sense to try to separate those functions. They are all part of one workflow and can be performed by one tool, which helps contain the sprawl of introducing new tools.
While we should always be looking to reduce the number of tools, I think the higher priority right now is to fill security gaps in current operations and address new threats. For example, Simbian’s architecture uses the security tools that are already in place to capture security alerts. You may not need all of those tools over time, but for now, building on what is already in place is the fastest way to get started.
Vishwa: Simbian’s core value is triaging all alerts autonomously. Which alert type do you see most benefiting from this — false positives, unknown anomalies, delayed escalations, or any other? And why?
Ambuj: False positives are a particularly frustrating pain point and a good place to start: they are frequent and take the same amount of time to review as true positives, but that time is wasted because, at the end of the review, the alert is still a false positive. We have seen 95% accuracy from AI in identifying false positives.
Most customers I see start with their high-volume, low-priority alerts, like DLP alerts, as a way to cut the noise and the volume coming at the SOC team. As customers get more confident in what AI can do, they start processing high-priority alerts to get deeper investigations and better correlations. You ultimately want to use AI to review all alerts, but you don’t need to start there.
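The staged rollout Kumar describes can be pictured as a simple routing policy: low-priority, high-volume alert types are sent to AI triage first, while everything else stays with human analysts until confidence grows. The sketch below is purely illustrative; the alert fields and the `AI_HANDLED` set are hypothetical placeholders, not part of any Simbian API.

```python
# Illustrative sketch of a staged AI-triage rollout (hypothetical, not a real API).
# Alert types in AI_HANDLED are routed to automated triage; the rest stay with humans.

AI_HANDLED = {"dlp", "phishing_report"}  # start with high-volume, low-priority types

def route_alert(alert: dict) -> str:
    """Return the queue an alert should land in under the current rollout stage."""
    if alert["type"] in AI_HANDLED:
        return "ai_triage"
    return "human_queue"

alerts = [
    {"id": 1, "type": "dlp", "priority": "low"},
    {"id": 2, "type": "edr_detection", "priority": "high"},
]

# Expanding AI coverage later is just a matter of growing the AI_HANDLED set.
queues = {a["id"]: route_alert(a) for a in alerts}
print(queues)  # {1: 'ai_triage', 2: 'human_queue'}
```

As confidence in AI triage grows, high-priority types would be added to the handled set rather than requiring a new pipeline.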
Vishwa: SOCs struggle with context, noise, and alert volume. As human analysts increasingly rely on AI to connect signals, which of these three will evolve most in the next year, keeping attackers in mind?
Ambuj: The three are interconnected. Alert volumes continue to increase, which brings more noise. Frequent application releases, organization changes, and new security threats continually change the context, which makes it harder to find the signal in the noise. The breadth and volume of data points exceed what even experienced analysts can manage.
I think the biggest evolution over the next year will be around context. Context is the key that enables AI tools to make better decisions and find the security issues that matter. For example, the Simbian Context Lake™ captures standard operation procedures, employee training materials, “tribal knowledge”, and even internal messages to understand what is happening across the organization.
A user logging in from the other side of the world is not an issue if we know that person is traveling. A low-priority informational alert should immediately be elevated to a high-priority alert if the device where the alert was detected belongs to the CEO. Every alert, resolution, and feedback from human analysts is also fed back into the Context Lake to make AI more intelligent and more prepared for future alerts.
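The two examples above amount to a context-aware scoring step: the same raw alert receives a different priority depending on what the organization knows about the user and the device. This is a minimal sketch of that idea; the context fields (`is_traveling`, `is_executive_device`) are hypothetical stand-ins for what a context store like the one described might supply.

```python
# Hedged sketch: adjusting alert priority with organizational context.
# The context lookup is a plain dict here; a real system would query a context store.

def adjust_priority(alert: dict, context: dict) -> str:
    """Raise or lower an alert's priority based on contextual signals."""
    # An impossible-travel alert is benign if the user is known to be traveling.
    if alert["type"] == "impossible_travel" and context.get("is_traveling"):
        return "suppressed"
    # Any alert tied to an executive's device is elevated to high priority.
    if context.get("is_executive_device"):
        return "high"
    return alert["priority"]

print(adjust_priority(
    {"type": "impossible_travel", "priority": "medium"},
    {"is_traveling": True},
))  # suppressed
print(adjust_priority(
    {"type": "informational", "priority": "low"},
    {"is_executive_device": True},
))  # high
```

The design point is that priority is not a fixed property of the alert: identical alerts diverge once organizational context is applied.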
Vishwa: As AI-governance and data-transparency rules tighten, many SOC leaders struggle to balance innovation with compliance. What’s the biggest pain point you see in aligning AI-driven operations with these regulations, and how should it be addressed?
Ambuj: The short answer is that regulators are too far behind in guiding AI strategies. Regulators have always struggled to keep pace, and given the rate at which new security threats are evolving, the gap between regulations and real-world requirements is only getting wider. Regulations should be thought of as a minimum structure for a security program, not a roadmap of how to stay secure.
If you are implementing emerging practices around using AI for security, you will likely exceed what the regulators want. For example, AI-speed response tools will always be faster to find a problem than the notification required for compliance. Continually running pentests on new application releases will exceed the commonly mandated “once a year” testing.
Do what you need to make your security operations “AI Ready”, and regulatory compliance will follow.