In this interview, Norman Gottschalk, Global CIO & CISO at Visionet Systems, explains how generative AI comes to the rescue of defenders and attackers alike, filtering noise and scanning for vulnerabilities for both sides.
With more than a decade of leading technology and security programs at Visionet, Gottschalk has overseen infrastructure, security, and advanced technology initiatives across the organization.
This double-edged sword also accelerates malware adaptation, shrinking attack lifecycles and weakening signature-based defenses.
He outlines guardrails needed to keep AI effective without becoming reckless, a balance many CISOs now face. Read on to learn how AI reduced detection and investigation time from hours to minutes during a real cloud incident.
Vishwa: What specific types of threats do you see generative AI helping detect faster in real environments?
Norman: Generative AI is making the largest impact in areas where patterns have always existed but were buried under massive amounts of noise. A good example is identity-based anomalies—things like lateral movement through compromised credentials, privilege escalation, or unusual access paths.
Traditionally, these signals were scattered and weak, but AI can correlate dozens of them and say, “This looks like early-stage credential misuse.”
It’s also proving invaluable in spotting insider-driven data exfiltration. Subtle behaviors such as unusual file access, suspicious downloads, or off-hours data transfers often slip past rule-based systems, but AI picks up those deviations.
And then there’s cloud misconfiguration exploitation—AI can analyze infrastructure drift in real time and flag exploitable gaps before attackers do.
In short, AI shines wherever there’s high event volume and the need to aggregate weak signals into a meaningful picture.
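To make the idea of aggregating weak signals concrete, here is a minimal sketch, not taken from the interview, of how low-confidence identity events might be rolled up into a single finding. It is a deliberately simplified, rule-based correlation heuristic rather than a generative model, and the event fields, signal weights, and threshold are all assumptions for illustration.

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative weights for weak identity signals; a real pipeline would tune
# these per environment and feed events from the SIEM/UEBA layer.
SIGNAL_WEIGHTS = {
    "failed_login_burst": 0.2,
    "new_geo_login": 0.25,
    "privilege_escalation": 0.4,
    "unusual_access_path": 0.3,
    "off_hours_data_transfer": 0.35,
}

def correlate_identity_events(events, window=timedelta(hours=1), threshold=0.7):
    """Group weak signals per identity inside a time window and flag identities
    whose combined score suggests early-stage credential misuse."""
    by_identity = defaultdict(list)
    for event in events:  # each event: {"identity", "type", "timestamp"}
        by_identity[event["identity"]].append(event)

    findings = []
    for identity, evts in by_identity.items():
        evts.sort(key=lambda e: e["timestamp"])
        start = evts[0]["timestamp"]
        in_window = [e for e in evts if e["timestamp"] - start <= window]
        score = sum(SIGNAL_WEIGHTS.get(e["type"], 0.1) for e in in_window)
        if score >= threshold:
            findings.append({
                "identity": identity,
                "score": round(score, 2),
                "signals": [e["type"] for e in in_window],
                "hypothesis": "possible early-stage credential misuse",
            })
    return findings
```

Each finding bundles the individual weak signals into one reviewable item, which is the kind of aggregation Norman describes AI doing across far larger and noisier event volumes.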
Vishwa: From your perspective, which attack techniques are evolving the quickest because of AI automation?
Norman: Phishing and social engineering are evolving faster than anything else.
Vishwa: Can you share one concrete example where AI reduced detection time inside a cloud environment you worked on?
Norman: Certainly. In one cloud environment, we were dealing with a series of low-confidence alerts that, on their own, didn’t warrant escalation.
Traditionally, an analyst would spend hours piecing these signals together to determine if they formed a real threat.
We changed that by passing the entire incident chain—all related events and context—into a GenAI model. Instead of just correlating data, the model synthesized the activity into a clear narrative and provided a recommended response for a Level 1 SOC engineer.
This approach cut investigation time from hours down to minutes, allowing the team to act quickly and focus on the actual risks rather than getting bogged down in manual triage. It was a significant efficiency gain and a great example of how GenAI can transform operational workflows.
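As a rough illustration of the workflow Norman describes, the sketch below passes a chain of related events to a language model and asks for a narrative plus a recommended response for a Level 1 SOC engineer. The `call_llm` helper is a hypothetical placeholder for whatever model endpoint a team actually uses, and the prompt structure and field names are assumptions, not the team's implementation.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for the GenAI endpoint a SOC would use
    (an internal gateway or a vendor chat-completion API)."""
    raise NotImplementedError

def summarize_incident(incident_events: list[dict]) -> str:
    """Turn a chain of related low-confidence alerts into a plain-language
    narrative plus a recommended next step for a Level 1 SOC engineer."""
    prompt = (
        "You are assisting a Level 1 SOC engineer.\n"
        "Given the following related security events (JSON), provide:\n"
        "1. A short narrative of what appears to be happening.\n"
        "2. A confidence assessment.\n"
        "3. A recommended response, flagged for human approval.\n\n"
        + json.dumps(incident_events, indent=2, default=str)
    )
    return call_llm(prompt)
```

The key point of the pattern is that the model synthesizes context the analyst would otherwise assemble by hand, while the final action still goes through human review.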
Vishwa: Where do you see AI governance breaking down inside organizations today, based on your experience?
Norman: Governance usually breaks down in three places.
The common thread is that companies treat AI governance as a one-time setup instead of a continuous discipline.
Vishwa: What is one area where defenders mistakenly overestimate AI’s capabilities?
Norman: AI provides probability scores, not the business context required for confident decision-making.
That’s why human-in-the-loop oversight remains critical. AI can accelerate detection and triage, but final judgment calls—especially those that impact operations or risk posture—must involve human review.
Overreliance on AI without that safeguard leads to alert fatigue or, worse, false confidence that puts the organization at risk.
Vishwa: Which security tasks should never be automated using AI, and why?
Norman: There are several areas where full automation introduces unacceptable risk.
Final decision-making in incident containment is one of them.
Regulatory and legal interpretations are another critical area. AI hallucinations or misinterpretations can create significant liabilities. Risk acceptance is also off-limits for automation; defining what constitutes “acceptable risk” is a business decision, not an algorithmic one.
What’s more, we’re now seeing legal requirements from customers mandating indemnification if AI is used without human oversight for any decision-making.
This underscores the growing consensus that while automation is powerful, autonomy without human validation is where risk skyrockets—not just operationally, but contractually and legally.
Vishwa: What practical guardrails should companies implement before rolling out AI in threat detection pipelines?
Norman: Before deploying AI, companies need strong guardrails.
Such guardrails keep AI powerful but not reckless, which is the balance every CISO is chasing right now.