Judgment, Governance, and Accountability: A Founder’s Perspective on What Boards Worry About, AI Defense, and Mentorship

Published
Written by:
Vishwa Pandagle
Vishwa Pandagle
Cybersecurity Staff Editor
Key Takeaways
  • Security teams think the engineering team owns the fix, the engineering team thinks the security team owns the policy, and the gap becomes the breach path. 
  • AI agents make it possible to automate defensive security by continuously testing and enforcing controls at machine speed
  • Boards worry about the quieter structural issues that compound over time rather than the headline-grabbing “AI goes rogue” scenarios.
  • Arlene believes that in cybersecurity, there’s no room for vague ownership.
  • At Bltz AI, human judgment remains critical for context, ambiguity, and strategic tradeoffs.

Addressing how boards and security leaders should approach AI risks, Arlene Watson, CEO and Founder of Bltz AI, shares her perspective with TechNadu as part of our International Women’s Day interactions. She emphasizes the importance of embedding governance directly into systems so policies

Many of the risks boards worry about aren’t the headline-grabbing “AI goes rogue” scenarios. They’re the quieter structural issues that compound over time. Having an “AI policy” may look reassuring, but it does not automatically translate into enforceable runtime controls. 

Watson previously worked in engineering and product roles at CrowdStrike, ServiceNow, Sysdig, and Tenable, building cloud security and vulnerability management platforms. Watson also reflects on leadership and mentorship, noting that her most influential mentors were not only technically strong but also decisive, teaching her that clarity under pressure is a leadership muscle.

She encourages women entering cybersecurity, emphasizing that the field needs their perspective because AI security involves human behavior and governance as much as technology.

The conversation explores why ownership gaps often become breach paths, and how strong leadership and accountability form the foundation for secure AI adoption.

Vishwa: In your experience, what does successful AI-driven defense look like?

Arlene: Successful AI-driven defense looks like closed-loop security: 

Practically, that means the system can see all AI usage (agents, copilots, chatbots, APIs), apply the right guardrails in real time, and automatically turn lessons from incidents and testing into stronger policies. The real measure of success isn’t “more alerts”. 

It’s less risk per the area of AI adoption: 

Vishwa: Where do AI agents outperform human-led defense? Where do they still fall short?

Arlene: AI agents outperform humans in scale and consistency: 

They also excel at speed, triaging, enriching, and drafting remediations in seconds.

Where they fall short is judgment under ambiguity: 

Agents can be overconfident, misinterpret intent, or “solve the wrong problem” if the guardrails and oversight aren’t designed well. The best model is human-led strategy + agent-led execution.

Vishwa: Do you think there are security failures around AI that become visible only after incidents occur?

Arlene: Yes, many AI security failures are latent until the exact conditions line up. Examples include:

Another category is “quiet drift”: 

Vishwa: Are there guardrails that are essential to prevent defensive AI from becoming a source of risk?

Arlene: Absolutely. Defensive AI can become risky if it has too much power, too little visibility, or weak governance. 

Essential guardrails include:

Vishwa: What are the common risks small to large enterprises must be prepared for in the AI Era?

Arlene: Across company size, the biggest risks cluster into a few buckets:

Vishwa: Which AI-driven attack techniques are quietly moving from experimentation to real-world impact?

Arlene: Two trends are becoming very real:

  1. Prompt injection evolving into “agent hijacking”, especially when agents can call tools, browse, access files, or take actions.
    1. It’s no longer just “get the model to say something,” it’s “get the system to do something.”
  2. Data exfiltration through indirect paths like getting a model to summarize sensitive context, leak through logs, or reconstruct restricted information via retrieval or long conversations.

We’re also seeing attackers operationalize social engineering at scale using AI:

Vishwa: In breach reviews you have seen, are there human decisions that fail, even with automation in place?

Arlene: Yes. Automation often breaks down at the decision points humans control: 

Another common failure is assuming ownership is clear when it isn’t. Security teams think the engineering team owns the fix, the engineering team thinks the security team owns the policy, and the gap becomes the breach path. 

The lesson is that automation must be paired with clear accountability, measurable controls, and guardrails that make the safe path the easy path.

Vishwa: How do AI Agents make it possible to automate defensive security?

Arlene: AI agents make it possible to automate defensive security by continuously testing and enforcing controls at machine speed, rather than relying on periodic reviews and reactive alert handling. 

They can:

This shifts security from static checklists and manual triage to adaptive, continuously improving protection, while humans stay focused on oversight, risk tolerance, and high-impact decisions.

Vishwa: Are there risks that security teams raise with boards that rarely make it into public AI security narratives?

Arlene: Absolutely. Many of the risks boards worry about aren’t the headline-grabbing “AI goes rogue” scenarios. They’re the quieter structural issues that compound over time.

The board-level question is shifting from “Is AI risky?” to “How does risk change when autonomy, sensitive data, and tool access intersect and can we prove our controls actually work?

Vishwa: What advice would you give women entering AI and security today? What should they focus on, and how can they work together with the male counterparts?

Arlene: Focus on building technical credibility plus strategic clarity. 

The field needs your perspective, especially because AI security is as much about human behavior and governance as it is about technology.

Vishwa: Could you tell us about your mentors or someone who influenced you, and the lesson from them that guided you?

Arlene: I’ve been fortunate to work with leaders who operated at very high standards, especially during my time scaling security businesses at large public companies. The most influential mentors weren’t just technically strong; they were decisive. They taught me that clarity under pressure is a leadership muscle.

Arlene Watson

The most influential mentors weren’t just technically strong; they were decisive. They taught me that clarity under pressure is a leadership muscle.

Arlene Watson
Bltz AI CEO & Founder

One lesson that stayed with me is this: build for the long game, but execute in the short term. It’s easy in security to get caught reacting to noise. The best leaders focus on structural impact, building systems, teams, and products that compound over time, while still delivering measurable results every quarter.

Another lesson was accountability. In cybersecurity, there’s no room for vague ownership. If something breaks, someone owns it. That mindset shaped how I lead today. Clear responsibility, measurable outcomes, and no ambiguity about who is accountable.

Finally, I learned that credibility comes from doing the hard things repeatedly such as shipping, scaling, making tough calls, and standing by them. Leadership isn’t about titles. It’s about consistency and integrity over time.

Those lessons continue to guide me as we navigate the AI era, where the stakes are high and the decisions we make now will define the security foundations of the next decade.


For a better user experience we recommend using a more modern browser. We support the latest version of the following browsers: For a better user experience we recommend using the latest version of the following browsers: