Judgment, Governance, and Accountability: A Founder’s Perspective on What Boards Worry About, AI Defense, and Mentorship
- Security teams think the engineering team owns the fix, the engineering team thinks the security team owns the policy, and the gap becomes the breach path.
- AI agents make it possible to automate defensive security by continuously testing and enforcing controls at machine speed
- Boards worry about the quieter structural issues that compound over time rather than the headline-grabbing “AI goes rogue” scenarios.
- Arlene believes that in cybersecurity, there’s no room for vague ownership.
- At Bltz AI, human judgment remains critical for context, ambiguity, and strategic tradeoffs.
Addressing how boards and security leaders should approach AI risks, Arlene Watson, CEO and Founder of Bltz AI, shares her perspective with TechNadu as part of our International Women’s Day interactions. She emphasizes the importance of embedding governance directly into systems, so policies translate into practical safeguards that function during real-world AI operations rather than remaining reassuring documents on paper.
Many of the risks boards worry about aren’t the headline-grabbing “AI goes rogue” scenarios. They’re the quieter structural issues that compound over time. Having an “AI policy” may look reassuring, but it does not automatically translate into technical safeguards that operate during live AI use.
Watson previously worked in engineering and product roles at CrowdStrike, ServiceNow, Sysdig, and Tenable, building cloud security and vulnerability management platforms. Watson also reflects on leadership and mentorship, noting that her most influential mentors were not only technically strong but also decisive, teaching her that clarity under pressure is a leadership muscle.
She encourages women entering cybersecurity, emphasizing that the field needs their perspective because AI security involves human behavior and governance as much as technology.
The conversation explores why ownership gaps often become breach paths, and how strong leadership and accountability form the foundation for secure AI adoption.
Vishwa: In your experience, what does successful AI-driven defense look like?
Arlene: Successful AI-driven defense looks like closed-loop security:
- continuous discovery,
- prevention,
- detection, and
- remediation that improves with every attempt without slowing the business down.
Practically, that means the system can see all AI usage (agents, copilots, chatbots, APIs), apply the right guardrails in real time, and automatically turn lessons from incidents and testing into stronger policies. The real measure of success isn’t “more alerts”.
It’s less risk per the area of AI adoption:
- fewer leaks,
- fewer bypasses,
- faster response, and
- fewer recurring failures.
Vishwa: Where do AI agents outperform human-led defense? Where do they still fall short?
Arlene: AI agents outperform humans in scale and consistency:
- combing through huge volumes of telemetry,
- monitoring conversations and tool calls,
- spotting patterns across sessions, and
- running repeatable tests (like continuous red teaming) without fatigue.
They also excel at speed, triaging, enriching, and drafting remediations in seconds.
Where they fall short is judgment under ambiguity:
- understanding business context,
- handling novel edge cases, and
- making tradeoffs when signals conflict.
Agents can be overconfident, misinterpret intent, or “solve the wrong problem” if the guardrails and oversight aren’t designed well. The best model is human-led strategy + agent-led execution.
Vishwa: Do you think there are security failures around AI that become visible only after incidents occur?
Arlene: Yes, many AI security failures are latent until the exact conditions line up. Examples include:
- prompt-injection paths that only appear when agents have tool access,
- data exposure that only happens when a user uploads a certain document type, or
- brand risks that only surface when generated outputs go viral.
Another category is “quiet drift”:
- Small changes in prompts, models, plugins, or policies can introduce risk gradually, and teams only recognize it after something breaks.
- That’s why continuous validation (not one-time reviews) is essential.
Vishwa: Are there guardrails that are essential to prevent defensive AI from becoming a source of risk?
Arlene: Absolutely. Defensive AI can become risky if it has too much power, too little visibility, or weak governance.
Essential guardrails include:
- Least-privilege tool access (agents should only do what they must, nothing more)
- Strong identity and authorization (clear separation of tenants, roles, and admin actions)
- Policy enforcement at runtime (not just documentation—real controls in the path)
- Auditability and tamper-resistant logs (every action is attributable and reviewable)
- Human approval for high-impact actions (block, delete, rotate credentials, notify regulators, etc.)
- Safe failure modes (when uncertain, degrade safely rather than “guess”)
Vishwa: What are the common risks small to large enterprises must be prepared for in the AI Era?
Arlene: Across company size, the biggest risks cluster into a few buckets:
- Data exposure: sensitive inputs copied into prompts, files uploaded to copilots, retrieval systems pulling the wrong data
- Prompt injection + tool abuse: attackers steering AI systems to reveal secrets, take actions, or exfiltrate data through connected tools
- Shadow AI: employees adopting public LLMs and unsanctioned agents faster than governance can keep up
- Model/agent supply chain risk: plugins, agent frameworks, prompt libraries, and third-party APIs expanding the attack surface
- Integrity and brand risk: confident-but-wrong outputs, toxic content, or policy-violating responses that damage trust
- Operational risk: uncontrolled cost, latency, and reliability issues that turn into security and availability problems
Vishwa: Which AI-driven attack techniques are quietly moving from experimentation to real-world impact?
Arlene: Two trends are becoming very real:
- Prompt injection evolving into “agent hijacking”, especially when agents can call tools, browse, access files, or take actions.
- It’s no longer just “get the model to say something,” it’s “get the system to do something.”
- Data exfiltration through indirect paths like getting a model to summarize sensitive context, leak through logs, or reconstruct restricted information via retrieval or long conversations.
We’re also seeing attackers operationalize social engineering at scale using AI:
- hyper-personalized phishing,
- fake support tickets, and
- believable internal messaging that accelerates credential theft and access abuse.
Vishwa: In breach reviews you have seen, are there human decisions that fail, even with automation in place?
Arlene: Yes. Automation often breaks down at the decision points humans control:
- misconfigured access,
- exceptions granted without expiration,
- “temporary” permissions that become permanent, and
- alerts that are tuned for noise reduction rather than risk reduction
Another common failure is assuming ownership is clear when it isn’t. Security teams think the engineering team owns the fix, the engineering team thinks the security team owns the policy, and the gap becomes the breach path.
The lesson is that automation must be paired with clear accountability, measurable controls, and guardrails that make the safe path the easy path.
Vishwa: How do AI Agents make it possible to automate defensive security?
Arlene: AI agents make it possible to automate defensive security by continuously testing and enforcing controls at machine speed, rather than relying on periodic reviews and reactive alert handling.
They can:
- Simulate adversarial behavior,
- Monitor live AI interactions,
- Evaluate context like identity and data sensitivity, and
- Immediately validate whether safeguards hold up under pressure.
- When a weakness is detected, the system can adjust controls, update policies, and retest automatically, creating a closed feedback loop.
This shifts security from static checklists and manual triage to adaptive, continuously improving protection, while humans stay focused on oversight, risk tolerance, and high-impact decisions.
Vishwa: Are there risks that security teams raise with boards that rarely make it into public AI security narratives?
Arlene: Absolutely. Many of the risks boards worry about aren’t the headline-grabbing “AI goes rogue” scenarios. They’re the quieter structural issues that compound over time.
- One is shadow AI: employees adopting tools and agents faster than governance can keep up, creating invisible data exposure and unclear accountability.
- Another is access creep. When AI systems are connected to email, code repositories, ticketing systems, or cloud consoles, a single weakness can suddenly have a much larger blast radius than anyone anticipated.
- There’s also the gap between policy and reality. Having an “AI policy” looks reassuring on paper, but it doesn’t automatically translate into enforceable runtime controls.
- Boards often discover too late that governance wasn’t technically embedded.
- Third-party exposure is another under-discussed issue. Plugins, agent frameworks, prompt libraries, and model providers expand the attack surface in ways traditional vendor risk processes weren’t designed to evaluate.
- And finally, there’s integrity risk. The damage caused by AI that is confidently wrong.
- Inaccurate financial analysis, flawed customer communication, or incorrect automated actions can erode trust just as quickly as a breach.
The board-level question is shifting from “Is AI risky?” to “How does risk change when autonomy, sensitive data, and tool access intersect and can we prove our controls actually work?”
Vishwa: What advice would you give women entering AI and security today? What should they focus on, and how can they work together with the male counterparts?
Arlene: Focus on building technical credibility plus strategic clarity.
- Learn how AI systems actually work in production, such as data flows, access control, tooling, and failure modes.
- Pair that with the ability to translate risk into business outcomes.
- Don’t wait to be invited into the room: publish, speak, demo, and ship.
- Working well with male counterparts is about forming high-trust teams:
- assume positive intent,
- be direct,
- ask for clarity, and
- make decisions based on evidence.
- Find allies who value outcomes over ego, and build networks with other women so you’re not navigating growth, leadership, and visibility alone.
The field needs your perspective, especially because AI security is as much about human behavior and governance as it is about technology.
Vishwa: Could you tell us about your mentors or someone who influenced you, and the lesson from them that guided you?
Arlene: I’ve been fortunate to work with leaders who operated at very high standards, especially during my time scaling security businesses at large public companies. The most influential mentors weren’t just technically strong; they were decisive. They taught me that clarity under pressure is a leadership muscle.
The most influential mentors weren’t just technically strong; they were decisive. They taught me that clarity under pressure is a leadership muscle.
One lesson that stayed with me is this: build for the long game, but execute in the short term. It’s easy in security to get caught reacting to noise. The best leaders focus on structural impact, building systems, teams, and products that compound over time, while still delivering measurable results every quarter.
Another lesson was accountability. In cybersecurity, there’s no room for vague ownership. If something breaks, someone owns it. That mindset shaped how I lead today. Clear responsibility, measurable outcomes, and no ambiguity about who is accountable.
Finally, I learned that credibility comes from doing the hard things repeatedly such as shipping, scaling, making tough calls, and standing by them. Leadership isn’t about titles. It’s about consistency and integrity over time.
Those lessons continue to guide me as we navigate the AI era, where the stakes are high and the decisions we make now will define the security foundations of the next decade.










