As part of TechNadu’s International Women’s Day campaign highlighting women in cybersecurity, we spoke with Ashley M. Rose, CEO and Founder of Living Security, about human risk, social engineering, and security leadership.
Rose serves as a board member and chief advisor with WiCyS Austin and contributes to the Forbes Technology Council. She emphasizes that organizations already possess more risk data than they realize.
The problem is not collection; it is connection. Behavioral signals, identity, and external threats are often treated as separate domains, resulting in reactive decisions and a fragmented view of risk instead of a holistic one.
In this LeadHer in Security conversation, Rose outlines how Human Risk Management (HRM) moves from static policies to adaptive controls, where enforcement adjusts according to who the person is, what they can access, and what they are targeting at that moment.
She speaks clearly about leadership and the invisible tax women pay in cybersecurity. Her advice to aspiring women professionals is to anchor themselves in evidence and outcomes, not permission.
Vishwa: You emphasize using behavioral data already inside organizations. What kinds of data could be useful, and where do teams often go wrong?
Ashley: Organizations already have more risk signals than they realize, but the real value comes from connecting three dimensions at once:
Behavioral signals like repeated MFA friction, risky sharing, slow reporting, or incomplete remediation are important, but they only become meaningful when viewed alongside identity context, such as privilege level, authentication strength, and access to critical systems, and external signals showing what attackers are actively targeting.
Teams often go wrong by treating these as separate domains, which leads to reactive decisions and a narrow view of risk.
A practical HRM example is a phishing response. A click on its own should not drive action. But when that click comes from someone with elevated access, operating under high workload patterns, at the same time a phishing campaign is actively targeting that role, the risk profile changes immediately.
That is when controls should adapt.
Human Risk Management succeeds when organizations stop asking “what happened” and start asking “given who this person is, what they can access, and what attackers are doing right now, what is the safest next step?”
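The phishing-response example above can be sketched in a few lines. This is a minimal, illustrative model, not Living Security’s implementation: the signal names, thresholds, and action labels are all assumptions made for the sketch, showing only how the three dimensions (behavioral, identity, external) combine into one adaptive decision.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    clicked_phish: bool           # behavioral signal: clicked a phishing lure
    repeated_mfa_friction: bool   # behavioral signal: recurring MFA trouble
    privileged: bool              # identity context: elevated access
    role_targeted: bool           # external signal: active campaign on this role

def next_step(ctx: UserContext) -> str:
    """Combine behavioral, identity, and external signals into one decision.
    A click alone does not drive action; the same click from a privileged
    user whose role is actively targeted warrants an adaptive control."""
    if ctx.clicked_phish and ctx.privileged and ctx.role_targeted:
        return "step-up-auth"        # e.g. require stronger authentication
    if ctx.clicked_phish or ctx.repeated_mfa_friction:
        return "targeted-coaching"   # low-friction nudge, not enforcement
    return "no-action"

# A click on its own: coach, don't enforce.
print(next_step(UserContext(True, False, False, False)))  # targeted-coaching
# Same click, privileged user, role under active attack: adapt the control.
print(next_step(UserContext(True, False, True, True)))    # step-up-auth
```

The point of the sketch is the shape of the logic, not the specific rules: no single signal triggers enforcement; the combination does.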
Vishwa: When social engineering extends into collaboration tools like Teams and Slack, how should security teams adapt controls?
Ashley: When social engineering moves into Teams and Slack, it is no longer just a channel shift; it is a trust shift. Attackers are increasingly using AI to craft messages that mirror internal language, impersonate familiar roles, and adapt in real time based on responses.
That’s why Teams and Slack are now some of the most dangerous places in the enterprise, because trust is assumed, and attackers exploit that assumption with surgical precision.
Security teams need to treat these platforms as primary attack surfaces and respond with a more contextual approach. That starts with tightening external access and app permissions, but it also means paying attention to in-channel signals such as unusual requests, new external contacts, or communication patterns that do not align with how teams normally work.
When risk increases, the response should be proportionate and timely.
This kind of approach helps organizations preserve collaboration while making manipulation harder to sustain, even as social engineering becomes more automated and convincing.
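The in-channel signals Rose mentions, such as new external contacts and unusual requests, can be illustrated with a simple heuristic scorer. Everything here is a hypothetical sketch: the term list, weights, and function name are invented for illustration, not drawn from any real Teams or Slack security product.

```python
# Invented keyword list for illustration; real detection would be far richer.
URGENCY_TERMS = {"urgent", "immediately", "wire", "gift card", "asap"}

def message_risk(sender_is_external: bool,
                 sender_is_new_contact: bool,
                 text: str) -> int:
    """Score a collaboration-platform message on a few contextual signals:
    external sender, first-time contact, and urgency language that does not
    match how teams normally communicate. Weights are arbitrary."""
    score = 0
    if sender_is_external:
        score += 1
    if sender_is_new_contact:
        score += 2          # new external contacts deserve extra scrutiny
    lowered = text.lower()
    score += sum(2 for term in URGENCY_TERMS if term in lowered)
    return score

# A first-time external contact pushing an urgent money request scores high.
print(message_risk(True, True, "Need this wire sent immediately"))  # 7
# A routine internal message scores zero.
print(message_risk(False, False, "lunch?"))  # 0
```

A proportionate response would then key off the score: low scores pass silently, higher scores trigger an in-channel verification prompt rather than blocking collaboration outright.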
Vishwa: From your perspective, where does Human Risk Management (HRM) overlap with identity and access management?
Ashley: At the leadership level, Human Risk Management isn’t a bolt-on; it’s a strategic enabler for adaptive access and intelligent controls. It allows CISOs and identity leaders to move from static policies to dynamic enforcement based on live behavioral context.
That shift doesn’t just improve controls, it increases trust, accountability, and risk resilience at scale. HRM observes what people are doing across systems, adds behavioral and situational insight, and then feeds that context back into identity controls where enforcement already lives. IAM remains the decision and control layer, but HRM helps determine when and how those decisions should adapt.
In practice, this means access does not change because of a single event, but because a pattern emerges. When activity signals indicate elevated risk, that context can be used to trigger step-up authentication, reduce privileges, enforce just-in-time access, or slow down sensitive actions until confidence is restored.
This orchestration allows organizations to move beyond static access models and apply identity controls more precisely, based on how people are actually operating in real time.
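Rose’s point that access changes because a pattern emerges, not because of a single event, can be sketched as a sliding-window monitor that hands an escalation signal to the IAM layer. The class name, window size, threshold, and action string are all assumptions made for this sketch.

```python
from collections import deque

class PatternMonitor:
    """Escalate to the IAM layer only after several risk signals accumulate
    in a short window; one event alone never changes access. Window size
    and threshold here are illustrative, not from the interview."""

    def __init__(self, window: int = 5, threshold: int = 3):
        # deque(maxlen=...) drops the oldest event automatically,
        # giving a rolling window of the most recent observations.
        self.events = deque(maxlen=window)
        self.threshold = threshold

    def record(self, risky: bool) -> str:
        self.events.append(risky)
        if sum(self.events) >= self.threshold:
            # HRM supplies the context; IAM remains the enforcement layer.
            return "enforce-just-in-time-access"
        return "observe"

m = PatternMonitor()
print(m.record(True))   # observe: one event is not a pattern
print(m.record(True))   # observe
print(m.record(True))   # enforce-just-in-time-access: a pattern emerged
```

The design choice mirrors the division of labor Rose describes: the monitor only produces context and a recommendation, while the actual privilege change stays inside the identity controls where enforcement already lives.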
Vishwa: When attackers rely on fatigue, urgency, or trust to bypass controls, what defensive approaches appear promising based on what you have observed?
Ashley: Attackers succeed because they exploit how people actually work, especially under pressure. Fatigue, urgency, and trust are not edge cases; they are daily operating conditions. The most promising defenses are the ones that reduce decision-making at high-risk moments.
Strong authentication that cannot be socially engineered, clear verification paths for sensitive requests, and removing unnecessary prompts all matter. So does reducing noise. When employees are overwhelmed with generic warnings, the signal that matters gets lost. Targeted, timely interventions at the moment of risk consistently outperform broad awareness campaigns.
Vishwa: When ransomware incidents occur, how often does human behavior play a role compared to unpatched systems or misconfigurations?
Ashley: In real ransomware incidents, it is rarely useful to separate human behavior from technical failure. Most events involve a combination of exposure, access, and action. What is changing is that those actions are increasingly mediated by automation and agents.
A human may approve an access change, connect an integration, or authorize an automated workflow that then executes at scale, often faster than traditional controls can react.
This shifts the risk model. A single misjudgment can now cascade through systems, expanding impact before anyone notices. That makes it even more important to understand not just what systems are exposed, but who can trigger automated actions and under what conditions.
Organizations that reduce ransomware impact tend to focus on limiting blast radius, applying friction to high-risk approvals, and monitoring for abnormal automated behavior, not just patching systems or training users. As automation increases, the human decision points around it become some of the most critical places to manage risk.
Vishwa: As organizations adopt AI tools across workflows, where do you see new human-risk blind spots emerging?
Ashley: As AI tools spread into everyday workflows, the biggest risks are emerging around assumptions. People assume the tool is safe, the output is correct, or the permissions are reasonable because it feels automated and official. In reality, sensitive data is being shared through prompts, powerful integrations are being over-permissioned, and employees are relying on AI outputs without appropriate verification.
Banning tools won’t work, especially when innovation is moving faster than policy. What does work, and what protects both data and momentum, is a risk-intelligent approach grounded in how people are using AI in real time. This is where Human Risk Management delivers strategic value: it equips leaders to adapt safely while driving transformation.
Vishwa: What role does leadership behavior play in shaping employee security habits, based on what you have observed?
Ashley: Leadership behavior sets the ceiling for security culture. Employees take their cues from what leaders tolerate and model, not from policy documents. If leaders bypass controls, demand exceptions, or treat security as an obstacle, the organization absorbs that lesson immediately.
When leaders use the same secure workflows, talk openly about tradeoffs, and frame security as part of delivering the business safely, behavior shifts across teams. Human risk is not just a workforce issue; it is a leadership one.
Vishwa: As a founder and CEO, are there challenges that you have faced that are rarely discussed openly by women in cybersecurity? What is your advice to aspiring professionals?
Ashley: As a founder and CEO, one challenge that is still not discussed enough is the invisible tax many women pay to be perceived as credible and ambitious at the same time. You are often navigating higher scrutiny, different expectations, and fewer allowances for mistakes.
My advice is to anchor yourself in evidence and outcomes, not permission. Build a clear point of view, stay close to customer reality, and seek sponsors who will advocate for you when you are not in the room. Progress comes faster when you stop trying to fit an image and focus on delivering impact.
Vishwa: Based on your experience, what practices in mentorship programs tend to succeed, and where could they improve?
Ashley: Mentorship works best when it is intentional and outcome-driven. Programs succeed when there is a clear purpose, a defined timeframe, and accountability on both sides. They fall short when they are informal, unstructured, or disconnected from real opportunities.
The biggest improvement most organizations can make is pairing mentorship with sponsorship, creating pathways where guidance turns into access, visibility, and advancement. Conversations matter, but opened doors matter more.
Vishwa: What skills or experiences most helped you move from founder to long-term operator and leader?
Ashley: The biggest shift for me was learning how to operate at speed without losing clarity. I tend to move fast, challenge assumptions, and think a few steps ahead, which only works if you surround yourself with people who thrive in that kind of environment.
Over time, I learned to hire leaders who are comfortable with ambiguity, who can translate a broad direction into concrete execution, and who are empowered to make decisions without waiting for permission.
My role evolved from being close to every decision to holding the full picture in my head, setting the trajectory, and removing friction so the team can move quickly and confidently.
That means trusting strong operators, staying outcome-focused, and creating an environment where unconventional ideas are tested, not dismissed.
Leadership at scale is not about control; it’s about clarity. My job is to clear the path, keep the mission visible, and empower the kind of operators who turn velocity into value. That’s how you scale impact without losing the spark that started it all.