We spoke with Alex Spivakovsky, Vice President of Research and Cybersecurity at Pentera. Spivakovsky previously served in the Israel Defense Forces, where he led incident response operations, penetration testing, and software development training.
Spivakovsky details how teams can gain continuous visibility into what is actually exploitable, not just theoretically vulnerable, and explains what turns security performance into a business narrative. He also describes how pentesters' capabilities can be expanded, turning validation into a sustainable part of the testing program. Read on to learn why security leaders are shifting from vulnerability management to exposure management, and more, in this interview.
Vishwa: How can organizations move from annual red-team testing to continuous validation without overwhelming their existing security resources?
Alex: Moving from annual testing to continuous validation isn’t just a tooling decision; it’s a mindset shift. Human testers offer depth and creativity, but the downside is that their assessments are point-in-time and limited in scope. Meanwhile, exposures evolve daily and can be anywhere across your attack surface, from endpoints to containerized workloads and SaaS configurations.
The key is operationalizing the attacker’s perspective without adding headcount. That means automating real-world attack scenarios safely in production, using the same TTPs adversaries rely on for lateral movement, privilege escalation, or data exfiltration.
Done right, this gives teams continuous visibility into what’s actually exploitable, not just theoretically vulnerable. You’re not replacing red teams or pentesters. You’re expanding their testing capabilities and turning validation into a sustainable, always-on part of your adversarial testing program.
Vishwa: How can security leaders ensure their metrics reflect real resilience instead of compliance, and communicate meaningfully to executives?
Alex: Compliance is the baseline for a security program, not a measure of whether security is effective. It ensures the fundamentals are in place, but it does not show how those controls perform under real-world conditions.
To measure effectiveness, you need to understand your exposure: what an attacker could exploit in your environment today, how they could move through it, and which controls would stop them.
That is why security leaders are shifting from vulnerability management to exposure management. Vulnerability management highlights potential weaknesses like missing patches or misconfigurations.
Exposure management goes further by validating exploitability, mapping attack paths, and quantifying the blast radius if a control fails. It provides measurable insight into the effectiveness of EDR, MFA, segmentation, and identity governance under real attack scenarios.
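To make the attack-path and blast-radius idea concrete, here is a minimal sketch (hypothetical asset names and edges, not Pentera's implementation): the environment is modeled as a directed graph where an edge means a validated path to compromise, and a breadth-first search counts what an attacker could reach from an initial foothold.

```python
from collections import deque

# Hypothetical asset graph: an edge A -> B means an attacker who controls
# A has a validated path to compromise B (e.g. via shared credentials,
# an over-permissioned token, or a flat network segment).
ATTACK_GRAPH = {
    "phished-laptop": ["file-server", "svc-account"],
    "svc-account": ["ad-domain-controller"],
    "file-server": ["backup-server"],
    "ad-domain-controller": ["file-server", "crm-database"],
    "backup-server": [],
    "crm-database": [],
}

def blast_radius(graph, foothold):
    """Return every asset reachable from the initial foothold (BFS)."""
    seen = {foothold}
    queue = deque([foothold])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    seen.discard(foothold)  # report only downstream exposure
    return seen

exposed = blast_radius(ATTACK_GRAPH, "phished-laptop")
print(f"{len(exposed)} assets exposed: {sorted(exposed)}")
```

In this toy graph, a single phished laptop exposes every downstream asset, including the CRM database; fixing one choke-point edge (say, the over-permissioned service account) shrinks the reachable set, which is exactly the kind of measurable exposure reduction described above.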
When engaging with executives, the goal is to tell the story behind the data. They do not need more metrics; they need a clear view of risk, what is likely to happen, what it could cost, and how effectively the organization can respond.
Showing that validated exposures are shrinking, detection speed is improving, and the potential financial impact of incidents is decreasing turns security performance into a business narrative. It builds shared understanding and positions security as a measurable contributor to the organization’s resilience and success.
Vishwa: You often emphasize the attacker mindset in shaping defenses. What influenced that perspective, and was there a moment or series of insights that redefined how you see adversarial testing?
Alex: My career began on the offensive side of cybersecurity, and that mindset still shapes how I approach defense today. The biggest lesson from years of leading pentesting and red-teaming assessments is that most breaches don’t hinge on zero-days; hackers rely on far more common techniques such as misconfigurations, over-permissioned identities, and process gaps.
Now, sitting in a CISO seat, I see both sides of the equation. Defenders build controls to manage risk, while attackers look for the seams between them: unmonitored service accounts, overly trusted API tokens, or stale privilege paths in hybrid AD environments. Adversarial testing bridges that gap. It delivers proof, not assumptions, about how your environment actually holds up when challenged.
That is what “thinking like an attacker” really means in a defensive role: not just anticipating threats, but continuously validating that your defenses work against the tactics that matter most.
Vishwa: Many CISOs face tool sprawl across security stacks. How can teams consolidate or rationalize tools effectively without creating blind spots, and what practical criteria help decide what stays or goes?
Alex: Tool sprawl is often the byproduct of good intentions. Each tool solves a specific problem, but over time, overlapping capabilities, data silos, and maintenance overhead start to slow the team down instead of strengthening it. The key is to evaluate tools not by how many vulnerabilities they find, but by how much risk they actually help reduce.
A practical starting point is mapping where tools overlap versus where they complement each other — for instance, identifying overlap between CSPM and CNAPP solutions, or where EDR and NDR coverage intersect.
Does each tool deliver unique telemetry or detection logic, or is its entire benefit already covered elsewhere? From there, prioritize the ones that provide actionable, validated outcomes rather than theoretical indicators.
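One way to make that overlap mapping concrete is a simple capability matrix. The sketch below (hypothetical tool and capability names, not a real inventory) uses plain set operations to flag which capabilities each tool contributes uniquely versus which are already covered elsewhere:

```python
# Hypothetical capability coverage per tool; a real inventory would be
# built from vendor documentation and observed telemetry.
COVERAGE = {
    "edr-tool": {"endpoint-telemetry", "process-detections", "response-actions"},
    "ndr-tool": {"network-telemetry", "lateral-movement-detections"},
    "cspm-tool": {"cloud-misconfig-checks", "iam-posture"},
    "cnapp-tool": {"cloud-misconfig-checks", "iam-posture", "workload-runtime"},
}

def coverage_report(coverage):
    """Split each tool's capabilities into unique vs duplicated coverage."""
    report = {}
    for tool, caps in coverage.items():
        others = set().union(*(c for t, c in coverage.items() if t != tool))
        report[tool] = {"unique": caps - others, "overlap": caps & others}
    return report

for tool, split in coverage_report(COVERAGE).items():
    print(tool, "unique:", sorted(split["unique"]),
          "| overlap:", sorted(split["overlap"]))
```

Here the CSPM tool contributes nothing the CNAPP tool does not already cover, making it a consolidation candidate, while the EDR and NDR tools each provide unique telemetry worth keeping.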
Consolidation shouldn’t mean doing less; it should mean doing what matters, better. The goal is a leaner, smarter stack that produces proof of security effectiveness rather than just collecting more data about it.
Vishwa: As AI becomes embedded in defense tools, how can teams adopt it responsibly while balancing automation benefits with the risks of over-trusting algorithmic outputs, especially after recent AI-assisted breaches?
Alex: AI is already transforming cybersecurity. It can accelerate analysis, strengthen defenses, and free up teams to focus on higher-value work. But it can just as easily introduce new risks, especially when decisions start getting made faster than they can be validated.
Responsible adoption starts with governance, not technology. AI should be treated like any other part of your security stack: measured, tested, and accountable. At Pentera, we adopted the ISO 42001 framework to ensure that every AI initiative, whether in research, product, or operations, is developed with oversight, transparency, and alignment to business and security objectives.
That structure helps us innovate safely while keeping visibility and control intact. Security leaders need to build similar guardrails inside their organizations. Define how AI is evaluated, who owns the risk, and what validation processes are in place before outputs influence production or decision-making.
And don’t forget to educate your employees. All the policies in the world mean nothing if shadow AI runs rampant in your organization and you have no visibility into your attack surface.
Vishwa: You often speak about aligning security with business outcomes. What led you to that approach, and was it shaped more by a specific experience or a series of recurring observations?
Alex: Security cannot operate in isolation from the business it protects. Every decision we make, whether about controls, tooling, or validation, has to connect back to operational and financial impact.
It is not just about reducing risk. It is about demonstrating measurable value. When you can show how improved readiness translates to money saved against potential breaches, reduced downtime, or greater efficiency, the conversation changes.
It shifts from defending budgets to driving strategy. Security stops being seen as a cost center and becomes recognized as a measurable enabler of resilience and performance.
At Pentera, we approach validation with that mindset. By proving the effectiveness of defenses and quantifying exposure reduction, we give organizations the ability to translate security posture into business outcomes that matter.
Vishwa: With Cybersecurity Awareness Month highlighting education, how can professionals bridge the AI-security skill gap? What kind of experience or training should beginners and seasoned experts pursue to grow effectively in this space?
Alex: Bridging the AI-security gap starts with understanding how AI changes both attack and defense. It expands the threat surface through model manipulation, data exposure, and automated exploitation, so defenders need to know how to test and secure these systems, not just use them.
For newcomers, the focus should be on the fundamentals: identity, cloud security, and adversarial thinking.
For experienced practitioners, it’s about learning to validate AI-driven outputs, assess model integrity, and apply the same rigor to AI systems that we already apply to critical infrastructure.
AI is not an inherently harder attack surface. It is simply the newest one. Complex systems break in complex ways, and we need to understand where those weak points are, whether that means protecting Model Context Protocol (MCP) servers, securing API endpoints, or hardening data pipelines against injection and manipulation.
The tools to defend the AI ecosystem are still catching up, which means for now, we need to work harder to defend it, and that starts with understanding.