Avery Pennarun, CEO and Co-Founder of Tailscale, explains how attackers bypass security controls through identity gaps and how breaches typically unfold after legitimate access is obtained.
Pennarun has built and operated networking and security systems across startups and large technology companies, with a long-standing focus on practical controls, usability, and operational reliability.
He describes how AI accelerates targeting, social engineering, and ransomware preparation, while quieter, research-driven access has replaced noisy credential attacks.
When attackers use valid credentials and nothing initially appears suspicious, distinguishing legitimate activity from abuse becomes the central challenge, shifting detection from perimeter alerts to behavior, context, and intent.
Pennarun also outlines how stolen credentials, support-desk impersonation, and operational bypasses enable rapid movement after access, and what security teams need to change to contain damage early.
Vishwa: Which security controls do threat actors consistently bypass or exploit due to weak implementation?
Avery: When you look at what attackers consistently bypass, it is rarely the math or the cryptography. It is almost always the messy human and process layer around identity. MFA exists, but it is uneven.
Conditional access exists, but there are exceptions for legacy systems or “temporary” workflows that never quite go away. Device trust exists, but it is often easy to mark a device compliant without really proving it is.
Attackers do not need to defeat the strongest control. They just route around it using whatever the organization made optional.
AI mostly accelerates this. It does not create a new category of failure so much as it puts pressure on the existing ones. Social engineering gets cheaper and more personalized, so helpdesks, contractors, and edge workflows get hit harder. At the same time, shadow AI becomes a quiet data control problem.
People paste sensitive information into whatever tool helps them get work done. That is not malicious. It is rational behavior. But if it is unaudited and unmanaged, it becomes another bypass.
Vishwa: Where do you see the biggest mismatch between how organizations prepare for attacks and how they actually occur?
Avery: One of the biggest mismatches I see is that organizations still prepare for attacks as if they are about breaking in, when most real incidents are about logging in. We spend a lot of effort hardening the perimeter and tracking vulnerabilities, but the actual breach often starts with legitimate access.
A phish, a stolen token, a password reset, or a support interaction that went a little too smoothly. From there, everything unfolds using normal tools, normal permissions, and normal workflows.
AI makes that mismatch more visible. A lot of companies talk about model risk or AI governance, but cannot answer very basic questions: which AI tools are actually in use, what data is flowing into them, and who has access to the results.
If you cannot audit it, you cannot really defend it. And you definitely cannot explain it to anyone later.
Vishwa: What indicators suggest attackers are shifting from broad credential spraying to more research-driven targeting because of the increased difficulties they face during initial access attempts?
Avery: You can see the shift in how the activity feels. Credential spraying is noisy and uniform, and it is getting less effective, so more attackers are doing quieter, research-driven targeting instead. You see fewer attempts, but they are better chosen.
They focus on executives, finance teams, IT admins, or contractors with standing access. They time things around travel, holidays, or organizational changes.
The lures themselves are more context-aware. They reference real projects, internal language, and vendor relationships because someone did the homework. AI makes that homework faster to turn into convincing messages.
Once a foothold is gained, the pivot is immediate toward control points like the identity provider, session tokens, password reset flows, device enrollment, and support processes. That is a strong signal you are dealing with intent, not noise.
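That pivot pattern can be approximated in detection logic. The sketch below is illustrative only: the function and resource names are hypothetical, not any real product's API, and a real inventory of identity control points will differ per organization. The idea is simply to flag sessions that touch identity control points within minutes of initial access.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical labels for identity "control points" -- IdP admin console,
# password reset flows, device enrollment, token APIs. Real names vary.
CONTROL_POINTS = {
    "idp_admin_console",
    "password_reset_flow",
    "device_enrollment",
    "session_token_api",
}

def early_control_point_touches(session_start, events,
                                window=timedelta(minutes=30)):
    """Return events that hit an identity control point soon after
    initial access -- a pattern more consistent with a prepared pivot
    than with opportunistic noise. Each event is a dict with a
    "resource" label and a timezone-aware "time"."""
    cutoff = session_start + window
    return [
        e for e in events
        if e["resource"] in CONTROL_POINTS
        and session_start <= e["time"] <= cutoff
    ]
```

A session whose very first actions land in that set is worth an analyst's attention even when every credential involved is valid.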
Vishwa: Could you share what this year’s ransomware activities say about attacker preparation, initial access vectors, and lateral movement?
Avery: Ransomware today looks less like a single attack and more like a supply chain. Initial access is often purchased. The intrusion itself follows a playbook. There is clear specialization between the groups involved. Attackers show up prepared, with plans for persistence, privilege escalation, and data exfiltration long before encryption ever happens.
AI does not change the goal, but it makes the preparation faster and the execution smoother. Initial access still clusters around identity and remote access. Stolen credentials, token theft, misconfigurations, abused VPN or RDP.
Lateral movement still tends to collapse into controlling identity, because once you control the directory, segmentation becomes mostly theoretical. And because encryption alone is not enough leverage anymore, quiet data theft becomes central to the business model.
Vishwa: What did 2025 attacks reveal about support-desk impersonation tactics as organizations strengthen authentication for 2026?
Avery: As authentication gets stronger, attackers naturally shift to the place where humans are allowed to override it. The support desk. What stood out in 2025 was not that impersonation happened, but how polished it became.
Attackers arrived with breached personal data, internal org charts, spoofed caller IDs, and believable stories that matched real internal processes and timelines.
AI helps here by making those interactions more consistent and scalable. The goal is not to defeat MFA technically. It is to convince someone to bypass it operationally. If your recovery and exception paths are weaker than your login path, attackers will find them. And if those actions are not tightly logged and reviewable, you will not know you have been compromised until much later.
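One way to make those override paths reviewable is to emit a structured, append-only audit record for every support-desk action that bypasses the normal login path. The sketch below is a minimal illustration; the field names, actions, and sink are assumptions, not a reference to any specific product.

```python
import json
import time
import uuid

def log_support_override(actor, target_user, action, justification, ticket_id):
    """Record any support-desk action that bypasses the login path
    (MFA reset, device re-enrollment, temporary exception) so it can
    be reviewed later. Field names are illustrative."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "actor": actor,                # support agent performing the override
        "target_user": target_user,    # account being modified
        "action": action,              # e.g. "mfa_reset", "device_enroll"
        "justification": justification,
        "ticket_id": ticket_id,        # link back to the verified request
    }
    # In practice this would go to an append-only log store;
    # printing stands in for that here.
    print(json.dumps(event))
    return event
```

The point is not the code but the invariant: no override without an actor, a justification, and a ticket that someone can audit after the fact.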
Vishwa: What strategic shift should technology leaders prioritize in 2026 to strengthen resilience against identity-driven and infrastructure-level attacks?
Avery: The biggest shift is to stop treating the internal network as a trust boundary and start treating identity and workload access as the unit of security. One compromised laptop or account should not imply broad reach across your infrastructure. Ambient access needs to disappear, replaced by narrow, explicit permissions that are tied to real needs and expire naturally.
At the same time, monitoring and auditability have to become first-class features, especially as AI becomes part of everyday work. Shadow AI is inevitable if the sanctioned tools are too slow or too restrictive, so the answer is not pretending it will not happen.
The answer is providing usable, approved workflows with clear data boundaries and logs you can actually rely on. If you can tell who accessed what, when, and through which system, AI becomes just another tool you can reason about instead of a blind spot you hope does not matter.
AI does not magically create new vulnerabilities. It scales the old ones. And the defense scales the same way too. Less ambient trust, fewer exception paths, and audit trails that work when you actually need them.
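The "narrow, explicit, expiring" model described above can be sketched as a deny-by-default check over time-scoped grants. All names here are illustrative, assumed for the example rather than drawn from any real policy engine.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    """A narrow, explicit permission: one principal, one resource,
    one action, with a built-in expiry instead of standing access."""
    principal: str
    resource: str
    action: str
    expires_at: datetime

def is_allowed(grants, principal, resource, action, now=None):
    """Deny by default; allow only if a matching, unexpired grant exists."""
    now = now or datetime.now(timezone.utc)
    return any(
        g.principal == principal
        and g.resource == resource
        and g.action == action
        and g.expires_at > now
        for g in grants
    )

# Example: read access to one system for 8 hours, then it expires
# on its own -- no one has to remember to revoke it.
grants = [Grant("alice", "prod-db", "read",
                datetime.now(timezone.utc) + timedelta(hours=8))]
```

The design choice worth noting is that expiry is the default state: a compromised account inherits only whatever narrow grants happen to be live at that moment, not the ambient reach of a flat internal network.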