Blake Entrekin, Deputy CISO at HackerOne, discusses initial access paths, attacker mimicry, and AI driven threats. Blake brings 20 years of experience and leads the company’s Security, Governance, Risk, Compliance (GRC), and Privacy programs.Â
Entrekin specializes in building compliance-driven programs and held leadership roles at Podium, Palo Alto Networks, and Adobe. In this interview he shares insight into how researcher behavior and attacker adaptation intersect inside disclosure workflows.Â
Incidents of attackers evading detection by blending in with legitimate researcher activity, and using AI-driven automation draw attention to where the threat landscape is moving.Â
Entrekin explains the importance of communication between the security, development teams, and at times, the vendor, as part of the bug bounty process, ensuring the fix indeed solves the problem.
Vishwa: What are the most common initial access paths you’ve seen recently? What does that say about security gaps?
Blake: Phishing and social engineering remain the most common initial access paths. What’s shifting is how attackers are now exploiting AI-driven processes and automated workflows, where human oversight is minimal.Â
According to HackerOne’s Hacker-Powered Security Report, valid AI vulnerabilities increased 210%, and prompt injection rose 540%. These patterns show where gaps form as AI becomes part of business processes.Â
It signals a broader issue: that organizations often focus on traditional perimeter defenses, yet gaps emerge when AI operates without clear security guardrails. Organizations that focus on continuous threat exposure management and crowdsourced security identify these weaknesses earlier and reduce risk faster.
Vishwa: How does a bug-bounty finding move from a researcher's report to validated remediation?
Blake: Reports move through a structured review process. As we’ve seen with our triage teams, the first step is to confirm the issue and understand its impact on the business.Â
Teams then decide whether the fix requires:
We rely on clear communication between the security and development teams, and sometimes the vendor, to make sure the fix actually solves the problem and doesn’t introduce new risk.
Vishwa: What telemetry and signals help you detect attacker activity inside a bug-bounty or triage platform?
Blake: It’s tricky because what many organizations label as suspicious is normal for us. Traffic coming from TOR exit nodes, for example, is a routine part of how researchers work, while most companies would see that as a red flag and block it outright.Â
With that context in mind, we focus on patterns that fall outside expected researcher behavior. We look for anomalous patterns in report submissions, unusual timing of activity, and signals from authentication or access management tools.Â
Security Information and Event Management (SIEM)–an essential tool for any SOC– and other automation platforms can help keep track and surface anomalous activities faster.Â
When we combine automation with human analysis, we can quickly distinguish legitimate researcher behavior from potential malicious activity.
Vishwa: Without disclosing sensitive details, how do attackers mimic researcher behavior to evade detection?
Blake: Attackers are getting better at blending in with legitimate researcher activity, often mimicking responsible reporting patterns or using AI-driven automation to mirror the cadence of genuine vulnerability submissions.Â
They’re essentially using the same playbook, just with different intent. The best defense is a combination of AI-powered detection and human oversight, correlating behavioral signals like timing, authentication, and interaction patterns to tell curiosity from compromise.Â
Attackers may learn to look like researchers, but defenders who adapt faster keep the advantage.
Vishwa: Which attacker techniques grew fastest in 2024–2025, and how can employees and organizations curb them moving forward?
Blake: Generative AI accelerated several attack techniques at once. Phishing became more personalized, automated vulnerability discovery grew quickly, and prompt-injection-style attacks surged.Â
Sensitive information disclosures rose 152%, and programs testing AI grew 270%, which shows how fast the surface is expanding. Curbing this requires a mix of awareness, layered defenses, and tools that improve detection and prioritization.Â
AI helps reduce noise; skilled analysts focus on the exposures that matter most.
Vishwa: Which new attack surfaces should organizations prepare for keeping the threat landscape in mind?
Blake: Emerging attack surfaces are AI agents, automated workflows, and integrations that allow machine-to-machine interaction. Any system where AI can make decisions, process data, or interact with other services creates a potential threat vector.Â
That’s why it’s critical to strengthen identity and access management, continuous monitoring, and embedding security guardrails in these processes. As AI scales, so must governance and oversight.
Vishwa: What tooling or platform features are most effective for triage and prioritization in a bug-bounty program?
Blake: Platforms that integrate natural language queries or AI-assisted triage are starting to show real promise. The ability to validate and prioritize findings is important to reduce risk exposure.Â
That said, the most effective tools integrate AI with human expertise—leveraging AI for speed while relying on security judgment to focus on vulnerabilities that matter most to the business.  Â