As Organizations Fall Into Patient Zero Mode, Only Adaptive Threat Intelligence Can Keep Pace by Detecting Reused Attack Patterns Faster

Written by:
Vishwa Pandagle
Cybersecurity Staff Editor

We interviewed John Watters, CEO and Managing Partner at iCOUNTER, a cyber risk intelligence firm, to understand what is next for defenders facing machine-speed adversaries.

Organizations are dangerously unprepared for the next wave of threats like AI-crafted, never-before-seen TTPs that are purpose-built for specific victims and executed at unprecedented speed. 

Watters returned from retirement to dismantle old models, the legacy detection playbook, and reactive defense frameworks that struggle to keep pace with adversaries trained on the very tools meant to stop them. 

AI allows adversaries to impersonate, morph, and mislead. As defenders navigate this new AI battlefield, Watters argues for a pivot toward customer-centric risk intelligence because static detection models will not survive what’s coming next.

At the center of this shift is the unraveling of attribution. Attribution was once the backbone of threat intelligence, used to link attacks to known groups or nation-states. Today, it is at risk of becoming obsolete as AI-impersonation weakens its reliability. 

In this interview, TechNadu tries not only to explain what to expect but also to show what defenders can do, from building agile threat response teams to reshaping how we prioritize and intercept AI-generated attacks before they strike.

When adversaries reuse attack patterns, they become discoverable faster and can be defeated with specific detection rules and mitigation steps. AI-generated, one-off attacks erase that advantage.

Read the full interview for a complete breakdown, deeper context, and defender-focused strategies.

Vishwa: While iCOUNTER is initially focused on large enterprises with high-value assets, how do you see its capabilities eventually benefiting mid-sized organizations facing the same AI-accelerated threats, often with fewer resources? 

John: We partner with large enterprises to deliver our Counter Threat Operating System, covering a wide array of threat categories from fraud to third-party risk to product security. Not only do we deliver risk intelligence, but we also work hand in hand with their internal teams to create/deploy counter-threat strategies against specific targeted attacks. 

For midsize companies, we productize each threat category and deliver specific risk intelligence to the customer that they can act on. In these circumstances, we deliver the actionable risk intelligence – it’s up to customers to execute the counter threat strategy.  A simple example of how our risk intelligence products differ from what’s in the marketplace is in the third-party risk category.   

While third parties serve as a major attack vector and source of data leakage, customers’ options today are limited to risk scoring from vendors that simply leverage attack surface management tools and questionnaires to concoct a ‘risk score’.

At iCOUNTER, we discover which of your third parties are discoverable by adversaries, then track specific compromises of those third parties so you know when the data they hold, or the access they have into your environment, is at risk – a much more precise way to manage third-party risk than reviewing a generic ‘risk score’ that only assesses a third party’s security posture.

Customers care about third parties as attack vectors and data loss risks – and that’s what we focus on.

Vishwa: You came out of retirement to tackle a problem you believe the industry isn’t ready for. What warning signs are still being overlooked, and what’s at stake if we keep relying on legacy models in this AI-driven threat era? 

John: The cybersecurity industry relies on adversarial reuse of TTPs and their associated Indicators of Compromise (IOCs) to discover threat activity in their environment. 

There are 391 threat intelligence companies that document every actor, TTP, IOC, breach, vulnerability, etc., so that you can learn from others’ mistakes.  

As adversaries have continued their reuse patterns, they become discoverable faster and can be defeated by specific detection rules and threat mitigation steps by defenders. If you roll the clock back, this industry started with anti-virus relying on an adversary to reuse a virus vs create new ones… that didn’t work for long as actors leveraged polymorphic virus generators where every virus was new. 

Then, defenders moved out a layer to the attack infrastructure, blocking downloads of viruses and malware. Then, adversaries launched Fast Flux in 2007 to randomly rotate infrastructure, so defenders would simply be blocking an IP that would never be used again to download viruses/malware.

Then, we tried it again at the Malware layer, creating firewall rules, Yara rules, etc., to block specific malware and their variants. By 2015, 80% of all malware seen was seen for the first time.
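The failure mode of signature reuse that Watters traces through this history can be sketched in a few lines. This is a purely illustrative example (the payloads and blocklist are invented): an exact-hash blocklist catches a known sample but misses any variant, which is why polymorphic generation broke the model.

```python
# Illustrative sketch: why exact-signature (hash) reuse detection fails
# once every sample is unique. Payloads and blocklist are hypothetical.
import hashlib

# Blocklist of SHA-256 hashes of previously seen malicious samples.
BLOCKLIST = {hashlib.sha256(b"EVIL_PAYLOAD_V1").hexdigest()}

def is_blocked(sample: bytes) -> bool:
    """Exact-match lookup: only samples seen before are caught."""
    return hashlib.sha256(sample).hexdigest() in BLOCKLIST

print(is_blocked(b"EVIL_PAYLOAD_V1"))  # known sample: True
print(is_blocked(b"EVIL_PAYLOAD_V2"))  # one-byte variant: False
```

A single changed byte produces an entirely different hash, so a generator that mutates every copy defeats the blocklist outright.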

You see a trend, right? Today, AI is enabling the creation of zero-day TTPs, so the actual attack methodology AND the tools are brand new and precision-built to compromise a specific target company. 

In this scenario, the intelligence from the 391 intel companies reporting on what’s been seen before goes out the window, and an increasing number of victims become Patient Zero.  

Defending against this reality will require a new approach from defenders and a very agile intelligence capability to support them – that’s what we’ve built at iCOUNTER in preparation for this reality.

Vishwa: As someone who helped shape the foundations of cyber threat intelligence, what blind spots do you see in how current CTI frameworks are handling AI-augmented adversaries? 

John: As I mentioned, the intelligence framework we pioneered at iSIGHT, which was widely adopted by competitors and defenders alike, linked actor groups, by threat category (cyber espionage/crime/hacktivism) and by country, to specific TTPs and their associated IOCs.

A customer would then correlate the IOCs against their alerts ingested into their SIEM or data lake to determine if there was an active threat against them. So, if a defender saw a certain set of IOCs, they could automatically link those IOCs to their TTPs and to their actor groups to assess risk and prioritize mitigations. 
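The correlation workflow described here can be sketched minimally. All indicator values, TTP IDs, and actor names below are hypothetical, and the intel feed is a plain dictionary standing in for a real threat intelligence platform:

```python
# Hypothetical sketch of IOC correlation against SIEM alerts:
# match each alert's indicator to known intel, then roll up to TTP and actor.
# Every indicator, TTP ID, and group name here is an invented example.

# Intel feed: indicator -> (TTP ID, actor group)
IOC_INTEL = {
    "203.0.113.7":        ("T1071.001", "ExampleBear"),    # C2 over HTTPS
    "bad-domain.example": ("T1566.002", "ExampleSpider"),  # phishing link
}

def correlate(alerts):
    """Return alerts whose indicators match known IOCs, with TTP/actor links."""
    hits = []
    for alert in alerts:
        intel = IOC_INTEL.get(alert["indicator"])
        if intel:
            ttp, actor = intel
            hits.append({"alert_id": alert["id"], "ttp": ttp, "actor": actor})
    return hits

alerts = [
    {"id": 1, "indicator": "198.51.100.1"},        # no intel match
    {"id": 2, "indicator": "bad-domain.example"},  # matches phishing intel
]
print(correlate(alerts))
```

The point of the analogy is visible in the code: the lookup only fires when an indicator has been seen and catalogued before, which is exactly the assumption that novel, per-target TTPs break.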

This approach worked great for the better part of the last 20 years, but will not work in a world of new and novel attack methods constructed for every target, where every target is essentially “Patient Zero”. 

The composition of threats that defenders competed against has largely consisted of general threats like ransomware, Distributed Denial of Service (DDoS), etc., and sector-level threats like the Scattered Spider attacks against retail and transportation sectors, where a specific attack method was used against companies in a specific sector.

There has been very little focus on targeted attacks against specific companies unless they were very mature and part of critical infrastructure, having to compete against nation-state attackers. 

Even in this case, they relied largely on the reuse of TTPs to assess whether they were seeing anything that had been seen before and could therefore defend against it. Imagine playing a football game against a team where you had access to their playbooks, studied film of all of their plays, and knew what audibles linked to which plays.  

Now, the team shows up, and every single play is brand new and never seen before. You might stop it, but it requires you to be in the right place at the right time, coincidentally.

Vishwa: You’ve mentioned that traditional cyber intelligence is losing relevance. What do you think will happen to legacy CTI vendors over the next 2–3 years, and where is the opportunity for them to pivot? 

John: Cyber threat intelligence companies will naturally evolve to try and rebuild their collection and analysis processes to do what we do – start the process with a customer profile. 

Today, their collection and analysis processes focus on adversaries, TTPs, and IOCs. However, they have no idea whether any of these actors are going to target a specific company… until they do.

They’ll get there; however, we have a competitive moat that was built over a long time. Even if current intel providers pivot twice as fast as we did when building and refining ours, we still have a 3-4 year head start – and we’re turning up the innovation pace, not waiting to be caught.

Vishwa: You’ve warned that every organization could soon be “Patient Zero” for new AI-driven attacks. What can defenders do today to prepare for threats no one has seen before? 

John: For starters, they’ll need to anoint their internal ‘Special Ops’ team that is agile, funded, and enabled through a risk intelligence organization to help them detect/deflect/defeat targeted threats.  

And, they’ll need to discover these threats while they’re in their development cycle of target selection, reconnaissance, plan, build, and launch phases, where there is still latency and an ability to intersect the process. 

I think of traditional security organizations as Army/Navy/Air Force, and these “counter threat teams” as Delta/SEALs/Rangers. This interim step we’re in as an industry, where we’ve moved from human-on-human to machine/AI-enabled human-on-human, and eventually to AI-on-AI at some point in the future, will require constant innovation and agility.

I don’t see any other way other than to recognize the challenge and get in the fight. Technology alone will not win this battle.

Vishwa: We’ve seen adversarial use of AI in impersonation and spear phishing, but less discussion around AI-generated TTPs. Can you explain how these “zero-day TTPs” work and how defenders can detect what’s never been seen before? 

John: AI-enabled tools of the trade, like impersonation and spear phishing, have been around for a while, although it’s become pretty mainstream at this juncture. That said, there are AI-enabled tools to create better success within a similar attack methodology.  

For example, the effectiveness of phishing has gone way up using AI. However, today we’re witnessing the early days of ‘polymorphic TTPs’ generated by LLMs tuned with a multitude of prior TTPs and their associated tools.

Here’s an example: If you go to the Museum of Modern Art in NYC, they have a 576 square foot wall that generates never-before-seen art every 30 seconds. They imported high-resolution photos of close to 250,000 pieces of art displayed at the museum over the past 200 years into a large LLM tasked with creating new and novel pieces of art based on what’s been used before every 30 seconds. 

Now, take every TTP/IOC ever used and import them into a large LLM…you get the point. 

Vishwa: Can AI-generated attacks now “custom build” around a target’s own defensive stack? If so, are we approaching the end of security through product-based detection models? 

John: Yes and no. Yes, adversaries can use AI to custom-build an attack born out of AI-enabled reconnaissance of a target company, where they’ve already built a durable advantage over the target in light of its defensive stack.

Then, they leverage AI to create the TTPs to exploit that advantage. I don’t believe that this is the end of security through product-based detection models that leverage heuristics and AI – they will simply have to be consistently tuned, and they reflect our first layer of defense in the emerging domain of AI on AI competition.  

I don’t believe we’ll have full AI-on-AI security for some time to come… but I can’t imagine we’re not headed in that direction.

Vishwa: As AI-enabled attackers adapt faster than security teams can respond, what specific steps in the detection and response cycle can automation now handle effectively? And where does human judgment remain essential to staying ahead of these adaptive threats? 

John: Several defenders are leveraging AI to accelerate the detection and response cycle in a variety of ways. In keeping with the threat intelligence discussion, defenders are beginning to use AI to generate and deploy new detection rules based on new threat intelligence reflecting new TTP/IOCs.

This will dramatically shrink the timeline to synthesize the new intelligence into an action plan, versus pushing the requirement to a team to build the detection rules and another to upload it. 
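That intel-to-rule pipeline can be sketched as a simple transformation. This is a hypothetical example, not any vendor’s actual pipeline: the report fields and the Sigma-style rule layout are illustrative, and a real system would validate, test, and deploy the generated rule rather than just emit it.

```python
# Hypothetical sketch: auto-generating a detection rule from a fresh intel
# report, compressing the intel -> rule -> deployment cycle described above.
# Field names and the Sigma-style rule shape are illustrative assumptions.

def rule_from_intel(report):
    """Turn an intel report's IOCs into a Sigma-style detection rule dict."""
    return {
        "title": f"IOC match: {report['campaign']}",
        "detection": {
            "selection": {
                "dst_ip": report.get("ips", []),
                "dns_query": report.get("domains", []),
            },
            "condition": "selection",
        },
        "level": "high",
    }

report = {
    "campaign": "example-campaign",        # invented campaign name
    "ips": ["203.0.113.7"],                # documentation-range IP
    "domains": ["bad-domain.example"],
}
rule = rule_from_intel(report)
print(rule["title"])
```

Automating this step removes the hand-off between the team that reads the intelligence and the team that writes and uploads the rule, which is where the latency Watters mentions accumulates.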

Another area getting quite a bit of focus is on what folks are referring to as ‘Autonomous SOC,’ which I don’t believe will happen for quite a few years. AI-enabled alert triage is certainly a viable approach to widening the investigation of alerts, leveraging AI to triage all alerts vs just the ones that get escalated by legacy processes.  

This list goes on, and I want to make sure it’s clear that defenders aren’t sitting in place today. Everyone I know is actively engaged in either using or assessing how to use AI to simplify and accelerate their security operations.  

That said, almost all of these efforts rely on the fundamental reuse strategy of adversaries, which will become a lower and lower percentage of attacks over time.

Vishwa: If threat actor identity is no longer stable in an AI-driven threat landscape, what replaces it as the foundation for defensive strategy? Are security teams now forced to prioritize anticipating behavior over profiling adversaries?   

John: Attribution was always the crown jewel of the cyber threat intelligence and intelligence-led security programs. It allowed defenders to personalize the fight, making it tangible, less ephemeral, and way more fun.  

Most importantly, knowing who you were up against was the principal driver of assessing risk and the likelihood of having a persistent adversary. In particular, knowledge that a specific nation state was targeting your IP and/or business upped the ante versus a cyber-criminal group that is more likely to shift to another target if they’re easier prey. 

As sophistication amongst adversaries exploded, misattribution became increasingly simple, allowing a nation-state to make an attack look like the work of a cybercrime or hacktivist group. Solid threat intelligence could typically see through this approach by investigating the hours when the attacks were launched, the tools and infrastructure that were leveraged, etc., and dispel the myth created by the misattribution attempt.

Today, it’s quite easy to spoof the entirety of an adversary to make it much more difficult to dissect and discredit. At some point, defenders begin to simply step back and either disregard attribution in its entirety or simply attribute to an objective, which is what you need to assess risk. 

If a home intruder is at the gates of your home and preparing to enter, you certainly want to know whether they’re after your electronics, there to paint graffiti on the walls, or intent on murdering your family and burning down your house. Once you can assess their objective, you have enough data to prioritize your response.

Vishwa: Is AI rendering attribution obsolete? If threat actors are constantly reconfiguring their methods and hiding behind generative tools, what value does threat actor profiling still hold for enterprises or governments? 

John: Governments have a set of resources that traditional cyber threat intelligence providers and defenders will never have, and will always have a mandate for attribution. 

I believe specific attribution to a group will become less of the focus for commercial entities, so long as they’re able to assess the objective of the adversary, which informs risk and therefore mitigation prioritization.  

The days of pictures of hackers on the cover of a report are numbered.

