Threat Detection: Attackers Can Hide Their Tools, But Not Their Habits
John Laliberte, CEO and Founder of ClearVector, joins TechNadu to analyze how today’s production-level attacks succeed, from compromised developer identities and CI/CD abuse to help-desk weaknesses and vendor access gaps.
Laliberte brings a mix of hands-on engineering depth and national-level security experience, beginning as a Gentoo Linux developer and later serving as a network exploitation analyst at the National Security Agency before leading major detection and response programs at Mandiant and FireEye.
He explains the missed detections that allow intrusions to escalate, and the behavioral clues that aid attribution. We learned that very few organizations have tested, isolated, immutable backups, and that difference is extremely important.
This conversation explores the defensive investments that consistently aid faster recovery and why the time to figure out communication workflows is not during an active breach.
Vishwa: Can you walk us through a recent incident you studied, and explain what the most important missed detection or control failure was?
John: One incident that stands out involved a compromised developer credential, GitHub, a CI/CD pipeline, and the production environment. The organization had all the "right" tools - SIEM, EDR, CSPM - but each only saw fragments of the attack.
The critical missed detection was connecting the dots across the environment - activity from the control plane, to inside a running container, back to the CI/CD role and ultimately GitHub. The SIEM logged the API calls, the CSPM discovered the new infrastructure, but no tool connected it back to the compromised GitHub identity.
The adversary used legitimate CI/CD infrastructure to deploy a modified container image that established persistence through service account manipulation.
What made this incident particularly instructive was that every individual event looked normal in isolation.
- A developer approving a PR? Normal.
- CI/CD assuming roles? Normal.
- New service accounts in production? Happens regularly.
The pattern only became obvious when you could see the complete identity chain - from the initial GitHub compromise through the entire attack sequence.
This is exactly why we built ClearVector with identity attribution as the foundation. Traditional tools are designed to find "known bad" indicators, but modern production infrastructure attacks abuse legitimate identities and workflows.
You need to understand who did what across your entire stack, not just see that something happened inside one silo.
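To make the silo problem concrete, here is a minimal sketch of walking one identity chain across log sources that normally never meet. This is not ClearVector's implementation; the event fields, identity names, and actions are all invented for illustration:

```python
# Sketch: stitching one identity chain across siloed logs.
# All field names and events are hypothetical; real sources would be
# GitHub audit logs, CI/CD logs, and cloud API trails.

events = [
    {"source": "github", "actor": "dev-alice", "action": "pr_approved",
     "grants": "ci-deploy-role"},
    {"source": "ci", "actor": "ci-deploy-role", "action": "assume_role",
     "grants": "prod-deployer"},
    {"source": "cloud", "actor": "prod-deployer", "action": "create_service_account",
     "grants": "svc-persist"},
    {"source": "cloud", "actor": "svc-persist", "action": "read_s3_bucket",
     "grants": None},
]

def identity_chain(root, events):
    """Follow actor -> granted-identity links starting from `root`."""
    chain, current = [], root
    while current is not None:
        step = next((e for e in events if e["actor"] == current), None)
        if step is None:
            break
        chain.append((step["source"], step["actor"], step["action"]))
        current = step["grants"]
    return chain

# Each event looks normal in its own silo; the chain is what stands out.
for source, actor, action in identity_chain("dev-alice", events):
    print(f"{source:7} {actor:16} {action}")
```

Each tool in the story above saw exactly one row of this output; only the joined chain reveals that a PR approval ended in a persistent service account reading S3.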
Vishwa: In breaches you’ve seen, how often did human factors, including poor admin hygiene, help-desk weaknesses, or social engineering play a larger role than technical flaws?
John: I'd estimate 70-80% of recent successful breaches have a significant human-factor component - but not in the way most people think. It's rarely solely about the technical exploitation of a vulnerability.
In my experience, the attacks we see target the intersection of human behavior and production infrastructure.
For example:
- Credential theft for an administrator via a “look-alike” IdP page
- Hiring a remote employee that turns out to be an adversary
- Developers storing AWS credentials in GitHub repos
- Third-party vendors granted broad access that nobody tracks
- Help desk employees with excessive IAM privileges who are socially engineered
- Credential sharing between team members "to move faster"
What's changed is that these human mistakes now have immediate, automated consequences in production environments.
A developer's stolen laptop used to be an endpoint problem - now an adversary with credentials from that laptop can spin up 50 Bitcoin mining instances in minutes or steal terabytes of data from S3 in seconds.
Another emerging issue is that identity is being treated as a compliance checkbox rather than a runtime security control.
Organizations invest heavily in vulnerability scanning but can't answer basic questions like: Which third-party vendors accessed production last week? Who approved the PR that deployed this backdoor into production?
This is why we’re focused on identity-driven security. We're not trying to change human behavior - we've built software that understands identity activity in real-time and stops misuse immediately, whether it's intentional or accidental.
Vishwa: When attribution is later attempted, what non-technical evidence, such as timing, victim selection, or extortion style, proves most helpful to investigators?
John: The adversary is human and operates similarly to any of us. We like to do things a certain way, and they are no different. So if you think about an adversary "assuming" someone else's identity, there are absolutely going to be "tells", just like in a game of poker.
These behavioral patterns are often more reliable for attribution than technical artifacts because sophisticated adversaries can mask their infrastructure and avoid triggering signatures, but they can't easily change their operational habits.
At the end of the day, the adversary has a mission, and it usually ends with data theft, establishing persistence, moving laterally, destroying or modifying data, etc.
One of the interesting trends I see at ClearVector is that the rate of re-use of technical indicators from one breach to another is very low - near zero in many targeted production environments.
This means traditional threat intelligence is not as useful in production environments, and you need to use your own data to protect yourself. This is exactly why we built ClearVector with identity modeling at its core. Our platform models, learns, and makes predictions about every identity in your environment - humans, machines, third-parties, and AI.
When someone uses credentials in a way that doesn't make sense, we surface this immediately. It's like having a poker dealer who knows exactly how each player typically bets and can spot when someone else is “playing their chips” (unconscious, hard-to-control handling of the chips).
The technical indicators might look clean, but the behaviors reveal the truth.
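One way to picture those tells: compare an identity's current activity against a learned baseline of its habits. This is a toy sketch with invented data, not a real model - a production system would learn the baseline from months of audit logs:

```python
# Sketch: spotting behavioral "tells" by comparing current activity
# to a per-identity baseline of habits. All data here is hypothetical.
from collections import Counter

# Baseline habits for one identity: (API action, UTC hour) -> count seen.
baseline = Counter({
    ("git_push", 14): 120, ("git_push", 15): 95,
    ("assume_role", 14): 60, ("list_buckets", 15): 30,
})

def tells(baseline, recent):
    """Return recent (action, hour) pairs never seen in the baseline."""
    return [obs for obs in recent if baseline[obs] == 0]

# A familiar action at a familiar hour, plus two never-before-seen moves at 3 AM.
recent = [("git_push", 14), ("create_access_key", 3), ("list_buckets", 3)]
print(tells(baseline, recent))  # → [('create_access_key', 3), ('list_buckets', 3)]
```

The credentials are valid either way; what gives the adversary away is that this identity has never created access keys, and never works at 3 AM.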
Vishwa: Are there repeatable lessons around incident response communication, for example, how SOCs, legal, and PR should coordinate that you’d recommend based on real cases?
John: Absolutely - the old adage "practice makes perfect" applies here more than anywhere else in security. The time to figure out communication workflows is not during an active breach.
Regular cross-functional tabletop exercises are non-negotiable - and I don't mean perfunctory annual exercises where everyone reads from a script.
I mean realistic scenarios informed by actual incidents - with your SOC, legal, PR, engineering leadership, and executives in the room, working through the messy tensions that happen in real breaches.
For example:
- A developer’s GitHub credentials were compromised 72 hours ago and used to deploy malicious code.
- Legal is asking questions
- Your CEO wants to know if customer data was accessed
- And the adversary just contacted you saying your data will go live on the dark web in 3 hours
- What do you do in the next 30 minutes?
The specificity forces real decisions and reveals gaps in accountability, technical visibility, and communication.
Further, complement tabletops with technical red team exercises that test whether your security controls actually work.
- Have your red team focus on identity-driven attacks:
- Compromise a developer credential
- Use legitimate CI/CD pipelines to deploy backdoors
- Abuse third-party vendor access
- Create persistence through service accounts
These exercises should answer critical questions:
- How long until we detected it?
- Could we identify the compromised identity?
- Could we isolate it without breaking production?
- Did we have complete visibility into what was accessed?
Most organizations discover they can detect "something happened" but can't quickly answer:
- Who did it?
- Is this expected?
- Do they still have access?
Vishwa: From the attacks you’ve reviewed, what defensive investments, including visibility, backup strategy, and identity controls, delivered the clearest return during recovery?
John: Generally speaking, there are three that come to mind:
- Backups
- Logging
- Business continuity or resilience
First, almost everyone has backups.
Very few organizations have tested, isolated, immutable backups - and that difference is extremely important during ransomware incidents.
Organizations that recover quickly have backups that adversaries can’t delete or encrypt because they are truly immutable and tested for successful restoration.
But here's what most miss:
- Having immutable, tested backups isn't enough - if an adversary can steal credentials that reach your backups, they can still access your data.
- Understanding who is accessing your backups - and who can - is just as important.
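As a concrete example of the immutability half, here is a hedged sketch using S3 Object Lock in compliance mode on a backup bucket (the bucket name is hypothetical, and the default us-east-1 region is assumed). Compliance-mode retention cannot be shortened or removed by any identity, including the account root user, until it expires:

```shell
# Hypothetical bucket name; Object Lock must be enabled at creation time.
aws s3api create-bucket \
    --bucket example-backup-vault \
    --object-lock-enabled-for-bucket

# Default retention: every backup object is undeletable for 30 days.
# COMPLIANCE mode means no credential - stolen or not - can override it.
aws s3api put-object-lock-configuration \
    --bucket example-backup-vault \
    --object-lock-configuration '{
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}
    }'
```

This addresses deletion and encryption; the second point above - knowing who is reading the backups - still requires monitoring access to the bucket itself.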
Second, the first question at the start of most incidents is “where are the logs?”
This simple question aims to answer questions such as:
- When did this start?
- What did the adversary access?
- How did the adversary get in?
Organizations that have comprehensive logging within their own environment often find these logs have been deleted or tampered with - the adversary had the same access they did.
Organizations that recovered effectively had comprehensive log data completely outside their production environment, protected from tampering.
But here's the critical distinction:
- Even with log data outside of production, re-hydrating and analyzing raw data can take weeks
- Organizations with a purpose-built solution like our CloudDVR can provide answers in minutes.
And finally, most business continuity plans assume infrastructure failure -
"Our hyperscaler is down" or "our SaaS firewall provider is down."
Very few account for identity compromise, which is how modern attacks actually succeed.
The difference is crucial: you can't just fail over to a different environment if the adversary has credentials that work there too.
Organizations that maintain business operations during incidents have continuity plans specifically designed for identity compromise scenarios.
These organizations have pre-established processes to rapidly rotate credentials and maintain isolated administrative access that does not rely on potentially compromised identity providers.
Vishwa: Finally, based on recent attack postmortems, what single change would you prioritize for most enterprises to reduce their risk in the next 12 months?
John: Once you have the basics covered, implement real-time, identity-driven visibility for your production environment. If you already have this, expand to isolation and containment.
In most cases,
- The adversary didn't exploit a zero-day - they used stolen developer credentials.
- They didn't find a novel vulnerability - they moved laterally using legitimate machine identities.
- They didn't deploy sophisticated malware - they used cloud-native tools with compromised service accounts.
Make sure you can answer basic questions quickly, such as:
- Which third-party vendors accessed our environment last week?
- Who approved the GitHub PR that deployed this malicious code?
- When did this developer credential start behaving abnormally?
- What did this compromised CI/CD role actually access?