How an Attack Hid in Encrypted Traffic and Evaded Traditional Security
Question: Can you walk us step-by-step through a real attack your system detected, from the first observation to the final alert, and what made it possible to catch it ahead of traditional tools?
Mayank Kumar, Founding AI Engineer at DeepTempo
Gen-AI gave attackers cheaper campaigns and faster iteration. Defense hasn’t caught up.
The LogLM flagged it first. A cluster of activity from an employee’s phone on a BYOD program, mapped to command and control with high confidence, then a second alert a minute later for exfiltration from the same device.
Nothing else in the stack had said anything. The next-generation firewall in the path was passing the traffic clean:
- session opened
- policy allowed it
- every security profile applied as configured
- event log empty
This was exactly the kind of moment the LogLM was built for - a foundation model that flags behaviors as they emerge across a timeline of events, not as they match a known pattern.
We dug deeper into why the next-generation firewall could not see it, and found:
- The command channel was fixed-size POST, leaving the device every few seconds, riding 443 alongside every other legitimate web and SaaS session on the network.
- The exfiltration was a live audio and video stream from the same device going to Agora - a real-time communications platform thousands of consumer apps already use.
- The destination was allow-listed.
- The traffic was encrypted.
- The bytes were shaped like an ordinary video call, because it was a video call, but one party was never consulted.
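A cadence like that is detectable from flow metadata alone. Here is a minimal sketch of the idea, not DeepTempo's actual detector; the flow records, field layout, and thresholds are hypothetical:

```python
from statistics import pstdev, mean

# Hypothetical flow records: (timestamp_seconds, dest_port, payload_bytes).
# A fixed-size POST leaving every few seconds looks like this.
flows = [(0.0, 443, 512), (4.1, 443, 512), (8.0, 443, 512),
         (12.2, 443, 512), (16.1, 443, 512), (20.0, 443, 512)]

def looks_like_beacon(flows, max_size_jitter=16, max_interval_cv=0.1):
    """Flag a flow series whose payload sizes are near-constant and
    whose inter-arrival times are suspiciously regular."""
    sizes = [f[2] for f in flows]
    times = sorted(f[0] for f in flows)
    intervals = [b - a for a, b in zip(times, times[1:])]
    size_jitter = max(sizes) - min(sizes)
    # Coefficient of variation of the gaps: human-driven traffic is
    # bursty (high CV); a timed implant check-in is metronomic (low CV).
    interval_cv = pstdev(intervals) / mean(intervals)
    return size_jitter <= max_size_jitter and interval_cv <= max_interval_cv

print(looks_like_beacon(flows))  # → True for this metronomic series
```

The point is that neither field is inspected for content; the signal lives entirely in the shape of the sequence, which is exactly what payload encryption does not hide.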
Agora is just an example. The same pattern works with anything allow-listed on standard encrypted ports:
- Cloudflare Workers
- Discord
- Telegram
- Zoom
- GitHub
The whole game is picking a destination that the enterprise can't block without breaking the business, on a port that's encrypted by default. Once you do that, your traffic looks like everyone else's.
- There's no malicious domain to block,
- no known indicator to match,
- no signature to write,
- no policy to violate, and
- even JA3/JA4 fingerprinting doesn't help, because the implant uses the same TLS stack as every other app on the device.
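Laid out as code, the failure mode is stark: every static control returns "allow" for this session. The checks, field names, and indicator values below are illustrative stand-ins, not real IOCs:

```python
# Illustrative: the static controls the attack had to pass, with
# hypothetical verdicts from this incident. None of the lookups fire.
def static_pipeline_verdict(session):
    domain_blocklist = {"evil.example"}                      # no malicious domain to match
    ioc_hashes = {"44d88612fea8a8f36de82e1278abb02f"}        # payload encrypted, hash unknown
    known_bad_ja3 = {"e7d705a3286e19ea42f587b344ee6865"}     # implant shares the OS TLS stack

    checks = [
        session["sni"] in domain_blocklist,    # allow-listed RTC platform
        session["payload_hash"] in ioc_hashes, # nothing to hash-match inside TLS
        session["ja3"] in known_bad_ja3,       # fingerprint identical to benign apps
        session["dst_port"] not in (443,),     # standard encrypted port
    ]
    return "block" if any(checks) else "allow"

session = {"sni": "agora-sdk.example",
           "payload_hash": "9b74c9897bac770ffc029102a200c5de",
           "ja3": "579ccef312d18482fc42e2b822ca2430",
           "dst_port": 443}
print(static_pipeline_verdict(session))  # → "allow": the traffic passes clean
```

Each check is individually reasonable; the attacker simply chose values that no static list will ever contain.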
Mobile makes this worse: a phone is a general-purpose computer in a pocket that can run SSH, Tor, WireGuard, or anything else, and most of the device fleet has no EDR equivalent at all.
WatchGuard's latest data has roughly 70% of malware moving through encrypted channels and evasive variants up 40% quarter-over-quarter. This is what those numbers look like at the wire.
The attacker spent nothing exotic to pull this off:
- a generative-AI subscription for the video lure,
- a malware-as-a-service kit for the implant,
- a free consumer streaming account for exfiltration.

No zero-day, no custom malware, no bespoke C2.
Generative AI didn't invent social engineering. What it did was collapse two things at once: the cost of running a campaign and the time it takes to iterate on one. Sift’s Q2 2025 Digital Trust Index reported GenAI-enabled scams up 456% in the year ending April 2025.
A lure that doesn't convert can be regenerated in minutes with a different voice, a different face, a different pretext.
A C2 cadence that gets flagged in one environment can be retuned and redeployed before the defender finishes writing the rule.
An implant that gets caught can be rebuilt overnight. The attacker is now running the same fast-feedback loop product teams run, except the product is a successful compromise. Cost reduction expanded who could attack.
The speed of adaptation changes how fast they get better at it. Mobile threats went from roughly 5 million detected events in 2019 to something like 75 million in 2025, and the rate keeps accelerating because the iteration cycle keeps shrinking.
What surfaced this attack wasn’t packet content, nor pattern matching against known attacks. It was sequence behavior. Our LogLM is a foundation model trained on billions of logs. It looks at long windows of activity together and asks whether the trajectory matches what normal looks like in that context.
- The fixed-size POSTs landed in a tight cluster in embedding space that legitimate apps just don't occupy.
- The streaming session looked like a video call, but the directionality and session structure didn't match a human on the other end.
- Two independent behavioral signals: neither one sufficient alone, both visible to a model that learned what normal sequences look like instead of what known-bad ones look like.
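The embedding-space intuition can be sketched with a toy nearest-centroid anomaly score. This is a stand-in for the idea, not the LogLM itself; the 2-D vectors and the 3x threshold are invented for illustration:

```python
import math

# Toy stand-in: embed each window of events as a vector, learn the
# centroid of "normal" windows, and score new windows by distance.
# These 2-D vectors are hand-picked illustrations, not real embeddings.
normal_windows = [(0.9, 1.1), (1.0, 1.0), (1.1, 0.9), (0.95, 1.05)]

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def anomaly_score(window, center):
    # Euclidean distance from the learned center of normal behavior.
    return math.dist(window, center)

center = centroid(normal_windows)
# Arbitrary illustrative threshold: 3x the worst distance seen in training.
threshold = 3 * max(anomaly_score(w, center) for w in normal_windows)

# A window of fixed-size-POST beaconing lands far from everything normal.
beacon_window = (4.0, -2.5)
print(anomaly_score(beacon_window, center) > threshold)  # → True
```

The real model works over long event sequences in a high-dimensional space, but the mechanism is the same: the attack is caught not because its features match anything known-bad, but because they sit nowhere near anything known-normal.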
Here's the uncomfortable part. Defense built around static artifacts - domains, hashes, signatures - assumes the attacker iterates on a human timeline. Attackers no longer do.
The leverage we still have is the behavioral signature of post-compromise activity, because changing what an attack does on the network is harder than changing what it looks like.
Anyone still budgeting for more inline inspection, more domain blocklists, and more IOC feeds is racing an attacker who can retool faster than the rules can be written. The organizations getting ahead of this are rebuilding detection around behavior over sequences of events, not signatures over packets.