Where Simulation Ends: Attackers Are Using Social Engineering to Target Employee Decision-Making Beyond Training Scenarios
- Le Coz notes that threat actors are innovating in ways that are becoming genuinely difficult to anticipate.
- Phishing tactics change constantly, and training that doesn't keep pace becomes a liability rather than a safeguard.
- Security awareness programs need to give organizations a complete picture of the threat landscape, not just the threats that were relevant eighteen months ago.
- Employees learn to spot the scenarios they've seen before, but they're unprepared for anything outside that playbook.
- According to Arsen Security, AI adds unprecedented realism to social engineering, yet the human layer remains one of the fastest defenses to strengthen.
Thomas Le Coz, CEO and Co-Founder of Arsen Security, points to a growing disconnect between how employees are trained and how attacks actually unfold. Security awareness training is failing in ways teams may not be measuring.
Before Arsen, Le Coz built and led digital transformation initiatives, working directly with organizations on user behavior and training systems.
Le Coz notes that threat actors are no longer relying on obvious payloads or clumsily written phishing emails. They are using pressure, conversation, and AI-generated realism to bypass both technical controls and learned user behavior.
If they can put enough time-sensitive or hierarchical pressure on someone and walk them through a carefully crafted attack, they don't need a clever payload. From QR code attacks to conversational social engineering, the underlying problem is outdated training that fails to prepare real-world responses.
This interview breaks down where security programs are going wrong, what employees actually do under pressure, and why the human layer is both the weakest link and the fastest defense.
Vishwa: When you simulate phishing or vishing attacks, what is the most common real mistake employees make that security teams consistently underestimate?
Thomas: The most underestimated factor is definitely that emotional, gut-level reaction that just overrides rational thinking. People don't think the same way under pressure. Even the most obvious red flags, like a suspicious domain or a weird sender, get completely ignored the moment urgency or authority enters the picture.
But that's not the only trigger. There are actually more mechanisms for putting someone in that irrational state than just pressure and authority. Curiosity, the lure of a reward, reciprocity: attackers use all of these. They'll offer advice, something exclusive, or make it feel personal, so the target feels like they owe something back.
A 2025 study found that only half of suspected SMS phishing messages even contained a malicious URL. The other half? Pure manipulation: conversation, trust-building, no link needed. If you can put enough time-sensitive or hierarchical pressure on someone, and walk them through a carefully crafted multi-step attack, you don't need a clever payload.
That combo (urgency plus authority) consistently short-circuits even well-trained employees.
Vishwa: Can you walk us through a simulation result that surprised your clients, even after they believed their awareness training was working?
Thomas: The most telling results happen when organizations move away from traditional simulations, like the classic credential-harvesting scenario, impersonating a well-known vendor like Microsoft.
The moment you introduce something slightly different, like a QR code attack, or a conversational scenario impersonating HR, click rates go through the roof. This is a clear sign of over-training on basic threat types, which is a direct consequence of the limitations of most awareness programs.
Employees learn to spot the scenarios they've seen before, but they're completely unprepared for anything outside that narrow playbook.
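For teams that want to fold novel vectors like QR codes into their own programs, the sketch below shows how a tracked QR lure could be generated with the open-source qrcode Python library. The landing domain and per-recipient token are hypothetical placeholders, not Arsen's tooling.

```python
# Minimal sketch: generate a QR lure for a phishing simulation.
# Requires the open-source "qrcode" library with Pillow support
# (pip install qrcode[pil]). The landing domain and token scheme
# below are hypothetical placeholders.
import qrcode

def build_simulation_qr(recipient_token: str, out_path: str) -> None:
    # A unique token per recipient lets scans be attributed to an
    # individual during the debrief, just like tracked email links.
    landing_url = f"https://simulation.example.com/qr?t={recipient_token}"
    img = qrcode.make(landing_url)
    img.save(out_path)

build_simulation_qr("a1b2c3", "hr_poster_qr.png")
```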
Vishwa: In your experience, which type of social engineering attack still succeeds despite repeated training, and why does it persist?
Thomas: Honestly, even the most basic attacks succeed, particularly those with a more human touch. Classic brand-impersonation phishing is being filtered more effectively at the technical level, so attackers have adapted: they keep the phishing framework but add a conversational layer to dramatically increase their success rates.
Generative AI has become a significant force multiplier for this approach. It enables attackers to produce phishing and business email compromise messages that sound polished, confident, and entirely legitimate, at scale.
Malicious language models can convincingly replicate a CEO's tone, slip past email filters, and generate waves of 'Urgent - Verify Your Account' messages with fake links, all without the broken spelling or obvious red flags that employees are trained to catch. When those traditional signals disappear, so does much of the learned vigilance.
Vishwa: What signals or behaviors do you look for to distinguish between trained compliance and actual user understanding?
Thomas: Simulation behavior tells you far more than any multiple-choice questionnaire. Theoretical training produces answers; real-world simulation produces behavior, and behavior is what matters when an attack lands. One of the most revealing metrics is the reporting rate.
Are people actively contributing to the organization's security posture, or are they simply ignoring threats they weren't trained to recognize? The goal isn't to produce employees who can pass a test. It's to build a structured, actionable defense that works within the specific operations, rules, and culture of each organization.
That means asking honest questions about your current posture, identifying the gaps in your awareness program, and deploying the right simulations, playbooks, and training, before attackers find those gaps first.
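To make the reporting-rate signal concrete, here is a minimal sketch of how click rate and reporting rate could be computed from raw simulation events. The event schema is an illustrative assumption; real platforms record far richer telemetry.

```python
# Minimal sketch: click rate vs. reporting rate from simulation events.
# The event schema (recipient, clicked, reported) is a hypothetical
# simplification of what a phishing-simulation platform records.

def simulation_metrics(events: list[dict]) -> dict:
    total = len(events)
    if total == 0:
        return {"click_rate": 0.0, "reporting_rate": 0.0}
    clicked = sum(1 for e in events if e["clicked"])
    reported = sum(1 for e in events if e["reported"])
    return {
        "click_rate": clicked / total,       # susceptibility signal
        "reporting_rate": reported / total,  # active-contribution signal
    }

events = [
    {"recipient": "alice", "clicked": False, "reported": True},
    {"recipient": "bob",   "clicked": True,  "reported": False},
    {"recipient": "carol", "clicked": False, "reported": False},
]
print(simulation_metrics(events))
```

A rising reporting rate alongside a stable click rate suggests genuine vigilance rather than learned avoidance of familiar templates.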
Vishwa: If a company wants to test its human layer realistically, what should it stop doing immediately, and what should it start doing instead?
Thomas: Stop running quarterly or bi-annual simulation blasts with two or three recycled scenarios. That approach creates pattern recognition, not genuine resilience. Simulations need to be spread out over time, using a broad variety of scenarios to avoid over-training and repetition. The cadence matters: at least once a month is the minimum to keep security top of mind.
And the content needs to evolve. Phishing tactics change constantly, and training that doesn't keep pace becomes a liability rather than a safeguard. The goal is an ongoing, dynamic program that keeps employees informed and genuinely engaged, while acknowledging that no one actually enjoys mandatory security training.
Making it relevant, timely, and varied is the closest you'll get to making it effective.
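As a rough illustration of that cadence, the sketch below rotates a varied scenario pool across a cohort on a monthly floor, with jittered send dates so simulations don't arrive as one recognizable blast. The scenario names, the 30-day floor, and the jitter window are assumptions for illustration, not a prescribed program.

```python
# Minimal sketch: spread varied scenarios over time instead of
# blasting one recycled template quarterly. Scenario names, the
# 30-day floor, and the jitter window are illustrative assumptions.
import random
from datetime import date, timedelta

SCENARIOS = ["credential_harvest", "qr_code", "hr_conversational",
             "vendor_invoice", "mfa_fatigue"]

def schedule(employees: list[str], start: date, months: int) -> list[tuple]:
    plan = []
    for m in range(months):
        for i, emp in enumerate(employees):
            # Rotate the pool so no one sees the same template twice
            # in a row, and jitter dates to avoid a telltale blast day.
            scenario = SCENARIOS[(m + i) % len(SCENARIOS)]
            send_day = start + timedelta(days=30 * m + random.randint(0, 27))
            plan.append((emp, scenario, send_day))
    return plan

for row in schedule(["alice", "bob"], date(2025, 1, 6), 3):
    print(row)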
Vishwa: Based on your work, where do current tools fall short in helping security teams detect or prevent social engineering attacks?
Thomas: The biggest blind spot is unmonitored communication channels. Instant messaging platforms tied to personal accounts (LinkedIn, WhatsApp, Facebook, Instagram…) are actively exploited for profiling, pretext-building, and executing attacks, yet they're nearly impossible to monitor or train against through practical simulation.
Beyond that, threat actors are innovating in ways that are genuinely difficult to anticipate. We're now tracking threat campaigns where fake job candidates make it all the way through interview rounds as a social engineering vector.
These personas can be partially or entirely synthetic:
- AI-generated resumes
- Scripted interview responses
- Cloned voices
- Manipulated live video
For security teams, that level of innovation demands alertness and reactivity. Effective security awareness programs have to ship fast, stay current, and give organizations a complete picture of the threat landscape, not just the threats that were relevant eighteen months ago.
Vishwa: Looking ahead, how do you see social engineering evolving with AI-assisted attacks, and what practical changes should organizations make now to prepare?
Thomas: AI adds scale, depth, and an unprecedented level of realism to social engineering. The good news is that the human layer is still one of the fastest defenses to update. You don't need to roll out new hardware or patch every endpoint; you need to shift how people think and respond.
Most security tooling is still built around a relatively straightforward model: detect a malicious file, block a malicious URL, catch a known exploit pattern. Social engineering, particularly AI-assisted social engineering, is increasingly designed to sidestep all of that. Modern threat actors use hybrid attack chains: an SMS followed by an email, an email followed by a vishing call, fake MFA flows layered on top.
These multi-step sequences are specifically engineered to move past standard defenses. Simulations need to replicate these sequences, not abstract them away. Attackers don't stop at the click. They want access, credentials, and ultimately the ability to deploy malware or exfiltrate data.
Training scenarios should include conversational elements that simulate AI-generated social engineering and test how employees respond under real pressure, not just whether they can recognize a suspicious URL in isolation.
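To show what replicating the sequence might look like in practice, the sketch below models a hybrid SMS-to-email-to-vishing chain as ordered steps. The channels, delays, and pretexts are illustrative assumptions rather than any vendor's scenario format.

```python
# Minimal sketch: a hybrid attack chain modeled as ordered steps.
# Channels, delays, and pretexts are illustrative assumptions; a
# real simulation would attach templates, targets, and telemetry.
from dataclasses import dataclass

@dataclass
class Step:
    channel: str        # "sms", "email", "voice", "mfa_push"
    delay_minutes: int  # wait after the previous step lands
    pretext: str        # the social-engineering hook for this step

# SMS -> email -> vishing call, mirroring the sequencing described above.
payroll_migration_chain = [
    Step("sms",   0,  "Heads-up from 'IT': payroll system migrating today"),
    Step("email", 15, "Follow-up linking to the 'new payroll portal'"),
    Step("voice", 45, "'Helpdesk' call walking the target through a fake MFA flow"),
]

for i, step in enumerate(payroll_migration_chain, 1):
    print(f"step {i}: {step.channel} after {step.delay_minutes} min: {step.pretext}")
```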