In this interview, Nati Tal, Head of Research at Guardio, discusses how AI is reshaping cybercrime. It not only expands the number of people launching attacks, but also raises their overall sophistication.
Tal previously served the Israel Defense Forces, later directing R&D at Gita Technologies and leading research at ACE Labs, where he managed teams focused on exploitation, mobile, and OS internals.
He draws attention to AI blurring the signals that once helped distinguish human interactions from machine responses, social engineering, and layered security.
Vishwa: Generative AI can craft convincing personas at scale. What detection signals still reliably expose these synthetic identities before victims are drawn in?
Nati: As AI technology continues to advance, the signals that once helped us distinguish between human and machine interactions are rapidly fading. We’re reaching a point where it’s almost impossible to tell the difference.
That’s why the mindset for detecting deception shouldn’t depend on whether there’s a real person or an AI agent behind the keyboard. The same principles apply. In most cases, scams still rely on exploiting basic human impulses - greed, fear, or urgency. We need to remember that in the real world, nothing valuable comes for free.
You don’t suddenly win prizes out of nowhere, your computer doesn’t mysteriously develop a critical issue that requires instant technical support from Microsoft, and Big Tech companies rarely reach out unprompted with your “dream job” opportunity.
The old saying still holds true: “If it’s free, you’re the product.” Practicing everyday skepticism is essential. As generative AI makes it cheaper and easier to run large-scale manipulative operations, one AI agent doing the work of thousands of scammers, our best defense remains a sharp, questioning mindset.
Vishwa: Traditional anti-phishing relies on URL or domain flags. How must these controls adapt to catch AI-generated scams that dynamically morph infrastructure?
Nati: That statement, that traditional anti-phishing relies on URL or domain flags, is exactly what we at Guardio refer to as the “Security Gap.” Long before AI became the buzzword it is today, those legacy methods were already insufficient to stop modern scams and malicious activity.
They simply do not scale to the speed and sophistication of today’s threat landscape, whether it is targeted attacks on major tech organizations or widespread “spray campaigns” across social media, email, and SMS aimed at everyday users.
Protecting people online requires much more than static indicators. It demands proactive, layered defense that catches threats at the earliest possible point of detection and maintains multiple protective layers along the user journey to stop the attack before it causes harm.
At Guardio’s Security Division, we start by asking fundamental questions: How did the user get here? What was their original intent? What potential narrative might be exploited? These are the same questions every person should subconsciously ask before clicking a link. It is the foundation of digital skepticism.
Translating that mindset into real detection technology is what we do. To keep pace with the evolution of fraud in the AI era, we have also embraced AI ourselves, in other words, we fight fire with fire. By combining the expertise of top-tier security researchers and analysts with the power of machine learning and large language models, we stay several steps ahead of scammers.
It is an ongoing race, and it is only getting more complex. With the rise of AI-driven browsers like Comet and Atlas, not only is the content and phishing infrastructure AI-generated, but soon AI itself will be the one falling for scams while browsing on our behalf. The battleground is changing, but so are we.
Vishwa: Social engineering has always evolved with technology. In your view, what is fundamentally different about AI-driven manipulation compared to earlier scam techniques?
Nati: One of the most significant leaps AI has introduced is the removal of barriers that once protected certain professions and skills. Until recently, you needed trained professionals with years of experience to create effective marketing content, write persuasive copyright headline, or build software.
Today, anyone can do those things with the help of AI. You can instantly become a skilled copywriter, translate messages into any language while preserving local nuances, or even build a complete app without knowing what a function is. And yes, you can also start scamming and join the “dark side” of the internet almost instantly.
The second major leap AI brings is instant scale. Think about it: a teenager with no prior experience in online fraud can now act as an entire attack group, running multiple large-scale campaigns at once, stealing personal data, credit card information, and hijacking social accounts. The jump from zero to one is dramatically easier, and scaling from one to one hundred is even simpler when you have AI doing the heavy lifting.
This is how AI fundamentally changes the threat landscape. It not only expands the number of people capable of launching attacks, but also raises the baseline quality of those attacks. The era of spotting scams through broken English or poor translations is over. We are entering a phase where every phishing attempt can look polished, personal, and highly convincing from the very first message.
Vishwa: Many security awareness programs focus on click-prevention. What measurable training approaches actually help users recognize AI-powered persuasion attempts without overwhelming them?
Nati: When it comes to AI-driven social engineering, persuasion is the critical element where AI truly scales beyond human capabilities. An AI can study its targets continuously, identify psychological weak points, build trust over time, and craft perfectly customized narratives for each individual.
These are skills that, until now, required highly trained human operators and could rarely be executed at scale because of the cost and complexity involved. With AI, that limitation is gone.
This is why training cannot focus only on click-prevention anymore. AI-powered persuasion is not about a single moment of failure. It is a long game that can unfold over days or even weeks before reaching the actual “turning point.” Awareness programs must evolve to help employees recognize manipulation patterns early in the engagement, not just at the point of clicking a link.
The solution lies in creating security-oriented habits, not just isolated training events. Employees should be encouraged and equipped to flag any suspicious behavior at its earliest stage, ask the right questions, and quickly share potential concerns with their peers or the security team before an incident escalates. Security should feel like a continuous dialogue, not a one-time test.
Support and customer-facing teams deserve special attention. They are often overlooked in security training yet are on the front line of exposure. Attackers frequently target support channels with fake “clients,” injecting malicious content or even prompt-based manipulations to exploit internal AI systems that handle ticket classification or responses. This is an emerging threat vector that awareness programs must now include.
In short, the goal of training in the age of AI is not just to prevent clicks but to teach people how to recognize when they are being played. That requires updated processes, shared vigilance, and a culture where everyone participates in early detection.
Vishwa: Considering the rise of AI-driven social engineering, what cybersecurity tools and best practices would you recommend for both newcomers and expert practitioners to strengthen defenses?
Nati: As I mentioned earlier, awareness is the ultimate goal. Understanding that any communication from the outside world could be a potential fraud attempt, and continuously learning from real examples, forms the first and most important layer of defense.
However, we often forget one simple truth: we are human. Not machines. Our daily lives are overloaded with distractions, multitasking, and constant interruptions. Even the most security-aware person can make a mistake, overlook a small detail, or click on the wrong link.
That is why a layered security strategy is essential. Do not rely solely on your own caution or your employees’ awareness. Everyone needs at least one additional protective ring around them. This is exactly the gap we address.
Every browsing flow and online activity is continuously monitored, ensuring that users are protected in real time. If a potential threat is detected, the malicious action is stopped immediately.
For example, you might click on a link in a text message that just popped up on your screen, thinking it came from the email you just read. It happens more often than you think. Guardio is built to step in exactly at that moment, keeping your online experience safe and seamless.
Beyond detection, prevention is just as important. Guardio continuously scans your digital assets, including social media and online service accounts, for security configurations and known breaches, and provides proactive recommendations to secure them before attackers can exploit them.
And finally, education is part of the cycle. Every time we block a threat, users receive clear insights into what happened and why. Awareness grows with understanding, and the best security comes from learning through real events. Ultimately, technology protects you in the moment, but knowledge protects you for life.