
David Norlin, Chief Technology Officer at Lumifi Cyber, speaks to TechNadu about modern threat detection, alert fatigue management, and how low-severity vulnerabilities can become high-risk when ignored. We asked about securing ephemeral infrastructure during large public events, how to distinguish real post-exploitation signals, and why telemetry is only half the story.
Norlin previously served as a Network Administrator in the U.S. Air Force and later as a Cybersecurity Analyst at Kratos SecureInfo, where he specialized in network traffic analysis and incident response.
In this conversation, Norlin laid out why field-grade physical setups need more scrutiny, how AI alters vulnerability assumptions, and where human decision-making still wins.
Read this interview for a serious look at hybrid environments, SOC responsiveness, and next-gen attack surface visibility.
Vishwa: Ahead of the World Cup, what threat modeling principles should organizers adopt early? Are you seeing patterns emerge, like one for public-facing risk surfaces and another for internal logistics and credential flow?
David: Organizations should start by assessing what normal looks like on a day-to-day basis. Establishing a baseline makes outliers visible, and from there, periods of abnormality can be identified. This is especially important during events, when that day-to-day data becomes invaluable for spotting unusual traffic patterns.
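To make the baselining idea concrete, here is a minimal sketch of flagging hours whose traffic volume departs from a rolling baseline. It assumes hourly byte counts are already being collected; the window length and z-score threshold are illustrative choices, not a description of Lumifi's tooling.

```python
import pandas as pd

def flag_traffic_outliers(hourly_bytes: pd.Series,
                          window: int = 24 * 14,
                          z_threshold: float = 3.0) -> pd.Series:
    """Flag hours whose traffic volume deviates sharply from a rolling baseline.

    hourly_bytes: time-indexed series of total bytes observed per hour.
    window: how much history forms the baseline (default: two weeks of hourly samples).
    z_threshold: how many standard deviations from the baseline counts as abnormal.
    """
    baseline_mean = hourly_bytes.rolling(window, min_periods=window // 2).mean()
    baseline_std = hourly_bytes.rolling(window, min_periods=window // 2).std()
    z_scores = (hourly_bytes - baseline_mean) / baseline_std
    # True where the hour is an outlier versus recent history
    return z_scores.abs() > z_threshold
```

During an event, the same comparison against pre-event history is what separates expected surges (crowd Wi-Fi) from traffic that genuinely has no precedent.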
For public events, I recommend understanding how data flows through the entire infrastructure, and this usually includes looking at things that are not often considered. In many cases, local internet providers are deploying custom infrastructure that is temporary in nature, perhaps in small mobile closets, trailers, or other ephemeral setups.
Security of this infrastructure starts with physical security. From there, security personnel need to understand what type of traffic is carried over which circuits and rank criticality accordingly. Some traffic volumes may be massive and comprise end-user Wi-Fi and mobile activity, while other data may be critical for event broadcasting and point of sale.
Cybersecurity teams need to understand all of it, and that understanding can only be gained by bringing all responsible parties together.
Vishwa: Without going into specific technical mechanisms, what containment thresholds or triggers help Security Operations Center (SOC) teams stop lateral movement?
David: Setting technical mechanisms aside, the best response starts with clearly delineated procedures. We don't want an analyst having to invent a new procedure in the moment.
This is why predefined rules of engagement and prior authorization are critical to a strong SOC response. The SOC needs to know ahead of time where they can respond and to what degree. From there, regardless of the response action, the individual analyst is empowered to proceed.
Vishwa: What technical signals help Lumifi distinguish genuine post-exploitation from automated pen-testing traffic? Could you walk us through a step-by-step sequence that reveals this in a real environment?
David: Automated testing typically contains artifacts that help identify it post-test. These may be very literal, such as specific vulnerability identifiers (CVEs) embedded in a request header, a contractual engagement name, or even the name of the tester. These tags and identifiers vary widely. What usually tips us off that a manual test or genuine post-exploitation activity is in play is that such artifacts are either non-existent or heavily obfuscated.
Request and response contents are obfuscated, and some indicators may be entirely novel, seen for the first time: traffic to or from sources and destinations that have never exchanged traffic before. That's a strong indication that something unique is taking place.
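As an illustration of both halves of that answer, here is a rough sketch of checking request metadata for tester artifacts and flagging never-before-seen destinations. The marker strings, header handling, and "known destinations" set are hypothetical placeholders, not Lumifi's detection logic.

```python
import re

# Hypothetical markers an authorized scan might leave behind: CVE identifiers
# pasted into headers, common scanner names, or an agreed engagement tag.
PENTEST_MARKERS = [
    re.compile(r"CVE-\d{4}-\d{4,7}", re.IGNORECASE),        # CVE IDs embedded in headers
    re.compile(r"nessus|nuclei|burpsuite", re.IGNORECASE),  # common scanner names
    re.compile(r"ENGAGEMENT-2025-EXAMPLE", re.IGNORECASE),  # agreed engagement tag (example)
]

def looks_like_authorized_scan(request_headers: dict[str, str]) -> bool:
    """Return True if any header carries an artifact typical of automated, authorized testing."""
    blob = " ".join(f"{k}: {v}" for k, v in request_headers.items())
    return any(marker.search(blob) for marker in PENTEST_MARKERS)

def is_novel_destination(dst_ip: str, known_destinations: set[str]) -> bool:
    """A destination that has never appeared in historical traffic deserves a closer look."""
    return dst_ip not in known_destinations
```

Traffic that carries none of the tagged artifacts and is also talking to a destination with no history is exactly the kind of activity the answer describes as warranting human investigation.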
Vishwa: Triage quickly sorts alerts by severity and relevance. Lumifi triages thousands of security events daily. What logic-based or ML-driven techniques are proving most effective in eliminating noisy false-positive patterns?
David: False positive reduction is both art and science. The best, most tangible results are almost always undertaken in coordination with the client. Understanding a client’s normal day-to-day picture of activity is crucial.
False positives will always exist, especially when it comes to authorized users performing admin activity, and that pattern of behavior does not translate from one organization to another. What it really takes is a smart engineer sitting down with their privileged-admin counterpart in the client organization and figuring out whether the latest series of alerts was useful.
Some UEBA (user and entity behavior analytics) models are especially good at making this easier, but even then, they need assistance from a human analyst and cannot tune themselves automatically. They can establish a baseline and identify deviations, but in my experience, they cannot always judge the qualitative nature of a baseline the way an experienced analyst can.
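To make that human-in-the-loop point concrete, here is a hedged sketch of a per-account baseline that surfaces deviations but leaves the qualitative judgment to an analyst. The event schema and field names are illustrative, not drawn from any specific UEBA product.

```python
from collections import defaultdict

def build_account_baseline(events: list[dict]) -> dict[str, set[str]]:
    """Record which actions each account normally performs, from historical events
    shaped like {'account': 'svc-backup', 'action': 'read_share'} (illustrative schema)."""
    baseline: dict[str, set[str]] = defaultdict(set)
    for event in events:
        baseline[event["account"]].add(event["action"])
    return baseline

def deviations_for_review(new_events: list[dict],
                          baseline: dict[str, set[str]]) -> list[dict]:
    """Flag actions an account has not performed before; an analyst still decides
    whether the deviation is benign admin work or something worth escalating."""
    return [e for e in new_events
            if e["action"] not in baseline.get(e["account"], set())]
```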
Vishwa: In mid-sized environments, alert fatigue is taking a toll on SOC analysts, slowing down prioritization. Are there practical decisions like turning off certain logging sources or shortening data retention windows that can help reduce alert fatigue while still preserving key visibility?
David: Turning off certain log sources is rarely a fix – what’s needed is understanding which logs are important. Firewalls, for example, are notoriously noisy and verbose. They provide excellent data, but there are often entire categories of logs that have little to no value from a practical detection perspective.
Low-severity "blocked" events are a good example: they are evidence the firewall is doing its job, and no analyst in the current era of cybersecurity operations is going to spend their day reviewing individual alerts that amount to someone "knocking on the door" and being turned away.
What might be far more productive is looking at high-severity events, or, more granular still, certain types of allowed activity to abnormal destinations, perhaps with other flags identifying that session's peculiar nature. This is just one example, but the point is that being more discerning about log types goes a long way toward separating the wheat from the chaff. As for log retention, shortening retention windows doesn't usually do anything to reduce false positives.
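A toy sketch of that kind of triage filter, assuming normalized firewall events with severity, action, and destination fields; the field names, severity labels, and the "abnormal destination" check are illustrative placeholders.

```python
def worth_analyst_attention(event: dict, known_destinations: set[str]) -> bool:
    """Drop the noisy 'door knock' events and keep the ones worth a human's time.

    event is assumed to look like:
    {'severity': 'low', 'action': 'blocked', 'dst_ip': '203.0.113.10', 'flags': ['long_session']}
    """
    # Low-severity blocked traffic is the firewall doing its job; skip it.
    if event["action"] == "blocked" and event["severity"] == "low":
        return False
    # High-severity events are always worth a look.
    if event["severity"] in ("high", "critical"):
        return True
    # Allowed sessions to destinations we have never seen before.
    if event["action"] == "allowed" and event["dst_ip"] not in known_destinations:
        return True
    return False
```

The point of the sketch is that the culling happens at the filtering layer, not by switching off the log source itself, so full visibility is preserved for investigations.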
Vishwa: As threat actors mimic legitimate admin behavior, what telemetry beyond IP and login time is most useful? How are indicators like access velocity, keystroke sequence, or session timing aiding in insider threat detection?
David: We often talk about the "attack chain": a sequence of events occurring in close succession that, taken together, identify something suspicious, especially when they don't fit the norm. One thing that's often a good indicator of suspicious behavior is a successful login followed by unexpected mapping of shares or access requests to dozens, hundreds, or even thousands of file resources.
This is especially odd when undertaken by an account that has existed for some time, and it suggests the account is behaving out of character.
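A minimal illustration of correlating that kind of chain, assuming a normalized event stream with timestamps, accounts, and event types. The window length, access threshold, and field names are placeholders, not Lumifi's detection logic.

```python
from datetime import timedelta

def mass_access_after_login(events: list[dict],
                            window_minutes: int = 15,
                            access_threshold: int = 100) -> list[str]:
    """Return accounts whose successful login is followed by an unusually large
    number of file-share accesses within a short window.

    Each event is assumed to look like:
    {'timestamp': datetime(...), 'account': 'jdoe', 'type': 'login_success' or 'share_access'}
    """
    suspicious = []
    logins = [e for e in events if e["type"] == "login_success"]
    for login in logins:
        window_end = login["timestamp"] + timedelta(minutes=window_minutes)
        accesses = [
            e for e in events
            if e["type"] == "share_access"
            and e["account"] == login["account"]
            and login["timestamp"] <= e["timestamp"] <= window_end
        ]
        if len(accesses) >= access_threshold:
            suspicious.append(login["account"])
    return suspicious
```

In practice the threshold would be set relative to each account's own baseline, which is where the earlier point about knowing "normal" comes back in.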
Vishwa: What is your prediction on how AI will be abused to evade detection during credential misuse?
David: AI has proven to be exceptional at chaining vulnerabilities for very effective exploitation paths. A single vulnerability, after being patched, is something we’d traditionally expect to move on from, and perhaps even low-severity vulnerabilities might be moved to the back burner in favor of criticals and highs – and that’s still a perfectly acceptable way of thinking about the massive, overwhelming task of vulnerability management.
But I think the greater challenge administrators will face is the possibility that enumeration of the environment lets an attacker find low-severity vulnerabilities that, when combined, form something much more dangerous and exploitable.
We have to advance our vulnerability management mindset toward looking at the entire "framework" of vulnerabilities extant in an environment. In isolation, a single vulnerability may not be worthy of concern, but as low- and medium-severity vulnerabilities stack up on a single endpoint or across the network, together they might form a far more concerning exploit path.
This is where I see the offensive use of AI really excelling in the next 12-18 months, and system administrators and cybersecurity operators need to be factoring this into their definition of “patched”.
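One way to start reasoning about that "framework" view is to group findings per host and flag hosts where low and medium findings accumulate. The sketch below is deliberately simplistic; the count and CVSS-sum thresholds are illustrative heuristics, not a substitute for real exploit-path analysis.

```python
from collections import defaultdict

def hosts_with_stacked_findings(findings: list[dict],
                                count_threshold: int = 5,
                                cvss_sum_threshold: float = 20.0) -> list[str]:
    """Flag hosts where individually unremarkable findings pile up.

    Each finding is assumed to look like:
    {'host': 'pos-terminal-07', 'cve': 'CVE-2024-0001', 'cvss': 4.3, 'severity': 'medium'}
    """
    per_host = defaultdict(list)
    for f in findings:
        if f["severity"] in ("low", "medium"):
            per_host[f["host"]].append(f)

    flagged = []
    for host, host_findings in per_host.items():
        total_cvss = sum(f["cvss"] for f in host_findings)
        if len(host_findings) >= count_threshold or total_cvss >= cvss_sum_threshold:
            flagged.append(host)
    return flagged
```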
Vishwa: What security cues or behavioral anomalies should our readers, whether in mid-sized teams or technical roles, be trained to never overlook, especially when working with live events, hybrid environments, or shared admin accounts?
David: There’s a real tendency and desire to cut corners for convenience’s sake. Deployed, on-site, temporary events are tough. Personnel surge up to meet the demand, and there’s a desire to make life “easier” by sharing passwords, not configuring devices securely, or leaving default passwords in place.
Unfortunately, attackers also like "easy," and by creating low-hanging fruit for them to attack, these poor practices increase the possibility of disastrous consequences. Wherever possible, treat deployed infrastructure as if it were permanent when it comes to security.
That means changing default passwords, configuring zero-trust network access, segmenting temporary networks properly, locking down operational WAPs, and so on. Especially where point of sale is concerned, security hygiene is critical, both to protect the vendors who help make an event possible and their customers, who make the event worthwhile for everyone.