
In this interview, Bill Moore, Founder and CEO of Xona Systems, explains how critical infrastructure operators can adapt security for high-uptime environments where patching isn’t feasible. He details compensating controls like protocol isolation, JIT access, and multi-factor authentication that actually work in industrial settings.
Moore has led cybersecurity initiatives at FireEye, Varonis, Aruba Networks, and General Dynamics, driving federal defense strategies, advancing secure network architectures, and shaping alliances that strengthened critical infrastructure protection.
We asked how third-party risks can be minimized, what signals indicate lateral movement or privileged misuse, and which tools best protect remote OT access. He emphasized that access is the new perimeter and effective defenses must be designed for operational realities, not just IT assumptions.
Vishwa: Many critical infrastructure systems cannot be patched quickly. What compensating security measures work best in these high-uptime environments?
Bill: You're right: patching in operational technology (OT) environments is often unrealistic. Unlike their IT counterparts, OT systems often can’t just “patch and reboot.” These systems run power grids, water treatment plants, pipelines, and more, and they’re built for uptime, not rapid iteration. So traditional security strategies must adapt. Instead of pushing patches, we focus on secure-by-design compensating controls.
The most effective compensating controls in these environments focus on isolating access, not hardening endpoints. Why? It’s our assertion that user endpoints are the #1 threat vector for critical infrastructure environments. Eighty-eight percent of industrial sites identify remote services as their most significant cybersecurity risk, and remote services have been observed as the primary attack vector in 71% of major cyber-attacks on OT environments.
So, what’s the best approach to isolating user endpoints? It starts with protocol isolation, that is, proxying access to industrial systems without establishing a direct network path from the user’s device. That way, even if an endpoint is compromised, it can’t interact with critical systems at the network level.
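To make that concrete, here is a minimal sketch of the brokered-access idea. The PlcSession stand-in and the action allowlist are purely illustrative, not any specific product’s API; the point is that the protocol operation happens on the broker, never on the user’s endpoint.

```python
# Minimal sketch of brokered, protocol-isolated access. The user's client
# never opens a network path to the PLC; it submits a vetted action to the
# broker, and only the broker speaks the industrial protocol.

class PlcSession:
    """Stand-in for a broker-side industrial protocol client (hypothetical)."""
    def read_tag(self, tag: str) -> float:
        return 42.0  # placeholder: a real broker would query the device here

ALLOWED_ACTIONS = {"read_tag"}  # narrow allowlist; no raw protocol pass-through

def broker_request(session: PlcSession, action: str, args: dict) -> dict:
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not brokered")
    # The protocol operation happens here, on the broker, never on the endpoint.
    return {"value": session.read_tag(args["tag"])}

print(broker_request(PlcSession(), "read_tag", {"tag": "FIC-101.PV"}))
```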
Another key control is just-in-time (JIT) access, where privileges are granted only for a specific task and time window. This eliminates standing access and reduces the risk window dramatically.
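A minimal sketch of that time-window logic, with illustrative names (a real broker would persist grants and revoke live sessions on expiry):

```python
# Just-in-time access sketch: a grant is scoped to one task target and a
# fixed time window, and the check fails closed once the window ends.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class JitGrant:
    user: str
    target: str          # the one system this grant covers
    expires_at: datetime

def request_grant(user: str, target: str, minutes: int = 60) -> JitGrant:
    return JitGrant(user, target,
                    datetime.now(timezone.utc) + timedelta(minutes=minutes))

def is_allowed(grant: JitGrant, user: str, target: str) -> bool:
    # No standing access: identity, scope, and time must all match.
    return (grant.user == user and grant.target == target
            and datetime.now(timezone.utc) < grant.expires_at)

grant = request_grant("vendor-tech-01", "hmi-station-3", minutes=90)
assert is_allowed(grant, "vendor-tech-01", "hmi-station-3")
assert not is_allowed(grant, "vendor-tech-01", "plc-line-2")  # out of scope
```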
Combine that with multi-factor authentication (ideally hardware- or context-based) and session recording, and you can maintain accountability even in environments where patching is delayed for months.
Micro-segmentation and application-layer firewalls also help contain potential threats. And critically, session monitoring with real-time intervention, not just alerting, allows security teams to stop abnormal behavior as it happens.
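As a rough sketch of intervention rather than alert-only monitoring, assuming a simple session event stream and a terminate() hook (both hypothetical):

```python
# Each session event is checked against policy; a violation terminates the
# session immediately instead of merely raising an alert.

from typing import Optional

def check_event(event: dict) -> Optional[str]:
    if event.get("type") == "file_transfer":
        return "file transfer attempted in a restricted session"
    if event.get("command", "").startswith(("shutdown", "format")):
        return "destructive command issued"
    return None

def monitor(session_events, terminate):
    for event in session_events:
        reason = check_event(event)
        if reason:
            terminate(event["session_id"], reason)  # intervene, don't just alert
            break

monitor(
    [{"session_id": "s1", "type": "input", "command": "read FIC-101"},
     {"session_id": "s1", "type": "file_transfer"}],
    terminate=lambda sid, why: print(f"terminated {sid}: {why}"),
)
```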
Frameworks like IEC 62443 and NIST SP 800-82 support this layered defense model. It’s not about waiting to patch; it’s about designing resilient environments where unpatched systems aren’t easily exploitable.
Vishwa: Remote access to operational technology (OT) environments introduces unique risks. What controls most effectively reduce exposure when granting third-party access?
Bill: Third-party access is one of the most high-risk activities in OT environments. You’re allowing an external user, often with little visibility into your network architecture, to interact with critical systems. That’s a real problem if you're relying on traditional methods like VPNs or jump servers, which assume trust and expose too much.
To reduce exposure, the most important control is to decouple access from the network. That means eliminating direct connections and instead using brokered access, where the third party interacts with a system through an isolated, protocol-level gateway. This breaks the path for lateral movement and prevents malware on the user’s device from reaching your environment.
Next, access must be just-in-time, least-privilege, and fully auditable. That includes time-restricted sessions, granular permissions tied to roles, and automatic revocation once the task is done. Ideally, third parties never see or handle credentials, which are managed and injected securely behind the scenes.
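A sketch of that credential-injection pattern, with a stand-in vault lookup rather than a real secrets API, to show that the secret never reaches the vendor:

```python
# The vendor authenticates to the broker as themselves; the broker retrieves
# and uses the target credential on their behalf. The lookup and connect()
# hook are illustrative stand-ins, not a specific product's interface.

import os

def lookup_secret(target: str) -> str:
    # Stand-in for a vault call; secrets never leave the broker process.
    return os.environ.get(f"SECRET_{target.upper().replace('-', '_')}", "demo")

def open_vendor_session(vendor_id: str, target: str, connect):
    target_secret = lookup_secret(target)      # fetched broker-side only
    session = connect(target, target_secret)   # injected, never shown to vendor
    session["on_behalf_of"] = vendor_id        # audit trail keeps real identity
    return session

s = open_vendor_session(
    "acme-vendor", "rtu-7",
    connect=lambda target, secret: {"target": target, "authenticated": True})
print(s)
```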
One often-overlooked safeguard is moderated session access: the ability for your team to observe or terminate a session in real time. This is especially critical in regulated industries where operational integrity is non-negotiable.
In short: never give away the keys. Provide only a controlled, temporary workspace. That’s how you enable secure collaboration without compromising the perimeter or the process.
Vishwa: Session isolation is central to securing privileged access. What implementation pitfalls do you see that weaken isolation in practice?
Bill: Session isolation sounds straightforward, but in practice, it's often implemented in ways that look secure on paper but fall apart under scrutiny, especially in OT environments.
One of the most common pitfalls is failing to fully sever the control path between the user and the target system. If a user’s endpoint still has a direct network route, even through a proxy or jump server, that’s not true isolation. Malware or rogue tools on the user’s machine can still execute commands or scan internal assets.
Another mistake is relying solely on visual isolation, for example, screen sharing or remote desktop sessions that only obscure what's happening under the hood. If clipboard access, file transfers, or drive mappings are still enabled, that’s a massive vulnerability. Attackers can exfiltrate data or plant payloads silently.
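As one concrete check, a short script can audit Windows .rdp connection files for redirection settings that quietly re-open those channels. The property names are standard .rdp settings; the policy itself is illustrative:

```python
# Flag .rdp files whose clipboard, printer, or drive redirection settings
# undermine "visual only" isolation.

RISKY = {
    "redirectclipboard:i:1": "clipboard redirection enabled",
    "redirectprinters:i:1": "printer redirection enabled",
}

def audit_rdp(text: str) -> list:
    findings = []
    for line in text.strip().splitlines():
        line = line.strip().lower()
        if line in RISKY:
            findings.append(RISKY[line])
        if line.startswith("drivestoredirect:s:") and line != "drivestoredirect:s:":
            findings.append(f"drive mapping enabled ({line})")
    return findings

sample = """full address:s:10.0.5.20
redirectclipboard:i:1
drivestoredirect:s:*"""
print(audit_rdp(sample))  # two findings; both break visual-only isolation
```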
Also, many setups neglect to enforce role- and time-based access, which means privileged sessions are always “on” or overly broad in scope. That defeats the purpose of isolation, as it’s just a narrow tunnel with no guardrails.
And finally, insufficient auditing and session supervision weaken deterrence. If users know sessions aren’t actively monitored or recorded, the risk of insider misuse or third-party drift increases.
True session isolation means protocol isolation, credential abstraction, real-time oversight, and complete disconnection from untrusted endpoints. Anything less might feel secure, but it’s not enough when the stakes involve critical infrastructure.
Vishwa: Traditional multi-factor authentication (MFA) often fails in OT. What identity assurance methods actually hold up under industrial constraints?
Bill: It’s true: many MFA solutions that work well in corporate IT environments break down fast in OT. You can’t assume smartphone availability, cloud connectivity, or even consistent bandwidth in remote substations or offshore facilities. And when your technicians are wearing gloves or PPE, biometric prompts aren’t exactly usable.
What holds up under these conditions are offline-capable, hardware-based MFA methods. Think FIDO2 tokens, YubiKeys, or contactless smart cards: devices that can authenticate users without needing an internet connection or mobile push notification. These tools offer strong assurance and are well-suited to constrained, high-uptime environments.
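Conceptually, these tokens rest on an offline challenge-response: the verifier checks a signature over a fresh challenge against a pre-enrolled public key, with no network or push dependency. The sketch below (using the cryptography package) illustrates only that underlying pattern, not the actual FIDO2/CTAP exchange:

```python
# Offline challenge-response sketch: enrollment registers the token's public
# key; login verifies a signature over a random challenge. No connectivity
# or mobile device is required at authentication time.

import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Enrollment: the token's public key is registered with the gateway once.
token_key = ec.generate_private_key(ec.SECP256R1())  # lives on the token
enrolled_pub = token_key.public_key()                # stored by the verifier

# Login: verifier issues a random challenge; the token signs it locally.
challenge = os.urandom(32)
signature = token_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

try:
    enrolled_pub.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("identity asserted offline")
except InvalidSignature:
    print("authentication failed")
```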
Contextual or adaptive authentication can also play a role, especially when tied to known devices, fixed locations, or time-of-day policies. For example, access from a trusted control room may require less friction than a request coming from an unknown vendor on a weekend.
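A sketch of what such a step-up policy can look like; the subnets, device list, and thresholds are illustrative policy choices, not a standard:

```python
# Context-based step-up: known device + trusted location during working
# hours gets lower friction; each missing signal raises the bar.

TRUSTED_NETS = {"10.20.30."}      # control-room subnet prefix (example)
KNOWN_DEVICES = {"cr-console-01"}

def required_factors(request: dict) -> int:
    score = 0
    if request["device"] not in KNOWN_DEVICES:
        score += 1
    if not any(request["source_ip"].startswith(p) for p in TRUSTED_NETS):
        score += 1
    if request["hour"] < 6 or request["hour"] > 20:  # outside working hours
        score += 1
    return 1 if score == 0 else 2 if score == 1 else 3

print(required_factors(
    {"device": "cr-console-01", "source_ip": "10.20.30.5", "hour": 10}))  # 1
print(required_factors(
    {"device": "vendor-laptop", "source_ip": "83.4.1.9", "hour": 23}))    # 3
```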
Another essential method is identity binding to roles and systems. In OT, it’s more effective to authenticate based on who is accessing what, when, and why, not just who they are. This supports zero-trust principles and aligns with IEC 62443’s requirement for role-based access control.
The goal isn’t just to verify identity; it’s to do so in a way that doesn’t break existing workflows. Because in OT, failed access can be just as disruptive as failed security.
Vishwa: Lateral movement inside OT networks is particularly damaging. What early signals can reliably indicate that an attacker has breached segmentation boundaries?
Bill: In OT environments, lateral movement is often the moment where a small breach turns into a catastrophic one. But detecting it isn’t about spotting malware; it’s about spotting behaviors that violate the logic of how your systems are supposed to work.
One of the clearest early signals is protocol misuse, or protocols appearing where they don’t belong. If an engineering workstation suddenly initiates SMB or RDP connections to PLCs, or a remote user tries to access more than one subnet in a session, that’s a red flag. These environments are typically deterministic, so unusual traffic is a signal, not noise.
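Because the traffic is that deterministic, the baseline can often be written down as an explicit allowlist. A minimal sketch, with illustrative roles and protocols:

```python
# Baseline-driven flow checking: enumerate the allowed
# (source role, destination role, protocol) triples; anything else is a signal.

ALLOWED_FLOWS = {
    ("engineering_ws", "plc", "s7comm"),
    ("hmi", "plc", "modbus"),
    ("historian", "hmi", "opc-ua"),
}

def flag_flow(src_role: str, dst_role: str, protocol: str):
    if (src_role, dst_role, protocol) not in ALLOWED_FLOWS:
        return f"unexpected flow: {src_role} -> {dst_role} over {protocol}"
    return None

print(flag_flow("engineering_ws", "plc", "rdp"))  # red flag: RDP toward a PLC
print(flag_flow("hmi", "plc", "modbus"))          # None: matches the baseline
```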
Another indicator is identity misuse across zones. If a credential normally tied to an HMI shows up accessing historian data or control-layer systems, that’s a sign of privilege escalation and potential segmentation failure.
Timing anomalies can also reveal problems. Access occurring outside of defined maintenance windows or vendor support hours, especially when paired with access to multiple systems, may indicate reconnaissance or pivoting behavior.
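A small sketch of that window check, with an illustrative maintenance window and work-order scope:

```python
# Flag vendor access outside the defined maintenance window, or touching
# more systems than the work order covers.

from datetime import datetime, timezone

WINDOW = (datetime(2024, 6, 1, 8, tzinfo=timezone.utc),
          datetime(2024, 6, 1, 16, tzinfo=timezone.utc))

def check_access(when: datetime, systems_touched: set, work_order: set):
    issues = []
    if not (WINDOW[0] <= when <= WINDOW[1]):
        issues.append("access outside maintenance window")
    extra = systems_touched - work_order
    if extra:
        issues.append(f"systems outside work order: {sorted(extra)}")
    return issues

print(check_access(datetime(2024, 6, 1, 23, tzinfo=timezone.utc),
                   {"plc-2", "historian"}, {"plc-2"}))
```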
In some environments, device fingerprinting is underused. If a known user logs in from a new or unexpected device, it’s worth investigating, even more so if the session attempts to bypass standard access workflows.
Ultimately, the most reliable signals come from violations of access context, not just signature-based detection. In OT, attackers stand out not because of the tools they use but because of what they try to do.
Vishwa: When monitoring privileged sessions, what indicators of misuse should security teams prioritize without overwhelming analysts with noise?
Bill: Prioritization is crucial. In OT environments, alert fatigue isn’t just inconvenient; it can be dangerous. Analysts don’t need more data; they need more meaning from the data they already have. The most telling indicators of privileged misuse come from deviation from known-good behavior.
For example, when a technician accesses systems outside of their normal scope, or performs actions out of sequence, it often signals misuse or compromised credentials.
A high-value signal is unusual timing. Access during restricted hours, on holidays, or outside scheduled maintenance windows, especially from vendors, is often the first sign that something’s off.
Another key indicator is automation inside manual sessions. Privileged users clicking through an HMI is normal. But if command-line tools, scripts, or rapid input patterns show up in a GUI session, that’s a flag. Attackers often try to move fast; real operators usually don’t.
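One simple heuristic here is inter-event timing: humans working a GUI rarely sustain long runs of near-instant inputs. A sketch, where the 100 ms gap and run length are tunable assumptions rather than established constants:

```python
# Flag a session as likely scripted when many consecutive input events
# arrive faster than a human plausibly could produce them.

def looks_scripted(event_times_ms: list, gap_ms: int = 100, run_len: int = 10) -> bool:
    run = 0
    for prev, cur in zip(event_times_ms, event_times_ms[1:]):
        run = run + 1 if (cur - prev) < gap_ms else 0
        if run >= run_len:
            return True
    return False

human = [0, 800, 1700, 2600, 3900]                 # irregular, slow
bot = list(range(0, 600, 40))                      # 40 ms apart, machine-like
print(looks_scripted(human), looks_scripted(bot))  # False True
```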
File transfers, clipboard usage, or drag-and-drop actions are also worth monitoring, particularly in air-gapped or segmented environments where such activity is rare or restricted.
And lastly, build in session metadata correlation (who accessed what, from where, and for how long) rather than relying only on content-based alerting. When combined with real-time shadowing and termination capabilities, you enable response at the moment it matters most.
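A minimal sketch of that metadata-first correlation, with illustrative fields and thresholds:

```python
# Group session records per user and flag combinations (new source device,
# unusually long session) that content-based alerting alone would miss.

from collections import defaultdict

def correlate(sessions: list) -> dict:
    findings = defaultdict(list)
    seen_sources = defaultdict(set)
    for s in sorted(sessions, key=lambda s: s["start"]):
        user = s["user"]
        if seen_sources[user] and s["source"] not in seen_sources[user]:
            findings[user].append(f"new source {s['source']} for {s['target']}")
        if s["duration_min"] > 240:  # illustrative threshold
            findings[user].append(f"long session on {s['target']}")
        seen_sources[user].add(s["source"])
    return dict(findings)

print(correlate([
    {"user": "op1", "source": "cr-console-01", "target": "hmi-3",
     "start": 1, "duration_min": 30},
    {"user": "op1", "source": "unknown-laptop", "target": "plc-2",
     "start": 2, "duration_min": 300},
]))
```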
The goal isn’t just to detect compromise. It’s to detect deviation from what safe, productive work looks like, and that starts with knowing your environment better than the attacker does.
Vishwa: Given the rising threats targeting remote OT access, what cybersecurity tools would you recommend for an effective defense?
Bill: The threats targeting remote OT access today are increasingly tailored: not just generic malware, but attacks designed to exploit the way industrial environments work. That’s why tool selection should be rooted in one principle: access is the new perimeter.
The first priority is to eliminate implicit trust. That means moving beyond perimeter defenses and adopting Zero Trust Access (ZTA) tools that enforce identity, context, and policy before any connection is made. This includes protocol-isolating access gateways that prevent direct user-device communication with critical systems.
Next, incorporate session-brokering platforms that support just-in-time access, full session recording, and real-time supervision. These are especially valuable for managing third-party vendors and short-term contractors without leaving behind persistent accounts or VPN tunnels.
Beyond access, visibility is critical. Use asset behavior analytics and baselining tools tailored to OT to detect anomalies in how systems and users operate. Avoid tools that rely solely on signature-based detection or assume always-on connectivity, as many OT sites are bandwidth-limited or disconnected entirely.
And finally, prioritize tools that work without agents, without cloud dependencies, and without needing to redesign your entire architecture. Simplicity and deployability are not nice-to-haves in OT; they’re essential for adoption.
The best defenses today aren’t just built for cybersecurity; they’re purpose-built for operational realities. That’s what makes them effective.