Traditional solutions rely on known signatures and rules, while attackers exploit covert tactics and subtle behavioral deviations.
Saygili notes that EDR and UEBA flag anomalies in logins, access patterns, and processes that indicate threats.
Attackers exploit the gray zone: gradual lateral movement and privilege escalation that mimic legitimate activity are missed by signature-based tools.
Cyberthint emphasizes multi-signal correlation, where chained events such as failed logins followed by privilege escalation give more reliable signals than isolated anomalies.
Saygili stresses that autonomous remediation must be bounded by guardrails, with analyst approval and SOAR playbooks for critical actions.
In this interview, Ismail Saygili, CEO of Cyberthint, discusses the subtle behavioral deviations that slip past traditional security tools, how organizations can maintain visibility without relying entirely on agents, and which contextual signals improve detection accuracy without flooding teams with alerts.
With his experience in cyber threat intelligence and digital risk management, Saygili brings a practitioner’s perspective to these challenges.
We also asked which early warning indicators reliably reveal compromise through RDP and SSH, and what tools strengthen defenses for newcomers and expert practitioners alike.
Saygili mapped out concrete early warning signals, from failed login spikes to dormant accounts suddenly active. He paired these with layered defenses such as MFA, patching, CTI integration, and deception mechanisms.
Together, these steps show how organizations can detect, respond, and protect against RDP and SSH exploitation.
Vishwa: With endpoint detection evolving, what subtle behavioral deviations in user activity do you see slipping past traditional security tools?
Ismail: As endpoint detection evolves from basic antivirus to advanced EDR and behavioral analytics, attackers are adapting with more covert tactics. Traditional security tools (like signature-based AV or static SIEM rules) often miss subtle deviations in user behavior that don’t match known attack patterns. These “low-and-slow” anomalies can quietly bypass defenses until it’s too late.
Traditional security solutions rely heavily on known signatures and predefined rules. If an action isn’t a known malware signature or explicitly flagged rule violation, it often goes unnoticed. This means that authorized-but-unusual behaviors can evade detection.
For example, an employee performing actions within their permissions (but with malicious intent) will look benign to a rule-based system.
Attackers exploit this gap by blending in with normal user activity. As one report notes, static defenses leave organizations “blind” once an attacker slips through, whereas behavior-based monitoring is needed to spot what prevention tools miss. In short, subtle deviations require context and baselining – something traditional tools lack.
Modern solutions like EDR and UEBA address this by establishing a baseline of normal behavior and detecting anomalies against it. They monitor how devices and users typically act (login times, access patterns, process execution, etc.) and flag deviations that could indicate a threat.
The following are some key subtle behaviors that often fly under the radar of traditional tools but raise red flags in a behavioral analytics approach.
1. Unusual Login Patterns and Access Times
Users logging in outside their normal working hours or from unusual locations often go unnoticed.
Since valid credentials are used, traditional tools treat it as normal.
Behavioral analytics, however, can detect anomalies such as failed login spikes or suspicious session durations.
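The failed-login spike detection described above can be sketched as a sliding-window count over authentication events. This is a minimal illustration with made-up field names and thresholds, not a production detector; real deployments tune the window and threshold against each environment's baseline.

```python
from collections import deque
from datetime import datetime, timedelta

def detect_bruteforce(events, window=timedelta(minutes=5), threshold=10):
    """Flag users whose failed logins exceed `threshold` within `window`.

    `events` is an iterable of (timestamp, username, success) tuples,
    assumed sorted by timestamp.
    """
    recent = {}          # username -> deque of recent failure timestamps
    flagged = set()
    for ts, user, success in events:
        if success:
            continue
        q = recent.setdefault(user, deque())
        q.append(ts)
        # Drop failures that have fallen out of the sliding window.
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            flagged.add(user)
    return flagged
```

The same structure extends to off-hours or new-location checks by adding per-user baseline lookups before flagging.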
2. Low-and-Slow Data Exfiltration Techniques
Attackers exfiltrate data in small chunks over time to avoid detection.
Each small transfer looks harmless, so legacy DLP tools often miss it.
Behavior-based systems can spot deviations collectively, identifying these stealthy leaks.
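The collective deviation idea can be illustrated with a simple aggregation: sum the many small transfers per user per day and compare the total against a per-user baseline. Field names and the threshold factor are illustrative assumptions.

```python
from collections import defaultdict

def slow_exfil_suspects(transfers, baseline_bytes, factor=3):
    """Aggregate small transfers and compare daily totals to a baseline.

    `transfers` is an iterable of (day, username, bytes_out) records;
    `baseline_bytes` maps username -> typical daily outbound volume.
    Individually each transfer looks harmless; only the sum stands out.
    """
    totals = defaultdict(int)   # (day, username) -> total outbound bytes
    for day, user, size in transfers:
        totals[(day, user)] += size
    return {
        (day, user): total
        for (day, user), total in totals.items()
        # Unknown users have no baseline, so any volume is suspicious.
        if total > factor * baseline_bytes.get(user, 0)
    }
```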
3. Abuse of Legitimate Tools and Fileless Techniques
Tools like PowerShell, WMI, or Office macros can be misused without raising antivirus alerts.
Fileless attacks leave no malicious file, only abnormal behavior traces.
EDR detects when these tools run unusual commands or spawn unexpected processes.
4. Gradual Privilege Escalation and Lateral Movement
Attackers slowly escalate privileges and move laterally across systems.
Each step seems legitimate, making it invisible to rule-based tools.
Behavioral analytics correlates long-term patterns to reveal privilege creep and account misuse.
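Long-term correlation can be sketched as an ordered event chain tracked per account: only when a hypothetical sequence (a privilege grant, then first-time host access, then lateral movement) completes within a window does an alert fire. The event-type names are made up for illustration.

```python
from datetime import datetime, timedelta

# Illustrative event-type chain that signals privilege creep.
CHAIN = ["priv_granted", "new_host_access", "lateral_move"]

def chained_accounts(events, window=timedelta(days=7)):
    """Return accounts that produced the CHAIN event types in order
    within `window`. `events` is (timestamp, account, event_type),
    sorted by timestamp."""
    progress = {}   # account -> (next index in CHAIN, chain start time)
    hits = set()
    for ts, account, etype in events:
        idx, start = progress.get(account, (0, ts))
        if idx > 0 and ts - start > window:
            idx, start = 0, ts          # chain expired, start over
        if etype == CHAIN[idx]:
            if idx == 0:
                start = ts
            idx += 1
            if idx == len(CHAIN):
                hits.add(account)       # full chain observed
                idx = 0
        progress[account] = (idx, start)
    return hits
```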
5. Why These Deviations Are Hard to Catch (and How to Catch Them)
These anomalies mimic legitimate activity, so signature-based tools ignore them, and attackers exploit this “gray zone” to dwell in networks for weeks.
Catching them requires the behavioral baselining and long-term correlation described above: context-aware analytics that flag deviations from a user’s normal activity rather than matching static rules.
Vishwa: Many organizations rely on agent-based monitoring, but attackers increasingly disable or evade these agents. How do you ensure visibility without full reliance on agents?
Ismail: How can we maintain visibility without relying on agents or a single method?
We can examine this under five main headings.
1. Network-Centric Telemetry
East-West Traffic Monitoring: You can observe network behavior even without an agent. Next-gen Firewall and NDR (Network Detection & Response) solutions can detect lateral movement, command-and-control communication, and anomalous data exfiltration.
Encrypted Traffic Analysis: Anomalies can be detected through flow metadata (SNI, JA3, packet sizes, frequency) without decrypting TLS traffic.
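JA3-style fingerprinting illustrates how flow metadata identifies clients without decryption: the TLS ClientHello field values are concatenated and hashed, so recurring attacker tooling shows up as a recurring fingerprint regardless of destination. A simplified sketch (the numeric field values in the test are arbitrary examples):

```python
import hashlib

def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    """Compute a JA3-style hash from ClientHello metadata.

    JA3 joins decimal values with '-' inside each list and ',' between
    lists, then takes the MD5 of the resulting string."""
    parts = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(parts).encode()).hexdigest()
```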
2. Log & Telemetry Correlation
Sysmon/Windows Event Logs/Linux Auditd: Visibility can be achieved from native OS logging sources without an agent.
Centralized Log Ingestion (SIEM/SOAR Integration): Active Directory, firewall, DNS, and proxy logs collected independently of the endpoint reveal attacker behavior.
Cloud-native Telemetry: Services such as AWS CloudTrail, Azure Sentinel, and GCP Audit Logs also provide endpoint visibility without the need for agents.
3. Identity & Access-Based Monitoring
UEBA (User and Entity Behavior Analytics): Identity-based behavioral analysis detects anomalous logins, privilege escalations, or lateral movements even when the agent is disabled.
Non-agent activities such as MFA bypass and credential theft are captured with identity telemetry.
4. Fileless & Memory Attack Visibility
Even if the agent is disabled, anomalies after EDR bypass are still visible based on network and logs. For example, PowerShell "living-off-the-land" activities can be captured through correlation via SIEM/Sysmon.
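The PowerShell living-off-the-land case above can be sketched as a log-side rule over Sysmon-style process-creation records: flag PowerShell launched from Office applications or carrying common obfuscation flags. Both lists below are illustrative assumptions, not a complete ruleset.

```python
# Parent processes that rarely launch shells legitimately, and flags
# that commonly indicate obfuscated usage (illustrative, not exhaustive).
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
SUSPICIOUS_FLAGS = ("-enc", "-encodedcommand", "-nop", "iex(")

def lolbin_alerts(events):
    """Scan process-creation records (dicts with 'image', 'parent_image',
    'command_line') for PowerShell launched from Office apps or with
    obfuscation flags on the command line."""
    alerts = []
    for ev in events:
        image = ev["image"].lower()
        if not image.endswith("powershell.exe"):
            continue
        parent = ev["parent_image"].lower().rsplit("\\", 1)[-1]
        cmdline = ev["command_line"].lower()
        if parent in SUSPICIOUS_PARENTS or any(f in cmdline for f in SUSPICIOUS_FLAGS):
            alerts.append(ev)
    return alerts
```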
5. Hybrid Approach
Agent-based monitoring is still invaluable, but it is not sufficient on its own.
Solution: A combination of Agent + Agentless telemetry + Network visibility + Identity analytics.
This approach makes the attacker visible even if the agent is disabled. In short, true resilience can be achieved when agentless telemetry (network, log, and identity-based visibility) is combined with agent-based EDR.
Vishwa: False Positives (FP) remain a major burden. What contextual signals do you leverage to boost detection accuracy without flooding teams with alerts?
Ismail: In addition to best-practice approaches, the process must be managed by incorporating the organization's internal dynamics. We aim to achieve pinpoint detection by not only generating signals/logs/alarms but also enriching them with context.
Baseline Behavior: When the user's normal working hours, devices, and applications are known, anomalous activity is detected more reliably.
Identity Signals: Is it a privileged account? Are they logging in from a new device? Are they logging in from different locations simultaneously? These reduce the FP rate.
IOC/IOA Cross-check: Verify the received signal with threat intelligence feeds such as OTX, Cyberthint, USOM, and MISP.
Reputation Data: Is there a connection between IP/domain reputation and previous attacks? This highlights only "real risk" events.
Process Lineage: Did a PowerShell process run after a macro from Outlook, or was it automated by IT?
System Criticality: The same event is more critical on a production database server and less critical in a test environment.
Time of Activity: Activities performed outside of business hours carry a higher risk.
Multi-signal Correlation: Rather than anomalies alone, events occurring in a chain (e.g., failed logins + privilege escalation + data exfiltration) generate more reliable signals.
Risk Scoring: Events are scored based on asset criticality and threat context.
SOAR/AI-driven Triage: Analysts don't have to deal with hundreds of alerts; priority alerts are automatically highlighted.
In summary, we reduce false positives by adding context – who the user is, what system they’re on, how unusual the action is, and whether it ties to known threat intelligence.
By correlating signals and applying risk scoring, we filter noise and surface only the events that truly matter to analysts.
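The risk-scoring and triage steps above can be sketched as a small additive model: each contextual signal contributes a weight, and only alerts above a threshold reach an analyst. The weights and field names are illustrative assumptions; real deployments tune them per environment.

```python
# Illustrative context weights, not a recommended scoring scheme.
WEIGHTS = {
    "asset_criticality": {"test": 1, "staging": 2, "production": 5},
    "threat_intel_match": 4,    # source IP/domain seen in CTI feeds
    "privileged_account": 3,
    "off_hours": 2,
}

def risk_score(alert):
    """Score an alert dict so triage can sort by context, not volume."""
    score = WEIGHTS["asset_criticality"].get(alert.get("environment"), 1)
    if alert.get("intel_match"):
        score += WEIGHTS["threat_intel_match"]
    if alert.get("privileged"):
        score += WEIGHTS["privileged_account"]
    if alert.get("off_hours"):
        score += WEIGHTS["off_hours"]
    return score

def triage(alerts, threshold=7):
    """Surface only high-context alerts, highest score first."""
    scored = [(risk_score(a), a) for a in alerts]
    return [a for s, a in sorted(scored, key=lambda x: -x[0]) if s >= threshold]
```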
Vishwa: Autonomous remediation can lead to broken functionality. What guardrails do you recommend to ensure automated defenses don’t degrade system operations?
Ismail: Autonomous remediation must be bounded by guardrails. Risk-based automation, human-in-the-loop approvals for critical actions, strict scoping of playbooks, and post-action validation with rollback options. This ensures defenses act fast without degrading core business operations.
I can give applicable examples from the field on this subject:
For low-risk incidents, full automation is recommended (e.g., blocking known malicious IP addresses). In highly critical systems, automation should be left at the "containment + alert" level. This reduces the risk of incorrect actions that could disrupt system functionality.
Analyst approval is requested before critical actions (e.g., stopping domain controller services). Security Orchestration, Automation, and Response (SOAR) playbooks should be designed with "approve/deny" steps. This approach maintains automation speed while preventing blind system disruption.
Business-critical applications, production DB servers, and specific services are excluded from automation. The same incident can be resolved automatically in a sandbox or test environment, but semi-automatically in a production environment.
After the action, the system should be checked for any abnormal side effects.
For example, after a process is killed, the service's availability metric should be monitored. If an error occurs, the automation should be able to "rollback."
First, run in "detect-only" mode, then switch to "containment-only," and finally switch to "remediation" mode. This phased approach tests faulty rules without affecting the production environment.
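The guardrails above (phased modes, criticality exclusions, human-in-the-loop approval) can be sketched as a single decision gate a SOAR playbook might call before acting. This is a hypothetical simplification; the mode names follow the phases described above, and `approver` stands in for an analyst approve/deny step.

```python
MODES = ("detect-only", "containment-only", "remediation")

def decide_action(proposed, asset, mode, approver=None):
    """Return the action automation may take, or 'alert-only'.

    `proposed` is 'contain' or 'remediate'; `asset` has 'criticality'
    in {'low', 'high'}; `approver` is a callable returning True/False
    for human-in-the-loop approval of actions on critical systems.
    """
    if mode == "detect-only":
        return "alert-only"              # phase 1: observe rules only
    if asset["criticality"] == "high":
        # Critical systems: containment at most, and only with approval.
        if proposed in ("contain", "remediate") and approver and approver():
            return "contain"
        return "alert-only"
    if mode == "containment-only" or proposed == "contain":
        return "contain"
    return "remediate"                   # low-risk asset, final phase
```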
Vishwa: Threat actors are increasingly exploiting remote access services like Remote Desktop Protocol (RDP) and Secure Shell (SSH). What early warning indicators reliably signal compromise via these vectors?
Ismail: Early warning comes from spotting anomalies in authentication, account behavior, process activity, and network traffic. Spikes in failed logins, dormant accounts becoming active, suspicious child processes after RDP sessions, or SSH logins from unusual IPs are all reliable red flags.
By correlating these signals with threat intelligence and behavioral baselines, organizations can detect RDP/SSH compromise before full exploitation occurs.
Here are some cases on the subject:
Failed Login Spikes: Multiple failed login attempts in a short period are indicators of brute force.
Off-Hour Logins: Users logging in outside of normal working hours.
Impossible Travel: Geographic inconsistencies, such as a login from Turkey followed by a login from Russia an hour later.
Dormant Accounts Suddenly Active: An RDP account that hasn't been used for months suddenly becomes active.
Privilege Escalation: An account connecting via SSH runs sudo or admin commands shortly thereafter.
Shared Account Usage: Simultaneous logins from different IP addresses using the same account.
Suspicious Child Processes: Command-line tools such as cmd.exe, powershell.exe, or net.exe running after an RDP session.
Unusual Tooling: Unusual use of tools such as scp, rsync, or wget over SSH (e.g., fetching large amounts of data).
Anomalous Source IPs: Connections originating from unknown, notorious, or TOR/VPN IP addresses.
Lateral Movement Patterns: The same user account attempts to access other hosts within a short timeframe after an RDP session.
Beaconing Behavior: Regularly occurring communication in SSH tunnel or RDP reverse proxy traffic.
Correlation with SIEM/UEBA: A single failed login is not considered an anomaly, but a chain of failed logins + out-of-hours logins + process spawns is a strong signal.
Threat Intel Enrichment: If the source IP comes from a known brute-force botnet, the alert may be triggered earlier.
Baseline Deviation: Comparison with user behavior profile (e.g., a DBA logging into the web server via SSH).
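The impossible-travel case above reduces to a speed check: if covering the great-circle distance between two login locations would require more than airliner speed, the pair is flagged. A minimal sketch assuming each login carries geolocated coordinates:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900):
    """Flag two logins (each an (epoch_seconds, lat, lon) tuple) if the
    implied travel speed exceeds `max_speed_kmh`."""
    (t1, la1, lo1), (t2, la2, lo2) = sorted([login_a, login_b])
    hours = (t2 - t1) / 3600 or 1e-9   # avoid division by zero
    return haversine_km(la1, lo1, la2, lo2) / hours > max_speed_kmh
```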
Vishwa: In environments where threat detection spans both cloud and on-premise systems, how do you unify telemetry and normalize alerts for cross-environment visibility?
Ismail: Hybrid environments often make things difficult for SOC operations. Here, we focus on three things: telemetry collection, normalization, and correlation.
To explain: We unify cloud and on-premise telemetry by centralizing logs into a common SIEM or data lake, then normalizing events using open schemas like ECS or OpenTelemetry.
This allows us to correlate activity across environments – for example, linking a suspicious Azure login with an on-prem RDP attempt. Enriching alerts with context and applying cross-environment playbooks ensures consistent visibility and reduces noise.
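The normalization step can be sketched as small per-source mappers into a common schema, after which one correlation rule covers both environments. The input field names below are simplified stand-ins for real Azure sign-in and Windows logon records, and the output fields loosely follow ECS naming; treat all of it as illustrative.

```python
def normalize_azure(ev):
    """Map a simplified Azure sign-in record to an ECS-like event."""
    return {
        "event.category": "authentication",
        "event.outcome": "success" if ev["status"] == 0 else "failure",
        "user.name": ev["userPrincipalName"].split("@")[0].lower(),
        "source.ip": ev["ipAddress"],
        "observer.vendor": "azure",
    }

def normalize_windows(ev):
    """Map a simplified Windows logon record (4624 = success)."""
    return {
        "event.category": "authentication",
        "event.outcome": "success" if ev["EventID"] == 4624 else "failure",
        "user.name": ev["TargetUserName"].lower(),
        "source.ip": ev["IpAddress"],
        "observer.vendor": "windows",
    }

def cross_env_users(normalized):
    """Users seen authenticating in more than one environment."""
    seen = {}
    for ev in normalized:
        seen.setdefault(ev["user.name"], set()).add(ev["observer.vendor"])
    return {u for u, vendors in seen.items() if len(vendors) > 1}
```

Once both sides share one schema, linking a suspicious Azure login with an on-prem RDP attempt is a query over a single field set rather than two bespoke parsers.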
Vishwa: With the rise in remote access exploitation, what cybersecurity tools and practices would you recommend for both newcomers and expert practitioners to bolster defenses?
Ismail: Basic Level Controls:
MFA (Multi-Factor Authentication): A second factor is required for RDP, VPN, and SSH logins.
Patch Management: Regular security updates, especially for RDP, SSH, and SMB services.
Strong Authentication Policies: Strong, regularly rotated, centrally managed password policies instead of weak passwords.
Network Segmentation: Access requirements such as RDP/SSH should be accessible only via a bastion host or jump box.
CTI/DRP: Strengthening security products by integrating threat intelligence data, tracking leaks, monitoring what is being said about your organization in underground environments, and combating actors who impersonate your brand to target your systems or customers (detection and takedown).
Deception/Honeypots: Detecting attackers early by deploying services like fake RDP/SSH/SMB/MSSQL.
Strong Network Telemetry: Monitoring brute force, lateral movement, and exfiltration behaviors with NDR and IDS/IPS.
Configuration Hardening: Advanced security layers such as Network Level Authentication (NLA) in RDP, key-based auth, fail2ban, and port knocking in SSH.
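Several of the SSH hardening measures map directly onto `sshd_config` directives. A minimal illustrative excerpt, where the values are examples to adapt rather than recommendations for every environment:

```
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no            # never expose root directly
PasswordAuthentication no     # key-based auth only
PubkeyAuthentication yes
MaxAuthTries 3                # slow down brute forcing
LoginGraceTime 20
AllowUsers deploy ops         # explicit allow-list (example accounts)
```

Tools such as fail2ban then ban source IPs that still accumulate failures, while a bastion host or port knocking limits who can reach the daemon at all.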
It is possible to multiply these items, but they should be specifically analyzed and designed according to the needs and environment of each company.