James Wickett, CEO and Co-founder of DryRun Security, discusses why traditional application security tools struggle to capture context, intent, and business logic. These gaps create blind spots for engineering teams during code review.
Wickett brings extensive experience as a DevSecOps practitioner, security researcher, and long-time educator, having led research roles, contributed to the OWASP community, and authored multiple DevOps and security courses used by practitioners worldwide.
He explains how contextual analysis changes the way security signals are generated and acted on, and why accuracy and intent are becoming more critical than alert volume as software supply chains grow more complex.
We explore persistent challenges such as alert fatigue, authorization flaws, and supply-chain exposure in fast-moving CI/CD environments.
Read on to learn how security teams can identify high-impact issues earlier in the pull request (PR) process without slowing developers down.
Vishwa: How does Contextual Security Analysis operate within developer workflows? What real-world coding and authorization scenarios best demonstrate its value?
James: Contextual Security Analysis (CSA) runs where developers already work: every pull request triggers DryRun’s agents to analyze the change and post a summary comment, a risk assessment, and remediation guidance directly in the PR.
That same run evaluates any Natural Language Code Policies (NLCP) you’ve enabled, so a change that touches sensitive logic can auto‑notify the right reviewers or quietly escalate to the security team. Results also appear in GitHub checks with links to the exact lines, which lets the author fix issues in place instead of context‑switching to a separate dashboard.
What makes CSA “contextual” is the evidence it gathers about the change itself—code paths, functions touched, language and framework—and how that overlaps with the application’s broader context.
We articulate this through the SLIDE model (Surface, Language, Intent, Detections, Environment) and use those layers to make near‑real‑time assertions about risk as code is written. That is why it can prioritize an auth change over a cosmetic change, even if both compile cleanly.
Real‑world examples include catching a JWT algorithm‑confusion path where the verification algorithm is taken from an unverified header, which can enable token forgery if RS256 verification is silently downgraded to HS256.
We surface that with the file and lines implicated, plus suggested remediation. Other high‑impact scenarios include new endpoints that lack authorization enforcement, insecure defaults when calling OIDC endpoints, or role definitions that change in RBAC configs.
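The downgrade James describes can be shown in a minimal, self-contained sketch (standard library only; the `PUBLIC_KEY` value is a stand-in, and the function names are illustrative, not DryRun's or any real library's API). Because an RSA public key is not secret, an attacker can use it as an HMAC secret to sign an HS256 token, and a verifier that trusts the `alg` field from the unverified header will accept the forgery:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

# Stand-in for the server's RSA public key. The point: it is NOT secret.
PUBLIC_KEY = b"-----BEGIN PUBLIC KEY-----\nEXAMPLE-KEY-BYTES\n-----END PUBLIC KEY-----"

def forge_token(payload: dict) -> str:
    """Attacker: sign with HS256, using the *public* key as the HMAC secret."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(PUBLIC_KEY, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def vulnerable_verify(token: str) -> dict:
    """VULNERABLE: chooses the algorithm from the unverified header."""
    header_s, body_s, sig_s = token.split(".")
    alg = json.loads(b64url_decode(header_s))["alg"]
    if alg == "HS256":  # downgrade path: public key doubles as HMAC secret
        expected = b64url(hmac.new(PUBLIC_KEY, f"{header_s}.{body_s}".encode(),
                                   hashlib.sha256).digest())
        if hmac.compare_digest(expected, sig_s):
            return json.loads(b64url_decode(body_s))
    raise ValueError("invalid signature")  # (RS256 branch elided)

def safe_verify(token: str):
    """SAFE: pin the expected algorithm before touching the signature."""
    header_s = token.split(".")[0]
    if json.loads(b64url_decode(header_s))["alg"] != "RS256":
        raise ValueError("unexpected algorithm")
    ...  # proceed with RS256 verification only

forged = forge_token({"sub": "admin"})
print(vulnerable_verify(forged))  # → {'sub': 'admin'} (forgery accepted)
```

The fix is exactly the remediation pattern implied above: decide the allowed algorithm server-side, never from attacker-controlled input.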
Our Code Insights calls out similar “security‑important” changes such as payment‑gateway swaps, AWS configuration changes, and authentication redesigns so those PRs get eyes before merge.
We also include sub-agents targeted at authorization and data‑access problems (for example IDOR) and server‑side request forgery, which often ride along with otherwise “valid” changes. They help lift dangerous changes out of noise while the PR is still open.
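As an illustration of the IDOR pattern those sub-agents target, here is a minimal sketch (the data layer and handler names are hypothetical): the vulnerable handler looks perfectly valid in isolation, because the missing control is an ownership check that only makes sense in context:

```python
# Hypothetical in-memory data layer for illustration only.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's notes"},
    2: {"owner": "bob", "body": "bob's notes"},
}

def get_document_vulnerable(user: str, doc_id: int) -> str:
    # VULNERABLE: the change compiles and "works", but any authenticated
    # user can read any document simply by guessing its id.
    return DOCUMENTS[doc_id]["body"]

def get_document_safe(user: str, doc_id: int) -> str:
    # FIX: authorization is enforced against the specific object requested.
    doc = DOCUMENTS[doc_id]
    if doc["owner"] != user:
        raise PermissionError("user does not own this document")
    return doc["body"]

print(get_document_vulnerable("alice", 2))  # → bob's notes (IDOR)
```

No signature, taint, or sink is wrong here, which is why pattern-based tools tend to pass it and why object-level authorization needs contextual review.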
Vishwa: Where does Contextual Security Analysis diverge from traditional SAST approaches, especially in detecting business logic or chained vulnerabilities?
James: Where rule-driven SAST pattern-matches code against known signatures, CSA reasons about the intent of a change and the application around it. Practically, that’s the difference between flagging a raw SQL call and recognizing that a new endpoint bypasses a bespoke authorization check that only exists in your codebase.
We back that up with public head‑to‑heads where our agents more than double the accuracy of legacy SAST. In both a Java Spring Boot and a Ruby on Rails testbed, we identified complex logic and authorization flaws, such as IDOR and broken authentication paths, that rule‑driven scanners missed out of the box.
These are the kinds of flaws that arise from design and chaining, not a single bad API call.
Vishwa: How does contextual AppSec help organizations defend against advanced code-level threats such as supply-chain compromises or dependency poisoning?
James: Supply‑chain risk often enters through automated processes and dependencies rather than a single function call. Our Code Library includes policies that review GitHub Actions changes for unpinned third‑party actions, overly broad permissions, unsafe run steps, and more.
These policies run on every PR and return line‑numbered evidence, so the fix is straightforward. Beyond CI, our "Code Safety" posture means we store key markers rather than whole code: languages, frameworks, and notable dependencies.
That metadata gives our agents enough context to correlate suspicious dependency or build‑system changes against the rest of the app, so a quietly introduced package or vendor swap does not slip by. Code Insights then surfaces those "security‑important" changes, like payment‑provider swaps or AWS config changes, for early review.
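A simplified sketch of the kind of check such a CI policy performs (an illustrative regex-based version of my own, not DryRun's implementation): flag every `uses:` reference pinned to a mutable tag or branch instead of a full commit SHA, and report the line number as evidence:

```python
import re

# Match "uses: owner/action@ref" lines in a GitHub Actions workflow.
USES_RE = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")  # only a full commit SHA is immutable

def unpinned_actions(workflow_yaml: str):
    """Return (line_number, action, ref) for every non-SHA-pinned action."""
    findings = []
    for lineno, line in enumerate(workflow_yaml.splitlines(), start=1):
        m = USES_RE.search(line)
        if m and not FULL_SHA.match(m.group(2)):
            findings.append((lineno, m.group(1), m.group(2)))
    return findings

workflow = """\
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
"""
print(unpinned_actions(workflow))  # → [(4, 'actions/checkout', 'v4')]
```

Tags like `v4` can be retargeted by whoever controls the action repository, which is exactly the dependency-poisoning vector these policies exist to catch.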
Vishwa: How do Natural Language Code Policies (NLCPs) help teams enforce intent and reduce ambiguity during large-scale code reviews?
James: NLCPs let security teams capture the questions they ask in design review and run them on every PR, instead of letting them gather dust in a PDF. Rather than requiring DSL rules, a policy has a "Question" in plain English, optional "Background" to focus the reasoning, and "Guidance" for the team.
Examples include asking whether a change touches authentication logic or RBAC role definitions, whether a GitHub Actions workflow pins its third‑party actions, or whether a payment provider or AWS configuration is being swapped.
Because policies execute alongside core analyzers inside GitHub, they can tag a security list or named reviewers when the policy condition is met, which turns intent into enforcement across hundreds of repos without per‑language maintenance.
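As a sketch, a policy with the Question/Background/Guidance structure James describes could be represented and assembled into a single review prompt like this (the field contents, dictionary shape, and notify list are hypothetical illustrations, not DryRun's actual schema):

```python
# Illustrative only: the Question/Background/Guidance field names come from
# the interview; this dictionary shape and the notify list are hypothetical.
policy = {
    "question": "Does this change add or modify an authentication or authorization check?",
    "background": "Focus on middleware, route guards, and RBAC role definitions.",
    "guidance": "Request review from the security team before merging.",
    "notify": ["@security-team"],  # reviewers to tag when the condition is met
}

def policy_prompt(p: dict) -> str:
    """Assemble the plain-English fields into one prompt for the analyzer."""
    return "\n".join(
        f"{key.title()}: {p[key]}" for key in ("question", "background", "guidance")
    )

print(policy_prompt(policy))
```

The appeal is that the policy stays readable by the humans who wrote it, while still being executable on every PR.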
Vishwa: In what ways does DryRun structure and leverage metadata to enhance AI-driven contextual analysis without revealing proprietary logic?
James: DryRun takes an explicit "store key markers, not your code" approach. Analysis is performed in ephemeral microservices, ensuring that the code being analyzed is not retained once the task is complete.
Only general, non-sensitive markers are persisted, such as the language, framework, key dependencies, template type, and data stores. This strikes a balance between the context our agents need and the IP protection customers require.
On top of that metadata, CSA layers additional change context (which functions and code paths changed, who authored it, which detections fired) so the model can reason about what the application does and whether this specific change touches sensitive behavior.
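A hedged sketch of how persisted markers alone can support that correlation (the field names and sensitive-prefix list are my own illustrative examples, not DryRun's schema): with only a dependency list retained, a change set can still be diffed against it to spot a payment-gateway or auth-library swap:

```python
# Illustrative markers: the kind of non-sensitive metadata described above.
KNOWN_MARKERS = {
    "language": "ruby",
    "framework": "rails",
    "key_dependencies": {"devise", "pundit", "stripe"},
}

# Hypothetical watchlist of payment/auth package name prefixes.
SENSITIVE_PREFIXES = ("stripe", "braintree", "devise", "omniauth")

def flag_dependency_changes(new_deps: set) -> list:
    """Flag added or removed dependencies that touch payments or auth."""
    old = KNOWN_MARKERS["key_dependencies"]
    changed = (new_deps - old) | (old - new_deps)
    return sorted(d for d in changed if d.startswith(SENSITIVE_PREFIXES))

print(flag_dependency_changes({"devise", "pundit", "braintree"}))
# → ['braintree', 'stripe']  (stripe removed, braintree added: gateway swap)
```

Note that nothing in this check requires the source code itself, only the markers, which is the balance between context and IP protection described above.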
Vishwa: What approach does DryRun take to tackle one of the toughest AppSec pain points, security alert fatigue in CI/CD pipelines? How many alerts typically surface in a day, and what does triage look like in a developer’s workflow?
James: DryRun’s stance is that “noise reduction” alone isn’t the goal; accuracy on real risks is. The architecture is AI‑native, so it prioritizes intent and impact of the code change instead of pattern‑matching everything that might be risky.
That is why customers experience fewer “maybe” false positives and more high‑signal alerts that developers actually fix.
In practice, triage lives inside the PR: NLCPs can route the PR to a security list or tag reviewers, and teams can add Slack notifications when that helps coordination. Because results appear right in checks and comments, authors resolve issues as part of the PR cycle rather than later in a separate queue that adds to technical debt.
As you can imagine, alert volume varies with repository size, policy configurations, and PR throughput.
We have customers who have actively delayed hiring because their AppSec team has finally caught up with development.
Vishwa: What does a typical day look like for developers using DryRun, from the first code commit to resolving a flagged issue? How long do key AppSec tasks take on average?
James: A typical flow starts when a developer opens a pull request: DryRun’s agents analyze the change and post a summary comment, a risk assessment, and remediation guidance directly in the PR.
Because analysis happens in the PR and focuses on change intent, most fixes occur during the same review cycle instead of bouncing between tools.
We focus on speed to signal: results land in GitHub checks and PR comments while the review is still open, so the key AppSec tasks fit inside the normal review cycle rather than adding a stage after it.
Creating or refining NLCPs is also fast because there are no DSL or per‑language rules to maintain.