Too Fast, Too Vulnerable: DevSecOps Teams Struggle to Keep Code Secure

Written by: Lore Apostol, Cybersecurity Writer

A global survey of over 1,000 professionals found that while nearly 60% of organizations deploy code daily or more frequently, security automation has not kept pace: 45.56% of companies still depend on manual processes to add new code to their application security testing (AST) programs.

The result is poor coverage: 61.64% of organizations admit they test less than 60% of their applications, and security debt accumulates with each release.
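As a rough illustration of what automated onboarding could look like, the Python sketch below enrolls any repository that is not yet covered by scanning. The endpoints, token, and repository list are hypothetical placeholders, not any specific vendor's API.

```python
# Minimal sketch: automatically enrolling new repositories in an AST program
# instead of relying on manual onboarding. The AST_API endpoints and the
# AST_API_TOKEN credential are hypothetical placeholders.
import os
import requests

AST_API = os.environ.get("AST_API", "https://ast.example.internal/api/v1")
TOKEN = os.environ["AST_API_TOKEN"]  # never hardcode credentials
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def onboarded_projects() -> set[str]:
    """Return the set of repositories already covered by AST scans."""
    resp = requests.get(f"{AST_API}/projects", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return {p["repo"] for p in resp.json()}


def enroll(repo_url: str) -> None:
    """Register a repository so every future commit is scanned automatically."""
    resp = requests.post(
        f"{AST_API}/projects",
        headers=HEADERS,
        json={"repo": repo_url, "scan_on_push": True},
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # all_repos.txt would be exported from the source-control system.
    with open("all_repos.txt") as fh:
        all_repos = {line.strip() for line in fh if line.strip()}
    for repo in sorted(all_repos - onboarded_projects()):
        enroll(repo)
        print(f"Enrolled {repo} in AST scanning")
```

Run on a schedule in CI, a job like this keeps test coverage in step with new code rather than waiting on a manual request.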

Tool Sprawl and Workflow Friction Hinder Progress

A central finding of Black Duck's 2025 DevSecOps report, "Balancing AI Usage and Risk in 2025," is a significant maturity gap between development speed and security practice. The report also highlights a "tool sprawl" crisis, in which a proliferation of AST tools has led to overwhelming inefficiency.

Companies depend on manual processes | Source: Black Duck

Over 71% of respondents indicate that a significant percentage of security alerts are noise, including false positives and duplicate results. This friction is a primary reason why 81.22% of professionals say security testing slows down the development lifecycle.

AI coding assistant use in companies | Source: Black Duck

Consequently, the report identifies "better development workflow integration" as the single most important priority for improving AST capabilities, signaling a demand for a more developer-centric approach to security.
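One way to read that demand is triage that happens before findings ever reach a developer. The sketch below assumes a simple illustrative JSON schema (`rule_id`, `file`, `line`) rather than any particular tool's output format: it merges results from several scanners, drops duplicates, and skips rules already reviewed as false positives.

```python
# Minimal sketch of developer-centric triage: deduplicate overlapping findings
# from multiple AST tools and drop known false positives before anything
# reaches the developer. The finding fields and suppression list are
# illustrative assumptions, not a specific tool's schema.
import json
from pathlib import Path


def load_findings(*paths: str) -> list[dict]:
    """Each input file is assumed to hold a JSON list of findings."""
    findings = []
    for path in paths:
        findings.extend(json.loads(Path(path).read_text()))
    return findings


def triage(findings: list[dict], suppressions: set[str]) -> list[dict]:
    """Collapse duplicates reported by different tools and skip suppressed rules."""
    seen, kept = set(), []
    for finding in findings:
        key = (finding["rule_id"], finding["file"], finding["line"])
        if finding["rule_id"] in suppressions or key in seen:
            continue
        seen.add(key)
        kept.append(finding)
    return kept


if __name__ == "__main__":
    suppressed = {"EXAMPLE-FP-001"}  # rules reviewed and accepted as false positives
    unique = triage(load_findings("sast.json", "sca.json"), suppressed)
    print(f"{len(unique)} actionable findings after deduplication")
```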

Navigating the Duality of AI in Software Security

The report frames AI in software security as a "double-edged sword." There is a strong consensus that AI is both a powerful security ally and a significant new risk vector. While 63.33% of professionals agree that AI coding assistants have improved their ability to write more secure code, 56.55% also believe these tools have introduced new, complex security risks. 

Compounding this issue is the prevalence of "shadow AI," with 10.69% of respondents admitting to using AI assistants without official permission. This creates a critical need for robust AI governance frameworks to manage these emerging DevSecOps challenges; other recent reports likewise identify shadow AI, alongside phishing kits, as a growing threat to enterprise security.

Expert Commentary on DevSecOps and AI Risk

James Maude, Field CTO at BeyondTrust, said it is vital to gain visibility into the code and software lifecycle while also looking at the bigger picture of the identities, accounts, and secrets that enable the CI/CD toolchain to run. “In order to really close the security gap, it is important to also get a handle on identity and secrets management.”
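Secrets hygiene is one concrete piece of that picture. As a minimal sketch, the Python check below scans a working tree for obviously hardcoded credentials before a pipeline proceeds; the patterns are illustrative only, and real secret scanners use far larger rule sets plus entropy analysis.

```python
# Minimal sketch of one piece of secrets hygiene in a CI/CD toolchain:
# scanning the working tree for obvious hardcoded credentials before the
# pipeline runs. The patterns below are illustrative, not exhaustive.
import re
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic credential assignment": re.compile(
        r"(?i)(api_key|token|password)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}


def scan(root: str = ".") -> int:
    """Print every suspicious match under root and return the number of hits."""
    hits = 0
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for _ in pattern.finditer(text):
                print(f"{path}: possible {name}")
                hits += 1
    return hits


if __name__ == "__main__":
    # A non-zero exit fails the pipeline step so leaked secrets block the build.
    sys.exit(1 if scan() else 0)
```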

Casey Ellis, Founder at Bugcrowd, underlined that organizations are misaligning defenses, including those driven by AI, with where attackers are actually succeeding. “Fully integrated suites are great starting points for teams with less experienced staff, or those placing higher priority in other focus areas,” added Trey Ford, Bugcrowd’s Chief Strategy and Trust Officer. 

“Organizations need to combine data from endpoint scanning, browser extensions, ERP and expense systems, and SaaS usage logs to produce a single, correlated view,” commented Randolph Barr, Chief Information Security Officer at Cequence Security, on shadow AI and visibility.

“The organizations that thrive will be those that use AI to enhance human capability, not replace it, by embedding security intelligence directly into everyday workflows,” said Barr.
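As a loose sketch of the "single, correlated view" Barr describes, the snippet below merges per-user sightings of AI tools from different telemetry feeds. The source names and record fields are invented for illustration.

```python
# Minimal sketch: correlate shadow-AI signals from several telemetry sources
# into one per-user view. Source names and record fields are hypothetical.
from collections import defaultdict


def correlate(*sources: tuple[str, list[dict]]) -> dict[str, dict[str, set[str]]]:
    """Group per-user AI-tool sightings by the telemetry source that saw them."""
    view: dict[str, dict[str, set[str]]] = defaultdict(lambda: defaultdict(set))
    for source_name, records in sources:
        for record in records:
            view[record["user"]][source_name].add(record["ai_tool"])
    return view


if __name__ == "__main__":
    endpoint_scan = [{"user": "alice", "ai_tool": "unapproved-assistant"}]
    saas_logs = [
        {"user": "alice", "ai_tool": "unapproved-assistant"},
        {"user": "bob", "ai_tool": "sanctioned-copilot"},
    ]
    merged = correlate(("endpoint_scan", endpoint_scan), ("saas_logs", saas_logs))
    for user, sightings in merged.items():
        print(user, {source: sorted(tools) for source, tools in sightings.items()})
```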

