AI-Native App Boom Creates Security Blind Spots and Major Risks, New Report Finds

Written by: Lore Apostol, Cybersecurity Writer

Key Takeaways

A new report on AI-native application security reveals that the rapid integration of artificial intelligence in enterprise environments is creating critical security blind spots. The report, which surveyed 500 security practitioners, indicates that 63% believe AI-native applications are more susceptible to threats than traditional applications.

AI Adoption Outpaces Enterprise Security

This rush to adopt LLMs and generative AI has outpaced the capabilities of security teams, leaving organizations exposed to a new class of vulnerabilities. 

According to the "State of AI-Native Application Security 2025" report from Harness, on average, 61% of new applications are designed with AI components.

AI-native apps built with AI components | Source: Harness

The Growing Challenge of Shadow AI

The proliferation of unauthorized AI use, termed "shadow AI," is a primary concern. The research found that 75% of security leaders believe the security issues caused by shadow AI risks will soon eclipse those of shadow IT. 

AI-native app security issue familiarity | Source: Harness

This is compounded by a lack of visibility, as 62% of security teams have no way to track where LLMs are deployed within their infrastructure. This creates a significant blind spot, making it difficult to monitor API traffic, data flows, and access controls for AI components. 
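As an illustration of what such tracking could involve, the minimal sketch below flags outbound requests to well-known LLM API hostnames in a proxy log. The log format, file path, and hostname list are assumptions for illustration, not details from the Harness report.

```python
# Minimal sketch: flag egress traffic to known LLM API endpoints in proxy logs.
# The CSV log format, hostname list, and file path are illustrative assumptions,
# not taken from the Harness report.
import csv
from collections import Counter

# Public API hostnames of popular LLM providers (non-exhaustive).
LLM_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.mistral.ai",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (source_ip, llm_host) pair in a CSV proxy log
    with columns: timestamp, source_ip, dest_host."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].strip().lower()
            if host in LLM_HOSTS:
                hits[(row["source_ip"], host)] += 1
    return hits

if __name__ == "__main__":
    for (src, host), count in scan_proxy_log("proxy.csv").most_common():
        # Each hit is a candidate for shadow AI review, not proof of misuse.
        print(f"{src} -> {host}: {count} requests")
```

Even a crude inventory like this gives security teams a starting point for the API traffic, data flow, and access control reviews the report says are currently out of reach.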

The report also highlights a breakdown in collaboration between development and security teams, which exacerbates these risks. A majority of respondents (74%) said developers often view security as a blocker to innovation, leading them to bypass established governance processes and contributing to the rise of shadow AI.

AI Security Recommendations

The report notes that most organizations have already suffered security incidents related to LLM vulnerabilities, including prompt injection (76%), vulnerable code (66%), and jailbreaking (65%).
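To make the first of these concrete, the toy sketch below shows a common mitigation pattern: keeping trusted instructions separate from untrusted user input and screening for obvious override phrases. The patterns and function names are illustrative assumptions, not recommendations from the report.

```python
# Toy sketch of a prompt-injection screen: keep trusted instructions separate
# from untrusted input and reject obvious override attempts before they reach
# the model. Patterns and names are illustrative, not from the report.
import re

SYSTEM_PROMPT = "You are a support bot. Answer only questions about our product."

# Naive deny-list of common injection phrasings (real defenses need far more).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"reveal (your|the) (system )?prompt", re.IGNORECASE),
]

def build_messages(user_input: str) -> list[dict]:
    """Reject suspicious input, otherwise keep it in a separate 'user' role
    so it is never concatenated into the trusted system prompt."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            raise ValueError("possible prompt injection detected")
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]
```

Deny-lists like this are easily evaded, which helps explain why prompt injection tops the report's list of incident types despite being a well-known attack.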

Furthermore, only 43% of organizations report that their developers consistently build AI-native applications with security integrated from the start. This points to a critical need to implement DevSecOps practices for AI.
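One small piece of such a pipeline might be an automated pre-merge gate, sketched below, that fails the build when hardcoded AI provider keys appear in source files. The regex patterns and file handling are simplified assumptions; a real pipeline would use a dedicated secret scanner.

```python
# Minimal sketch of a CI gate that fails the build if AI provider API keys
# appear in source files. Patterns are simplified assumptions; production
# pipelines should use a dedicated secret scanner instead.
import re
import sys
from pathlib import Path

# Simplified token formats for illustration only.
KEY_PATTERNS = {
    "OpenAI-style key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),
}

def scan(root: str) -> int:
    """Return the number of suspected hardcoded secrets under root."""
    findings = 0
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for label, pattern in KEY_PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {label}: {match.group()[:12]}...")
                findings += 1
    return findings

if __name__ == "__main__":
    sys.exit(1 if scan(".") else 0)  # nonzero exit blocks the merge in CI
```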

These findings align with other recent reports highlighting that security gaps are forcing firms to rethink AI adoption, that cloud adoption is outpacing security readiness, and that API security lags as AI adoption accelerates.

The most recent report warned that 65% of the top AI 50 companies leaked sensitive data on GitHub, including API keys and tokens.

