
In this conversation, we speak with Satyam Sinha, CEO and co-founder of Acuvity—a company built to address the next generation of threats in AI-driven environments. GenAI's natural language interface is both its superpower and its security blind spot.
Authorization, as Acuvity’s founder notes, remains one of the most complex and unresolved challenges in this space. Artificial Intelligence has created new attack vectors, and adversaries follow the path of least resistance.
The interview reveals that GenAI connects multiple enterprise data sources, complicating reliable authorization controls. Attackers are directly targeting AI systems, introducing new attack vectors besides prompt injection, data poisoning, and multilingual payloads.
In one case, a senior insurance executive asked a GenAI assistant for a weekly summary and received information pulled from his manager’s private files. It was data he wasn’t authorized to see.
Employees are unknowingly introducing risk by uploading client data, credentials, or internal assets into third-party GenAI tools. If your AI assistant can reach across systems, your security strategy needs to do the same.
This conversation with Acuvity explains how organizations can secure GenAI from the ground up, starting with visibility, identity-aware controls, and agent-level guardrails.
Vishwa: You’ve worked across cloud security, identity, and engineering leadership for nearly two decades. What gaps or shifts did you observe that led you to focus specifically on GenAI security, and what problem is Acuvity solving?
Satyam: Throughout my career, I have held various engineering leadership roles that put me at the leading edge of innovation. When the new breed of models using transformers emerged, I immersed myself in modern artificial intelligence (AI) and machine learning (ML) frameworks—from pretraining and fine-tuning to building vector databases and retrieval-augmented generation (RAG) systems.
That hands-on experience made it abundantly clear that AI and Generative AI (GenAI) introduce an entirely new threat landscape that traditional security models are not built to handle.
GenAI defies classical cybersecurity—it's not just about users, but also about AI agents running both in the cloud and on endpoints. Identity and attribution become extremely important.
In addition, the core component is the LLM, which can be tricked using language constructs to become misaligned and produce unintended consequences. There are a ton of new concerns introduced by GenAI systems.
What truly stood out for me was the pace and nature of GenAI adoption. Unlike previous waves like SaaS or Cloud, where security frameworks evolved in parallel with adoption, GenAI is being embraced by developers and knowledge workers ahead of any formal enterprise strategy. Security lags behind—and that is a dangerous gap.
Having seen how categories like CASB and CSPM emerged to secure past technology shifts, I knew GenAI would require a similar ground-up rethink.
That’s why I co-founded Acuvity in 2023 after leading engineering at Palo Alto Networks for several years. Acuvity is purpose-built to help organizations adopt GenAI securely without slowing down innovation. We give enterprises the confidence, clarity, and control they need to scale GenAI responsibly across their stack.
Vishwa: What types of unsanctioned GenAI tools are slipping into enterprise environments, and how are adversaries already abusing this “Shadow AI” wave for silent data access or lateral movement?
Satyam: GenAI appeals to prosumers because it improves their productivity. Whether you're a developer, marketer, sales rep, or in HR—there are GenAI products that help you do your job better.
From my perspective, “Shadow AI” refers to any unsanctioned use—whether it's users leveraging AI-capable functionality available through services, web extensions, code editor plugins, desktop applications, internal agents/applications, or even LLMs running on your enterprise’s cloud, on a cheaper GPU cloud, or even on a laptop.
There are also open-source projects, frameworks, and platforms that developers and AI enthusiasts are experimenting with.
At Acuvity, we have risk profiles of more than ten thousand GenAI services, extensions, plugins, and applications. This helps enterprises understand their exposure and the risks these tools introduce.
As we saw with Grok4 this week and Deepseek earlier in the year, vulnerabilities can lead to issues ranging from known prompt injections to system prompt leakage and the inability to block a single harmful prompt during independent security assessments—making them highly susceptible to jailbreaks.
In the case of Deepseek, a publicly accessible database linked to it exposed “a significant volume of chat history, backend data, and sensitive information.”
Compensatory control and exploit prevention techniques are essential for mitigating prominent vulnerabilities—this is where products like Acuvity come in to safeguard users and agents.
Bottom line: the enterprise attack surface has grown dramatically, and we’re now seeing breach examples emerge weekly—sometimes even daily.
Vishwa: How are adversaries using this?
Satyam: There are many examples, but let me share a couple: New AI unicorns are emerging, one of them being Cursor, which has forked VS Code—the most popular code editor. Microsoft doesn’t allow forks to use extensions from its marketplace, so Cursor uses a community-managed marketplace.
In a post by Fabio Ciucci, a crypto enthusiast using Cursor—who thought they were using a harmless syntax highlighter—lost $500K and had to hire a security firm just to figure out what happened.
Earlier, we saw a similar incident at Disney: stolen messages, leaked confidential data, and eventually an employee losing their job.
Knowledgeable analysts like Lawrence Pingree, VP of Emerging Technologies – Security and Risk at Gartner, are actively educating the public via platforms like LinkedIn.
The world is just beginning to understand the risks around GenAI—and it will require ground-up thinking. Band-aids won’t work.
Vishwa: We’re seeing indicators that LLM misuse is already happening inside organizations, from stealth data scraping to poisoned prompt injection. Where are adversaries investing their effort right now, and what can enterprises do to make these environments more defensible?
Satyam: AI has created new attack vectors, and adversaries follow the path of least resistance. Many of the breaches being disclosed today are surprisingly simple and preventable, but most enterprises haven’t yet implemented a security strategy for AI usage.
Adversaries are using AI in two primary ways: first, to run higher-volume conventional attacks; and second, to target AI systems directly, which have newly exposed threat vectors that are not yet widely understood. These attacks are only going to grow, so enterprises must develop and execute a proactive defense strategy.
Adversaries are increasingly injecting malicious or misleading data into training datasets—corrupting AI outputs and potentially causing harmful or biased behavior. Seventy-three percent of enterprises have reported at least one AI-related breach, with data poisoning cited as a growing concern.
Organizations now understand that AI is here to stay. For a successful AI transformation, they need to adopt a ground-up approach to security. A large new attack surface has been created—and they must seek state-of-the-art solutions for a long-term, defensible strategy.
Identity—both for users and agents—along with attribution and context, will be essential for effective security and forensic investigations.
Vishwa: If an attacker compromises a GenAI application stack, say through prompt injection or hijacked memory, what would a full kill chain look like? And how would that attack be reconstructed during a forensic investigation?
Satyam: I must say—this is a very insightful question.
AI Killchains: A traditional kill chain follows seven steps, and those steps still apply in the GenAI world, as it is ultimately an application layer.
However, newer GenAI-specific attacks can look different. There may be no need to trick a user or set up command-and-control infrastructure.
Scenario: An employee has been using a GenAI agent to read, compose, write, act on, send, forward, or delete emails.
Example 1: Direct Automated Prompt Injection
Attack: An external adversary sends an email to the inbox with a prompt injection, instructing the agent to send confidential information to an external address.
Kill Chain: External email sender → AI reconnaissance → Injection at the email layer.
Outcome: The agent sends out the sensitive information without recognizing its confidentiality.
Example 2: Indirect Prompt Injection
Attack: A user asks the email assistant to summarize messages since their last login. If a malicious email is present and crafted to manipulate the AI, it could cause the assistant to reveal sensitive information to an external party.
Kill Chain: Malicious email sender → Embedded attack → Triggered by summary request → Agent performs harmful action.
Forensics / Replay / Reconstruction: Reconstructing attacks in AI environments is difficult because LLMs are inherently stochastic—they don’t always produce the same output from the same input.
A good GenAI security architecture needs to identify users and agents, attribute their actions, and maintain strong auditability. Only then can forensic teams investigate and resolve incidents effectively.
Vishwa: If a CISO asked what new GenAI-specific telemetry they should start collecting, or if DevSecOps teams wanted to enhance GenAI workflows, what foundational security layer would you recommend as a starting point?
Satyam: From a telemetry standpoint, having a clear inventory of AI components, tech stacks, and deployment methods is critical.
You also need a strong security blueprint. Understanding what the AI is accessing—and where it’s going—is vital for determining its impact on data sources and LLM behavior.
Next, security observability is key. I recommend CISOs implement identity-aware auditability (for both users and agents) across all interactions with data, tools, and LLMs. This type of audit trail enables effective forensic analysis and provides the context needed to act on security events.
Finally, runtime controls and guardrails—covering everything from prompt injection and jailbreaks to goal-breaking—are essential. These defenses must also account for false positives and negatives.
Vishwa: As GenAI use grows, what fresh categories of security incidents are emerging, especially ones involving unintended data exposure, malicious output injection, or AI-assisted social engineering? How must incident response evolve to detect and handle these risks?
Satyam: Over the years, we’ve educated developers about handling credentials safely—for example, never checking credentials into repositories like GitHub. But now, with tools like GitHub Copilot or Cursor, those credentials can inadvertently be uploaded to GenAI services, where they might be logged or stored.
If an attacker compromises even the logs of such a service, they may gain access to sensitive secrets.
These playbooks should detail step-by-step procedures for containment, eradication, and recovery.
Vishwa: What specific anomalies should SOC teams watch for to detect GenAI misuse early? Are there signals like unexpected query volumes, token spiking, or irregular interaction patterns that suggest the AI layer is being exploited?
Satyam: SOC teams need to monitor a broad range of indicators, including:
They must also look for combinations of these signals, which may indicate more sophisticated or multi-stage threats.
At Acuvity, we’re actively tracking these signals and continually evolving what ground-up AI security looks like.
Vishwa: If autonomous agents were granted wide-reaching access across GitHub, Jira, Slack, or billing systems, what kind of unintended consequences would you worry about in an enterprise environment?
Satyam: This is one of the biggest challenges in GenAI: authorization.
GenAI connects multiple data sources across your enterprise, but unlike traditional software, the interface is natural language—making authorization extremely difficult to implement reliably.
Organizations need to design a security blueprint for agentic applications and apply the principle of limiting the blast radius. For example, don’t grant a single agent access to GitHub, JIRA, Slack, and billing systems.
A router agent to delegate queries to:
This segmentation ensures, for instance, that an HR-related agent never interacts with billing systems. Granular objectives and permissions reduce risk.
Example: If a ticketing agent is tricked into creating another ticket based on content in an email, that’s privilege escalation. If the same agent sends a follow-up email, that’s both privilege escalation and anomalous behavior.
Vishwa: Have red teams or attackers started leveraging GenAI to scale offensive operations? Are you seeing signs of real-time phishing payloads, OSINT automation, or impersonation content that security engineering or threat intelligence teams should be preparing for?
Satyam: Absolutely. Red teamers and attackers are already leveraging GenAI in real-world campaigns.
For example, GenAI-powered agents can generate hyper-personalized, grammatically correct, and highly persuasive phishing emails. We’re also seeing multilingual payloads designed to evade traditional NLP filters and use real-time context to increase success rates.
On the OSINT side, attackers are using LLMs and agentic frameworks to process large volumes of public data—from social profiles to GitHub repositories—turning them into highly targeted reconnaissance.
Impersonation is another serious concern. Voice cloning, fake profiles, and deepfakes (video and image-based) are now part of the attacker’s toolkit.
Even code and exploit generation is evolving. Adversaries are using GenAI to convert vulnerabilities into working proofs-of-concept (PoCs), rapidly prototype malware, and scale attack development.
We’ve entered a new era—one with a much higher volume of threats. Security teams need to prepare now.
Vishwa: What are the biggest “oh no” moments organizations face after deploying GenAI into production, and what security guardrails would have helped them sidestep these missteps?
Satyam: Most “oh no” moments occur when employees unknowingly share sensitive data with third-party GenAI services—this includes secrets, passwords, personally identifiable information (PII), and intellectual property like documents, research, or even Zoom recordings and whiteboard images.
Take a consulting firm, for example. They’re legally prohibited from uploading client data to external systems, yet employees sometimes upload meeting notes or images for productivity gains—violating legally binding contracts.
In agentic frameworks, the problem is even more pronounced. Enterprises often rely on RBAC (role-based access control) to protect siloed data, but many AI tools fail to enforce those controls properly. We’ve seen services like Microsoft Copilot return information that clearly violated RBAC.
Handling entitlements correctly is critical. A senior executive at a major insurance company once told me that after asking his GenAI assistant to “catch me up on what happened last week,” the agent responded with information pulled from his manager’s private documents—data he wasn’t authorized to see.
Another common misstep? Discovering after deployment that customer data was exposed to an LLM that should never have had access in the first place. Enterprises often lack visibility into how answers were generated—what agent was used, what data was accessed, which LLMs were queried, and whether the response was accurate.
The solution is robust guardrails, granular entitlements, and deep observability—built from the start.