New AI-Native Threat: Vulnerability in Google Gemini Enterprise and Vertex AI Search Allowed Stealing Gmail, Docs, and Calendar Data

Published
Written by:
Lore Apostol
Lore Apostol
Cybersecurity Writer

Key Takeaways

A significant Google AI security flaw has been discovered by security firm Noma Labs, highlighting a new category of AI-native cybersecurity threats. The vulnerability, named GeminiJack, affected Google Gemini Enterprise and Vertex AI Search, allowing attackers to access and exfiltrate corporate data through a novel, zero-click attack vector. 

An attacker could embed malicious instructions in a shared Google Docs, email, or calendar invite, which the AI would later execute during a routine employee search.

The Mechanics of the GeminiJack Vulnerability

The Noma Security report says the flaw was not a conventional bug but an architectural weakness in how the AI's Retrieval-Augmented Generation (RAG) system processed information, making it susceptible to indirect prompt injection. 

Google Gemini Enterprise Configurations tab to select the model
Google Gemini Enterprise Configurations tab to select the model | Source: Noma Security

Attackers could poison a document with hidden commands instructing the AI to search for sensitive terms such as "budget," "finance," or "acquisition," and then load the results into an external image URL under the attacker's control. 

When an employee performed a relevant search, the AI would retrieve the malicious document, interpret the hidden instructions as legitimate commands, and search across all connected Workspace data sources—including Gmail, Calendar, and Docs. The GeminiJack attack enabled a silent, zero-click data breach, as Gemini treated the embedded instruction as a legitimate command.

Google Gemini Enterprise Connected data store sources for RAG
Google Gemini Enterprise Connected data store sources for RAG | Source: Noma Security 

The process would appear as normal traffic to traditional security tools. This meant confidential data could be stolen without triggering any alarms.

Google didn't filter HTML output, which means an embedded image tag triggered a remote call to the attacker's server when loading the image,” said Sasi Levi, Security Research Lead at Noma Security. 

The URL contains the exfiltrated internal data discovered during searches. Maximum payload size wasn't verified; however, we were able to successfully exfiltrate lengthy emails.

Coordinated Disclosure and Mitigation 

Following responsible disclosure practices, Noma Labs reported the vulnerability to Google on May 6, 2025. Google's security team acknowledged the issue and worked with the researchers to implement a fix, which was deployed after a thorough investigation. 

Recommendations to organizations:

The mitigation addressed the core issue of how the RAG system differentiates between content and instructions. While this specific issue is resolved, GeminiJack serves as a critical warning about the evolving security landscape as AI becomes more deeply integrated with sensitive enterprise data.

In a similar case discovered a few months ago, a vulnerability in Salesforce Agentforce dubbed ForcedLeak exposed CRM data through indirect AI prompt injection.


For a better user experience we recommend using a more modern browser. We support the latest version of the following browsers: For a better user experience we recommend using the latest version of the following browsers: