
Researchers have unveiled a Promptware vulnerability in how Gemini processes inputs from integrated services. This controlled experiment exposed how AI prompt injection can be weaponized to control actions on smart home devices connected through Google Home. Â
The vulnerability exploited by researchers from prominent institutions such as Tel Aviv University and Technion relied on indirect prompt injection attacks, cited by ZDNet.Â
Malicious instructions were embedded within seemingly normal Google Calendar invites. When users asked Gemini to summarize their calendars, the AI inadvertently executed commands pre-programmed by attackers.Â
Indirect prompt injection in a Google invitation exploit Gemini for Workspace's agentic architecture to trigger the following outcomes:Â
The experiment used short-term context poisoning to trigger a one-time malicious action and long-term memory poisoning to affect Gemini’s "Saved Info," combined with tool misuse, including automatic agent and automatic app invocation to perform malicious activities.
This attack serves as a demonstration of AI's potential to manipulate real-world environments through stealthy digital hijacking. Â
While this exploit was part of a controlled study and not an active threat in the wild, it underscores the risks posed by vulnerabilities in AI systems integrated with smart devices. The capacity for AI-driven cyberattacks to manipulate physical systems raises alarms across industries reliant on interconnected devices.
In July, Google’s Gemini for Workspace exposed users to advanced phishing attacks via email summary hijack.
Google has since fortified Gemini with stronger safeguards. These include filtered outputs, explicit user confirmations for sensitive actions, and AI-driven prompt monitoring.Â
However, users can also enhance their defenses by limiting permissions for smart device controls, avoiding unnecessary integration of services with AI assistants, and staying alert for unexpected behavior from their devices. Â