A growing cyber threat known as AI recommendation poisoning was identified in attacks designed to manipulate the behavior of Large Language Models (LLMs). This technique involves injecting unauthorized data or instructions into an AI's processing stream, effectively "poisoning" its output.
Meanwhile, on February 10, Microsoft published fixes for actively exploited Windows shell and Office one-click vulnerabilities tracked as CVE-2026-21510 and CVE-2026-21513. The first allows bypassing Microsoft’s SmartScreen feature when users click a malicious link, and the latter could compromise the target machine when the user opens a malicious Office file.
Microsoft has issued a formal warning regarding this critical vulnerability in how AI assistants ingest and process external data, with Microsoft Defender Security Team researchers identifying more than 50 unique manipulative prompts deployed by 31 companies across 14 industries.
The attack method exploits "Summarize with AI" buttons and shareable links common on modern websites. Attackers embed hidden commands within the query parameters of these URLs, the report said.
When a user clicks the link to generate a summary, the AI receives both the article and the hidden manipulative prompt. This forces the chatbot to generate manipulated AI outputs that reflect the attacker's desired bias, tone, or specific misinformation.
Because the prompt is encoded in the URL, the manipulation often goes unnoticed by the user, who assumes the summary is an objective reflection of the source material.
If an AI assistant treats the injected instruction as a user preference, such as "always summarize financial news positively," it may store it as a persistent rule. This results in AI memory poisoning, where subsequent legitimate queries are biased by the earlier attack.
Memory poisoning can occur through several vectors, including:
This type of attack hidden in summarization prompts targets the AI's saved preferences. To mitigate these risks, Microsoft advises users to:
In Microsoft 365 Copilot, saved memories can be reviewed via Settings → Chat → Copilot chat → Manage settings → Personalization → Saved memories. Then, select “Manage saved memories” to view or remove memories or turn off the feature entirely.
October 2025 research on LLM data poisoning showed that LLMs can be poisoned by small samples. TechNadu last month reported that a Google Gemini prompt injection flaw allowed the exfiltration of private data via Calendar invites, and a similar case involved Anthropic Claude.