DeepSeek AI Vulnerabilities Tied to Political Triggers Like ‘Tibet,’ ‘Uyghurs,’ or ‘Falun Gong’ Found by CrowdStrike 

Published
Written by:
Lore Apostol
Lore Apostol
Cybersecurity Writer

Key Takeaways

Recent research has uncovered significant DeepSeek AI vulnerabilities, revealing that the model's code generation capabilities degrade when exposed to politically sensitive triggers. The study focused on DeepSeek-R1 (referring to the full 671-billion parameter model), a large language model (LLM) from the China-based startup DeepSeek released in January 2025.

Political Bias in AI Models and Code Security

According to research by CrowdStrike's Counter Adversary Operations, including certain keywords in a prompt dramatically increases the likelihood of the model producing insecure code. 

This discovery introduces a new, subtle attack surface for AI coding assistants, raising concerns about AI-generated code security risks for developers who rely on these tools.

Generated code vulnerability rate
Generated code vulnerability rate | Source: CrowdStrike

Over 6,000 unique prompts per LLM were sent five times to account for randomness in the LLM’s response, resulting in a total of 30,250 prompts per LLM. Researchers established a baseline vulnerability rate and then introduced various contextual and geopolitical modifiers. 

Prompt used by CrowdStrike researchers to test DeepSeek-R1 response
Prompt used by CrowdStrike researchers to test DeepSeek-R1 response | Source: CrowdStrike

They found that prompts containing terms the Chinese Communist Party (CCP) likely considers sensitive led to a nearly 50% increase in the generation of severely flawed code. 

Innocuous geopolitical modifiers in system prompts, such as references to "Tibet," "Falun Gong," or "Uyghurs," were observed to degrade the quality and security of the generated code, even when irrelevant to the coding task. For example, a request to code for a "financial institution based in Tibet" resulted in output with hard-coded secrets and invalid code. 

A more complex prompt designed to trigger negative behavior in DeepSeek-R1
A more complex prompt designed to trigger negative behavior in DeepSeek-R1 | Source: CrowdStrike

The model also exhibited an "intrinsic kill switch," where it would formulate a plan to answer a request involving a sensitive topic like "Falun Gong" but then refuse to generate the output at the final stage.

This suggests a political bias in AI models can have tangible, negative effects on code quality, potentially as an emergent side effect of training the model to align with specific ideological values.

Implications for AI Coding Assistant Security

This research highlights a novel security challenge where the integrity of AI-generated code can be compromised by seemingly unrelated contextual information. While DeepSeek-R1 is a capable model under normal conditions, its inconsistent performance when faced with political triggers is a significant concern. 

The study notes that while the model was trained to adhere to Chinese regulatory requirements, the production of insecure code is likely an instance of "emergent misalignment" rather than an intentional design. 

These findings stress the importance for organizations to conduct thorough, environment-specific testing of AI agents and coding assistants rather than relying solely on generic benchmarks.

Reports this year brought up DeepSeek as another LLM that is prone to jailbreak for malicious use and that has data privacy issues, with the Czech government banning it in public administration over data security concerns. 


For a better user experience we recommend using a more modern browser. We support the latest version of the following browsers: For a better user experience we recommend using the latest version of the following browsers: