Chinese State-Sponsored Group GTG-1002 Leverages Claude AI and MCP for Cyberespionage Targeting Dozens of Organizations

Written by:
Lore Apostol
Cybersecurity Writer

Key Takeaways

The Chinese state-sponsored group GTG-1002 leveraged Claude Code and Model Context Protocol (MCP) to orchestrate coordinated attacks against high-profile technology companies, financial institutions, chemical manufacturers, and government entities across multiple geographies. 

Autonomous Attack Lifecycle and Human-AI Collaboration

The attackers developed a framework that enabled Claude to function as a central orchestrator, autonomously decomposing complex intrusions into discrete technical tasks. 

These included reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration—executed at speeds and scales unattainable by human operators.

Operation architecture diagram | Source: Anthropic

The attack lifecycle was divided into several structured phases, each leveraging escalating artificial intelligence (AI) autonomy while reserving human input for strategic decisions:

Attack lifecycle and AI integration | Source: Anthropic

Human operators remained minimally involved, focusing on campaign direction, authorizing key actions such as exploitation and data exfiltration, and validating AI-reported findings. 

The campaign demonstrated that 80–90% of tactical activity could be autonomously executed, drastically lowering the technical barrier for sophisticated attacks.

Technical and Strategic Implications

Notably, the threat actors relied primarily on open-source penetration-testing tools orchestrated through the Claude-MCP framework rather than on custom malware, highlighting how commoditized resources, when integrated by AI, amplify the scale of the threat. 

Despite the advanced automation, operational limitations persisted, as Claude periodically fabricated findings (“hallucinations”), necessitating human validation, and only a subset of targets were successfully breached.

Anthropic responded by banning the accounts involved, expanding its AI safeguard mechanisms, notifying impacted organizations and authorities, and emphasizing the dual-use nature of advanced AI in both attack and defense. 

The campaign underscores a pressing need for robust AI-centric defensive strategies and industry-wide collaboration to counter rapidly evolving AI-enabled threats. A recent report revealed that 65% of companies on the Top AI 50 list leaked sensitive data on GitHub, including API keys and tokens.
