The Rise of Malicious AIs: WormGPT 4 Emerges as a Powerful AI Tool for Cybercrime with Subscriptions Starting at $50

Written by: Lore Apostol, Cybersecurity Writer

Key Takeaways

A new version of a criminally focused large language model (LLM), WormGPT 4, is being advertised on underground forums, offering sophisticated cybercrime capabilities at a low cost. The model is marketed as an AI "without boundaries," capable of generating malicious code on demand.

The emergence of such models highlights a significant shift in the cybercrime landscape, where AI is now being purpose-built for offensive operations.

WormGPT 4 AI Malware Generation

According to research from Palo Alto Networks' Unit 42, subscriptions start at just $50 for monthly access, with a lifetime license available for $220, making these powerful malicious AI tools accessible to a wide range of threat actors. 

WormGPT ad found on Hack Forums | Source: Palo Alto Networks' Unit 42

Unlike mainstream models, which include safeguards to prevent malicious use, WormGPT 4 is built with malware generation as its primary function. Researchers demonstrated its ability to create a fully functional PowerShell ransomware script when prompted.

WormGPT 4 generates a rudimentary ransomware script impacting PDF files | Source: Palo Alto Networks' Unit 42

The generated code can encrypt all PDF files on a Windows host, drop a ransom note with a 72-hour payment deadline, and even include an option to exfiltrate data via Tor.

The advertisement also lists a broader set of malicious outputs that WormGPT 4 claims to be able to generate.

Implications of Cybercrime AI Models

The availability of cybercrime AI models like WormGPT 4 and the free alternative KawaiiGPT represents a serious escalation in AI-driven threats. These tools automate critical steps in the attack lifecycle, from social engineering to code generation and lateral movement. 

While the output from these models may still require some human tweaking to bypass advanced security protections, they serve as powerful building blocks for complex attacks. 

This trend points toward a future where AI-assisted campaigns become more common, requiring security professionals to adapt their defensive strategies to counter machine-generated threats.

Recent research showed that LLMs can be poisoned with only a small number of samples, and the first AI-powered ransomware, which uses the Ollama API for dynamic attacks, was reported in August.

The FunkSec ransomware group appeared to use AI assistance in developing its malware. Meanwhile, the AI-powered bot AkiraBot was seen bypassing CAPTCHA checks to target websites with spam at scale, and AI-powered cloaking services were observed in phishing campaigns.

