A security flaw dubbed the DockerDash vulnerability exposes a severe weakness in how AI agents interpret contextual data. It affects Docker's "Ask Gordon" AI assistant, allowing attackers to influence execution flow from AI interpretation to Model Context Protocol (MCP) execution.
In modern AI architectures, the MCP acts as a bridge between the LLM and the local environment (files, Docker containers, databases), providing the "context" AI needs to answer questions.
Noma Labs researchers say the Docker environment can be compromised via a single malicious metadata label in a Docker image through a simple three-stage attack, each stage with zero validation:
The core issue, termed "Meta-Context Injection," occurs because the AI fails to distinguish between benign metadata and malicious instructions, processing both as legitimate tasks. The Ask Gordon AI flaw presents two distinct attack paths depending on the deployment environment.
In Cloud or CLI environments, the vulnerability can lead to Remote Code Execution (RCE). Attackers can embed multi-step command sequences in image labels, which the AI interprets and the MCP Gateway executes without validation.
In Docker Desktop environments, where permissions are restricted to read-only, the flaw becomes a high-impact data exfiltration vector. Attackers can coerce the AI to gather sensitive reconnaissance data, such as network topology and environment variables, and transmit it externally via user-provided image URLs.
Following responsible disclosure, a critical Docker security update was released to address these risks. Docker Desktop version 4.50.0 blocks rendering of images with user-provided URLs to prevent data exfiltration and introduces a mandatory "Human-in-the-Loop" mechanism that requires explicit user confirmation before the AI invokes any MCP tools.
Jason Soroko, Senior Fellow at Sectigo, highlighted that the 4.50.0 patch neutralizes this "cascading trust" failure by enforcing a human-in-the-loop requirement, “ensuring that no AI-driven tool execution occurs without explicit user confirmation.”
David Brumley, Chief AI and Science Officer at Bugcrowd, said this flaw shows how “companies are developing new AI products, but not checking guardrails for prompt injection.”
Ronald Lewis, Senior Manager, Security Compliance, and Auditing at Black Duck, said that as the boundaries of the attack surface have become a little "fuzzy," organizations should:
Security professionals are advised to upgrade immediately, as this vulnerability underscores the need to implement zero-trust validation for all contextual data provided to AI models.
This discovery highlights a growing crisis in AI supply chain security, where trusted input sources can be manipulated to compromise systems. In November, Chinese state-sponsored GTG-1002 leveraged Claude AI and MCP for cyberespionage.