Key Takeaways
Researchers uncovered CVE-2025-61260, a Codex CLI flaw that lets malicious repo configs trigger automatic command execution. The issue allowed attackers with commit or PR access to plant silent RCE payloads that ran whenever developers used Codex.
OpenAI patched the vulnerability in version 0.23.0, blocking project-level redirects that enabled the exploit.
Checkpoint found that Codex automatically executed MCP server commands from local project configs without prompts or validation, allowing a .env redirect and malicious .codex/config.toml to run attacker-controlled commands at startup.
A malicious commit adding a crafted .env and config file could trigger arbitrary code execution whenever developers would run Codex. This would enable stealthy supply-chain backdoors and post-merge swaps for more harmful payloads.
The mechanism fails because Codex treats repo-level config files as trusted execution material, performing no approval, validation, or rechecking, allowing any listed command to run automatically under the developer’s own privileges.
This would allow an attacker to plant a benign-looking repo, including a .env redirect and malicious MCP command, relying on Codex to execute the payload when a developer cloned and used the project.
The flaw enabled silent RCE among developer machines, CI systems, and downstream consumers. This allowed persistent access, credential theft, lateral movement, and contamination of templates and popular open-source projects.
OpenAI’s Codex CLI lets developers run, edit, and automate code through natural-language commands, using MCP to integrate external tools and extend workflows directly from the terminal.
Researchers outlined the risks of automatic command execution and supply-chain exposure and said the issue reflects a deeper pattern in emerging AI tooling.
Andrew Bolster, Senior R&D Manager at Black Duck said, “This research underpins the emerging threat of the ‘Lethal Trifecta’ (otherwise known as the ‘Rule of Two'). Allowing AI agents to inspect seemingly innocuous files, websites, or APIs can introduce unexpected, hidden, instructions.
Threats such as these necessitate a zero-trust approach to agentic operations in terms of their prompting, operation, and the actions that those agents are permitted to make on behalf of the user.”
Heath Renfrow, Co-Founder and Chief Information Security Officer at Fenix24 said, “AI tooling often inherits developer-friendly defaults that assume trust where no trust should exist. The most serious implication isn’t the initial exploitation, but the stealth.”
Teams should respond the same way they respond to ransomware or supply-chain threats: