Security researchers have uncovered 15 vulnerabilities in OpenClaw, the rapidly growing open source platform used to run AI agents with access to systems and data. The flaws involve authentication and access control checks that govern who can interact with AI agents and trigger tool execution.
The disclosures add to concerns about agent frameworks being deployed widely before their security controls have been adequately tested.
OpenClaw is an open source AI agent platform that connects to messaging services including Slack, WhatsApp, Telegram, and Matrix. Agents can receive and act on messages through chat and voice interfaces, interact with other agents, and, inside enterprise environments, access files, run commands, and use API keys.
Rapid adoption has made it one of the most-watched new projects in AI, and it drew further attention after its creator joined OpenAI. The project has around 200,000 GitHub stars, with millions of users believed to be running it.
AISLE says it discovered 15 vulnerabilities in OpenClaw in recent weeks, a figure amounting to about 21% of all disclosed OpenClaw security advisories so far. The firm says one issue is rated critical with a CVSS 3.0 score of 9.4, and nine are rated high severity. The flaws have been fixed.
Among the flaws is GHSA-4rj2-gpmh-qq5x, a critical authentication bypass in the agent's voice-call extension that requires no user interaction or privileged access. Weaknesses in caller ID handling could allow any caller to bypass allowlist checks, enabling unauthorized access to OpenClaw's tool execution features.
Another high-severity flaw, GHSA-pchc-86f6-8758, could allow unauthorized chat participants to trigger the agent pipeline.
Several of the vulnerabilities highlight access control failures: besides bypassing allowlists, they allowed attackers to circumvent approval and identity checks. Other issues involve chat platform integrations, command execution controls, and webhook authentication.
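Webhook authentication, one of the control areas cited, typically rests on a shared-secret signature over the request body. As a generic illustration (not OpenClaw's implementation; the secret and header name are hypothetical), a receiver recomputes the HMAC and compares it in constant time:

```python
import hashlib
import hmac

# Hypothetical shared secret agreed with the webhook sender.
SECRET = b"hypothetical-shared-webhook-secret"

def verify_webhook(body: bytes, signature_header: str) -> bool:
    # Recompute HMAC-SHA256 over the raw request body and compare it
    # to the sender-supplied signature with a constant-time comparison,
    # avoiding timing side channels.
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

body = b'{"event": "message"}'
good_signature = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
print(verify_webhook(body, good_signature))  # accepted
print(verify_webhook(body, "forged"))        # rejected
```

Skipping or weakening this check lets any party on the network deliver events that the agent pipeline treats as authentic.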
Moderate-severity issues include an SSRF pathway, command injection in maintainer tooling, secret leakage, and additional authorization bypasses. AISLE, which used its AI system to discover and disclose these flaws, says more vulnerabilities have been reported and are still being addressed.
AISLE researchers are credited with 15 of the 73 disclosed OpenClaw security advisories, making the firm the largest single source of published advisories for the project so far.
Researchers have expressed concern over the agent's rapid adoption: OpenClaw is being deployed in production even as its security model evolves, and agent tooling can expand an attack surface quickly. The findings matter most to enterprises using OpenClaw in internal workflows or customer-facing services.
Because AI agents can hold credentials and execute actions across infrastructure, weak access controls can open the door to remote compromise. Stronger security controls are imperative: flaws in AI agents will continue to be identified and patched, but the broader concern is what remains undiscovered.