AISLE Becomes the #1 Source for OpenClaw Security Disclosures
Author: Stanislav Fort
Date Published: February 16, 2026
Yesterday, OpenClaw's creator Peter Steinberger joined OpenAI, with OpenClaw itself becoming a foundation that OpenAI will continue to support. At the time of writing, the project has 200,000 GitHub stars, 3,400 open pull requests, and is probably the fastest-growing open source project in history. OpenClaw gives AI agents shell access, file system control, and API keys to your infrastructure. It connects to Slack, Telegram, and WhatsApp. Millions of people are running it, but very few have been auditing it for security issues.
We at AISLE took the initiative: for the past couple of weeks, we used our autonomous AI system to discover, disclose, and fix 15 vulnerabilities in the OpenClaw codebase, making us the single largest source of security advisories at the time of writing. In total, 15 of the 73 currently disclosed vulnerabilities, or 21%, were discovered by our AI-based analyzer, more than by any other researcher or group conducting security research into OpenClaw's codebase. One is rated Critical with a CVSS 3.0 score of 9.4 out of 10. Nine are rated High. The remaining five are Moderate.
This is a different kind of work from what we're known for. Our OpenSSL and curl findings involved cracking codebases that had been hardened by decades of expert review and millions of CPU-hours of automated fuzzing; OpenClaw is just a few months old. Unlike those codebases, which have had countless eyes on them, OpenClaw has had just a few, even as tens of thousands of deployments have gone live and exposed.
What we found
The most severe finding, GHSA-4rj2-gpmh-qq5x, is a Critical authentication bypass in OpenClaw's voice-call extension. The inbound allowlist check used suffix-based matching and accepted empty caller IDs after normalization. In practical terms, this meant an anonymous caller could bypass access controls entirely and reach the AI agent, including its tool execution capabilities, with no network restrictions, user interaction, or privileges required.
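To illustrate the bug class, here is a minimal TypeScript sketch of how suffix-based matching plus lenient normalization can become a full bypass. This is not OpenClaw's actual code; every identifier below is hypothetical:

```typescript
// Hypothetical sketch of the vulnerable pattern; identifiers are
// illustrative, not OpenClaw's actual code.
function normalizeCallerId(raw: string): string {
  // Keep digits and '+' only, so "Anonymous" normalizes to "".
  return raw.replace(/[^\d+]/g, "");
}

function isAllowedCaller(rawCallerId: string, allowlist: string[]): boolean {
  const caller = normalizeCallerId(rawCallerId);
  // Suffix matching, presumably to tolerate country-code prefixes.
  // But entry.endsWith("") is true for every entry, so a caller whose
  // ID normalizes to the empty string matches the whole allowlist.
  return allowlist.some((entry) => normalizeCallerId(entry).endsWith(caller));
}

// A safer shape: reject empty IDs and require exact matches.
function isAllowedCallerFixed(rawCallerId: string, allowlist: string[]): boolean {
  const caller = normalizeCallerId(rawCallerId);
  if (caller.length === 0) return false;
  return allowlist.some((entry) => normalizeCallerId(entry) === caller);
}

console.log(isAllowedCaller("Anonymous", ["+15551234567"]));      // true -- bypass
console.log(isAllowedCallerFixed("Anonymous", ["+15551234567"])); // false
```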
Below is a summary of all of our Critical and High findings.
| Advisory | Severity | Finding |
|---|---|---|
| GHSA-4rj2-gpmh-qq5x | Critical | Inbound allowlist bypass in voice-call extension via empty caller ID and suffix matching |
| | High | Exec allowlist bypass via command substitution and backticks inside double quotes |
| | High | Windows |
| | High | Gateway connect skips device identity checks when auth token is present but not yet validated |
| | High | Nextcloud Talk allowlist bypass via |
| | High | Twitch |
| | High | BlueBubbles webhook auth bypass via loopback proxy trust |
| | High | Access-group authorization bypass when channel type lookup fails |
| | High | Unauthenticated Nostr profile endpoints allow remote profile and config tampering |
| | High | Unauthenticated discovery TXT records could steer routing and TLS pinning |
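One instructive instance from the table above: the gateway finding is an ordering bug, where the mere presence of an auth token short-circuits device identity verification before the token itself has been validated. A minimal sketch of that class, with hypothetical names (this is not OpenClaw's gateway code):

```typescript
// Hypothetical sketch of the check-ordering bug class.
interface ConnectRequest {
  token?: string;
  deviceId: string;
}

function handleConnect(req: ConnectRequest): boolean {
  if (req.token !== undefined) {
    // BUG: treats the *presence* of a token as proof of identity and
    // skips device checks, even though validateToken() hasn't run yet.
    return openSession(req.deviceId);
  }
  return verifyDeviceIdentity(req.deviceId) && openSession(req.deviceId);
}

function handleConnectFixed(req: ConnectRequest): boolean {
  // Validate first; only a *verified* token may stand in for the
  // device identity check.
  const tokenOk = req.token !== undefined && validateToken(req.token);
  if (!tokenOk && !verifyDeviceIdentity(req.deviceId)) return false;
  return openSession(req.deviceId);
}

// Stubs so the sketch is self-contained.
function validateToken(token: string): boolean { return token === "expected-secret"; }
function verifyDeviceIdentity(deviceId: string): boolean { return deviceId === "paired-device"; }
function openSession(deviceId: string): boolean { return true; }
```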
In addition to the above, we also discovered five Moderate severity issues: an SSRF via attachment URL hydration, a command injection in the maintainer tooling, secret leakage through skills.status, and two further authentication bypasses, in the Matrix and webhook integrations.
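The attachment URL hydration SSRF is representative of how agent frameworks fetch remote content on a user's behalf. Below is a hedged sketch of one possible guard, assuming a Node-style fetch path; the names and surrounding design are assumptions, not OpenClaw's actual fix:

```typescript
// Hedged sketch of an SSRF guard for attachment URL "hydration".
import { lookup } from "node:dns/promises";
import { isIP } from "node:net";

function isPrivateIPv4(ip: string): boolean {
  const [a, b] = ip.split(".").map(Number);
  return (
    a === 10 || a === 127 || (a === 169 && b === 254) ||
    (a === 172 && b >= 16 && b <= 31) || (a === 192 && b === 168)
  );
}

async function hydrateAttachment(url: string): Promise<ArrayBuffer> {
  const parsed = new URL(url);
  if (parsed.protocol !== "https:") throw new Error("https only");
  // Resolve the hostname and refuse loopback/private ranges so the
  // agent can't be steered at internal services or cloud metadata.
  const { address } = await lookup(parsed.hostname);
  if (isIP(address) !== 4 || isPrivateIPv4(address)) {
    throw new Error("blocked address");
  }
  // Caveat: a hardened version must pin `address` for the request
  // itself; re-resolving inside fetch() leaves a DNS-rebinding window.
  const res = await fetch(url, { redirect: "error" });
  return res.arrayBuffer();
}
```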
A pattern runs through these: most of them break OpenClaw's access control boundary, the allowlists, authentication checks, and approval gates that determine who can reach the agent and trigger tool execution. The voice-call allowlist, the Twitch allowFrom, the Nextcloud Talk allowlist, the exec approval gating, the gateway identity checks: these are all different implementations of the same question, "is this caller authorized?" And in each case, the answer could be manipulated into "yes" when it should have been "no".
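The exec allowlist bypass makes the pattern concrete. POSIX shells still perform command substitution inside double quotes, so any approval gate that treats double-quoted spans as inert text can be walked around. A minimal sketch of the bug class (hypothetical code, not OpenClaw's implementation):

```typescript
// Hypothetical sketch of the bug class; not OpenClaw's actual gate.
function looksSafe(cmd: string): boolean {
  const name = cmd.trim().split(/\s+/)[0];
  const allowlisted = ["echo", "ls", "cat"].includes(name);
  // Naive scan: blank out double-quoted spans, then look for
  // substitution metacharacters "outside" quotes. This misses that
  // POSIX shells expand $(...) and `...` *inside* double quotes too.
  const outsideQuotes = cmd.replace(/"[^"]*"/g, '""');
  const hasSubstitution = /[`$]/.test(outsideQuotes);
  return allowlisted && !hasSubstitution;
}

// Passes the gate, but a shell would run rm -rf via substitution:
console.log(looksSafe('echo "`rm -rf ~`"')); // true -- bypass
console.log(looksSafe('echo `rm -rf ~`'));   // false -- caught outside quotes
```

The durable fix is to avoid re-parsing shell strings entirely: spawn the allowlisted binary directly with an argument vector so no shell ever interprets the input.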
We have reported additional vulnerabilities that are still being addressed.
The leaderboard
We verified credits on all 73 published advisories. Here's how the top contributors break down:*
| Source | Total | Critical | High | Moderate | Low |
|---|---|---|---|---|---|
| AISLE (us) | 15 | 1 | 9 | 5 | 0 |
| vincentkoc | 9 | 0 | 4 | 4 | 1 |
| p80n-sec | 6 | 0 | 4 | 2 | 0 |
| yueyueL | 5 | 0 | 5 | 0 | 0 |
| akhmittra | 5 | 0 | 1 | 4 | 0 |
| aether-ai-agent | 4 | 0 | 2 | 1 | 1 |
| Everyone else | 31 | 2 | 21 | 7 | 2 |
*A few advisories credit multiple independent reporters. Each credited researcher is counted once per advisory they're credited on, so the rows sum to 75 rather than the 73 unique advisories. The AISLE row aggregates findings credited to @MegaManSec (Joshua Rogers), @simecek (Petr Šimeček), and @stanislavfortaisle (Stanislav Fort) as reporters or analysts.
At High severity and above, AISLE has 10 findings. The next closest is GitHub user yueyueL with 5. We account for roughly one in five of all OpenClaw security advisories, and our findings skew toward the top of the severity scale.
Why this matters
There is something deeply strange about the current moment in AI infrastructure. OpenClaw is being deployed into production environments: enterprise Slack workspaces, internal tools, customer-facing services, all while its security model is still being written. The project itself acknowledged this by shifting from open-by-default to pairing-first access control in January 2026. Security researchers found over 42,000 exposed OpenClaw instances on the public internet, with 93% vulnerable to exploitation through default configurations. Malicious "skills" have been uploaded to ClawHub. A database breach at Moltbook, an OpenClaw-based social network, exposed 32,000 agent credentials.
This is a pattern we've seen before with foundational infrastructure: adoption massively outpaces security investment. The difference is the timeline. OpenSSL accumulated its vulnerability debt over 25 years; OpenClaw accumulated comparable exposure in weeks.
AI agents have shell access. They hold API keys. They can read, write, and delete files. They execute code. When their access controls fail, as in our Critical finding, the blast radius is arbitrary command execution on whatever system the agent has access to.
Everyone is asking what AI agents can do, but almost nobody is asking who actually secures them. We see this as the AI infrastructure security gap: the distance between how much trust we're placing in these systems and how much scrutiny they've actually received. That gap is enormous right now, and it is growing faster than the security community is closing it.
What this means for AISLE
Our core work hasn't changed: we still spend most of our time on the hardest targets, such as OpenSSL, curl, the Linux kernel, glibc, and the browser engines, since that's where the depth of our AI system's security reasoning matters most and where the impact per finding is highest. But the attack surface is expanding. OpenClaw today has the adoption trajectory of the tools that became load-bearing for the internet, and the same pattern of security investment lagging years behind deployment.
We intend to be part of closing that gap. The same system that found 12 CVEs in OpenSSL can also systematically audit the AI agent frameworks that enterprises are deploying right now, with the same commitment to responsible disclosure and maintainer collaboration.
If you're deploying OpenClaw or similar AI agent frameworks in production, the security posture of the underlying platform should be as much a part of your evaluation as the features. The 15 findings we disclosed are patched. The question is what hasn't been found yet.
Stanislav Fort is co-founder and Chief Scientist at AISLE. For our work on OpenSSL, see What AI Security Research Looks Like When It Works.
Our 15 OpenClaw advisories
| ID | Severity |
|---|---|
| GHSA-4rj2-gpmh-qq5x | Critical |
| | High |
| | High |
| | High |
| | High |
| | High |
| | High |
| | High |
| | High |
| | High |
| | Moderate |
| | Moderate |
| | Moderate |
| | Moderate |
| | Moderate |