OpenClaw Security Guide
Running an AI agent with shell access is... spicy. Here is how not to get pwned. Based on the official OpenClaw security documentation.
Understanding the Threat Model
Your AI assistant can execute shell commands, read and write files, access the network, and send messages. Each of these capabilities is a potential attack surface.
Prompt Injection
Attackers craft messages to manipulate your AI into doing unsafe things.
Untrusted Content
Web results, URLs, emails, and docs can carry adversarial instructions.
Credential Exposure
Session transcripts may contain API keys if not properly redacted.
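One mitigation is to redact anything credential-shaped before a transcript is stored or shared. A minimal sketch, assuming a few common key formats (the patterns here are illustrative, not exhaustive, and are not from the OpenClaw docs):

```python
import re

# Hypothetical patterns; real deployments should match their own key formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),            # e.g. "sk-..." style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),               # AWS access key IDs
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # generic "api_key=..." pairs
]

def redact(text: str) -> str:
    """Replace anything that looks like a credential with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Run it over every transcript line on the way out; false positives are much cheaper than a leaked key.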
Core Security Principles
Identity first, Scope next, Model last.
Identity First
Decide who can talk to your bot, using DM pairing and allowlists.
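A minimal sketch of identity-first gating, assuming a hypothetical incoming-message shape (sender ID plus body); OpenClaw's actual pairing flow may differ:

```python
# Known-good senders, plus one-time codes issued out of band for DM pairing.
# All identifiers below are made-up examples.
ALLOWED_SENDERS = {"+15551234567", "alice@example.com"}
PENDING_PAIRING_CODES = {"+15559876543": "742913"}  # sender -> one-time code

def is_authorized(sender: str, message: str) -> bool:
    """Accept allowlisted senders, or an unknown sender completing pairing."""
    if sender in ALLOWED_SENDERS:
        return True
    # DM pairing: an unknown sender must present their one-time code first.
    if PENDING_PAIRING_CODES.get(sender) == message.strip():
        ALLOWED_SENDERS.add(sender)
        del PENDING_PAIRING_CODES[sender]
        return True
    return False
```

The key property: an unknown sender's first message is never interpreted as an instruction, only as a possible pairing code.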
Scope Next
Decide where the bot can act: groups, tools, sandboxing.
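Scoping can be as simple as an allowlist of programs plus a working-directory restriction. A sketch under those assumptions (the command set and sandbox path are hypothetical, not OpenClaw defaults):

```python
import shlex
import subprocess

# Hypothetical scope policy: a few read-only commands, one directory, a timeout.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "wc"}
SANDBOX_DIR = "/tmp/agent-sandbox"

def run_scoped(command_line: str) -> str:
    """Run a command only if its program is on the allowlist."""
    argv = shlex.split(command_line)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not in scope: {command_line!r}")
    result = subprocess.run(
        argv, cwd=SANDBOX_DIR, capture_output=True, text=True, timeout=10
    )
    return result.stdout
```

Note that argv[0] allowlisting alone is not a full sandbox (cat can still read any readable path); pair it with OS-level isolation such as containers or a restricted user account.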
Model Last
Assume models can be manipulated. Design for a limited blast radius.
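One way to bound the blast radius is to require human confirmation for any destructive tool call, so a manipulated model can propose but not act. A sketch, with hypothetical tool names not taken from the OpenClaw docs:

```python
# Destructive tools need explicit human confirmation before execution.
# Tool names here are made up for illustration.
DESTRUCTIVE_TOOLS = {"shell_exec", "file_write", "send_message"}

def dispatch(tool: str, args: dict, confirmed: bool = False) -> dict:
    """Gate destructive tools behind a confirmation flag; pass others through."""
    if tool in DESTRUCTIVE_TOOLS and not confirmed:
        return {"status": "needs_confirmation", "tool": tool, "args": args}
    # Stub: a real dispatcher would invoke the tool implementation here.
    return {"status": "executed", "tool": tool, "args": args}
```

Even if an injected prompt convinces the model to call shell_exec, the worst case is a confirmation prompt, not a command run.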
The "find ~" Incident
A tester asked Clawd to run find ~ and share the output. Clawd dumped the tester's entire home directory listing into the chat. Lesson: even innocent-looking requests can leak sensitive information.
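The lesson above suggests an outbound filter: scan tool output for sensitive-looking paths before it leaves the machine. A minimal sketch with illustrative, non-exhaustive patterns:

```python
import re

# Paths that commonly hold secrets; extend for your own environment.
SENSITIVE_PATH = re.compile(r"(\.ssh/|\.aws/|\.env\b|id_rsa|credentials|\.gnupg/)")

def safe_to_share(output: str) -> bool:
    """Return False if the output mentions anything that looks secret-bearing."""
    return SENSITIVE_PATH.search(output) is None
```

Block or summarize flagged output instead of relaying it verbatim; a directory listing that trips the filter should prompt a human review, not an automatic send.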