We ship Podflare across every installation channel a developer might use in 2026. This post is the copy-paste-ready version of each one. If you use more than one AI tool, pick the row that matches your primary and skip the rest — they all land you in the same place.
TL;DR — pick your tool
| Tool | Install command / config |
|---|---|
| OpenClaw (ClawHub) | clawhub install podflare |
| OpenClaw (direct) | openclaw skills install podflare |
| Smithery | npx -y @smithery/cli install podflare/sandbox |
| Claude Code | Paste MCP config in ~/.claude/settings.json |
| Cursor | Paste MCP config in .cursor/mcp.json |
| OpenAI Codex CLI | Paste MCP config in ~/.codex/config.toml |
| Claude Desktop | Paste MCP config in claude_desktop_config.json |
| Cline, Continue, Aider, Zed, any MCP-compatible tool | Same pattern: one JSON entry. |
You need a free API key for every path. Mint one at dashboard.podflare.ai/keys. $200 starter credit, no card, 60 seconds.
OpenClaw / ClawHub
OpenClaw is an open-source personal AI assistant with a companion plugin marketplace at ClawHub. We publish a dedicated skill at PodFlare-ai/openclaw-podflare; one line installs it:
```bash
# From ClawHub (recommended)
clawhub install podflare

# Or directly through OpenClaw
openclaw skills install podflare

# Then export your key (or let OpenClaw prompt)
export PODFLARE_API_KEY=pf_live_...
```
OpenClaw materializes the skill as a temporary Claude-Code plugin and loads the Podflare MCP server from the skill's frontmatter, so every subsequent agent run has all 8 Podflare tools available. See the OpenClaw integration docs for troubleshooting.
Smithery
Smithery is the largest MCP server registry in the ecosystem (7k+ servers, Docker-Hub-shaped UI, one-click install into Claude Desktop / Cursor / Cline). Podflare is listed as podflare/sandbox:
```bash
# Install into Claude Desktop
npx -y @smithery/cli install podflare/sandbox --client claude

# Install into Cursor
npx -y @smithery/cli install podflare/sandbox --client cursor

# Install into Cline
npx -y @smithery/cli install podflare/sandbox --client cline
```
Smithery handles the config injection per-client. You’ll be prompted for your PODFLARE_API_KEY at install time.
Direct MCP config — Claude Code
User-level (affects every project):
```json
# ~/.claude/settings.json
{
  "mcpServers": {
    "podflare": {
      "url": "https://mcp.podflare.ai",
      "headers": {
        "Authorization": "Bearer pf_live_..."
      }
    }
  }
}
```

Restart Claude Code. The tool `mcp__podflare__run_python` appears alongside Bash. For the strictest mode, also deny the built-in Bash tool per-project — full guide in the Claude Code / Cursor / Codex sandboxing post.
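As a sketch of that strict mode — assuming your Claude Code version supports the `permissions.deny` key in `settings.json`; check the Claude Code permissions docs for your version — a project-level override might look like:

```json
# .claude/settings.json (project-level, hypothetical strict-mode sketch)
{
  "permissions": {
    "deny": ["Bash"]
  }
}
```

With Bash denied, sandboxed execution via the Podflare tools is the agent's only route to running code.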
Direct MCP config — Cursor
```json
# .cursor/mcp.json
{
  "mcpServers": {
    "podflare": {
      "url": "https://mcp.podflare.ai",
      "headers": {
        "Authorization": "Bearer pf_live_..."
      }
    }
  }
}
```

Direct MCP config — OpenAI Codex CLI
```toml
# ~/.codex/config.toml
[mcp_servers.podflare]
url = "https://mcp.podflare.ai"
headers = { Authorization = "Bearer pf_live_..." }
```

Direct MCP config — Claude Desktop
Claude Desktop doesn’t support remote MCP servers directly; wrap with mcp-remote:
```json
# ~/Library/Application Support/Claude/claude_desktop_config.json (macOS)
# %APPDATA%\Claude\claude_desktop_config.json (Windows)
{
  "mcpServers": {
    "podflare": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.podflare.ai",
        "--header",
        "Authorization:Bearer pf_live_..."
      ]
    }
  }
}
```

Restart Claude Desktop after editing.
Self-host (optional)
The MCP worker that powers mcp.podflare.ai is open source — a small Cloudflare Worker you can deploy under your own domain:
```bash
git clone https://github.com/PodFlare-ai/podflare
cd podflare/mcp-worker
pnpm install
pnpm run deploy   # deploys to your Cloudflare account
```
You’ll still need a Podflare API key to hit the underlying sandbox fleet, but your MCP endpoint lives under your own domain, with custom auth in front of it if you want.
What you’re now equipped with
Every install above gives the agent these 8 tools:
- `create_sandbox` — provision a Linux microVM (template: `default` or `python-datasci`)
- `run_python` — persistent REPL, state sticks across calls
- `run_bash` — fresh subprocess per call
- `fork` — snapshot + N copies in ~80 ms (tree-of-thought)
- `merge_into` — commit winner back to parent
- `upload` / `download` — bytes in/out
- `destroy_sandbox` — tear down
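The fork/merge pair is the interesting one. As a purely conceptual sketch — a toy in-memory model, not the real Podflare client, with every class and metric here invented for illustration — the tree-of-thought flow those tools enable looks like:

```python
import copy

class ToySandbox:
    """In-memory stand-in for a Podflare sandbox: state is just a dict."""
    def __init__(self, state=None):
        self.state = state or {}

    def run_python(self, fn):
        # Persistent-REPL semantics: fn reads and mutates self.state.
        return fn(self.state)

    def fork(self, n):
        # Snapshot the parent and hand back n independent copies.
        return [ToySandbox(copy.deepcopy(self.state)) for _ in range(n)]

    def merge_into(self, parent):
        # Commit this branch's state back into the parent.
        parent.state = copy.deepcopy(self.state)

# Tree-of-thought: try 3 candidate strategies in independent branches,
# score each result, merge only the winner back into the root.
root = ToySandbox({"data": [3, 1, 2]})
branches = root.fork(3)
strategies = [sorted, lambda xs: sorted(xs, reverse=True), list]

scores = []
for sandbox, strategy in zip(branches, strategies):
    sandbox.run_python(lambda s, f=strategy: s.update(result=f(s["data"])))
    # Toy metric: count of adjacent pairs already in ascending order.
    result = sandbox.state["result"]
    scores.append(sum(a <= b for a, b in zip(result, result[1:])))

winner = branches[scores.index(max(scores))]
winner.merge_into(root)
print(root.state["result"])  # → [1, 2, 3]
```

The real tools do the same dance over microVM snapshots instead of dict copies, which is why the ~80 ms fork cost matters.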
Full tool schemas, latencies, and the server-side architecture at docs.podflare.ai/integrations/mcp.
Why you’d bother
If you're going to run LLM-generated code anywhere, here's the short list of reasons to route it through this sandbox instead of your own machine:
- No credential leaks — the sandbox can't read `.env`, cloud CLI creds, or `~/.ssh/`.
- No lateral movement — even if the agent is prompt-injected, it has no route to your internal network.
- Faster than Docker-per-call — ~190 ms cold, ~46 ms hot, vs 500–2000 ms for `docker run`.
- `fork(n)` primitive — not available anywhere else; enables tree-of-thought at a cost that's actually competitive with sequential.
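Back-of-envelope on that last point. The 46 ms hot-create and 80 ms fork figures are from this post; the 300 ms per-candidate execution time is an assumed placeholder, so treat the outputs as illustrative only:

```python
HOT_CREATE_MS = 46   # hot sandbox create (from this post)
FORK_BATCH_MS = 80   # one fork batch, any n (from this post)
EXEC_MS = 300        # hypothetical per-candidate runtime

def sequential_ms(n):
    # One sandbox per candidate, executed one after another.
    return n * (HOT_CREATE_MS + EXEC_MS)

def forked_ms(n):
    # One fork batch; branches execute in parallel, so wall time
    # is the fork cost plus a single candidate's runtime.
    return FORK_BATCH_MS + EXEC_MS

for n in (2, 5, 10):
    print(f"n={n}: sequential {sequential_ms(n)} ms, forked {forked_ms(n)} ms")
```

At n=5 that's 1730 ms sequential vs 380 ms forked under these assumptions, and the gap widens linearly with n.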
The threat-model post walks through 7 real incident patterns that this setup closes.
Troubleshooting
401 Unauthorized
API key missing or revoked. Mint a new one at dashboard.podflare.ai/keys and paste it back into the config.
The agent keeps using the default Bash / Terminal tool
Most agents are free to pick any available tool. To force sandbox-only execution: deny the built-in Bash tool in your client’s permission config, or add a project-level rule telling the agent to prefer mcp__podflare__run_python. The Claude Code / Cursor / Codex post has per-tool snippets.
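For the project-level rule, a minimal sketch — the wording is hypothetical, and `CLAUDE.md` is Claude Code's convention; other clients have their own rules file:

```markdown
<!-- CLAUDE.md (project root) — hypothetical rule -->
Always execute code through mcp__podflare__run_python or
mcp__podflare__run_bash. Never use the built-in Bash tool.
```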
ECONNRESET / handshake timeout from Claude Desktop
Claude Desktop doesn’t natively support remote MCP servers. Use the mcp-remote wrapper shown above.
Ship it
Free Podflare account, mint a key, pick the install path that matches your tool. If anything breaks or the docs are wrong, hello@podflare.ai — we read every reply.