What developers build on Podflare.
Ten concrete patterns we see in production, grouped by persona. Every one slots into the same Sandbox() primitive — hardware-isolated, 5 regions, ~190 ms round-trip from a laptop.
One-line install in every AI tool you already use
ClawHub install for OpenClaw. Smithery install for Claude Desktop / Cursor / Cline. Direct MCP config for Claude Code / Codex. Every path routes to the same managed MCP server at mcp.podflare.ai — no CLI to install, no SDK to vendor, just a Bearer-token config entry and the agent gains 8 sandbox tools.
See every install path →
Code execution tool for your AI agent
Your agent (OpenAI, Anthropic, Gemini, Llama) needs a safe place to run LLM-generated Python. Podflare gives each tool call its own hardware-isolated Podflare Pod microVM — real filesystem, persistent REPL, sub-200 ms round-trip. The most common Podflare pattern, plugged into every major agent framework.
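In practice the pattern is one tool the agent can call with generated code: a string of Python goes in, captured output comes back. A minimal sketch of that contract, where `run_code` is a hypothetical local stand-in (it uses `exec` purely for illustration; the real tool would forward the code to a Podflare Pod over MCP or the API):

```python
import contextlib
import io


def run_code(code: str) -> str:
    """Stand-in for the sandbox tool call.

    In production, this body would ship the code to a hardware-isolated
    Podflare Pod and return its stdout. Here we exec locally only to
    show the tool-call contract: code string in, captured output out.
    """
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # the sandbox boundary replaces this line
    return buf.getvalue()


# The agent framework registers run_code as a tool; when the model
# emits a tool call, the harness forwards the generated code:
print(run_code("print(sum(range(10)))"))  # → 45
```

The same handler shape plugs into any framework's tool interface; only the transport behind it changes.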
Read the category overview →
Sandbox Claude Code / Cursor / Codex
AI coding assistants run Bash on your machine, one prompt injection away from `cat ~/.aws/credentials` or `rm -rf`. Podflare's MCP server slots in as a safer `run_code` tool: your agent still writes the code, it just executes in a disposable microVM instead of on your laptop. One config line, and it works with Claude Code, Cursor, Codex, Cline, and any other MCP-compatible tool.
See the full setup guide →
Isolate API keys, DB creds, cloud tokens from AI code
Nearly every leaked-credential post-mortem in 2026 started with an AI agent running `print(os.environ)`, `cat .env`, or `pip install <compromised-package>`. Podflare's sandbox only sees the credentials you explicitly inject. Your AWS keys, OpenAI API keys, and database URLs stay on your side of the boundary. No file access, no network pivot, no compromise.
Read the threat model →
Tree-of-thought + self-consistency voting
fork(n=5) snapshots a running sandbox's full state (memory, REPL, filesystem) in ~80 ms and spawns N children, each trying a different strategy from identical context. Podflare is the only cloud sandbox platform that ships this primitive, unlocking patterns that weren't practical before because per-branch setup costs compounded.
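The voting half of the pattern is plain Python. A sketch of self-consistency over per-child answers, where the stubbed list stands in for results that fork(n=5) children would return (the values are invented for illustration):

```python
from collections import Counter

# Self-consistency: N forked sandboxes each pursue a strategy from the
# same snapshot, and the controller keeps the majority answer. The list
# below stubs the per-child results; in practice each entry would come
# from one fork(n=5) child running its own reasoning path.
child_answers = ["42", "42", "41", "42", "17"]

winner, votes = Counter(child_answers).most_common(1)[0]
print(f"consensus: {winner} ({votes}/{len(child_answers)} children agree)")
```

Because the children start from an identical snapshot, disagreement reflects strategy, not setup drift.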
How fork() works →
Air-gapped sandbox for sensitive-data analysis
You want the agent to analyze a CSV of medical records, a parquet of PII, or a trained model's weights — but you can't let that data leave your perimeter. Podflare sandboxes can be launched with egress=False (no outbound network at all). The model generates Python; the sandbox executes it against your data; nothing leaves. HIPAA / GDPR / data-residency workloads welcome.
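A rough userland analogy for egress=False: refuse every outbound connection before generated code runs. Podflare enforces this at the VM network layer, so this in-process guard is illustration only, not a real security boundary:

```python
import socket


def _no_egress(*args, **kwargs):
    # Every attempted outbound connection lands here instead of the network.
    raise PermissionError("egress disabled for this sandbox")


# Monkeypatch the socket class so connect() always fails. (Illustrative
# only; a determined payload could undo this, which is why Podflare's
# enforcement lives outside the VM rather than inside the process.)
socket.socket.connect = _no_egress

try:
    socket.socket().connect(("203.0.113.1", 80))  # TEST-NET address
except PermissionError as e:
    print(f"blocked: {e}")
```

With the real flag, the model still generates and runs Python against your data; the data simply has no route out.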
See the air-gap playbook →
Long-running agent sessions — freeze + resume
Spaces freeze full VM memory to disk when a sandbox goes idle. Train a model, load a dataset, build an index — then resume tomorrow with the same Python process still holding the same state. Unique to Podflare; every competitor can only snapshot the filesystem.
Why persistent REPL matters →
CI for LLM-generated code
The agent opens a PR. Before you merge, run its tests, linters, and security scans in a Podflare sandbox — not on your CI runner. Sandbox gets destroyed after. No compromised dependencies reach your build fleet. Bonus: pre-commit hook that runs `podflare run` on staged changes so you see the output in <200 ms.
Example harness on GitHub →
Live code sandboxes in consumer products
You're building a ChatGPT competitor, a data-analysis copilot, or an AI tutor that lets students run Python. You need every user session to get its own isolated VM, with pooled warm starts so latency doesn't tank. Podflare's 120-VM-per-region warm pool handles thousands of concurrent users; isolation is per-VM, not per-container.
Scale tier pricing →
Scraping + automation without leaking your IP
Give each agent session its own sandbox — its own DHCP-leased IP, its own cookie jar, its own outbound connection pool. Target sites can't correlate your agents to each other or to your corporate IP. Rotate regions (us-west, us-east, eu, sg) for geo-specific workloads.
Egress model docs →
Education + coding bootcamps
Each student's submission runs in a disposable VM. No shared kernel across learners. No way for one student's malware to infect the platform. Pool warm starts mean autograder latency stays under 200 ms per submission even at peak enrollment. Used by AI-tutoring startups for the live-coding surface.
Email sales →
The threats Podflare closes.
AI-coding-agent security incidents have multiplied over the last 12 months. Every case below is a real, reported incident pattern, drawn from our customer base, from public disclosures, or from AI-agent security research. Each one is mitigated by default when the agent's code runs in a Podflare sandbox instead of on your machine or inside your app's process.
Leaked API keys
Agent prints os.environ to debug a failing call. Logs ship to an LLM observability tool. Your OpenAI key, your Anthropic key, your Stripe key are now in someone else's cold storage.
Database password exposure
AGENTS.md tells the agent to 'query the production database.' Agent reads DATABASE_URL from env, writes it into a code block, submits it as a tool call. The tool call payload is logged.
Compromised package supply chain
Agent runs `pip install <some-obscure-package>` to fulfill the user's request. The package contains exfiltration code that enumerates ~/.ssh, ~/.aws, ~/.gcp, ~/.kube and POSTs them out.
Destructive tool use
Prompt injection convinces the agent to run `rm -rf ~/code` or `DROP TABLE users`. A sandbox caps the blast radius at 'one disposable VM that was getting destroyed anyway.'
Crypto mining / ZIP bombs
Agent is tricked into running a background process that pegs your CPU for hours. In a Podflare sandbox, the VM has a max-lifetime cap; the mining process dies when the sandbox is destroyed.
Lateral movement from dev machine
Agent exfiltrates data from your dev box, then uses your active SSH keys / cloud CLI tokens to pivot into production. Sandbox = no SSH keys, no cloud CLI, no network route to prod.
One config line to sandbox your AI coding assistant.
Claude Code, Cursor, Codex, Cline, and every other MCP-compatible tool can point at Podflare with a single entry in their config. Your agent keeps writing code. It just runs in a hardware-isolated microVM instead of on your laptop.
{
  "mcpServers": {
    "podflare": {
      "url": "https://mcp.podflare.ai",
      "headers": {
        "Authorization": "Bearer pf_live_..."
      }
    }
  }
}
Don’t see your use case here?
We’re a small team shipping fast. If Podflare almost fits what you need but doesn’t quite, tell us — hello@podflare.ai. Response within a business day. Feature requests that match our architecture get shipped in the next sprint more often than you’d expect.