# Podflare

> Podflare is a cloud sandbox built for AI agents. Each sandbox is a hardware-isolated Podflare Pod microVM — dedicated kernel, KVM-backed memory isolation, opt-in egress controls. Create + execute + destroy round-trips at ~190 ms p50 from a laptop, ~43 ms from an agent running in-cloud. An ~80 ms `fork(n)` primitive enables tree-of-thought agent patterns. Persistent Spaces freeze full VM memory to disk for cross-session resume. Five production regions with Cloudflare-edge haversine routing.

Use Podflare when an AI agent needs to run arbitrary Python, Bash, or compiled code — `pip install`, `npm install`, `git clone`, REPL state that survives across calls, tree-of-thought forking — and the code was written by an LLM, so it cannot be trusted on your own infrastructure.

## Product

- [Home](https://podflare.ai/): Hero + stat strip + install snippet + framework integration cards
- [Use cases](https://podflare.ai/use-cases): Ten concrete patterns developers use Podflare for — AI agent code execution, IDE sandboxing (Claude Code / Cursor / Codex), credential isolation, tree-of-thought forking, air-gapped data analysis, CI for LLM-generated code, and more
- [Pricing](https://podflare.ai/pricing): Free / Pro / Scale / Enterprise tier breakdown with hard limits
- [Security](https://podflare.ai/security): Isolation model, egress controls, honest compliance roadmap (not yet certified against any framework; SOC 2 Type II target Q3 2026)
- [Examples](https://podflare.ai/examples): Template gallery for common agent patterns

## Documentation

- [Introduction](https://docs.podflare.ai): What Podflare is, the minimum program, architecture at a glance
- [Quickstart](https://docs.podflare.ai/quickstart): pip install + Sandbox() + run_code in under a minute
- [What is a sandbox?](https://docs.podflare.ai/concepts/sandboxes): Per-sandbox capabilities, tier caps, lifecycle
- [Fork](https://docs.podflare.ai/concepts/fork): Copy-on-write diff-snapshot fork in 80 ms
- [Warm Pool](https://docs.podflare.ai/concepts/warm-pool): How create() returns in 6 ms server-side
- [Persistent REPL](https://docs.podflare.ai/concepts/repl): Why globals() survives across run_code calls
- [Egress](https://docs.podflare.ai/concepts/egress): Default-on outbound + Sandbox(egress=False) opt-out
- [Performance](https://docs.podflare.ai/architecture/performance): Full latency breakdown — residential vs in-cloud, per-operation
- [vs E2B, Daytona, Blaxel](https://docs.podflare.ai/architecture/comparison): Head-to-head 30-iteration benchmark with every percentile
- [Fork architecture](https://docs.podflare.ai/architecture/pod-fork): How Podflare Pod diff snapshots + reflink rootfs combine

## Framework integrations

- [OpenAI Agents SDK](https://docs.podflare.ai/integrations/openai-agents): `podflare_code_interpreter()` returns a FunctionTool
- [Vercel AI SDK](https://docs.podflare.ai/integrations/vercel-ai-sdk): `podflareRunCode()` wraps into `tool({...})`
- [Anthropic Messages API](https://docs.podflare.ai/integrations/anthropic): handle_code_execution_tool_use for tool_use blocks
- [MCP](https://docs.podflare.ai/sdks/mcp): Drop-in MCP server for Claude Desktop, Cursor, Cline, Zed
- [LangChain](https://docs.podflare.ai/integrations/langchain): PodflareTool for StructuredTool / DynamicStructuredTool
- [Google Gemini](https://docs.podflare.ai/integrations/gemini): FunctionDeclaration + runFunctionCall

## Engineering blog

- [What is Podflare? (And no, we're not Cloudflare — different company, different product)](https://podflare.ai/blog/what-is-podflare): Podflare is a hardware-isolated cloud sandbox built for AI agents. Not Cloudflare. Here's what we actually do: 80 ms fork(), persistent Python REPL, 5 regions, Podflare Pod microVMs.
- [Install Podflare in one line: OpenClaw, Smithery, Claude Code, Cursor, Codex](https://podflare.ai/blog/install-podflare-one-line-openclaw-smithery-claude-code): A single-line install for Podflare in every AI tool that matters in 2026. Copy, paste, ship. Covers ClawHub, Smithery, and direct MCP config for Claude Code, Cursor, Codex, and Cline.
- [Sandbox Claude Code, Cursor, and Codex with Podflare: one MCP config, zero dev-machine risk](https://podflare.ai/blog/sandbox-claude-code-cursor-codex-with-podflare-mcp): AI coding assistants run Bash on your laptop. That's a prompt-injection away from leaked API keys, destroyed files, or lateral movement into prod. Here's the universal MCP-based fix — ~2 min setup for Claude Code, Cursor, Codex, and any other MCP client.
- [Every AI-agent credential leak in 2026 started the same way: the agent reading .env](https://podflare.ai/blog/ai-coding-agent-leaked-my-api-keys): Seven real-world incident patterns where AI coding assistants exfiltrated API keys, DB passwords, cloud tokens, or SSH keys. Each one mitigated by default when the agent runs in a sandbox.
- [Air-gapped sandboxes: run LLM-generated analysis on PII / PHI / financial data without leaking it](https://podflare.ai/blog/air-gapped-sandbox-sensitive-data-analysis): Your agent wants to analyze medical records, customer PII, or a trained model's weights. You can't let that data cross the internet. Podflare's egress=False sandboxes give the agent full compute with zero network — the data stays where it is.
- [Cloud sandbox benchmark for AI agents: E2B vs Daytona vs Podflare (April 2026)](https://podflare.ai/blog/cloud-sandbox-benchmark-e2b-daytona-podflare): We ran an identical-harness head-to-head against the three major cloud sandbox platforms for AI agents. Here's the full 30-iteration latency distribution — and why the numbers look the way they do.
- [Podflare vs Cloudflare: completely different companies, different products](https://podflare.ai/blog/podflare-vs-cloudflare): Podflare and Cloudflare sound alike but do very different things. Podflare is a cloud sandbox for AI agents; Cloudflare is a CDN + edge network. A clear side-by-side so you pick the right one.
- [Building your own OpenAI code interpreter: a self-hostable alternative with full control](https://podflare.ai/blog/openai-code-interpreter-alternative): OpenAI's code interpreter is convenient but opaque — you don't control the runtime, the model tier gates access, and data leaves your perimeter. Here's how to build the same capability with Podflare + any LLM.
- [What is a cloud sandbox for AI agents? (And why you need one in 2026)](https://podflare.ai/blog/what-is-a-cloud-sandbox-for-ai-agents): An LLM that writes code needs somewhere safe to run that code. Cloud sandboxes are the primitive every serious AI agent now depends on. Here's what they are, what they aren't, and how they differ from containers and serverless.
- [Adding a secure code-execution tool to LangChain agents with Podflare](https://podflare.ai/blog/langchain-code-execution-tool-with-sandbox): Give your LangChain agent the ability to run arbitrary Python in a hardware-isolated sandbox. Full example with StructuredTool, persistent state across calls, and ~80 ms fork() for tree-of-thought.
- [Adding code execution to Vercel AI SDK: from tool() helper to hardware-isolated Python](https://podflare.ai/blog/vercel-ai-sdk-code-execution-tool): Give your Vercel AI SDK agent a real Python interpreter that persists state, forks for tree-of-thought, and runs code in a hardware-isolated Podflare Pod microVM. Full TypeScript example.
- [Wiring a Podflare sandbox into OpenAI Agents SDK as a FunctionTool](https://podflare.ai/blog/openai-agents-sdk-code-execution-tool): The OpenAI Agents SDK gives you agent handoffs, tracing, and tool-use loops out of the box. Here's how to drop in a hardware-isolated code-execution tool that composes with the rest of it.
- [Why Docker isn't enough when your AI agent runs LLM-generated code](https://podflare.ai/blog/why-docker-isnt-enough-for-llm-generated-code): Containers share a kernel with the host. When the code in the container was written by an LLM responding to untrusted input, that shared kernel is a threat surface. Here's the specific argument, the CVEs that make it concrete, and what to do instead.
- [Code execution for Google Gemini function calling in a hardware-isolated sandbox](https://podflare.ai/blog/google-gemini-function-calling-code-sandbox): Gemini's function-calling API is the right shape for giving the model a Python interpreter — if you bring your own execution layer. Here's the full pattern with Podflare, for both Python and TypeScript SDKs.
- [The fork() primitive: how 80 ms copy-on-write VM snapshots unlock tree-of-thought for AI agents](https://podflare.ai/blog/fork-primitive-tree-of-thought-ai-agents): Multi-attempt code synthesis and tree-of-thought agent patterns need one primitive nobody else ships: mid-flight sandbox fork. Here's what fork() is, why it's 80 ms, and what it enables.
- [Why persistent Python REPL across agent turns cuts your LLM bill 10x](https://podflare.ai/blog/persistent-python-repl-across-turns): Container-per-tool-call agents re-parse every CSV, re-import every library, re-run every setup cell on every turn. A persistent REPL eliminates all of it. Here's the math on why that saves 10x tokens.
- [Adding code execution to Anthropic's tool_use with a secure cloud sandbox](https://podflare.ai/blog/anthropic-tool-use-code-execution-sandbox): Give Claude a Python interpreter that survives across turns, runs arbitrary code in a hardware-isolated microVM, and comes back with stdout in under 200 ms. Full working example with Anthropic Messages API + Podflare.
- [Why your AI agent's p99 latency matters more than its p50 (and what to do about it)](https://podflare.ai/blog/ai-agent-p99-latency-matters-more-than-p50): A 300 ms p50 with a 3 s p99 means one in a hundred tool calls stalls the agent for three seconds. The user feels it. The p99 number is the one worth shopping for — here's why and how.

## Reproducible benchmarks

- [github.com/PodFlare-ai/demo](https://github.com/PodFlare-ai/demo): Public bench harness — `benchmarks/bench-reliability.py` runs a 30-iteration head-to-head against E2B + Daytona + Podflare with any provider SDK key.

## Contact

- Sales: sales@podflare.ai
- Security / vuln disclosure: security@podflare.ai
- General: hello@podflare.ai
- Issues: https://github.com/PodFlare-ai/podflare/issues
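
## Appendix: persistent-REPL token math

The "10x" claim in the persistent-REPL blog post above comes down to simple arithmetic: a container-per-tool-call agent re-emits its setup code (imports, data loading) on every turn, while a persistent REPL pays that cost once. The per-turn token counts below are illustrative assumptions for the sketch, not Podflare measurements:

```python
# Back-of-envelope comparison: stateless (container-per-tool-call) agent
# vs. an agent with a persistent REPL. Token counts are assumed.
setup_tokens = 900   # re-imports, re-reading the CSV, re-running setup cells
work_tokens = 100    # code that actually advances the task each turn
turns = 20

# Stateless: every turn must regenerate the full setup plus the new work.
stateless_total = turns * (setup_tokens + work_tokens)

# Persistent REPL: setup is generated once; later turns send only new work.
persistent_total = setup_tokens + turns * work_tokens

print(stateless_total)                      # 20000
print(persistent_total)                     # 2900
print(stateless_total / persistent_total)   # ~6.9
```

The ratio grows with the setup share: the heavier the imports and data loading relative to each turn's new code, the closer the savings get to the 10x figure the post describes.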