Blog

Notes from building a hardware-isolated cloud sandbox.

Benchmarks, architecture deep-dives, security thinking, integration tutorials. Every post is written by the engineers shipping the thing — no content-marketing filler.

Product · Apr 22, 2026 · 5 min read

What is Podflare? (And no, we're not Cloudflare — different company, different product)

Podflare is a hardware-isolated cloud sandbox built for AI agents. Not Cloudflare. Here's what we actually do: 80 ms fork(), persistent Python REPL, 5 regions, Podflare Pod microVMs.

Robel Tegegne · Podflare, founder
Tutorials · Apr 22, 2026 · 6 min read

Install Podflare in one line: OpenClaw, Smithery, Claude Code, Cursor, Codex

A single-line install for Podflare in every AI tool that matters in 2026. Copy, paste, ship. Covers ClawHub, Smithery, and direct MCP config for Claude Code, Cursor, Codex, and Cline.

Robel Tegegne · Podflare, founder
Security · Apr 22, 2026 · 9 min read

Sandbox Claude Code, Cursor, and Codex with Podflare: one MCP config, zero dev-machine risk

AI coding assistants run Bash on your laptop. That's a prompt-injection away from leaked API keys, destroyed files, or lateral movement into prod. Here's the universal MCP-based fix — ~2 min setup for Claude Code, Cursor, Codex, and any other MCP client.

Robel Tegegne · Podflare, founder
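The post above describes one MCP config shared by Claude Code, Cursor, and Codex. As a rough sketch of the shape such a config takes (the `@podflare/mcp` package name and the `PODFLARE_API_KEY` variable are illustrative guesses, not documented values — check the post for the real ones):

```json
{
  "mcpServers": {
    "podflare": {
      "command": "npx",
      "args": ["-y", "@podflare/mcp"],
      "env": { "PODFLARE_API_KEY": "<your key>" }
    }
  }
}
```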
Security · Apr 22, 2026 · 10 min read

Every AI-agent credential leak in 2026 started the same way: the agent reading .env

Seven real-world incident patterns where AI coding assistants exfiltrated API keys, DB passwords, cloud tokens, or SSH keys. Each one mitigated by default when the agent runs in a sandbox.

Robel Tegegne · Podflare, founder
Security · Apr 22, 2026 · 7 min read

Air-gapped sandboxes: run LLM-generated analysis on PII / PHI / financial data without leaking it

Your agent wants to analyze medical records, customer PII, or a trained model's weights. You can't let that data cross the internet. Podflare's egress=False sandboxes give the agent full compute with zero network — the data stays where it is.

Robel Tegegne · Podflare, founder
Benchmarks · Apr 21, 2026 · 7 min read

Cloud sandbox benchmark for AI agents: E2B vs Daytona vs Podflare (April 2026)

We ran an identical-harness head-to-head against the three major cloud sandbox platforms for AI agents. Here's the full 30-iteration latency distribution — and why the numbers look the way they do.

Robel Tegegne · Podflare, founder
Product · Apr 20, 2026 · 4 min read

Podflare vs Cloudflare: completely different companies, different products

Podflare and Cloudflare sound alike but do very different things. Podflare is a cloud sandbox for AI agents; Cloudflare is a CDN + edge network. A clear side-by-side so you pick the right one.

Robel Tegegne · Podflare, founder
Tutorials · Apr 19, 2026 · 8 min read

Building your own OpenAI code interpreter: a self-hostable alternative with full control

OpenAI's code interpreter is convenient but opaque — you don't control the runtime, the model tier gates access, and data leaves your perimeter. Here's how to build the same capability with Podflare + any LLM.

Robel Tegegne · Podflare, founder
Architecture · Apr 18, 2026 · 9 min read

What is a cloud sandbox for AI agents? (And why you need one in 2026)

An LLM that writes code needs somewhere safe to run that code. Cloud sandboxes are the primitive every serious AI agent now depends on. Here's what they are, what they aren't, and how they differ from containers and serverless.

Robel Tegegne · Podflare, founder
Tutorials · Apr 17, 2026 · 7 min read

Adding a secure code-execution tool to LangChain agents with Podflare

Give your LangChain agent the ability to run arbitrary Python in a hardware-isolated sandbox. Full example with StructuredTool, persistent state across calls, and ~80 ms fork() for tree-of-thought.

Robel Tegegne · Podflare, founder
Tutorials · Apr 16, 2026 · 7 min read

Adding code execution to Vercel AI SDK: from tool() helper to hardware-isolated Python

Give your Vercel AI SDK agent a real Python interpreter that persists state, forks for tree-of-thought, and runs code in a hardware-isolated Podflare Pod microVM. Full TypeScript example.

Robel Tegegne · Podflare, founder
Tutorials · Apr 15, 2026 · 6 min read

Wiring a Podflare sandbox into OpenAI Agents SDK as a FunctionTool

The OpenAI Agents SDK gives you agent handoffs, tracing, and tool-use loops out of the box. Here's how to drop in a hardware-isolated code-execution tool that composes with the rest of it.

Robel Tegegne · Podflare, founder
Security · Apr 14, 2026 · 8 min read

Why Docker isn't enough when your AI agent runs LLM-generated code

Containers share a kernel with the host. When the code in the container was written by an LLM responding to untrusted input, that shared kernel is a threat surface. Here's the specific argument, the CVEs that make it concrete, and what to do instead.

Robel Tegegne · Podflare, founder
Tutorials · Apr 13, 2026 · 7 min read

Code execution for Google Gemini function calling in a hardware-isolated sandbox

Gemini's function-calling API is the right shape for giving the model a Python interpreter — if you bring your own execution layer. Here's the full pattern with Podflare, for both Python and TypeScript SDKs.

Robel Tegegne · Podflare, founder
Architecture · Apr 12, 2026 · 9 min read

The fork() primitive: how 80 ms copy-on-write VM snapshots unlock tree-of-thought for AI agents

Multi-attempt code synthesis and tree-of-thought agent patterns need one primitive nobody else ships: mid-flight sandbox fork. Here's what fork() is, why it's 80 ms, and what it enables.

Robel Tegegne · Podflare, founder
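Podflare's actual fork() is a copy-on-write microVM snapshot; the agent-side pattern it enables can be sketched in pure Python, with a plain dict standing in for sandbox state (everything below is an illustrative toy, not the Podflare API):

```python
import copy

def attempt(state, candidate):
    """Run one candidate step against a private copy of the state."""
    branch = copy.deepcopy(state)          # stands in for a sandbox fork
    branch["history"].append(candidate)
    branch["score"] = sum(branch["history"])
    return branch

# Parent state built up over earlier agent turns.
parent = {"history": [3, 5], "score": 8}

# Tree-of-thought: explore several candidates in isolated branches,
# keep the best one; the parent state is never mutated.
branches = [attempt(parent, c) for c in (1, 4, 2)]
best = max(branches, key=lambda b: b["score"])

print(parent["score"], best["score"])  # parent unchanged, best branch wins
```

The point of an 80 ms fork is that each `attempt` above becomes cheap enough to run speculatively, which is exactly what multi-attempt synthesis needs.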
Architecture · Apr 11, 2026 · 6 min read

Why persistent Python REPL across agent turns cuts your LLM bill 10x

Container-per-tool-call agents re-parse every CSV, re-import every library, re-run every setup cell on every turn. A persistent REPL eliminates all of it. Here's the math on why that saves 10x tokens.

Robel Tegegne · Podflare, founder
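The 10x claim comes from re-paying the setup cost on every turn. A back-of-envelope model makes the shape of the math clear (the token counts below are illustrative assumptions, not measurements from the post):

```python
# Illustrative token costs per tool call.
SETUP_TOKENS = 950   # re-import libraries, re-load the CSV, re-run setup cells
WORK_TOKENS = 50     # the code that actually advances the task
TURNS = 20

# Container-per-tool-call: setup is replayed on every turn.
stateless = TURNS * (SETUP_TOKENS + WORK_TOKENS)

# Persistent REPL: setup runs once, later turns send only the delta.
persistent = SETUP_TOKENS + TURNS * WORK_TOKENS

print(stateless, persistent, round(stateless / persistent, 1))
```

The ratio grows with the number of turns, so long agent sessions are where the savings compound.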
Tutorials · Apr 10, 2026 · 10 min read

Adding code execution to Anthropic's tool_use with a secure cloud sandbox

Give Claude a Python interpreter that survives across turns, runs arbitrary code in a hardware-isolated microVM, and comes back with stdout in under 200 ms. Full working example with Anthropic Messages API + Podflare.

Robel Tegegne · Podflare, founder
Architecture · Apr 9, 2026 · 7 min read

Why your AI agent's p99 latency matters more than its p50 (and what to do about it)

A 300 ms p50 with a 3 s p99 means one in a hundred tool calls stalls the agent for three seconds or more. The user feels it. The p99 number is the one worth shopping for — here's why and how.

Robel Tegegne · Podflare, founder
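The p50/p99 gap the post describes is easy to reproduce: a distribution can have a fast median while a small tail still dominates the worst-case experience. A stdlib-only sketch with synthetic latencies (a 2% stall rate here, chosen so the p99 lands squarely in the tail):

```python
import random

random.seed(7)

# Synthetic tool-call latencies in ms: 98% fast, 2% stalled at 3 s.
latencies = [random.gauss(300, 30) if random.random() < 0.98 else 3000.0
             for _ in range(10_000)]

def percentile(values, p):
    """Nearest-rank percentile."""
    ordered = sorted(values)
    k = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[k]

p50 = percentile(latencies, 50)
p99 = percentile(latencies, 99)
print(f"p50 = {p50:.0f} ms, p99 = {p99:.0f} ms")
```

The median barely moves no matter how bad the tail gets, which is why p50 alone tells you almost nothing about how often an agent visibly hangs.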

Want new posts in your inbox?

We ship an engineering deep-dive roughly once every two weeks. No drip sequences, no sales pitches.

We’ll only send engineering posts and major product launches. Unsubscribe any time by replying to any email.
