OpenClaw System Requirements: Complete Hardware & Software Guide (2026)
25 min read · Updated 2026-03-26
By DoneClaw Team · We run managed OpenClaw deployments and write from hands-on production experience.
Trying to figure out the exact OpenClaw system requirements before you commit to hardware or a VPS plan? You're not alone — it's one of the most common questions in the OpenClaw community, and the answer isn't a single number. It depends on how you plan to use your agent: cloud APIs only, local LLMs, coding agents, or a hybrid setup. This guide breaks down every hardware and software requirement for OpenClaw across all deployment scenarios — from a $5/month VPS running cloud APIs to a $1,599 Mac Mini Pro running 70B parameter models locally. You'll get exact RAM numbers, CPU benchmarks, storage calculations, and real-world performance data so you can size your setup correctly the first time.
Quick Answer: OpenClaw Minimum System Requirements
If you just want the numbers, here they are:
CPU: 1 vCPU / ARM64 (minimum, cloud API) · 2 vCPU / 4 cores (recommended) · 4+ cores / Apple Silicon (local LLM)
RAM: 1 GB (minimum, cloud API) · 2–4 GB (recommended) · 16–64 GB (local LLM)
Storage: 10 GB SSD (minimum) · 20–40 GB SSD (recommended) · 50–200 GB NVMe (local LLM)
GPU: not required for cloud API setups · 6+ GB VRAM or Apple Silicon for local LLMs
Network: 1 Mbps stable (minimum) · 5+ Mbps (recommended)
OS: Linux 64-bit, macOS 14+, or Windows with WSL2 · add CUDA 12+ or Metal for local LLMs
Node.js: v22.0+ required · v22.x LTS recommended
Docker: v24+ optional · v25+ with Compose v2 recommended · v25+ with GPU passthrough for local LLMs
The key insight: OpenClaw itself is lightweight — the gateway process uses only 150–300 MB of RAM. What drives your hardware requirements is your model choice and workload pattern, not the agent framework.
Understanding What Drives Hardware Requirements
Before diving into specific configurations, you need to understand the three factors that determine your OpenClaw system requirements.
The first and biggest variable is your model hosting strategy. If you're using cloud APIs (Anthropic Claude, OpenAI GPT-4o, Google Gemini), your hardware requirements are minimal — the heavy computation happens on the provider's servers. You're essentially running a lightweight Node.js process that orchestrates API calls. If you're running local models via Ollama or LM Studio, your RAM and GPU requirements jump dramatically. A quantized 7B parameter model needs approximately 5 GB of RAM just for the model weights, plus additional memory for the context window. A 70B model needs 40+ GB.
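As a back-of-the-envelope check, you can estimate quantized model memory from the parameter count. The 0.55 bytes-per-weight figure for Q4_K_M and the 20% runtime overhead used here are rough assumptions, not exact values:

```shell
# Rough RAM estimate for a Q4_K_M-quantized model.
# Assumptions: ~0.55 bytes per weight after quantization, plus ~20%
# overhead for KV cache and runtime buffers -- adjust for your setup.
params_b=7   # model size in billions of parameters
echo "$params_b" | awk '{printf "~%.1f GB\n", $1 * 0.55 * 1.2}'
```

For a 7B model this prints about 4.6 GB, in line with the ~5 GB figure above; plugging in 70 gives roughly 46 GB, matching the 40+ GB estimate for 70B models.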
The second factor is concurrent workload. Running a single chat session with cloud APIs barely registers on system resources. But if you're running cron jobs, heartbeat checks, multiple sub-agents, and a coding agent simultaneously, memory and CPU usage scales accordingly. Each active session consumes memory for its context window and any spawned processes.
The third factor is sandbox and tool usage. OpenClaw's sandbox runs tools in isolated environments. Browser automation, file operations, and shell commands each consume additional resources. The sandbox itself adds roughly 50–100 MB of overhead. If you're running browser automation with Chromium, budget an extra 200–500 MB per browser instance.
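To see where memory actually goes on a running host, sort processes by resident set size. The process names you will see (node, chromium, dockerd, and so on) depend on your particular stack:

```shell
# Top 10 processes by resident memory (RSS), converted from KB to MB
ps -eo rss,comm --sort=-rss | awk 'NR>1 {printf "%.0f MB\t%s\n", $1/1024, $2}' | head -10
```

Run this during a browser-automation session to verify the per-instance overhead figures above on your own hardware.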
Cloud API Setup: Minimum Hardware Requirements
Running OpenClaw with cloud APIs (Claude, GPT-4o, Gemini) is the most lightweight configuration. The agent acts as an orchestrator — receiving messages, calling APIs, managing memory, and executing skills.
The minimum viable configuration is 1 vCPU (x86_64 or ARM64), 1 GB RAM (tight but functional), 10 GB SSD, and Ubuntu 22.04 LTS or Debian 12 (64-bit). At 1 GB RAM, you can run the OpenClaw gateway, respond to messages, and execute basic skills. You'll hit limits if you try to run browser automation or multiple concurrent sessions. Swap space helps as a safety net but shouldn't be relied on for regular operations.
The recommended cloud API configuration is 2 vCPU, 2–4 GB RAM, 20–40 GB SSD, and Ubuntu 22.04 LTS or Debian 12. With 2–4 GB, you have comfortable headroom for the gateway process (~200 MB), Docker overhead (~150 MB), one or two concurrent sessions with tool use, basic browser automation, log files and memory data, and OS processes and cron jobs. This is the sweet spot for most users. It handles daily personal assistant workflows — email management, calendar checks, web searches, file operations — without breaking a sweat.
For real-world VPS recommendations:
Contabo Cloud VPS S: 4 vCPU, 8 GB RAM, 50 GB SSD, $4.95/month. Best value per spec.
Hetzner CX22: 2 vCPU, 4 GB RAM, 40 GB SSD, ~€3.79/month. Great for European users.
DigitalOcean Basic: 1 vCPU, 1 GB RAM, 25 GB SSD, $6/month. Beginner-friendly.
Vultr Cloud Compute: 1 vCPU, 1 GB RAM, 25 GB SSD, $5/month.
Linode Nanode: 1 vCPU, 1 GB RAM, 25 GB SSD, $5/month.
Oracle Cloud Free Tier: 4 ARM cores, 24 GB RAM, 200 GB storage, $0/month. Best free option.
Pro tip: Oracle Cloud's Always Free tier offers an ARM-based VM with 4 OCPU and 24 GB RAM — enough to run OpenClaw with a small local model. The catch is limited availability in popular regions.
Contabo deserves special mention: at $4.95/month for 4 vCPU and 8 GB RAM, it offers 4–8x the specs of competitors at the same price point. The trade-off is that Contabo's CPUs and disk I/O are generally slower than Hetzner's or DigitalOcean's equivalents on a per-core basis, but for OpenClaw (which is I/O-bound waiting for API responses, not CPU-bound), this barely matters.
Local LLM Setup: Hardware Requirements by Model Size
Running local models changes the equation entirely. Here's what you actually need, organized by model size class.
Small models (1B–3B parameters) include Llama 3.2 3B, Qwen 2.5 3B, and Phi-3.5 Mini. They need 4–8 GB total system memory, 4–6 GB VRAM (or CPU-only with slower inference), and 15–20 GB disk for model plus OS. Speed is 15–40 tokens/sec on a modest GPU, or 5–10 tok/sec CPU-only. Small models are fast and resource-light but produce noticeably lower quality output than cloud models. They're suitable for quick tasks — classification, summarization, simple Q&A — but struggle with complex reasoning, coding, and multi-step planning. Best for users who want some local capability on minimal hardware, or a fallback model when the internet is down.
Medium models (7B–14B parameters) include Llama 3.1 8B, Mistral 7B, Qwen 2.5 Coder 14B, and DeepSeek-R1 14B. They need 16–32 GB total system memory, 8–12 GB VRAM (RTX 3060/4060 or Apple M-series with 16+ GB), and 30–50 GB disk. Speed is 20–50 tokens/sec with GPU, or 3–8 tok/sec CPU-only. This is the practical sweet spot for local inference. An 8B model with quantization (Q4_K_M) fits comfortably in 8 GB VRAM and produces surprisingly good output for conversational tasks. The 14B class (Qwen 2.5 Coder, DeepSeek-R1) delivers near-cloud-quality for specific tasks like coding. Important: OpenClaw requires a minimum 64K token context window for local models. This context window consumes additional VRAM beyond the model weights. Budget approximately 2–4 GB extra for context with 8B models at 64K context.
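One way to satisfy that 64K-context requirement with Ollama is a Modelfile. The base model tag and variant name below are examples; whether a given model actually supports a 64K window depends on the model itself, so verify against your Ollama version:

```shell
# Create a model variant with a 64K context window (num_ctx is in tokens)
cat > Modelfile <<'EOF'
FROM llama3.1:8b
PARAMETER num_ctx 65536
EOF
ollama create llama3.1-64k -f Modelfile
```

Watch VRAM usage after loading the variant: the jump relative to the default context is the KV-cache cost discussed above.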
Large models (30B–70B+ parameters) include Llama 3.3 70B (Q4), DeepSeek-R1 70B, and Qwen 2.5 72B. They need 48–96 GB total system memory (or 48 GB unified memory on Apple Silicon), 24+ GB VRAM (RTX 4090, A6000) or 48+ GB unified memory, and 80–200 GB disk. Speed is 10–30 tokens/sec on a high-end GPU, or 1–5 tok/sec on Apple Silicon. 70B models approach GPT-4 level quality for many tasks but require serious hardware. The most cost-effective path is Apple Silicon: a Mac Mini M4 Pro with 48 GB unified memory ($1,599) runs quantized 70B models at usable speeds. On the GPU side, you're looking at an RTX 4090 ($1,600–$2,000) or dual RTX 3090s.
Reality check: For most users, a 70B local model costs more in hardware than years of cloud API access. The main reasons to go this route are privacy requirements, offline operation, or the satisfaction of running everything yourself.
Platform-Specific Requirements
The Raspberry Pi 5 (8 GB) is a surprisingly capable OpenClaw host for cloud API workloads. The Pi 5 has a BCM2712 4-core processor at 2.4 GHz (2–3x faster than the Pi 4 in practice), 8 GB LPDDR4X-4267 RAM (shared with GPU), and supports NVMe M.2 storage (strongly recommended over microSD). Power draw is 2.6–11.6W, costing roughly $0.33–$0.87/month in electricity. It supports ARM64 Docker images. The Pi 5 with case, NVMe SSD, and power supply runs about $100–$130. Verdict: Excellent for cloud API workloads with running cost under $1/month. Not viable for local LLMs beyond toy experiments.
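Those electricity figures are easy to sanity-check with the standard cost formula. The $0.10/kWh rate is an assumption; substitute your local tariff:

```shell
# Monthly cost = watts * 24 h * 30 d / 1000 * rate ($/kWh)
watts=11.6   # Pi 5 at full load; idle is ~2.6 W
rate=0.10    # assumed electricity price in $/kWh
echo "$watts $rate" | awk '{printf "$%.2f/month\n", $1 * 24 * 30 / 1000 * $2}'
```

At 11.6 W full load this prints about $0.84/month, and the 2.6 W idle figure drops below $0.20, consistent with the sub-$1/month running cost claimed above.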
Apple Silicon Mac Minis are the community favorite for self-hosting OpenClaw, especially when local LLMs are involved. The unified memory architecture means CPU and GPU share the same high-bandwidth memory pool — no separate VRAM limitation. The M2 refurbished ($350) with 8 GB handles 3B models only. The M4 base ($499) with 16 GB runs 7B–8B models smoothly. The M4 with 32 GB ($699) can squeeze in heavily quantized 70B models. The M4 Pro with 24 GB ($1,399) runs 34B models fast. The M4 Pro with 48 GB ($1,599) runs quantized 70B+ models at comfortable speeds. Monthly power cost for all models is $0.50–$2.00. Best value: M4 with 32 GB ($699) — runs OpenClaw with cloud APIs AND quantized 70B local models as a fallback. Silent, fanless under normal loads, and draws just 4–7W at idle.
Running OpenClaw on Windows requires WSL2. Minimum requirements are Windows 10 v2004+ or Windows 11, WSL2 enabled, 4 GB RAM minimum (8+ GB recommended since WSL2 takes a share), 20+ GB free on the WSL2 ext4 virtual disk, and an NVIDIA GPU with WSL2 CUDA support for local LLMs. WSL2-specific gotchas: WSL2 defaults to claiming 50% of your system RAM — set memory=4GB in .wslconfig to cap it. File I/O across the Windows/WSL boundary (e.g., /mnt/c/) is extremely slow — keep all OpenClaw data on the Linux filesystem. Docker Desktop for Windows uses WSL2 as its backend — if you're running both Docker Desktop and a WSL2 distro, they compete for memory.
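The .wslconfig file lives at %UserProfile%\.wslconfig on the Windows side. A minimal sketch follows; the values are examples, so size them to your machine:

```ini
[wsl2]
; Cap WSL2 RAM instead of the 50% default
memory=4GB
processors=2
swap=2GB
```

Run wsl --shutdown afterwards so the new limits take effect the next time the distro starts.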
Linux is the primary target for OpenClaw and offers the most straightforward deployment. Any modern 64-bit Linux distribution works. Verified distributions include Ubuntu 22.04 LTS, 24.04 LTS, Debian 11 and 12, Fedora 38+, Arch Linux (rolling), and Raspberry Pi OS (64-bit).
# Node.js 22+ (via NodeSource or nvm)
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo bash -
sudo apt install -y nodejs
# Docker (optional but recommended)
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Verify
node --version # v22.x.x
docker --version # 25.x.x+
Software Requirements
Beyond hardware, OpenClaw needs specific software to function.
Core requirements: Node.js v22.0+ (runtime environment, required), npm v10+ (bundled with Node, required), Docker v24+ (container deployment, recommended), Docker Compose v2.20+ (multi-container orchestration, needed with Docker), Git v2.30+ (workspace version control, optional), and curl or wget (API connectivity, required).
You need at least one model provider configured. The most popular options are: Google Gemini (free tier with 15 req/min, easy setup), OpenRouter (some free models, pay-per-token, easy setup), Anthropic Claude (no free tier, ~$3/MTok input, medium setup complexity), OpenAI (no free tier, ~$2.50/MTok input, easy setup), and Ollama for local models (fully free, hardware cost only, medium setup complexity).
OpenClaw needs outbound HTTPS access to your model provider's API endpoints. For channel integrations (Telegram, Discord, WhatsApp), it also needs to receive incoming webhooks — which means either a public IP with an open port behind a reverse proxy, or a tunneling solution like Tailscale. Bandwidth requirements are minimal. A typical conversation exchanges a few KB per message. Even with heavy use (100+ messages/day, web searches, file operations), you'll use less than 1 GB/month of bandwidth. The exception is if you're downloading large files or generating images through skills.
# Minimum required (if using UFW)
sudo ufw allow 22/tcp # SSH access
sudo ufw allow 443/tcp # HTTPS for reverse proxy
# Do NOT expose port 18789 directly — use a reverse proxySkip 60 minutes of setup — deploy in 60 seconds
DoneClaw handles Docker, servers, security, and updates. Your OpenClaw agent is ready to chat in under a minute.
Deploy NowStorage Planning: How Much Disk Space Do You Actually Need
Storage requirements grow over time as your agent accumulates memory files, logs, and cached data.
Here's what consumes disk space:
OpenClaw binary and dependencies: ~200 MB initially, growing with each update.
Docker image: ~500 MB.
Memory files (MEMORY.md, daily logs): ~1 MB to start, growing ~500 KB/month.
Conversation history: ~5 MB to start, growing ~2–5 MB/month.
Skills: ~50 MB, plus ~10 MB per skill added.
Browser automation cache: ~100 MB, plus ~50 MB per session.
Ollama models (local LLMs only): 2–40 GB each.
Docker logs: grow without bound if unrotated — configure rotation immediately.
Critical: Docker's default logging driver has NO rotation. Without configuration, logs will fill your disk. Always set log rotation.
For cloud API setups, 20 GB is comfortable for 6–12 months of use. For local LLM setups, start with 50 GB minimum and plan for 100+ GB if you want multiple models available.
# In docker-compose.yml
services:
  openclaw:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
# Or set rotation globally for all containers in /etc/docker/daemon.json:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Performance Benchmarks: Real-World Numbers
Here are actual response time measurements across different hardware configurations running OpenClaw, measured as time from receiving a Telegram message to the agent's first response token.
For cloud API response times:
Contabo 4 vCPU / 8 GB + Claude 3.5 Sonnet: 1.2–2.5 s (network latency dominant).
Hetzner 2 vCPU / 4 GB + GPT-4o: 1.0–2.0 s (slightly faster EU to US).
Raspberry Pi 5 (8 GB) + Gemini 2.5 Flash: 1.5–3.0 s (gateway adds ~200 ms).
Mac Mini M4 (16 GB) + Claude 3.5 Sonnet: 0.8–1.5 s (fast local processing).
Oracle Free Tier ARM + GPT-4o Mini: 1.0–2.0 s (comparable to paid VPS).
The bottleneck for cloud API setups is always network latency to the API provider, not your hardware. A Raspberry Pi performs within 0.5 s of a high-end VPS.
For local LLM response times:
Mac Mini M4 (32 GB) + Llama 3.1 8B: 35–45 tok/s, 0.8–1.2 s first response. Excellent daily driver.
Mac Mini M4 Pro (48 GB) + Llama 3.3 70B Q4: 8–15 tok/s, 2–4 s first response. Usable for conversations.
RTX 4090 (24 GB) + Mistral 7B: 80–120 tok/s, 0.3–0.5 s first response. Fastest consumer option.
RTX 3060 (12 GB) + Qwen 2.5 14B Q4: 20–35 tok/s, 0.8–1.5 s first response. Good budget GPU.
Contabo 8 GB CPU-only + Llama 3.2 3B: 3–6 tok/s, 3–8 s first response. Barely usable.
Raspberry Pi 5 + Llama 3.2 3B: 1–3 tok/s, 8–15 s first response. Not recommended.
The takeaway: If you want local LLMs at usable speeds, you need either Apple Silicon with 16+ GB unified memory or an NVIDIA GPU with 8+ GB VRAM. CPU-only inference on a VPS or Pi is technically possible but painfully slow.
Scaling Checklist: Signs You Need More Resources
Watch for these indicators that your setup is under-provisioned.
Memory pressure: if swap usage regularly exceeds 500 MB, you need more RAM.
CPU saturation: a load average consistently above 2x your CPU count means tasks are queuing.
Disk space: keep at least 20% of disk free for Docker operations and temporary files.
Response time degradation: if your agent's response times have increased noticeably compared to when you first set it up, check logs for errors, review memory usage, and consider whether your workload has grown beyond your hardware.
# Check memory usage
free -h
# Watch for high swap usage
vmstat 1 5
# Check load average
uptime
# Real-time monitoring
htop
# Check disk usage
df -h
# Find large files
du -sh /var/lib/docker/* | sort -rh | head -10
Recommended Configurations by Use Case
Personal Assistant (Cloud APIs): Daily assistant on Telegram for email checks, calendar, web searches, and reminders. Hardware: Contabo VPS ($4.95/mo) or Raspberry Pi 5. RAM: 2–4 GB. Storage: 20 GB SSD. Model: Claude 3.5 Sonnet or Gemini 2.5 Flash. Monthly cost: ~$5 hosting + $2–10 API costs.
Developer Workstation (Cloud + Coding Agents): Coding assistance, PR reviews, multi-agent orchestration with Claude Code or Codex. Hardware: 4+ vCPU VPS or Mac Mini M4 (16 GB). RAM: 8+ GB. Storage: 40+ GB SSD. Model: Claude 3.5 Sonnet (main) + GPT-4o (routing). Monthly cost: ~$10 hosting + $10–30 API costs.
Privacy-First (Local LLMs Only): No data leaves your network with full offline capability. Hardware: Mac Mini M4 (32 GB) or Linux with RTX 3060+. RAM: 32+ GB (or 16+ GB VRAM). Storage: 100+ GB NVMe. Model: Qwen 2.5 Coder 14B + Llama 3.1 8B fallback. Monthly cost: $1–2 electricity (Mac Mini) or $0 (existing PC).
Small Team (Multiple Agents): 3–5 team members each with their own OpenClaw agent. Hardware: Dedicated server or large VPS (8+ vCPU, 16+ GB). RAM: 16–32 GB. Storage: 100+ GB SSD. Model: Mixed — Claude for complex tasks, Gemini Flash for routine. Monthly cost: $20–40 hosting + $20–100 API costs.
Common Mistakes and How to Avoid Them
Over-provisioning for cloud API use: If you're using cloud APIs, you do NOT need 32 GB of RAM or a powerful GPU. A $5 VPS handles it fine. Save your money for API credits.
Under-provisioning for local LLMs: The opposite problem. Running a 14B model on 8 GB of RAM means heavy swapping and 10x slower inference. Check model requirements carefully.
Ignoring Docker log rotation: This silently fills your disk over weeks. Configure it from day one.
Exposing port 18789 to the internet: OpenClaw's gateway port should NEVER be directly exposed. Use a reverse proxy with TLS or Tailscale.
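As a sketch of the reverse-proxy approach, here is a minimal Caddy configuration (Caddy obtains TLS certificates automatically). The domain is a placeholder, and 18789 is the gateway port mentioned above:

```
# Caddyfile -- terminate TLS publicly, proxy to the local gateway
agent.example.com {
    reverse_proxy localhost:18789
}
```

With this in place, only ports 80/443 need to be reachable from the internet; the gateway itself stays bound to localhost.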
Running on spinning hard drives: OpenClaw does frequent small reads/writes to memory files. An HDD adds noticeable latency to every operation. Use an SSD or NVMe.
Skipping swap on low-memory VPS: Even with 2 GB RAM, a 2 GB swap file prevents OOM kills during occasional memory spikes.
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Troubleshooting Resource Issues
If the OpenClaw gateway won't start or crashes immediately, check your Node.js version first — it must be v22 or higher. Also verify you have at least 500 MB of free disk space.
If the gateway process disappears and dmesg shows OOM kills, add swap space and consider upgrading RAM. If running Docker, limit container memory to prevent system-wide OOM kills by setting deploy.resources.limits.memory in your compose file (for example, 1.5G).
If your agent takes 10+ seconds to respond with a local LLM, check whether Ollama is actually using your GPU. On NVIDIA, run nvidia-smi and look for the ollama process in GPU memory. On Apple Silicon, check Activity Monitor GPU History. If Ollama is falling back to CPU inference, verify CUDA drivers on Linux or ensure Metal is available on macOS.
If your Docker container keeps restarting, check the logs with docker logs openclaw-agent. Common causes include invalid JSON in openclaw.json configuration, missing API key environment variables, port 18789 already in use (check with lsof -i :18789), or a corrupt Docker volume that needs to be removed and recreated.
# Check Node.js version
node --version
# If below v22, install via nvm:
nvm install 22
nvm use 22
# Check disk space
df -h /
# Check GPU usage (NVIDIA)
nvidia-smi
# Limit container memory to prevent system-wide OOM (docker-compose.yml)
services:
  openclaw:
    deploy:
      resources:
        limits:
          memory: 1.5G
Conclusion
OpenClaw's system requirements are refreshingly modest for cloud API setups — a $5 VPS or a Raspberry Pi handles personal assistant workloads without breaking a sweat. The resource demands only become significant when you introduce local LLMs, at which point your model choice dictates everything.
For most people starting out: get a 2–4 GB VPS from Contabo or Hetzner, use cloud APIs (start with Google Gemini's free tier), and upgrade only when you actually hit limits. Don't over-buy hardware based on hypothetical future needs.
If privacy is your priority: a Mac Mini M4 with 32 GB ($699) is the single best investment — it handles cloud APIs and heavily quantized 70B local models while running silent and drawing almost no power.
If you want zero infrastructure hassle: DoneClaw provisions everything for $29/month. No hardware decisions, no Docker, no maintenance.
Whatever path you choose, OpenClaw's configuration format is portable across all platforms. Start small, measure what you actually need, and scale from there.
Skip the setup? DoneClaw deploys OpenClaw for you — $29/mo with 7-day free trial, zero configuration.
Frequently asked questions
Can I run OpenClaw on 512 MB of RAM?
Technically yes, with aggressive swap usage, but it will be painfully slow and unreliable. The gateway alone uses 150–200 MB, leaving almost nothing for the OS, Docker, and tool execution. The practical minimum is 1 GB for cloud API setups.
Does OpenClaw need a GPU?
No — not for cloud API setups. A GPU is only needed if you want to run local language models via Ollama or LM Studio. Even then, CPU-only inference works (just slower). Apple Silicon's unified memory architecture is the best middle ground since it uses shared memory for both CPU and GPU workloads.
How much bandwidth does OpenClaw use?
Very little. A typical day of personal assistant use (50–100 messages with web searches) consumes about 10–30 MB. Most VPS plans include far more bandwidth than you'll ever need.
Can I run multiple OpenClaw agents on one server?
Yes. Each agent runs in its own container with its own workspace and configuration. A server with 8 GB RAM can comfortably host 3–4 cloud-API agents simultaneously. Use separate Docker Compose files with different ports and volume names for each agent.
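A sketch of a second agent running alongside the first follows. The image name, ports, and volume names here are assumptions; adapt them to however you launch a single agent:

```yaml
# docker-compose.agent2.yml -- second agent on its own port and volume
services:
  openclaw-agent2:
    image: openclaw/openclaw:latest   # placeholder image name
    ports:
      - "18790:18789"                 # unique host port per agent
    volumes:
      - agent2-data:/data             # unique volume per agent
volumes:
  agent2-data:
```

Launch with docker compose -f docker-compose.agent2.yml up -d; the unique host port and volume are what keep the agents from colliding.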
What's the cheapest way to run OpenClaw 24/7?
Oracle Cloud's Always Free tier ($0/month for 4 ARM cores + 24 GB RAM) is the cheapest hosted option. For self-hosting, a Raspberry Pi 5 costs ~$100 upfront with less than $1/month in electricity. DoneClaw's managed service at $29/month eliminates all hardware and infrastructure concerns.