Setup & Deployment
How to Run OpenClaw on Unraid: Complete Setup Guide (2026)
13 min read · Updated 2026-04-13
By DoneClaw Team · We run managed OpenClaw deployments and write from hands-on production experience.
If you want to run OpenClaw on Unraid, you are in a sweet spot: Unraid gives you easy Docker management, persistent storage, sane backups, and enough flexibility to run a serious always-on AI agent without turning your homelab into a second job. The timing is good too. On February 3, 2026, the jdhill777/openclaw-unraid project announced that OpenClaw was available as an official Unraid Community Applications template, which makes installation dramatically easier for non-developers. Instead of hand-rolling Docker commands, you can install from Unraid's app flow, map persistent folders, add your model provider key, and get to a working Control UI fast. This guide shows exactly how to do that, when to use the CA template vs a manual Docker run, how to wire in Ollama or cloud models, and how to avoid the dumb mistakes that break most first installs.
Why Unraid is a strong fit for OpenClaw
OpenClaw is not just a chatbot in a box. It is a persistent AI agent platform with memory, channels, tools, automation, scheduled jobs, and optional browser or shell access. That means the host matters.
Unraid is a strong fit because it gives you simple Docker lifecycle management from the web UI, persistent appdata storage under /mnt/user/appdata/, easy LAN exposure for the OpenClaw Control UI, optional GPU and local-model workflows through Ollama or other sidecars, and a natural place for always-on personal infrastructure like Home Assistant, media tools, and private services.
Compared with a random Ubuntu VM, Unraid is easier to maintain. Compared with a managed platform, it gives you more control over your data, storage layout, and network policy.
The tradeoff is obvious: you are now the ops team.
Community Apps vs manual Docker
You have two realistic ways to install OpenClaw on Unraid. The Community Apps template is best for most users — it is fast, flexible enough, and has a low risk of mistakes. Manual docker run is best for power users — it offers maximum flexibility but takes longer and has a higher risk of mistakes.
My recommendation is simple: use the Community Apps template unless you have a specific reason not to.
The openclaw-unraid template already maps the right persistent paths, exposes the default Control UI port, and gives you fields for gateway auth and provider keys.
What you need before you start
Before installing, have these ready:
- Unraid 6.x or 7.x with Docker enabled
- At least 2 GB RAM available for the container flow — OpenClaw's Docker docs call out that image builds can be OOM-killed on 1 GB hosts with exit 137
- A gateway token for Control UI/API auth
- One model provider key, such as: Anthropic, OpenAI, OpenRouter, Google Gemini, or Groq
- Optional: Ollama if you want local inference
OpenClaw's Getting Started docs recommend Node 24 for direct installs, but on Unraid you usually do not care because the container handles runtime packaging for you.
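The gateway token from that list is just a long random secret. One quick way to generate it from the Unraid terminal:

```shell
# Generate a strong gateway token (48 hex characters)
openssl rand -hex 24
```

Copy the output somewhere safe; you will paste it into the template and use it in the Control UI URL later.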
Folder layout that will save your ass later
The Unraid template repo documents these persistent mounts:
- /root/.openclaw → /mnt/user/appdata/openclaw/config (stores openclaw.json, sessions, credentials)
- /home/node/clawd → /mnt/user/appdata/openclaw/workspace (workspace files, memory, projects)
- /projects → /mnt/user/appdata/openclaw/projects (optional coding projects)
- /home/linuxbrew/.linuxbrew → /mnt/user/appdata/openclaw/homebrew (optional Homebrew packages)
Do not dump everything into one giant folder. Separate config, workspace, and optional projects now, and backups become much cleaner later.
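If you want the folders in place before you install, a convenience sketch from the Unraid terminal (the CA template uses these same host paths; the APPDATA variable is just shorthand):

```shell
# Create the separated persistent folders up front.
# APPDATA points at the standard Unraid appdata share; adjust if yours differs.
APPDATA=/mnt/user/appdata/openclaw
mkdir -p "$APPDATA/config" "$APPDATA/workspace" "$APPDATA/projects" "$APPDATA/homebrew" \
  || echo "could not create $APPDATA; check the share path"
```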
Step 1: Install OpenClaw from Unraid Community Apps
If the CA entry is available in your server, open Apps in Unraid, search for OpenClaw, click Install, fill in the required template fields, and click Apply.
The repo README documents these core fields:
- Control UI Port: 18789
- Config Path: /mnt/user/appdata/openclaw/config
- Workspace Path: /mnt/user/appdata/openclaw/workspace
- Gateway Token: any strong secret, ideally generated with openssl rand -hex 24
- Provider API key: Anthropic, OpenRouter, OpenAI, Gemini, Groq, etc.
Then open the Control UI at your Unraid IP on port 18789, passing your gateway token as a query parameter. That query token matters: if you forget it, the UI will look broken when it is actually doing auth correctly.
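Assembling the tokenized Control UI URL looks roughly like this. The IP is a placeholder, and the query parameter name `token` is an assumption here; check the template notes for the exact name your version expects.

```shell
# Build the tokenized Control UI URL.
# UNRAID_IP is a placeholder; "token" as the parameter name is an assumption.
UNRAID_IP=192.168.1.50
GATEWAY_TOKEN=$(openssl rand -hex 24)
echo "http://${UNRAID_IP}:18789/?token=${GATEWAY_TOKEN}"
```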
Step 2: Set the right default model
This is the most common OpenClaw misconfiguration on Unraid.
The template README explicitly warns that OpenClaw does not auto-detect your provider from the API key. If you set a Gemini key but leave the default model on Anthropic, you will get authentication failures like "No API key found for anthropic."
After install, open the Control UI config and set the primary model correctly. Common model examples by provider:
- Anthropic: anthropic/claude-sonnet-4-5
- Google Gemini: google/gemini-2.0-flash
- OpenAI: openai/gpt-4o
- Groq: groq/llama-3.1-70b-versatile
- OpenRouter: openrouter/anthropic/claude-3-sonnet
{
"agents": {
"defaults": {
"model": {
"primary": "google/gemini-2.0-flash"
}
}
}
}
Step 3: Create a minimal working config
OpenClaw reads config from ~/.openclaw/openclaw.json. The docs show that a missing config is fine because OpenClaw can start with defaults, but in practice on Unraid you should make the basics explicit.
A clean starter config looks like this:
- bind: "lan" lets other devices on your network reach the UI through the published Unraid port
- auth.mode: "token" is the simplest secure default for a personal server
- Explicit workspace path reduces confusion during later migrations
{
"gateway": {
"mode": "local",
"bind": "lan",
"controlUi": { "allowInsecureAuth": true },
"auth": { "mode": "token" }
},
"agents": {
"defaults": {
"workspace": "~/.openclaw/workspace",
"model": {
"primary": "anthropic/claude-sonnet-4-5"
}
}
}
}
Skip 60 minutes of setup — deploy in 60 seconds
DoneClaw handles Docker, servers, security, and updates. Your OpenClaw agent is ready to chat in under a minute.
Deploy Now
Step 4: Add channels after the core install works
Do not start with Discord, Telegram, Slack, voice mode, and local LLMs all at once. That is how people manufacture chaos.
First, verify the container starts, the dashboard loads, the token works, and a test prompt gets a reply. Only then add channels, one at a time. A minimal Telegram config, followed by a minimal Discord config:
{
"channels": {
"telegram": {
"enabled": true,
"botToken": "YOUR_TELEGRAM_BOT_TOKEN",
"dmPolicy": "pairing"
}
}
}
{
"channels": {
"discord": {
"enabled": true,
"token": "YOUR_DISCORD_BOT_TOKEN"
}
}
}
Step 5: Run OpenClaw on Unraid with Ollama
This is where Unraid gets really interesting. If you want lower recurring cost or more privacy, pair OpenClaw with Ollama on the same box or another LAN machine.
The practical choice is whether Ollama lives in another Unraid container, on a separate GPU box, or not at all because cloud models are still better for your use case.
Do not use Ollama just because "local sounds cool." If you need top-tier reasoning, multimodal quality, or agentic coding reliability, cloud models still win more often than homelab copium wants to admit.
- Use Ollama when you care about local inference
- Use Ollama for predictable latency inside your LAN
- Use Ollama for reduced per-message API cost
- Use Ollama for keeping sensitive prompts off third-party APIs
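Before pointing OpenClaw at Ollama, it is worth confirming the Ollama API actually answers on its default port (11434). The host below is a placeholder; use your Ollama container's address if it runs elsewhere on the LAN:

```shell
# Confirm the Ollama API responds before wiring it into OpenClaw.
# OLLAMA_HOST is a placeholder; 11434 is Ollama's default API port.
OLLAMA_HOST=127.0.0.1
curl -fsS --max-time 3 "http://${OLLAMA_HOST}:11434/api/tags" \
  || echo "Ollama not reachable; check the container, port mapping, or firewall"
```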
{
"agents": {
"defaults": {
"model": {
"primary": "ollama/qwen2.5:14b"
}
}
}
}
Step 6: Harden the install before you expose anything beyond LAN
This part matters more than the sexy AI bits.
OpenClaw's security docs are blunt: the platform assumes a personal assistant trust model, not a hostile multi-tenant environment. If multiple untrusted people can message one tool-enabled agent, you are effectively sharing that tool authority.
For a personal Unraid install, do these at minimum:
- Keep access on LAN or behind a trusted tunnel
- Use token auth with a long random token
- Do not expose a wide-open bot to strangers
- Limit enabled tools if you do not need shell, browser, or filesystem actions
- Run regular audits with openclaw security audit and openclaw security audit --deep
The security docs also recommend starting with the smallest access that still works, then widening only when you have a reason.
Manual install on Unraid without Community Apps
If you prefer full control, the template repo includes a manual docker run path. A simplified version is shown below.
Use the template unless you need to customize mounts, image tags, or environment behavior. Manual runs are more fragile and harder to maintain from the Unraid GUI.
mkdir -p /mnt/user/appdata/openclaw/config
mkdir -p /mnt/user/appdata/openclaw/workspace
mkdir -p /mnt/user/appdata/openclaw/homebrew
docker run -d \
--name OpenClaw \
--network bridge \
--user root \
--restart unless-stopped \
-p 18789:18789 \
-v /mnt/user/appdata/openclaw/config:/root/.openclaw:rw \
-v /mnt/user/appdata/openclaw/workspace:/home/node/clawd:rw \
-v /mnt/user/appdata/openclaw/homebrew:/home/linuxbrew/.linuxbrew:rw \
-e OPENCLAW_GATEWAY_TOKEN=YOUR_TOKEN \
-e ANTHROPIC_API_KEY=sk-ant-YOUR_KEY \
ghcr.io/openclaw/openclaw:latest
Performance expectations on Unraid
OpenClaw itself is not usually the bottleneck. The model is.
The gateway container is light enough for modest homelab hardware. A few rules of thumb:
- Docker builds are safer with 2 GB+ RAM available
- Cloud models offer the best overall quality but carry ongoing API cost
- Local Ollama models cost less but are more sensitive to your hardware
- Browser-heavy or coding workflows benefit from stronger CPU, more RAM, and fast storage
If your Unraid box already runs Plex, arr apps, VMs, and local inference, OpenClaw will feel great only if you leave real headroom. Starving an agent platform of RAM and blaming the software is clown behavior.
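A quick headroom check from the Unraid terminal before you install, reading straight from /proc/meminfo:

```shell
# Show available memory in MB; aim for 2 GB+ free before heavy builds
awk '/MemAvailable/ {printf "%d MB available\n", $2/1024}' /proc/meminfo
```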
Updating OpenClaw on Unraid
The template repo documents two common update paths.
From the Unraid UI: go to Docker, click the OpenClaw icon, choose Check for Updates, and apply the update.
Before major updates, back up at least your config and workspace directories under /mnt/user/appdata/openclaw/.
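One way to take that backup from the Unraid terminal, using the paths from this guide. The destination under /mnt/user/backups is a placeholder; point it at whatever share you actually use:

```shell
# Archive config and workspace before a major update.
# SRC matches the layout in this guide; DEST is a placeholder backup share.
SRC=/mnt/user/appdata/openclaw
DEST=/mnt/user/backups
mkdir -p "$DEST" && tar -czf "$DEST/openclaw-$(date +%F).tar.gz" -C "$SRC" config workspace \
  || echo "backup failed; check that $SRC exists"
```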
docker pull ghcr.io/openclaw/openclaw:latest
docker restart OpenClaw
Troubleshooting OpenClaw on Unraid
1. The Control UI opens but you cannot log in — You forgot the tokenized URL or mismatched the gateway token. Use the full URL with your token as a query parameter. Verify openclaw.json exists in your config mount and that auth is configured correctly.
2. "No API key found for anthropic" even though you added a key — Your default model is still set to Anthropic while you actually entered a Gemini, OpenAI, or Groq key. Set agents.defaults.model.primary to a model from the provider you actually configured.
3. Container starts, but skills or extra tools disappear after restart — You installed packages in a non-persistent path. Use the dedicated persistent Homebrew mount at /mnt/user/appdata/openclaw/homebrew.
4. Docker build or install crashes with exit 137 — Memory pressure. OpenClaw's Docker docs recommend at least 2 GB RAM for image build workflows. Free memory, stop other heavy containers, or move the build to stronger hardware.
5. The dashboard loads, but remote devices cannot connect — Wrong bind mode, bad port mapping, or LAN/firewall issue. Check that port 18789 is published, gateway.bind is set to lan, you are using the right Unraid server IP, and no reverse proxy or firewall is mangling the request.
6. Channel setup works in theory but messages never arrive — Usually bad channel credentials, missing bot permissions, or too much setup at once. Strip back to one channel, verify logs, then re-add complexity one layer at a time.
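For problem 5, a quick reachability check you can run from another LAN machine without installing netcat, using bash's /dev/tcp device (the IP is a placeholder):

```shell
# Test TCP reachability of the gateway port (bash builtin, no netcat needed).
# Replace the IP with your Unraid server's address.
UNRAID_IP=192.168.1.50
if timeout 3 bash -c "exec 3<>/dev/tcp/${UNRAID_IP}/18789" 2>/dev/null; then
  echo "port 18789 open"
else
  echo "port 18789 closed or filtered"
fi
```

If the port reports closed from another machine but the dashboard works locally, suspect the bind mode or a firewall rather than OpenClaw itself.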
Conclusion
If you want a practical, always-on homelab deployment, OpenClaw on Unraid is one of the best self-hosting combinations available right now. The official Community Apps path removes most of the annoying setup friction. Unraid gives you clean persistent storage, easy Docker management, and a natural place to run adjacent services like Ollama. The main pitfalls are not about OpenClaw itself, they are about sloppy configuration: wrong default model, weak auth, bad folder mapping, and exposing too much too soon. Use the CA template, keep the first install boring, get one provider working, then add channels, automations, and local models one step at a time. That path works.
Skip the setup? DoneClaw deploys OpenClaw for you — $29/mo, cancel anytime, zero configuration.
Frequently asked questions
Is OpenClaw on Unraid a good idea for beginners?
Yes, if you are already comfortable with Unraid Docker basics. It is easier than a raw Linux install and easier to maintain than a pile of custom scripts.
Can I run OpenClaw on Unraid with local models only?
Yes, using Ollama or another local provider path. But local-only is not automatically better. It depends on your hardware and quality requirements.
What port does OpenClaw use on Unraid?
The documented default is 18789 for the Control UI and Gateway API.
How much RAM do I need to run OpenClaw on Unraid?
The Docker docs specifically warn that image build steps may fail on 1 GB hosts and recommend at least 2 GB RAM for build workflows. Real usage depends mostly on your model setup and what else your server is doing.
Should I expose OpenClaw directly to the internet from Unraid?
No, not by default. Keep it on LAN, behind a secure tunnel, or behind a properly hardened auth and network setup.