
How to Set Up OpenClaw on a Mac Mini: The Definitive Guide (2026)

22 min read · Updated 2026-03-12

By DoneClaw Team · We run managed OpenClaw deployments and write from hands-on production experience.

The Mac Mini has become the go-to hardware for self-hosting OpenClaw. With Apple Silicon drawing just 4–7 watts at idle, near-silent operation, and enough unified memory to run local LLMs, an OpenClaw Mac Mini setup gives you a personal AI agent that runs 24/7 for under $2/month in electricity — no VPS bills, no cloud dependency, no data leaving your home. This guide walks you through every step: choosing the right Mac Mini model, installing OpenClaw, configuring headless operation, setting up local AI models with Ollama, enabling secure remote access with Tailscale, and hardening everything for long-term reliability. Whether you're a developer building an AI-powered workflow or a power user who wants a personal assistant on Telegram, this is the complete playbook.

Why the Mac Mini Is the Best Hardware for OpenClaw

Before diving into the setup, let's understand why the Mac Mini has emerged as the community favorite over VPS hosting, Raspberry Pi, and other options.

**Power Efficiency That Changes the Math**

The M4 Mac Mini draws approximately 4 watts at idle and 10–15 watts under typical OpenClaw workloads (processing messages, running API calls, executing skills). Compare that to a typical VPS at $5–$20/month or a desktop PC drawing 60–150 watts at idle.

Hardware comparison (electricity calculated at the $0.15/kWh US average):

| Hardware | Idle draw | Monthly cost |
|---|---|---|
| Mac Mini M4 | 4–7 W | $0.50–$1.50 electricity |
| Mac Mini M4 Pro | 5–10 W | $0.70–$2.00 electricity |
| Raspberry Pi 5 | 3–5 W | $0.40–$1.00 electricity |
| Budget VPS (Contabo/Hetzner) | n/a | $5–$10 recurring, no electricity cost |
| Old desktop PC | 60–150 W | $8–$20 electricity |
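These electricity figures are easy to sanity-check with a quick back-of-envelope calculation (using the same $0.15/kWh US average):

```shell
# Monthly electricity cost for a given idle draw, at $0.15/kWh.
# 6 W is a midpoint for the M4's 4-7 W idle range.
awk -v watts=6 -v rate=0.15 'BEGIN {
  kwh = watts * 24 * 30 / 1000        # watt-hours per month, converted to kWh
  printf "%.2f kWh/month -> $%.2f/month\n", kwh, kwh * rate
}'
```

For 6 W this prints `4.32 kWh/month -> $0.65/month`, consistent with the table; plug in your own wattage and local rate.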

**Apple Silicon Unified Memory: The Local LLM Advantage**

This is the real differentiator. Apple Silicon's unified memory architecture means the CPU and GPU share the same high-bandwidth memory pool (120 GB/s on the M4, 273 GB/s on the M4 Pro). For local LLM inference, this translates to dramatically faster token generation compared to traditional CPU-only systems at the same price point.

A 32 GB Mac Mini M4 can run quantized 70B parameter models (a tight fit: the weights of a 4-bit 70B model alone approach 40 GB, so the lower-bit quantizations are the realistic choice) — something that would require an expensive GPU setup on Linux. The Raspberry Pi 5, by contrast, tops out at 16 GB of RAM and can barely run 7B models at usable speeds.
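A quick rule of thumb for whether a model fits in RAM: quantized weights take roughly parameters × bits-per-weight / 8 bytes, plus overhead for the KV cache and macOS itself. Assuming ~4.8 bits per weight for a typical Q4_K_M quantization:

```shell
# Approximate weight size in GB for a quantized model.
# params is in billions; Q4_K_M averages roughly 4.8 bits per weight.
awk -v params=70 -v bits=4.8 'BEGIN {
  printf "~%.0f GB of weights\n", params * bits / 8
}'
```

A 70B model at ~4.8 bits works out to roughly 42 GB of weights before any KV cache, which is why 70B models on smaller machines rely on aggressive quantization.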

**Silence and Form Factor**

The M4 Mac Mini is effectively silent under normal loads (its fan rarely spins up audibly). It's smaller than a sandwich. You can tuck it behind your router, on a bookshelf, or in a closet and forget it exists — which is exactly what you want from an always-on server.

**macOS Reliability**

macOS's launchd daemon system is rock-solid for keeping services running. OpenClaw integrates natively with launchd, meaning automatic startup on boot, automatic restart after crashes, and automatic recovery after power outages. No systemd unit files to write, no Docker containers to manage.

Choosing the Right Mac Mini Model

Not every Mac Mini is equal for OpenClaw. Your choice depends on one key question: do you want to run local AI models, or are you fine using cloud APIs?

Model comparison:

| Model | RAM | Price | Best for | Local LLM capability |
|---|---|---|---|---|
| Mac Mini M2 (refurbished) | 8 GB | ~$350 | Cloud APIs only | Minimal (3B models only) |
| Mac Mini M4 (base) | 16 GB | $499 | Cloud APIs + small local models | 7B–8B models run smoothly |
| Mac Mini M4 (upgraded) | 32 GB | $699 | Best all-around value | Quantized 70B models run well |
| Mac Mini M4 Pro | 24 GB | $1,399 | Heavy local inference | Fast 34B models |
| Mac Mini M4 Pro | 48 GB | $1,599 | Maximum local AI | Outstanding; 70B+ at full speed |

**Our Recommendation**

For most users: Mac Mini M4 with 32 GB ($699). This hits the sweet spot. You can run OpenClaw with cloud APIs (Claude, GPT-4o, Gemini) for pennies per query, AND fall back to local models via Ollama when you want zero API costs. The 32 GB of unified memory handles quantized Llama 3.3 70B at comfortable speeds for conversational use.

If budget is tight, the base M4 with 16 GB ($499) works perfectly for cloud-API-only setups and still runs smaller local models like Llama 3.1 8B and Mistral 7B.

If you already own an M1 or M2 Mac Mini, it works fine — OpenClaw's resource requirements are minimal. The agent itself uses less than 200 MB of RAM. The model choice is what drives memory needs.

Step-by-Step OpenClaw Mac Mini Setup

**Prerequisites**

You'll need a Mac Mini running macOS Sonoma 14+ or Sequoia 15+, an internet connection, an API key from your chosen AI provider (Anthropic, OpenAI, Google, or a free option), and 10–15 minutes.

**Step 1: Update macOS and Install Homebrew**

Start with a fully updated system. Open Terminal (press Cmd + Space, type "Terminal"):

```sh
# Check for and install macOS updates
softwareupdate --install --all

# Install Homebrew (macOS package manager)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Add Homebrew to your PATH (Apple Silicon Macs)
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
```

**Step 2: Install Node.js 22+**

OpenClaw requires Node.js 22 or higher:

```sh
brew install node@22

# Verify the installation
node --version   # Should show v22.x.x
npm --version    # Should show 10.x.x
```

**Step 3: Install and Onboard OpenClaw**

This is the main event. Install OpenClaw globally and run the onboarding wizard:

```sh
# Install OpenClaw
npm install -g openclaw@latest

# Verify installation
openclaw --version

# Run the onboarding wizard with daemon installation
openclaw onboard --install-daemon
```

The --install-daemon flag is critical on Mac Mini. It creates a launchd plist that starts OpenClaw automatically when the Mac boots (even before you log in), restarts the gateway automatically if it crashes, and runs OpenClaw as a background daemon — no terminal window needed.

The onboarding wizard will walk you through:

- **Model provider** — choose your AI provider and enter your API key.
- **Channel setup** — connect Telegram, WhatsApp, Discord, or other messaging channels.
- **Workspace** — choose or create a workspace directory for your agent's files.
- **Binding mode** — choose loopback for Tailscale (recommended), or lan for local network access only.

**Step 4: Verify the Gateway Is Running**

After onboarding completes, check that the gateway started successfully:

```sh
openclaw status
```

You should see output showing the gateway is running, which channels are connected, and the current model. If everything looks good, open your browser and navigate to http://localhost:18789. This opens the Control UI — a web dashboard where you can chat with your agent, manage settings, view sessions, and install skills.

**Step 5: Test Your Agent**

Send a test message through your connected channel (Telegram, WhatsApp, etc.) or use the Control UI chat. If the agent responds, congratulations — your OpenClaw Mac Mini setup is live.
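For reference, the launchd plist generated by the --install-daemon flag in Step 3 looks something like the sketch below. Treat the program path and exact keys as assumptions — inspect the real file in ~/Library/LaunchAgents/ after onboarding:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>ai.openclaw.gateway</string>
  <key>ProgramArguments</key>
  <array>
    <string>/opt/homebrew/bin/openclaw</string>
    <string>gateway</string>
  </array>
  <!-- start when the Mac boots -->
  <key>RunAtLoad</key>
  <true/>
  <!-- restart automatically after a crash -->
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```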

Configuring the Mac Mini for Always-On Operation

A personal AI agent is only useful if it's always available. These settings ensure your Mac Mini stays online 24/7 without manual intervention.

**Prevent Sleep**

Open System Settings → Energy (or Battery → Options on laptops) and enable: Prevent automatic sleeping when the display is off, Wake for network access, and Start up automatically after a power failure. You can also set these via Terminal:

```sh
# Prevent sleep
sudo pmset -a sleep 0
sudo pmset -a disablesleep 1

# Wake for network access
sudo pmset -a womp 1

# Auto-restart after power failure
sudo pmset -a autorestart 1

# Verify settings
pmset -g
```

**Enable Automatic Login**

For the launchd daemon to have full capabilities after a reboot, go to System Settings → Lock Screen → Log in automatically as: Select your user account.

Security note: If your Mac Mini is in a physically secure location (your home), automatic login is fine. If it's in a shared space, skip this and accept that some features may need manual login after a reboot.

**Enable SSH for Headless Access**

You don't need a monitor, keyboard, or mouse connected to the Mac Mini once it's set up. Go to System Settings → General → Sharing → Remote Login: On. Now you can SSH from any machine on your network:

```sh
# Find the Mac Mini's local IP (run this on the Mac Mini)
ipconfig getifaddr en0

# SSH from another machine (substitute your username and the Mac Mini's IP)
ssh youruser@192.168.1.x

# Or use Bonjour/mDNS
ssh youruser@your-mac-mini.local
```

Harden SSH by disabling password authentication (use SSH keys instead). Generate an SSH key on your other machine if you don't have one, and copy it to the Mac Mini before turning off password logins:

```sh
ssh-keygen -t ed25519
ssh-copy-id youruser@192.168.1.x
```

Then disable password authentication on the Mac Mini:

```sh
# On the Mac Mini, edit SSH config
sudo nano /etc/ssh/sshd_config
```

Add or modify these lines in sshd_config:

```
PasswordAuthentication no
PubkeyAuthentication yes
```

Restart SSH to apply:

```sh
sudo launchctl stop com.openssh.sshd
sudo launchctl start com.openssh.sshd
```
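Once key-based login works, an entry in ~/.ssh/config on your client machine saves typing. The alias, hostname, and username below are placeholders to adapt:

```
# ~/.ssh/config on your laptop/desktop
Host macmini
    HostName 192.168.1.x          # or your-mac-mini.local
    User youruser
    IdentityFile ~/.ssh/id_ed25519
```

After that, `ssh macmini` connects with no further arguments.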

Running Local AI Models with Ollama

One of the biggest advantages of an OpenClaw Mac Mini setup over a cheap VPS is the ability to run powerful local LLMs with zero per-token cost. This is where Apple Silicon unified memory really shines.

**Install Ollama**

```sh
brew install ollama

# Start Ollama (it will also auto-start via launchd)
ollama serve
```

**Pull Your First Model**

Choose a model based on your Mac Mini's RAM:

```sh
# For 16 GB Mac Mini — fast, capable for everyday tasks
ollama pull llama3.1:8b

# For 32 GB Mac Mini — the sweet spot, near-GPT-4 quality
ollama pull llama3.3:70b-instruct-q4_K_M

# For 48 GB Mac Mini — maximum quality
ollama pull llama3.3:70b-instruct-q8_0
```

**Configure OpenClaw to Use Ollama**

Edit your OpenClaw configuration and add or modify the model settings:

```json
{
  "agent": {
    "model": "ollama/llama3.3:70b-instruct-q4_K_M"
  }
}
```

Or use a hybrid approach — Ollama for routine tasks, cloud API for complex ones. This lets your agent handle simple queries locally (zero cost) while escalating complex reasoning tasks to Claude via API:

```json
{
  "agent": {
    "model": "ollama/llama3.1:8b",
    "models": {
      "default": "ollama/llama3.1:8b",
      "complex": "anthropic/claude-sonnet-4-20250514"
    }
  }
}
```

**Ollama Performance Benchmarks on Apple Silicon**

Benchmarks are approximate and vary by quantization level and system load:

| Model | M4 16 GB | M4 32 GB | M4 Pro 48 GB |
|---|---|---|---|
| Llama 3.1 8B (Q4) | ~45 tok/s | ~48 tok/s | ~55 tok/s |
| Mistral 7B (Q4) | ~50 tok/s | ~52 tok/s | ~60 tok/s |
| Llama 3.3 70B (Q4) | won't fit | ~12 tok/s | ~18 tok/s |
| Qwen 2.5 32B (Q4) | ~8 tok/s | ~20 tok/s | ~28 tok/s |

For comfortable conversational use, you want at least 8–10 tokens/sec. The 32 GB M4 Mac Mini hits this threshold for 70B models, which is why we recommend it as the sweet spot.
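To put the tokens-per-second numbers above in perspective: a token is roughly three-quarters of an English word, so generation speed converts to words per minute like this (a back-of-envelope sketch):

```shell
# Convert tokens/sec to approximate words/minute (1 token ~ 0.75 words).
awk -v tps=10 'BEGIN {
  printf "%.0f tokens/sec ~ %.0f words/min\n", tps, tps * 0.75 * 60
}'
```

At 10 tokens/sec that's about 450 words/min, roughly double a comfortable reading pace, which is why 8–10 tokens/sec feels fluid in conversation.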

Secure Remote Access with Tailscale

Your OpenClaw agent should be reachable from anywhere — your phone, your laptop at a coffee shop, your office — without opening any ports on your router. Tailscale creates a private encrypted mesh network between your devices for free (personal use).

**Install and Configure Tailscale**

```sh
# Install Tailscale
brew install tailscale

# Authenticate (opens browser)
sudo tailscale up

# Check your Tailscale IP
tailscale ip -4
```

Install Tailscale on your phone and laptop too. All devices on the same Tailscale network can reach each other directly.

**Configure OpenClaw for Tailscale**

OpenClaw has built-in Tailscale integration:

```json
{
  "gateway": {
    "bind": "loopback",
    "tailscale": {
      "mode": "serve"
    }
  }
}
```

With bind set to loopback, the gateway only listens on 127.0.0.1 — it's invisible to the internet. Tailscale Serve handles HTTPS and routing, so you can access the Control UI at https://your-mac-mini.tail-net-name.ts.net/.

**Enable MagicDNS**

In the Tailscale admin console, enable MagicDNS. This gives your Mac Mini a memorable hostname instead of a raw IP address.

Skip 60 minutes of setup — deploy in 60 seconds

DoneClaw handles Docker, servers, security, and updates. Your OpenClaw agent is ready to chat in under a minute.

Deploy Now

Managing the OpenClaw Daemon

Since you installed with --install-daemon, OpenClaw runs as a launchd service. Here's how to manage it.

**Check Status, Restart, Stop, and View Logs**

Use the openclaw gateway commands for common operations:

```sh
# Check status
openclaw gateway status

# Restart the gateway
openclaw gateway restart

# Stop the gateway
openclaw gateway stop

# Follow live logs
openclaw gateway logs -f

# Or check the log file directly
tail -f ~/Library/Logs/openclaw/gateway.log
```

For lower-level control, you can use launchctl directly:

```sh
# Check if the daemon is loaded
launchctl list | grep openclaw

# Unload (stop and remove from launchd)
launchctl unload ~/Library/LaunchAgents/ai.openclaw.gateway.plist

# Reload
launchctl load ~/Library/LaunchAgents/ai.openclaw.gateway.plist
```

Personalizing Your Agent

A fresh OpenClaw installation is a blank slate. The real power comes from customizing your agent's personality, memory, and skills.

**SOUL.md — Your Agent's Personality**

Create a SOUL.md file in your workspace to define how your agent behaves:

**Memory System**

OpenClaw's persistent memory system means your agent remembers conversations across sessions. On a Mac Mini, memory files live in your workspace directory and persist across reboots. No special configuration needed — it works out of the box.
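A typical workspace ends up looking something like the sketch below. SOUL.md, MEMORY.md, the daily memory files, skill configurations, and TOOLS.md are the pieces mentioned in this guide; the exact directory names are illustrative:

```
openclaw-workspace/
├── SOUL.md       # personality definition
├── MEMORY.md     # long-term memory
├── memory/       # daily memory files
├── skills/       # installed skill configurations
└── TOOLS.md      # tool notes
```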

**Installing Skills**

Skills extend your agent's capabilities. Install them from ClawHub. Popular skills for Mac Mini setups include: weather (get forecasts via natural language), brave-search (web search without leaving the chat), todoist (task management integration), himalaya (email access via IMAP/SMTP), and video-edit (video manipulation with ffmpeg).

An example SOUL.md:

```markdown
# SOUL.md

You're a sharp, efficient personal assistant. You have opinions and
you share them. No corporate speak, no "I'd be happy to help" — just
direct, useful answers.

You manage my calendar, check my email, research topics I'm curious
about, and remind me about things I'd forget. You're always on, always
available, and you actually remember our conversations.

Humor is welcome. Sycophancy is not.
```

Managing skills from the command line:

```sh
# Search for skills
openclaw skills search weather

# Install a skill
openclaw skills install weather

# List installed skills
openclaw skills list
```

Mac Mini vs VPS vs Raspberry Pi: Full Comparison

Choosing between a Mac Mini, a VPS, and a Raspberry Pi for OpenClaw hosting? Here's the complete breakdown.

| | Mac Mini M4 (32 GB) | Budget VPS ($5/mo) | Raspberry Pi 5 (8 GB) |
|---|---|---|---|
| Upfront cost | $699 | $0 | ~$100 |
| Ongoing cost | ~$1.50/mo electricity | $5–$20/mo | ~$0.75/mo electricity |
| Break-even vs VPS | ~7–12 months | n/a | ~2 months |
| Local LLM support | Excellent (70B models) | None (CPU too slow) | Poor (7B max, very slow) |
| RAM | 16–48 GB unified | 4–8 GB typical | 4–8 GB |
| Storage | 256 GB–2 TB NVMe | 50–200 GB SSD | microSD (slow) or NVMe HAT |
| Physical access | Full (silent operation) | None (remote) | Full (near-silent) |
| Data privacy | 100% local | Provider can access data | 100% local |
| Reliability | Excellent (launchd) | Excellent (systemd) | Good (SD card wear risk) |
| Network | Limited to home upload speed | Data center speeds | Limited to home upload speed |

Bottom line: If you want local LLM capability and care about data privacy, the Mac Mini is the clear winner. If you need data center network speeds or don't want hardware to manage, a VPS is better. The Raspberry Pi is a great budget option for cloud-API-only setups.

Troubleshooting Common Issues

**OpenClaw Won't Start After Reboot**

Symptom: The gateway doesn't start automatically after restarting the Mac Mini. Fix: Check that the launchd plist is loaded. If it's not listed, reload it. If the plist file doesn't exist, re-run the daemon installation:

**"Connection Refused" When Accessing Control UI Remotely**

Symptom: You can access http://localhost:18789 on the Mac Mini itself, but not from other devices. Fix: If you're using Tailscale with bind set to loopback, that's expected — the gateway intentionally doesn't listen on the network. Access it via the Tailscale hostname. If you want LAN access without Tailscale, change the bind mode to lan and restart the gateway.
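You can confirm which addresses the gateway is actually answering on with curl (the LAN IP below is a placeholder for your Mac Mini's address):

```shell
# Run on the Mac Mini itself. 200 (or a 3xx redirect code) means the
# gateway is answering on that address; 000 means nothing is listening.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:18789

# Substitute your Mac Mini's LAN IP to see what other devices see
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.x:18789
```

With bind set to loopback, the first command succeeds and the second prints 000, which is the expected behavior, not a bug.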

**Mac Mini Goes to Sleep Despite Settings**

Symptom: The agent becomes unreachable after a few hours. Fix: Verify power management settings. If the settings look correct but sleep still happens, there may be a Power Nap issue. Also check System Settings → Lock Screen and ensure the screen lock timer isn't triggering a full sleep.

**Ollama Models Run Slowly**

Symptom: Token generation is much slower than expected. Fix: Check if the model fits in memory. If the model is larger than available memory, macOS will swap to disk, which kills performance. Either use a smaller quantization (Q4_K_M instead of Q8_0), use a smaller model (8B instead of 70B), or close other memory-intensive apps.

**Telegram Bot Not Responding**

Symptom: Messages sent to the Telegram bot don't get responses. Fix: Check the gateway logs for errors. Common causes: Invalid bot token (re-check the token from BotFather), Webhook conflict (if you previously hosted the bot elsewhere, use openclaw channel telegram reset), or Network issue (ensure the Mac Mini has internet access).

Daemon check (gateway won't start after reboot):

```sh
launchctl list | grep openclaw

# If not listed, reload:
launchctl load ~/Library/LaunchAgents/ai.openclaw.gateway.plist

# If plist doesn't exist:
openclaw onboard --install-daemon
```

Sleep settings check:

```sh
pmset -g

# Look for:
# sleep = 0
# disablesleep = 1
# womp = 1

# If Power Nap is the issue:
sudo pmset -a powernap 0
```

Memory check for slow Ollama models:

```sh
# Check available memory
vm_stat | head -5

# Check Ollama's memory usage
ollama ps
```

Gateway logs for channel issues:

```sh
openclaw gateway logs -f
```

Advanced: Optimizing for Long-Term Reliability

**Automatic macOS Updates**

You want security updates but don't want the Mac Mini rebooting unexpectedly. This enables automatic critical security patches while leaving major OS updates for manual installation during a maintenance window.

**Monitoring Uptime**

Add a simple uptime check to your agent's heartbeat configuration. Your OpenClaw agent can monitor its own health and alert you via Telegram if something goes wrong.
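The exact heartbeat schema isn't documented here, so treat the following as an illustrative sketch — every field name below is an assumption to check against your OpenClaw version's documentation:

```json
{
  "agent": {
    "heartbeat": {
      "enabled": true,
      "interval": "30m",
      "notify": "telegram"
    }
  }
}
```

The idea is simply a periodic self-check with an alert channel; adapt the keys to whatever your installed version actually accepts.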

**Backup Your Workspace**

Your agent's personality, memory, and configuration live in the workspace directory. The workspace directory contains your SOUL.md, MEMORY.md, daily memory files, skill configurations, and TOOLS.md. Losing these means losing your agent's accumulated context and personality — essentially resetting it to a blank slate.

```sh
# Enable automatic security updates only
sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate AutomaticallyInstallMacOSUpdates -bool false
sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CriticalUpdateInstall -bool true
```

Workspace backup:

```sh
# Simple backup to an external drive or cloud
rsync -av ~/openclaw-workspace/ /Volumes/Backup/openclaw/

# Or use Time Machine (it's a Mac, after all)
```
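To run that rsync automatically, you could schedule it with launchd. The label, paths, and schedule in this sketch are illustrative, not an OpenClaw-provided file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>local.openclaw-backup</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/rsync</string>
    <string>-a</string>
    <string>/Users/youruser/openclaw-workspace/</string>
    <string>/Volumes/Backup/openclaw/</string>
  </array>
  <!-- run nightly at 03:00 -->
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key><integer>3</integer>
    <key>Minute</key><integer>0</integer>
  </dict>
</dict>
</plist>
```

Save it as ~/Library/LaunchAgents/local.openclaw-backup.plist and load it with `launchctl load` to activate the schedule.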

Cost Breakdown: First Year vs Ongoing

Cost breakdown for running OpenClaw on a Mac Mini M4 32 GB:

| Scenario | Year 1 | Year 2+ |
|---|---|---|
| Cloud API (Claude) | ~$897 ($699 hardware + $18 electricity + $180 API + $0 Tailscale) | ~$198/year ($18 electricity + $180 API) |
| Local models only | ~$717 ($699 hardware + $18 electricity) | ~$18/year (electricity) |

Compare this to alternatives: DoneClaw's managed service is $29/month ($348/year), everything included with zero setup. A VPS runs approximately $60–$240/year, with API costs on top.

The Mac Mini pays for itself within the first year if you were previously paying for a VPS, and becomes essentially free to run in subsequent years if you use local models.

Conclusion

The Mac Mini is genuinely the best hardware for self-hosting OpenClaw in 2026. The combination of Apple Silicon performance, negligible power draw, silent operation, native launchd integration, and the option to run local LLMs makes it hard to beat at any price point. The setup takes about 15 minutes if you're comfortable with Terminal, and you end up with a personal AI agent that:

- runs 24/7 without supervision
- costs under $2/month in electricity
- remembers everything across sessions
- connects to Telegram, WhatsApp, Discord, and more
- can run local AI models with zero API costs
- is reachable from anywhere via Tailscale
- recovers automatically from power outages and crashes

If you'd rather skip the self-hosting entirely, DoneClaw's managed service gives you all of this without touching hardware — but for those who want full control and the satisfaction of running their own AI infrastructure, the Mac Mini is the way to go.

Skip the setup? DoneClaw deploys OpenClaw for you — $29/mo with 7-day free trial, zero configuration.


Frequently asked questions

Can I use a Mac Mini M1 or M2 for OpenClaw?

Yes. OpenClaw itself is very lightweight — it needs less than 200 MB of RAM and minimal CPU. Any Apple Silicon Mac Mini runs the gateway perfectly. The difference is in local LLM capability: M1/M2 with 8 GB can only run small models (3B–7B), while M4 with 32 GB handles 70B models. For cloud-API-only setups, even an M1 with 8 GB works great.

Can I run OpenClaw and Ollama simultaneously without issues?

Yes. OpenClaw's gateway process uses minimal resources. Ollama loads models into memory on demand and unloads them after inactivity. On a 32 GB Mac Mini, you can comfortably run both with room to spare. Just avoid loading multiple large models simultaneously.

Should I use Docker on the Mac Mini?

You can, but it's unnecessary for most users. OpenClaw's native installation with launchd is simpler and more efficient on macOS. Docker adds a virtualization layer that slightly reduces Apple Silicon's memory bandwidth advantage for local LLMs. For a dedicated Mac Mini, native installation is the better path.

What happens if my home internet goes down?

Your agent's gateway keeps running locally, but cloud API calls and messaging channels (Telegram, WhatsApp) won't work until internet returns. If you're running local models via Ollama, the agent can still process requests through the local Control UI. All queued messages will be processed when connectivity resumes.

How do I update OpenClaw on the Mac Mini?

Run npm update -g openclaw, then restart the gateway with openclaw gateway restart. OpenClaw updates are backward-compatible — your workspace, memory, and configuration are preserved across updates.