How to Set Up OpenClaw with OpenRouter: Complete Multi-Model Access Guide (2026)
19 min read · Updated 2026-03-19
By DoneClaw Team · We run managed OpenClaw deployments and write from hands-on production experience.
Setting up OpenClaw with OpenRouter is one of the smartest moves you can make as an AI agent user. Instead of managing separate API keys for OpenAI, Anthropic, Google, Mistral, and dozens of other providers, OpenRouter gives you a single unified API that routes to 300+ AI models, including free ones. Your OpenClaw agent gets instant access to every major LLM on the market, with automatic fallbacks, provider-level uptime pooling, and pay-as-you-go pricing with no markup on inference. Whether you want to experiment with cutting-edge models, cut costs by routing different tasks to different LLMs, or simply keep a reliable fallback chain that carries your agent through provider outages, OpenRouter is the answer. This guide walks through every step, from getting your API key to advanced multi-model routing configurations that can cut your AI costs roughly in half.
What Is OpenRouter and Why Use It with OpenClaw?
OpenRouter is a unified AI gateway that sits between your application and dozens of model providers. Think of it as a smart proxy: you send requests to one endpoint (https://openrouter.ai/api/v1), and OpenRouter routes them to the right provider — OpenAI, Anthropic, Google, Meta, Mistral, Cohere, and many more.
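To make the "one endpoint" idea concrete, here is a minimal Python sketch that constructs such a request against OpenRouter's OpenAI-compatible chat completions endpoint. The model ID and prompt are placeholders, and nothing is sent over the network:

```python
import json
import os
import urllib.request

# OpenRouter exposes one OpenAI-compatible endpoint for every provider.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request; the model ID alone picks the provider."""
    payload = {
        "model": model,  # e.g. "anthropic/claude-sonnet-4-5" or "openai/gpt-5.2"
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers
    )

req = build_request("anthropic/claude-sonnet-4-5", "Hello!")
print(req.full_url)  # https://openrouter.ai/api/v1/chat/completions
```

Switching providers means changing only the `model` string; the URL, headers, and payload shape stay identical.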
Here's why this matters for OpenClaw users:
Single API Key, Hundreds of Models — Without OpenRouter, connecting OpenClaw to multiple providers means managing separate API keys, billing accounts, and configurations for each one. With OpenRouter, you configure one provider and unlock everything.
Automatic Fallbacks — If Anthropic's API goes down at 3 AM, OpenRouter automatically routes your request to another provider hosting the same model. Your agent keeps working while you sleep. This is especially valuable for always-on OpenClaw deployments where uptime matters.
Free Models for Experimentation — OpenRouter offers several models completely free, including variants of Llama, Gemma, Phi, and others. These are perfect for low-stakes tasks like summarization, drafting, or brainstorming — saving your API budget for the tasks that need frontier models.
Pay-As-You-Go with No Markup — OpenRouter passes through provider pricing without markup on inference. You pay a small fee when purchasing credits (around 5% for card payments), but the per-token costs are identical to what you'd pay going direct. For many users, the convenience and fallback reliability more than justify this.
OpenRouter also supports powerful model variants through suffixes:
- :free — routes to free-tier providers with lower rate limits.
- :extended — uses providers with longer context windows.
- :thinking — enables reasoning/chain-of-thought by default.
- :online — attaches web search results to the prompt.
- :nitro — prioritizes the fastest providers for throughput.
- :floor — prioritizes the cheapest providers for cost savings.
- :exacto — optimizes for tool-calling reliability.
These variants are appended to any model ID. For example, anthropic/claude-sonnet-4-5:free uses the free tier, while openai/gpt-5.2:nitro prioritizes speed.
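Mechanically, a variant is just a colon-separated suffix on the model ID. A tiny helper (hypothetical, for illustration only) shows the pattern:

```python
def with_variant(model_id: str, variant: str) -> str:
    """Append an OpenRouter variant suffix (free, nitro, floor, ...) to a model ID."""
    return f"{model_id}:{variant}"

print(with_variant("anthropic/claude-sonnet-4-5", "free"))  # anthropic/claude-sonnet-4-5:free
print(with_variant("openai/gpt-5.2", "nitro"))              # openai/gpt-5.2:nitro
```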
Prerequisites
Before you start, make sure you have:
- A running OpenClaw installation — If you haven't set up OpenClaw yet, follow our beginner's setup guide first.
- Node.js 22 or later — Required for OpenClaw.
- An OpenRouter account — Free to create at openrouter.ai.
- Credits on OpenRouter — Even $5 is enough to get started. Free models work without credits, but with reduced rate limits.
Step 1: Create Your OpenRouter Account and API Key
Go to openrouter.ai and sign up (Google, GitHub, or email).
Navigate to Settings → Credits and add at least $5 in credits. This unlocks higher rate limits for free models (200 requests/day vs. 50 without credits) and lets you use paid models.
Go to Settings → API Keys and click Create Key.
Name your key something descriptive like openclaw-agent and copy the key. It starts with sk-or-v1-.
Important: Store this key securely. You'll need it in the next step, and you won't be able to see it again in the OpenRouter dashboard.
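Before wiring the key into OpenClaw, a quick format sanity check can catch copy-paste mistakes. This sketch assumes only the documented sk-or-v1- prefix:

```python
def looks_like_openrouter_key(key: str) -> bool:
    """OpenRouter API keys start with the sk-or-v1- prefix."""
    return key.startswith("sk-or-v1-") and len(key) > len("sk-or-v1-")

print(looks_like_openrouter_key("sk-or-v1-abc123"))  # True
print(looks_like_openrouter_key("sk-ant-abc123"))    # False (an Anthropic-style key)
```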
Step 2: Configure OpenClaw with OpenRouter
OpenClaw has built-in support for OpenRouter as a bundled provider plugin. This means you don't need to manually configure base URLs or API compatibility — OpenClaw handles it natively.
Method 1: Using the Onboarding Wizard (Recommended) — The easiest way to configure OpenRouter is through OpenClaw's onboarding wizard:
Method 2: Environment Variable — Set your API key as an environment variable. For persistence, add it to your shell profile (~/.bashrc, ~/.zshrc) or your OpenClaw environment configuration.
Method 3: Direct Configuration (openclaw.json) — For the most control, edit your openclaw.json configuration directly. After saving, restart your OpenClaw gateway.
Method 4: CLI Quick Setup — Use the openclaw models commands for quick configuration.
openclaw onboard
export OPENROUTER_API_KEY="sk-or-v1-your-key-here"
{
// Set OpenRouter as the primary provider
agents: {
defaults: {
model: {
primary: "openrouter/anthropic/claude-sonnet-4-5"
}
}
},
// Set the API key via env
env: {
OPENROUTER_API_KEY: "sk-or-v1-your-key-here"
}
}
openclaw gateway restart
# Set OpenRouter as your primary model
openclaw models set openrouter/anthropic/claude-sonnet-4-5
# Verify the configuration
openclaw models status
Step 3: Verify Your Setup
After configuration, verify everything works:
You should see a response identifying the model you configured. If you get an authentication error, double-check your API key.
# Check model status
openclaw models status
# List available models
openclaw models list --provider openrouter
# Send a test message
openclaw agent --message "Hello, which model are you?" --thinking off
Step 4: Scan and Select Free Models
One of the best features of the OpenClaw + OpenRouter combo is the built-in model scanner. This tool inspects OpenRouter's free model catalog and can probe models for tool and image support:
The scanner ranks models by: image support (multimodal capability), tool-calling latency (how fast they handle function calls), context window size, and parameter count.
In interactive mode (TTY), you can select fallback models from the scan results. For automated/headless setups, use --yes to accept the top recommendations.
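The exact scoring is internal to the scanner, but conceptually the ranking is a multi-key sort. A sketch, with made-up model entries:

```python
from typing import NamedTuple

class Model(NamedTuple):
    id: str
    image: bool      # multimodal capability
    latency_ms: int  # tool-calling round-trip time
    context: int     # context window in tokens
    params_b: int    # parameter count in billions

def rank(models: list[Model]) -> list[Model]:
    # Image-capable first, then lower tool-call latency,
    # then larger context window, then larger parameter count.
    return sorted(models, key=lambda m: (not m.image, m.latency_ms, -m.context, -m.params_b))

catalog = [
    Model("small-text", False, 300, 32_000, 7),
    Model("big-vision", True, 900, 128_000, 70),
    Model("fast-vision", True, 400, 96_000, 27),
]
print([m.id for m in rank(catalog)])  # ['fast-vision', 'big-vision', 'small-text']
```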
Recommended Free Models for OpenClaw (March 2026):
- meta-llama/llama-4-maverick:free — 400B MoE, 1M context; good for general tasks and coding.
- google/gemma-3-27b-it:free — 27B, 96K context; good for instruction following.
- qwen/qwen-2.5-coder-32b-instruct:free — 32B, 32K context; good for code generation.
- mistralai/mistral-small-3.2:free — 24B, 128K context; good for fast general tasks.
- deepseek/deepseek-chat-v3-0324:free — 685B MoE, 64K context; good for complex reasoning.
All of these support tool calling.
Note: Free model availability and rate limits change. Run openclaw models scan periodically to discover new options.
# Full scan with probing (requires OPENROUTER_API_KEY)
openclaw models scan
# Metadata only (no live probes)
openclaw models scan --no-probe
# Filter by minimum parameter count (e.g., 70B+ models only)
openclaw models scan --min-params 70
# Auto-set the best free model as your default
openclaw models scan --set-default
# Auto-set the best free image-capable model
openclaw models scan --set-image
Skip 60 minutes of setup — deploy in 60 seconds
DoneClaw handles Docker, servers, security, and updates. Your OpenClaw agent is ready to chat in under a minute.
Deploy Now
Step 5: Set Up Multi-Model Routing with Fallbacks
The real power of OpenRouter with OpenClaw comes from intelligent model routing. You can set a premium model as primary and cheaper (or free) models as fallbacks:
Once configured, you can switch models on the fly in any chat session using the /model command. Use /model to show the model picker, /model list to list all available models, /model 3 to select model #3 from the list, or /model openrouter/openai/gpt-5.2 to switch to a specific model.
Tip: For OpenRouter models that contain / in the model ID (like anthropic/claude-sonnet-4-5), you must include the openrouter/ prefix.
{
agents: {
defaults: {
model: {
// Primary: best model for complex tasks
primary: "openrouter/anthropic/claude-opus-4-6",
// Fallbacks: tried in order if primary fails
fallbacks: [
"openrouter/anthropic/claude-sonnet-4-5",
"openrouter/openai/gpt-5.2",
"openrouter/meta-llama/llama-4-maverick:free"
]
},
// Define the model allowlist with aliases
models: {
"openrouter/anthropic/claude-opus-4-6": {
alias: "Opus"
},
"openrouter/anthropic/claude-sonnet-4-5": {
alias: "Sonnet"
},
"openrouter/openai/gpt-5.2": {
alias: "GPT-5"
},
"openrouter/meta-llama/llama-4-maverick:free": {
alias: "Llama Free"
}
}
}
}
}
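OpenClaw implements fallback handling internally; as a rough sketch of the behavior, a fallback chain is just a try-in-order loop. The call function below is simulated (the "primary" is pretending to be down):

```python
def run_with_fallbacks(prompt, models, call):
    """Try each model in order; return (model, response) from the first success."""
    last_err = None
    for model in models:
        try:
            return model, call(model, prompt)
        except Exception as err:  # provider outage, rate limit, etc.
            last_err = err
    raise RuntimeError("all models failed") from last_err

chain = [
    "openrouter/anthropic/claude-opus-4-6",
    "openrouter/anthropic/claude-sonnet-4-5",
    "openrouter/meta-llama/llama-4-maverick:free",
]

def fake_call(model, prompt):
    # Simulate the primary provider being unreachable.
    if "opus" in model:
        raise TimeoutError("provider outage")
    return f"response from {model}"

used, answer = run_with_fallbacks("hi", chain, fake_call)
print(used)  # openrouter/anthropic/claude-sonnet-4-5
```

The agent degrades to the next model in the chain instead of failing outright, which is exactly why a free model makes a good last entry.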
# Add a fallback model
openclaw models fallbacks add openrouter/openai/gpt-5.2
# List current fallbacks
openclaw models fallbacks list
# Remove a fallback
openclaw models fallbacks remove openrouter/openai/gpt-5.2
# Clear all fallbacks
openclaw models fallbacks clear
Advanced Configuration: Cost Optimization Strategies
Strategy 1: Use Model Variants for Task-Appropriate Routing — Instead of always using the most expensive model, append variants to match the task. Use :floor for routine tasks (cost-optimized), :nitro for time-sensitive tasks (speed-optimized), and :free for low-stakes operations.
Strategy 2: Separate Image Models — Configure a dedicated image-capable model for multimodal tasks while keeping a text-only model as primary.
Strategy 3: Cron Jobs on Cheaper Models — If you use OpenClaw cron jobs for periodic tasks like email summaries or news digests, route those to cheaper models.
{
agents: {
defaults: {
model: {
// Premium model for complex tasks
primary: "openrouter/anthropic/claude-opus-4-6"
},
models: {
// Cost-optimized variant for routine tasks
"openrouter/anthropic/claude-sonnet-4-5:floor": {
alias: "Sonnet Cheap"
},
// Speed-optimized for time-sensitive tasks
"openrouter/anthropic/claude-sonnet-4-5:nitro": {
alias: "Sonnet Fast"
},
// Free variant for low-stakes operations
"openrouter/meta-llama/llama-4-maverick:free": {
alias: "Free"
}
}
}
}
}
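As an illustration of Strategy 1, task-appropriate routing boils down to a small lookup from task kind to model variant. The task names and routing table here are hypothetical:

```python
# Hypothetical task-to-variant routing table, mirroring the aliases above.
ROUTES = {
    "routine":        "openrouter/anthropic/claude-sonnet-4-5:floor",  # cheapest provider
    "time_sensitive": "openrouter/anthropic/claude-sonnet-4-5:nitro",  # fastest provider
    "low_stakes":     "openrouter/meta-llama/llama-4-maverick:free",   # no inference cost
}

def pick_model(task_kind: str,
               default: str = "openrouter/anthropic/claude-opus-4-6") -> str:
    """Route known task kinds to cheap/fast variants; everything else gets the premium model."""
    return ROUTES.get(task_kind, default)

print(pick_model("routine"))  # ...claude-sonnet-4-5:floor
print(pick_model("complex"))  # falls through to the premium default
```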
{
agents: {
defaults: {
model: {
primary: "openrouter/anthropic/claude-opus-4-6"
},
imageModel: {
primary: "openrouter/openai/gpt-5.2",
fallbacks: [
"openrouter/google/gemini-3.1-pro-preview"
]
}
}
}
}
{
cron: {
jobs: [
{
name: "daily-email-summary",
schedule: { kind: "cron", expr: "0 8 * * *" },
payload: {
kind: "agentTurn",
message: "Summarize my unread emails",
model: "openrouter/meta-llama/llama-4-maverick:free"
},
sessionTarget: "isolated"
}
]
}
}
OpenRouter vs Direct Provider: Cost Comparison
Here's a realistic cost comparison for a typical OpenClaw user processing ~1 million tokens per month (roughly 750K input + 250K output):
- Claude Opus 4.6 — $22.50/mo direct vs. $22.50/mo + ~$1.13 credit fee through OpenRouter.
- Claude Sonnet 4.5 — $5.25/mo direct vs. $5.25/mo + ~$0.26 credit fee.
- GPT-5.2 — $7.50/mo direct vs. $7.50/mo + ~$0.38 credit fee.
- Llama 4 Maverick (free) — would require self-hosting directly, but costs $0.00 through OpenRouter.
- Mixed routing — ~$15/mo managing 3 API keys directly vs. ~$8/mo with 1 API key through OpenRouter, about 47% savings.
The key insight: while individual models cost slightly more through OpenRouter (the ~5% credit purchase fee), the ability to route tasks to the right model — including free ones — typically saves money overall. Plus you eliminate the overhead of managing multiple provider accounts.
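You can sanity-check these figures yourself. The sketch below reproduces the ~5% credit purchase fee and the mixed-routing savings quoted above:

```python
from decimal import Decimal, ROUND_HALF_UP

CARD_FEE = Decimal("0.05")  # ~5% fee when buying credits by card

def credit_fee(monthly_spend: str) -> Decimal:
    """Fee paid on top of pass-through inference pricing."""
    return (Decimal(monthly_spend) * CARD_FEE).quantize(Decimal("0.01"), ROUND_HALF_UP)

print(credit_fee("22.50"))  # 1.13 (Claude Opus 4.6)
print(credit_fee("5.25"))   # 0.26 (Claude Sonnet 4.5)
print(credit_fee("7.50"))   # 0.38 (GPT-5.2)

# Mixed routing: ~$15/mo direct across 3 keys vs ~$8/mo via OpenRouter.
savings = (Decimal("15") - Decimal("8")) / Decimal("15")
print(f"{savings:.0%}")     # 47%
```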
Using OpenRouter's BYOK (Bring Your Own Key)
If you already have API keys from providers like OpenAI or Anthropic, you can use them through OpenRouter while still getting the benefits of unified routing.
This is ideal if you have existing API credits or enterprise agreements with specific providers but still want OpenRouter's fallback and routing features.
- Go to OpenRouter Settings → Integrations
- Add your provider API keys
- The first 1,000 BYOK requests per month are free
- After that, OpenRouter charges a small percentage of what the same model would normally cost
Troubleshooting Common Issues
"Model is not allowed" — If you see this error, your model isn't in the OpenClaw allowlist. Fix: Add the model to your agents.defaults.models configuration, or remove the allowlist entirely to allow all models.
Authentication Errors (401/403) — Verify your API key with echo $OPENROUTER_API_KEY. Check the key hasn't expired in the OpenRouter dashboard. Ensure the key has sufficient credits. Run openclaw models status to verify auth state.
Rate Limiting (429) — Buy credits to raise free-model limits from 50 to 200 requests/day. Try the :floor variant, which routes through a cheapest-first provider pool and may avoid the congested one. Add fallback models so OpenClaw automatically tries the next model when rate-limited. Space out cron jobs so multiple jobs don't fire at the same time.
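If you script against OpenRouter directly rather than through OpenClaw, the standard remedy for 429s is retry with exponential backoff. A sketch with a simulated flaky provider (all names here are illustrative):

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response."""

def call_with_backoff(call, retries=4, base_delay=1.0, sleep=time.sleep):
    """Retry a rate-limited call with exponential backoff: 1s, 2s, 4s, ..."""
    for attempt in range(retries):
        try:
            return call()
        except RateLimitError:
            if attempt == retries - 1:
                raise
            sleep(base_delay * 2 ** attempt)

# Simulate a provider that rejects the first two requests.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RateLimitError()
    return "ok"

delays = []  # record sleeps instead of actually waiting
result = call_with_backoff(flaky, sleep=delays.append)
print(result)  # ok
print(delays)  # [1.0, 2.0]
```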
Slow Responses — Use the :nitro variant to prioritize throughput-optimized providers. Check OpenRouter status at status.openrouter.ai. Switch to a faster model — smaller models like Sonnet are faster than Opus. Check your VPS location, as OpenRouter's primary servers are in the US.
Model Output Issues — Set explicit maxTokens, as some models default to low output limits. Use :exacto for tool-heavy workflows, optimized for function-calling reliability. Check model compatibility by running openclaw models scan to verify tool calling support.
# Quick fix: remove the allowlist
openclaw config patch '{"agents":{"defaults":{"models":null}}}'
Monitoring Your OpenRouter Usage
OpenRouter Dashboard — The OpenRouter Activity tab (openrouter.ai/activity) shows per-request cost breakdown, model usage distribution, token counts (input/output), and provider routing decisions.
OpenClaw Built-in Monitoring — Use OpenClaw's /status or /usage commands to see session-level usage. For programmatic monitoring, the OpenClaw API exposes usage data you can pipe into dashboards or alerting systems.
/status # Current session stats
/usage # Detailed token and cost breakdown
OpenRouter + OpenClaw: Best Practices
Follow these best practices to get the most out of the OpenRouter and OpenClaw combination:
- Start with a scan. Run openclaw models scan before configuring anything. It shows you what's available and what works best.
- Set up fallbacks from day one. Even if you only use one model, adding 2-3 fallbacks prevents outages from affecting your agent.
- Use free models for background tasks. Heartbeats, periodic checks, and low-stakes automation don't need Claude Opus. Route them to free Llama or Gemma models.
- Monitor your spending weekly. Check the OpenRouter Activity tab. If one model dominates your costs, consider whether a cheaper model would work for those tasks.
- Keep your OpenClaw updated. OpenClaw's bundled OpenRouter plugin gets updates with new models and features. Run openclaw update regularly.
- Test model variants. The :nitro, :floor, and :exacto variants can dramatically change performance and cost. Experiment to find the best fit for your workflows.
- Use model aliases. Set up short aliases like "fast" and "smart" in your config so you can quickly switch in chat without remembering full model IDs.
Conclusion
Setting up OpenClaw with OpenRouter takes about 5 minutes and immediately expands your agent's capabilities from a single model to hundreds. The combination of unified billing, automatic fallbacks, free model access, and intelligent routing variants makes OpenRouter the most practical way to get multi-model access in OpenClaw. Start with the free models, add a $5 credit to unlock better rate limits, and set up a fallback chain. Your agent will be more reliable, more cost-efficient, and more capable than with any single direct provider connection.
Skip the setup? DoneClaw deploys OpenClaw for you — $29/mo with 7-day free trial, zero configuration.
Frequently asked questions
Can I use OpenRouter's free models with OpenClaw at zero cost?
Yes. If you configure free model variants (e.g., openrouter/meta-llama/llama-4-maverick:free), you pay nothing for inference. The only limitation is rate limits: 50 requests/day without credits, 200/day with any credit purchase. For a personal AI agent with moderate usage, free models are genuinely viable.
Does OpenRouter add latency compared to direct provider connections?
Minimal. OpenRouter adds roughly 50-100ms of routing overhead per request. For streaming responses (which OpenClaw uses by default), this is barely noticeable — you might see a slightly longer time-to-first-token, but the overall experience is smooth. Use the :nitro variant if latency is critical.
Can I mix OpenRouter with direct provider keys in OpenClaw?
Absolutely. OpenClaw supports multiple providers simultaneously. You can use Anthropic directly for your primary model and OpenRouter as a fallback, or vice versa. Configure your primary model as anthropic/claude-opus-4-6 and add OpenRouter models like openrouter/openai/gpt-5.2 as fallbacks.
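A sketch of such a mixed setup, following the same openclaw.json shape used earlier in this guide (assuming both an Anthropic key and an OpenRouter key are configured in your environment):

```json5
{
  agents: {
    defaults: {
      model: {
        // Direct Anthropic connection for the primary model
        primary: "anthropic/claude-opus-4-6",
        // OpenRouter as the fallback path
        fallbacks: [
          "openrouter/openai/gpt-5.2",
          "openrouter/meta-llama/llama-4-maverick:free"
        ]
      }
    }
  }
}
```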
What happens when my OpenRouter credits run out?
Paid models stop working immediately — you'll get an insufficient credits error. Free models (:free variants) continue working at reduced rate limits. OpenClaw's fallback system helps here: if you have free models in your fallback chain, your agent gracefully degrades instead of going silent.
How does OpenRouter compare to running Ollama locally?
They serve different needs. OpenRouter gives you access to frontier models (Claude Opus, GPT-5) without local hardware. Ollama runs models locally for zero API cost and complete privacy, but requires significant hardware (16GB+ RAM for 7B models, 64GB+ for 70B). The best setup is often both: Ollama for local/private tasks, OpenRouter for frontier model access.