Setup & Deployment

How to Use MCP Servers with OpenClaw: Complete Setup Guide (2026)

23 min read · Updated 2026-03-31

By DoneClaw Team · We run managed OpenClaw deployments and write from hands-on production experience.

If you've been using OpenClaw as your personal AI agent, you've probably hit a wall: your agent can browse the web, send messages, and run commands — but what about connecting to your actual tools? Your databases, your project management apps, your CRM, your code repositories? That's exactly what MCP servers solve. The Model Context Protocol (MCP) is the open standard that lets AI agents like OpenClaw connect to virtually any external system through a universal interface. Think of it as USB-C for AI — one protocol, infinite connections. In this guide, we'll walk through everything: what MCP is, how OpenClaw supports it via the built-in mcporter skill, how to configure and connect popular MCP servers, and real-world workflows that turn your OpenClaw agent from a smart chatbot into a genuinely integrated assistant.

What Is MCP (Model Context Protocol)?

MCP is an open-source protocol created by Anthropic that standardizes how AI applications connect to external tools and data sources. Before MCP, every integration was custom — if you wanted your AI to access Google Calendar, you wrote a Google Calendar integration. If you wanted Notion, you wrote a separate Notion integration. Each one with its own authentication, data format, and error handling.

MCP changes that by defining a universal client-server architecture: the MCP Host is the AI application (in our case, OpenClaw) that manages connections; the MCP Client is a component within the host that maintains a connection to one MCP server; and the MCP Server is a program that exposes tools, resources, or prompts to the AI.

The protocol uses JSON-RPC 2.0 over two transport mechanisms: STDIO for local servers on the same machine (lowest latency, simple setup — just a command), and Streamable HTTP for remote servers and cloud services (higher latency due to network round-trips, moderate setup requiring a URL and authentication).
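To make that concrete, here's a rough Python sketch of what a single tools/call round-trip looks like on the wire. The tool name, arguments, and response text are illustrative, not a real server exchange:

```python
import json

# A minimal JSON-RPC 2.0 "tools/call" request, as an MCP client would send it.
# The method and params shape follow the MCP spec; the tool and path are made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "/home/user/documents/notes.txt"},
    },
}

# Over the STDIO transport, each message travels as one line of JSON.
wire = json.dumps(request)

# A successful response echoes the request id and carries the tool result.
response_wire = (
    '{"jsonrpc": "2.0", "id": 1,'
    ' "result": {"content": [{"type": "text", "text": "hello"}]}}'
)
response = json.loads(response_wire)

assert response["id"] == request["id"]           # responses are matched by id
print(response["result"]["content"][0]["text"])  # → hello
```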

Each MCP server can expose three types of capabilities: Tools — functions the AI can call (e.g., create_issue, query_database, send_email); Resources — data the AI can read (e.g., file contents, API responses, configuration); and Prompts — pre-written templates for specific tasks (e.g., "summarize this PR").

As of early 2026, there are thousands of MCP servers available — from official ones by companies like Sentry, Linear, and Notion, to community-built servers for everything from home automation to cryptocurrency trading.

How OpenClaw Supports MCP: The mcporter Skill

OpenClaw integrates MCP through a built-in skill called mcporter. This is a CLI tool and runtime that manages MCP server connections, handles authentication, and lets your agent call MCP tools directly.

Here's what makes mcporter powerful:

  • Unified management: List, configure, and call any MCP server from one interface
  • Multiple transports: Supports both STDIO (local) and HTTP (remote) servers
  • OAuth integration: Built-in OAuth flow for servers that require it
  • Daemon mode: Keep frequently-used servers running in the background
  • Type generation: Auto-generate TypeScript types from server schemas
To check that the skill is enabled and the CLI is available:

# Confirm the mcporter skill is enabled in OpenClaw
openclaw skills list | grep mcporter

# Install or update the mcporter CLI if it's missing, then verify
npm install -g mcporter
mcporter --version

Step-by-Step: Connecting Your First MCP Server

Let's connect a real MCP server to OpenClaw. We'll start with the filesystem server — one of the simplest and most useful MCP servers available.

Step 1: Install the MCP Server. Most MCP servers are distributed as npm or Python packages. The filesystem server is available via npm and gives your agent controlled access to specific directories on your machine:

npx @modelcontextprotocol/server-filesystem --help

Step 2: Configure mcporter. Create or edit your mcporter configuration. This tells mcporter the server name (filesystem), the transport (STDIO for a local process), and the command to launch the filesystem server with access to two directories:

mcporter config add filesystem \
  --transport stdio \
  --command "npx @modelcontextprotocol/server-filesystem /home/user/documents /home/user/projects"

Step 3: Verify the Connection. List the tools the server exposes. You should see read_file, write_file, list_directory, and search_files:

mcporter list filesystem --schema

Step 4: Test a Tool Call.

mcporter call filesystem.list_directory path="/home/user/documents"

If you see your directory listing, the connection is working.

Step 5: Use It from OpenClaw. Once configured, your OpenClaw agent can use mcporter tools naturally. The agent recognizes mcporter as an available skill and routes relevant requests through it automatically.
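The key=value syntax in the last command hints at how a CLI like mcporter might map shell arguments onto JSON-RPC params. A hypothetical Python sketch (mcporter's actual parsing is internal to the tool, so treat this as illustration only):

```python
import json

def parse_tool_args(pairs):
    """Parse key=value pairs (as passed to a CLI tool call) into a params
    dict. Values that parse as JSON (numbers, booleans, quoted strings)
    are decoded; everything else is kept as a plain string."""
    args = {}
    for pair in pairs:
        key, _, raw = pair.partition("=")
        try:
            args[key] = json.loads(raw)
        except json.JSONDecodeError:
            args[key] = raw  # e.g. a bare path whose quotes the shell stripped
    return args

print(parse_tool_args(['path="/home/user/documents"', "recursive=true", "limit=50"]))
# → {'path': '/home/user/documents', 'recursive': True, 'limit': 50}
```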

Connecting Popular MCP Servers

Here's how to set up the most useful MCP servers for common workflows.

GitHub MCP Server — perfect for code reviews, issue management, and repository exploration. Available tools include create_issue, search_repositories, get_file_contents, create_pull_request, list_commits, and more.

# Install
npm install -g @modelcontextprotocol/server-github

# Configure with your GitHub token
mcporter config add github \
  --transport stdio \
  --command "npx @modelcontextprotocol/server-github" \
  --env "GITHUB_PERSONAL_ACCESS_TOKEN=ghp_your_token_here"

Linear MCP Server — for teams using Linear for project management. Available tools include list_issues, create_issue, update_issue, list_projects, and search_issues.

mcporter config add linear \
  --transport stdio \
  --command "npx @linear/mcp-server" \
  --env "LINEAR_API_KEY=lin_api_your_key_here"

PostgreSQL / SQLite Database Server — query your databases directly through natural language.

# SQLite
mcporter config add mydb \
  --transport stdio \
  --command "npx @modelcontextprotocol/server-sqlite /path/to/database.db"

# PostgreSQL
mcporter config add postgres \
  --transport stdio \
  --command "npx @modelcontextprotocol/server-postgres" \
  --env "POSTGRES_CONNECTION_STRING=postgresql://user:pass@localhost:5432/mydb"

Sentry MCP Server (Remote) — Sentry provides an official remote MCP server, so there's nothing to install locally. Available tools include search_issues, get_issue_details, list_projects, and get_event.

mcporter config add sentry \
  --transport http \
  --url "https://mcp.sentry.dev/sse"

# Authenticate via OAuth
mcporter auth sentry

Notion MCP Server — connect your Notion workspace for page and database management.

mcporter config add notion \
  --transport stdio \
  --command "npx @notionhq/mcp-server" \
  --env "NOTION_API_KEY=ntn_your_key_here"

Advanced Configuration

For servers you use frequently, run mcporter in daemon mode to avoid cold-start delays. The daemon keeps configured STDIO servers running in the background, so tool calls are nearly instant.

# Start the daemon
mcporter daemon start

# Check status
mcporter daemon status

# Stop when needed
mcporter daemon stop

Here's a complete mcporter.json configuration file with multiple servers. Save it as config/mcporter.json in your OpenClaw workspace directory.

{
  "servers": {
    "filesystem": {
      "transport": "stdio",
      "command": "npx",
      "args": ["@modelcontextprotocol/server-filesystem", "/home/user/projects"],
      "env": {}
    },
    "github": {
      "transport": "stdio",
      "command": "npx",
      "args": ["@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxxx"
      }
    },
    "sentry": {
      "transport": "http",
      "url": "https://mcp.sentry.dev/sse",
      "auth": {
        "type": "oauth",
        "tokenPath": "~/.mcporter/tokens/sentry.json"
      }
    },
    "postgres": {
      "transport": "stdio",
      "command": "npx",
      "args": ["@modelcontextprotocol/server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "postgresql://user:pass@localhost:5432/mydb"
      }
    }
  }
}

Never hardcode API keys in config files — the ghp_xxxxx and user:pass values above are placeholders only. Use environment variables instead, and reference them in the mcporter config using the ${VAR_NAME} syntax.

If you're building custom integrations, mcporter can generate TypeScript types from any server. This is invaluable when building custom OpenClaw skills that wrap MCP server functionality.

# Generate TypeScript client types
mcporter emit-ts github --mode types > github-mcp-types.ts

# Generate a full client
mcporter emit-ts github --mode client > github-mcp-client.ts
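The ${VAR_NAME} substitution described above can be sketched in a few lines of Python. The exact semantics mcporter uses (such as the behavior for unset variables) are an assumption here:

```python
import os
import re

def expand_env_refs(value):
    """Replace ${VAR_NAME} references with values from the environment.
    Unset variables raise an error rather than silently substituting an
    empty string, which would produce a confusing auth failure later."""
    def sub(match):
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]
    return re.sub(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}", sub, value)

# Illustrative token value; in practice this is set in your shell profile.
os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"] = "ghp_example"
print(expand_env_refs("GITHUB_PERSONAL_ACCESS_TOKEN=${GITHUB_PERSONAL_ACCESS_TOKEN}"))
# → GITHUB_PERSONAL_ACCESS_TOKEN=ghp_example
```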

Real-World Automation Workflows

Here are five practical workflows that combine OpenClaw's existing capabilities with MCP servers.

1. Morning Briefing with Live Data — Combine MCP servers with OpenClaw's cron jobs for an automated morning briefing. Every morning at 8 AM: check overnight PRs and issues via GitHub, get critical errors from the last 12 hours via Sentry, list today's assigned tasks via Linear, show today's meetings via Calendar, and deliver a summary via Telegram.

2. Automated Bug Triage — When Sentry detects a new error, your agent can automatically query Sentry for error details and stack trace, search GitHub for related issues, check the relevant code via the filesystem server, create a Linear issue with context and suggested fix, and notify the team via Discord or Slack.

3. Database-Powered Customer Support — Connect your production database (read-only!) and let your agent answer customer questions by querying user records, subscription status, and payment history.

4. Code Review Assistant — Combine GitHub MCP and filesystem access to fetch PR diffs via GitHub MCP, read related files via filesystem, and provide detailed code review with specific line-by-line feedback.

5. Research and Documentation Pipeline — Use multiple servers together to search GitHub for relevant code examples, query your database for usage statistics, read existing docs via filesystem, generate updated documentation, and create a PR with the changes.

As a concrete example, the morning briefing in workflow 1 maps directly onto an OpenClaw cron job definition:

{
  "name": "morning-briefing",
  "schedule": { "kind": "cron", "expr": "0 8 * * *", "tz": "America/New_York" },
  "payload": {
    "kind": "agentTurn",
    "message": "Run my morning briefing: check GitHub PRs, Sentry errors, Linear tasks, and today's calendar. Summarize everything concisely."
  },
  "sessionTarget": "isolated",
  "delivery": { "mode": "announce" }
}

Skip 60 minutes of setup — deploy in 60 seconds

DoneClaw handles Docker, servers, security, and updates. Your OpenClaw agent is ready to chat in under a minute.

Deploy Now

Building Your Own MCP Server

If the existing servers don't cover your needs, building a custom one is straightforward. Here's a minimal example in TypeScript using the official MCP SDK (tool parameter schemas are declared with zod):

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-custom-server",
  version: "1.0.0",
});

// Define a tool; checkServiceStatus is your own implementation (not shown)
server.tool(
  "get_status",
  "Check the status of a service",
  { service_name: z.string().describe("Name of the service") },
  async ({ service_name }) => {
    const status = await checkServiceStatus(service_name);
    return {
      content: [{ type: "text", text: JSON.stringify(status) }],
    };
  }
);

// Connect via STDIO
const transport = new StdioServerTransport();
await server.connect(transport);

Or in Python using FastMCP, the high-level Python framework for MCP servers:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-custom-server")

@mcp.tool()
async def get_status(service_name: str) -> str:
    """Check the status of a service.

    Args:
        service_name: Name of the service to check
    """
    # check_service_status is your own implementation (not shown)
    status = await check_service_status(service_name)
    return f"Service {service_name}: {status}"

if __name__ == "__main__":
    mcp.run(transport="stdio")

Then register it with mcporter and call it directly:

mcporter call --stdio "node ./my-server.js" get_status service_name=api

For a deeper dive into building OpenClaw skills that wrap MCP servers, check out the custom skill tutorial and developer guide.

Security Considerations

MCP servers have access to real systems. Take security seriously.

1. Principle of Least Privilege — Give each server only the access it needs. Restrict filesystem access to specific directories rather than giving access to everything:

# Good: restrict filesystem access to specific directories
npx @modelcontextprotocol/server-filesystem /home/user/documents

# Bad: give access to everything
npx @modelcontextprotocol/server-filesystem /

2. Read-Only Where Possible — For database servers, use read-only connection strings to prevent accidental writes.

3. Token Scoping — When creating API tokens for GitHub, Linear, etc., use the minimum required scopes. Recommended scopes: GitHub — repo:read, issues:write (only if creating issues); Linear — read, write:issues; Notion — specific page/database access only; Sentry — project:read, event:read.

4. Network Isolation — Run mcporter behind Tailscale or a VPN for remote MCP servers. Never expose local MCP server ports to the public internet.

5. Audit Logging — Enable mcporter's JSON output for audit trails to track all tool calls:

mcporter call github.create_issue --output json title="Bug fix" | tee -a /var/log/mcporter-audit.log
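To complement a read-only connection string (point 2), a defense-in-depth layer can reject non-read SQL before it ever reaches the database server. A minimal sketch — deliberately conservative, and never a substitute for a read-only database role:

```python
import re

# Only statements starting with these keywords are allowed through.
# EXPLAIN and CTEs are excluded on purpose: in some databases
# (e.g. EXPLAIN ANALYZE in PostgreSQL) they can execute writes.
READ_ONLY = {"select", "show"}

def is_read_only(sql: str) -> bool:
    """Conservative check that a SQL statement only reads data,
    based on its first keyword after stripping leading comments."""
    stripped = re.sub(r"^\s*(--[^\n]*\n|/\*.*?\*/\s*)*", "", sql, flags=re.S)
    first = stripped.lstrip().split(None, 1)
    return bool(first) and first[0].lower() in READ_ONLY

assert is_read_only("SELECT * FROM users WHERE id = 1")
assert not is_read_only("DROP TABLE users")
```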

Troubleshooting Common Issues

Server Not Responding — If mcporter call hangs or times out, check the server process is running with mcporter daemon status, test the server directly with a JSON-RPC request, and check for port conflicts if using HTTP transport.

Authentication Failures — If you get 401 Unauthorized or 403 Forbidden errors, re-authenticate with mcporter auth --reset, verify token permissions in the service's settings, and check token expiration since OAuth tokens expire.

Tool Not Found — If you see "Tool not found" errors, list available tools with mcporter list --schema, check exact spelling (tool names are case-sensitive), and update the server package if needed.

JSON Parsing Errors — If you get SyntaxError in STDIO mode, the server may be writing to stdout and corrupting the JSON-RPC stream. Check that the server isn't using console.log() to stdout, and for Python servers ensure logging goes to stderr.

High Memory Usage — If the mcporter daemon uses excessive RAM, reduce the number of persistent daemon connections, only daemon-ize servers you call frequently (more than 10 times per day), and restart the daemon periodically.

MCP Servers vs OpenClaw Skills: When to Use Which

You might wonder: if OpenClaw already has skills, why use MCP servers at all?

Comparing OpenClaw skills and MCP servers:

  • Setup: skills are a SKILL.md drop-in; MCP servers need an install plus mcporter configuration
  • Portability: skills are OpenClaw-specific; MCP servers work with any MCP client (Claude, VS Code, Cursor)
  • Ecosystem: ClawHub offers around 200 skills; thousands of MCP servers exist
  • Custom logic: skills offer full SKILL.md flexibility; MCP servers provide a structured tool/resource model
  • Authentication: skills rely on manual env vars and config; MCP servers get built-in OAuth and token management

In short: skills work best for OpenClaw-specific workflows, while MCP servers excel at standard integrations with external services.

The sweet spot: Use MCP servers for standard service integrations (GitHub, databases, APIs) and OpenClaw skills for agent-specific behavior (personality, memory, complex workflows). They complement each other perfectly.

Performance Tips

1. Use STDIO for Local Servers — STDIO transport has zero network overhead. Always prefer it for servers running on the same machine.

2. Batch Tool Calls — Instead of making 10 separate list_issues calls, use search/filter parameters to combine into one filtered call.

3. Cache Expensive Queries — For data that doesn't change frequently (like repository structure), cache results locally.
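A simple time-based cache covers most of these cases. A minimal sketch — the TTL value and cache key below are illustrative choices, not mcporter features:

```python
import time

class TTLCache:
    """Tiny time-based cache for expensive MCP query results."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}

    def get_or_fetch(self, key, fetch):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]           # fresh cached value
        value = fetch()             # cache miss or expired: call the tool
        self._store[key] = (now, value)
        return value

cache = TTLCache(ttl_seconds=60)
calls = []
fetch = lambda: calls.append(1) or ["src/", "docs/"]
tree = cache.get_or_fetch("repo-tree", fetch)
tree = cache.get_or_fetch("repo-tree", fetch)
assert len(calls) == 1  # second lookup served from cache
```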

4. Use the Daemon for Frequent Servers — Cold-starting an MCP server takes 1-3 seconds. The daemon eliminates this startup latency.

5. Monitor Token Usage — MCP tool calls consume tokens in your AI model. Track usage via OpenClaw's cost tracking to avoid surprise bills.
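For a quick sanity check, the common ~4 characters per token heuristic is enough to spot oversized tool responses before they hit your bill. Real counts depend on the model's tokenizer, so treat this as an order-of-magnitude estimate only:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate using the ~4 chars/token heuristic
    for English text. Use only for monitoring, never for billing."""
    return max(1, len(text) // 4)

# A 100 KB file returned by read_file costs roughly 25k input tokens.
print(estimate_tokens("x" * 100_000))  # → 25000
```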

The Future of MCP + OpenClaw

MCP adoption is exploding in 2026. Major developments include:

  • ChatGPT, Claude, VS Code, and Cursor all support MCP natively
  • GitHub's official MCP Registry makes discovering servers as easy as finding npm packages
  • Streamable HTTP transport improvements bring better auth and reliability
  • Early-stage MCP Apps run interactive applications inside AI clients

OpenClaw's mcporter integration means you get all these ecosystem benefits automatically. As new MCP servers appear, you can connect them to your agent without waiting for OpenClaw-specific integrations.

Conclusion

MCP servers transform OpenClaw from a smart chat assistant into a genuinely integrated automation platform. With mcporter, connecting to GitHub, databases, project management tools, and any custom system is straightforward and secure. The key steps: install mcporter (bundled with OpenClaw), configure your MCP servers (STDIO for local, HTTP for remote), secure your connections (least privilege, read-only where possible), and build workflows that combine multiple servers for real automation. Start with one server — the filesystem or GitHub server is a great first step. Once you see the power of natural language access to your actual tools, you'll quickly add more.

Skip the setup? DoneClaw deploys OpenClaw for you — $29/mo with 7-day free trial, zero configuration.


Frequently asked questions

Do I need mcporter to use MCP servers with OpenClaw?

Yes — mcporter is OpenClaw's built-in MCP management tool. It handles server configuration, authentication, and tool invocation. It ships as a bundled skill that you can enable in Settings → Skills.

Can I use MCP servers without an internet connection?

Absolutely. STDIO-based MCP servers run locally on your machine. The filesystem server, SQLite server, and any custom server you build work completely offline. Only remote HTTP servers require internet.

How many MCP servers can I connect simultaneously?

There's no hard limit in mcporter. Practically, each STDIO server is a separate process consuming around 30-50MB of RAM. Most setups work fine with 5-10 concurrent servers. If you need more, use the daemon mode and only keep frequently-used servers active.

Are MCP tool calls billed as AI model tokens?

The tool call itself (the JSON-RPC request to the MCP server) is free. However, the AI model processes the tool's response as part of the conversation, which consumes input/output tokens. Large responses (like full file contents) can be expensive. Use selective queries and pagination to manage costs.

Is it safe to connect my production database via MCP?

Only with proper precautions: use a read-only database user, connect only to a read replica (not the primary), restrict accessible tables/schemas, and run behind a VPN or Tailscale. Never give an MCP server write access to production data unless you've thoroughly tested the server and have rollback procedures.

Can I use the same MCP servers with Claude Desktop and OpenClaw?

Yes — that's one of MCP's biggest advantages. Your mcporter.json configuration can be shared across MCP clients. The exact config format may vary slightly between clients, but the servers themselves are identical.

How do I update MCP servers?

For npm-based servers, run npm update -g @modelcontextprotocol/server-<name>. For Python-based servers, run pip install --upgrade <server-package>. After updating, restart the mcporter daemon with mcporter daemon restart.

What happens if an MCP server crashes mid-request?

mcporter handles crashes gracefully. STDIO servers are automatically restarted on the next call. HTTP servers return timeout errors that OpenClaw's retry logic can handle. You won't lose conversation state — the agent simply reports that the tool call failed and can retry.