Building Custom DoneClaw Skills — Developer Guide

13 min read · Updated 2026-03-11

By DoneClaw Team · We run managed OpenClaw deployments and write from hands-on production experience.

DoneClaw's built-in skills cover common workflows, but the real power comes from building your own. Custom skills let you teach your AI agent exactly how to handle tasks specific to your work, your tools, and your preferences. This guide walks through the complete lifecycle of building a custom skill — from understanding the file format and writing instructions, to testing locally, deploying to your agent, debugging issues, and following best practices that make skills reliable and maintainable. Code examples are included for every step.

Skill Anatomy — What Makes Up a Skill

A DoneClaw skill is a markdown file placed in the ~/.openclaw/skills/ directory inside your container. When you send a message that starts with the skill's trigger command, the agent loads the skill file and uses its instructions as context for processing your request.

Every skill file has four sections: Trigger (the command that activates it), Description (a one-line summary), Instructions (detailed directions for the agent), and Rules (constraints and guardrails). The Instructions section is where the real work happens — it is essentially a system prompt that tells the agent exactly how to behave when this skill is active.

The simplest possible skill is just a few lines of markdown. There is no compilation step, no deployment pipeline, and no configuration beyond the file itself. Drop the file in the skills directory, and it is immediately available.

# ~/.openclaw/skills/my-skill.md

## Trigger
/my-command

## Description
One-line description of what this skill does.

## Instructions
Detailed instructions for the agent. This is where you define:
- What input to expect after the trigger command
- How to process that input
- What output format to produce
- Where to store results in memory

## Rules
- Constraint 1: Keep responses under N words
- Constraint 2: Never do X
- Constraint 3: Always do Y

Writing Effective Instructions

The Instructions section is the most important part of a skill. It determines how reliably your agent handles the task. Writing good instructions is like writing good documentation — be specific, use examples, and cover edge cases.

Start with the expected input format. If the user types /invoice ACME Corp $5000 net-30, your instructions should specify that the agent should parse a company name, amount, and payment terms from the text after the trigger. Be explicit about what each part means and what to do if parts are missing.

Define the output format precisely. Instead of saying produce a summary, specify the exact structure: a heading, bullet points for key data, and a confirmation line. Agents follow structured instructions much more reliably than vague ones.

Include examples of both input and output. The agent uses these as reference patterns, and they dramatically improve consistency. A single good example is worth a paragraph of description.

## Instructions
Parse invoice requests in the format:
/invoice [company] [amount] [terms]

Extract:
- Company name (required)
- Amount in USD (required, with or without $ sign)
- Payment terms (optional, default: net-30)

Generate an invoice with:
- Invoice number: INV-[YYYY]-[sequential number]
- Date: today
- Due date: calculated from terms
- Line items: single line with the amount

Example input: /invoice ACME Corp $5000 net-15
Example output:
---
INVOICE INV-2026-0042
To: ACME Corp
Date: 2026-03-11
Due: 2026-03-26
Amount: $5,000.00
Terms: Net 15
---
Saved to: invoices/2026-03/INV-2026-0042

If company name is missing, ask for it.
If amount is missing, ask for it.
If terms are not specified, default to Net 30.
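You never write parsing code for a skill: the agent interprets the natural-language spec itself. Still, sketching the parse as real code is a useful way to check that your instructions are unambiguous. Here is one such sketch in Python; the function name and return shape are illustrative, not part of DoneClaw.

```python
import re

def parse_invoice_command(text: str) -> dict:
    """Sketch of the parsing behavior the invoice instructions describe.

    Illustration only: the agent does this interpretation itself from the
    skill's natural-language instructions.
    """
    body = text.removeprefix("/invoice").strip()

    # Terms: a trailing token like "net-30"; default to Net 30 if absent.
    terms_match = re.search(r"\bnet-(\d+)\s*$", body, re.IGNORECASE)
    if terms_match:
        terms = f"Net {terms_match.group(1)}"
        body = body[: terms_match.start()].strip()
    else:
        terms = "Net 30"

    # Amount: the last number-like token, with or without a $ sign.
    amount_match = re.search(r"\$?([\d,]+(?:\.\d+)?)\s*$", body)
    amount = float(amount_match.group(1).replace(",", "")) if amount_match else None
    company = (body[: amount_match.start()].strip() or None) if amount_match else (body or None)

    return {"company": company, "amount": amount, "terms": terms}

print(parse_invoice_command("/invoice ACME Corp $5000 net-15"))
# {'company': 'ACME Corp', 'amount': 5000.0, 'terms': 'Net 15'}
```

If you find yourself unsure how the sketch should handle an input, the skill's instructions are probably ambiguous about it too, and that is the spot to tighten.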

Using Memory in Skills

One of the most powerful features of DoneClaw skills is access to persistent memory. Your skill can read from and write to the agent's memory, enabling skills that accumulate knowledge over time and reference past interactions.

Memory operations are implicit — you instruct the agent to store something in memory and specify the path. The agent handles the actual storage mechanism. Paths should be organized hierarchically using forward slashes, with consistent naming conventions across all your skills.

When reading from memory, be explicit about where to look. Instead of check past invoices, write search memory under invoices/ for the company name and return the most recent match. The more specific your instruction, the more reliable the retrieval.

## Instructions

When logging an expense:
- Store under: expenses/[YYYY-MM]/[sequential-id]
- Format: { amount, category, description, date }

When generating a report:
- Read all entries from: expenses/[YYYY-MM]/
- Group by category
- Calculate totals per category and grand total
- Compare to previous month: read from expenses/[prev-YYYY-MM]/

When searching:
- Look in expenses/ across all months
- Match on description or category
- Return most recent 10 matches

## Memory Paths Used
- expenses/[YYYY-MM]/ — monthly expense entries
- expenses/config — categories, budget limits
- expenses/recurring — subscriptions and fixed costs
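The report step above (group by category, per-category totals, grand total) is deterministic enough to sketch as code, which helps verify the instruction produces one unambiguous answer. This Python sketch assumes entries follow the { amount, category, description, date } format from the skill; the function name is hypothetical.

```python
from collections import defaultdict

def monthly_report(entries: list[dict]) -> dict:
    """Group expense entries by category and compute totals,
    mirroring the report logic described in the skill instructions."""
    totals = defaultdict(float)
    for entry in entries:
        totals[entry["category"]] += entry["amount"]
    return {"by_category": dict(totals), "grand_total": sum(totals.values())}

entries = [
    {"amount": 42.50, "category": "food", "description": "lunch", "date": "2026-03-02"},
    {"amount": 9.99, "category": "software", "description": "subscription", "date": "2026-03-05"},
    {"amount": 18.00, "category": "food", "description": "groceries", "date": "2026-03-07"},
]
print(monthly_report(entries))
# {'by_category': {'food': 60.5, 'software': 9.99}, 'grand_total': 70.49}
```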

Testing Skills Locally

Before deploying a skill to your live agent, test it by placing the file in the skills directory and sending test messages. The feedback loop is immediate — edit the file, send a message, see the result.

Start with the happy path: test the most common usage pattern and verify the output format matches your expectations. Then test edge cases: what happens with missing arguments, invalid input, or empty memory (first-time use)? Each edge case you handle in testing is a frustrating failure you avoid in daily use.

Keep a test log in your conversations. After each test, note what worked and what did not. When the output is wrong, the fix is almost always in the Instructions section — either you were not specific enough about the expected behavior, or you missed an edge case.

Testing Checklist for New Skills

1. Happy path:
   [ ] Trigger with valid, complete arguments
   [ ] Verify output format matches spec
   [ ] Verify memory storage (if applicable)

2. Edge cases:
   [ ] Trigger with no arguments
   [ ] Trigger with partial arguments
   [ ] Trigger with invalid input (wrong types, special chars)
   [ ] First use (empty memory)
   [ ] Repeated use (memory has existing data)

3. Integration:
   [ ] Works with /briefing (if applicable)
   [ ] Memory paths don't conflict with other skills
   [ ] Output length is reasonable for Telegram delivery

Deploying Skills to Your Agent

Deploying a skill to your DoneClaw agent means placing the markdown file in the ~/.openclaw/skills/ directory inside your container. There are several ways to do this depending on your comfort level with the tools.

The simplest method is to paste the skill content directly in a conversation with your agent and ask it to save the file. Your agent has filesystem access inside its container and can create files in the skills directory. This works well for short skills and quick iterations.

For longer skills or batch deployments, you can use the DoneClaw web dashboard or the container's Telegram interface to upload files. Power users who have SSH access to the VPS can also write files directly to the container's volume.

Once the file is in place, the skill is immediately available — no restart required. Send the trigger command to verify it loads correctly.

Method 1: Ask your agent to create the file
> Please save this as ~/.openclaw/skills/my-skill.md:
> [paste skill content]

Method 2: Via Telegram
> Send the .md file as a document to your Telegram bot
> Ask the agent to move it to ~/.openclaw/skills/

Method 3: Direct file write (advanced)
> SSH into the VPS, write to the container's volume:
> docker cp my-skill.md openclaw-[user-id]:/root/.openclaw/skills/

Debugging Skills

When a skill does not work as expected, the issue is almost always in the Instructions section. Here is a systematic debugging approach that resolves most problems quickly.

First, check if the trigger is recognized. Send just the trigger command with no arguments. If the agent does not respond with skill-specific behavior, the file might not be in the right directory, the trigger might have a typo, or the file format might be malformed. Verify the file exists at ~/.openclaw/skills/ and has the correct ## Trigger heading.

If the trigger works but the output is wrong, the instructions are ambiguous. AI agents interpret vague instructions in unpredictable ways. Replace vague terms with specific ones: instead of write a summary, say write a 3-sentence summary covering [X], [Y], and [Z]. Add an explicit example of the expected output.

If the agent ignores rules, make the rules more prominent. Move critical constraints from the Rules section into the Instructions section itself, closer to the step where the constraint applies. Rules at the bottom of the file are less reliably followed than constraints embedded inline with the relevant instruction.

Debugging Decision Tree

Trigger not recognized?
├── Check file location: ~/.openclaw/skills/[name].md
├── Check trigger format: ## Trigger on its own line, /command on next line
└── Check for file encoding issues (must be UTF-8)

Output format wrong?
├── Add explicit output example to Instructions
├── Specify exact structure (headings, bullets, line breaks)
└── Test with simpler input to isolate the issue

Rules being ignored?
├── Move critical rules inline with the instruction step
├── Use stronger language: "NEVER" instead of "avoid"
└── Add a negative example showing what NOT to do

Memory not working?
├── Check path format (forward slashes, no spaces)
├── Verify memory was written (ask agent to recall it)
└── Check for path conflicts with other skills
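The first branches of the tree (file exists, trigger heading format, UTF-8 encoding) can be checked mechanically before you ever send a test message. Here is a hedged Python sketch of such a pre-flight check; the function and the exact section headings it expects are based on the skill format shown earlier in this guide, not on any official DoneClaw validator.

```python
from pathlib import Path

def validate_skill_file(path: Path) -> list[str]:
    """Check a skill file for the common failure modes in the decision tree.

    Returns a list of problems; an empty list means the basics look fine.
    """
    if not path.exists():
        return [f"file not found: {path}"]
    try:
        text = path.read_text(encoding="utf-8")
    except UnicodeDecodeError:
        return ["file is not valid UTF-8"]

    problems = []
    lines = [line.strip() for line in text.splitlines()]
    if "## Trigger" not in lines:
        problems.append("missing '## Trigger' heading")
    else:
        idx = lines.index("## Trigger")
        following = [l for l in lines[idx + 1:] if l]
        # The trigger command (/something) should be the next non-empty line.
        if not following or not following[0].startswith("/"):
            problems.append("trigger command starting with / should follow '## Trigger'")
    for heading in ("## Description", "## Instructions"):
        if heading not in lines:
            problems.append(f"missing '{heading}' section")
    return problems
```

Run it against ~/.openclaw/skills/my-skill.md before testing; if it reports nothing, move on to the output-format branches of the tree.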

Advanced Patterns

Once you are comfortable with basic skills, these advanced patterns let you build more sophisticated automations.

Skill chaining lets one skill trigger another. In your Instructions, include a line like after storing the expense, run /report to show the updated monthly summary. The agent processes the second trigger as a follow-up action. This is useful for workflows that involve multiple steps.

Conditional logic in skills uses natural language conditions. Instead of if/else code, write if the user has previously set a budget for this category (check expenses/config), compare the new expense against the budget and warn if it exceeds 80%. The agent evaluates these conditions using its memory and context.

Scheduled skills combine with DoneClaw's cron capability. You can configure a skill to run automatically at specific times — the daily briefing at 7am, the expense report on the last day of each month, or the file organizer cleanup every Sunday. This turns reactive skills into proactive automation.

## Instructions (Advanced: Skill Chaining)

When a new expense is logged:
1. Store the expense entry
2. If this is the 5th expense today, run /report
3. If the category total exceeds the monthly budget, send a
   Telegram alert: "Budget warning: [category] at [X]% of limit"

## Instructions (Advanced: Conditional Logic)

When generating the weekly report:
1. Read expenses from the current week
2. If total spending > last week's total by more than 20%:
   - Highlight the increase prominently
   - List the top 3 categories driving the increase
3. If any category has zero spending this week but had spending
   last week, note it as a change in pattern
4. If this is the last week of the month, include month-to-date
   vs. budget comparison
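Conditions like these stay reliable when each threshold is explicit. One way to sanity-check that is to write the conditions out as real code and confirm they behave as you intend; this Python sketch covers steps 2 and 3 above, with hypothetical function and parameter names.

```python
def weekly_highlights(this_week: dict[str, float], last_week: dict[str, float]) -> list[str]:
    """Sketch of the weekly-report conditions as explicit checks.

    Takes per-category totals for each week and returns the notes the
    report instructions ask the agent to include.
    """
    notes = []
    total_now, total_prev = sum(this_week.values()), sum(last_week.values())

    # Step 2: total spending up more than 20% vs. last week.
    if total_prev and total_now > total_prev * 1.2:
        top = sorted(this_week, key=this_week.get, reverse=True)[:3]
        notes.append(f"Spending up {(total_now / total_prev - 1):.0%}; "
                     f"top categories: {', '.join(top)}")

    # Step 3: a category with spending last week but none this week.
    for category, prev_amount in last_week.items():
        if prev_amount > 0 and this_week.get(category, 0) == 0:
            notes.append(f"No {category} spending this week (was {prev_amount:.2f})")
    return notes
```

If translating an instruction into a check like this feels awkward, the natural-language condition is probably underspecified for the agent as well.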

Best Practices

After building and iterating on dozens of skills, patterns emerge for what makes a skill reliable and maintainable. These best practices come from real DoneClaw user experience.

Keep skills focused on one task. A skill that tries to handle email summarization, calendar management, and expense tracking will produce mediocre results for all three. Split it into three separate skills that share memory paths instead. Each skill should do one thing well.

Use consistent memory paths across all your skills. Establish a path convention early — for example, [domain]/[YYYY-MM]/[item] for time-series data and [domain]/config for settings. Document the paths your skill reads from and writes to in a Memory Paths Used section at the bottom of the skill file.

Version your skills. When you make a significant change to a skill's behavior, keep the old version as a backup. A simple naming convention like my-skill-v2.md works. If the new version has problems, you can revert quickly.

Write skills for your future self. Six months from now, you will not remember why you added that specific rule or what edge case a particular instruction handles. Add brief comments (as markdown text) explaining the reasoning behind non-obvious choices.

  • One skill, one task — avoid multi-purpose skills
  • Consistent memory paths — document what each skill reads/writes
  • Include explicit input/output examples in every skill
  • Version significant changes — keep backups of working skills
  • Test edge cases: no arguments, invalid input, empty memory
  • Move critical constraints inline with the instruction, not just in Rules
  • Keep total skill file under 200 lines — agents follow shorter instructions better
  • Add reasoning comments for non-obvious rules

Conclusion

Building custom DoneClaw skills is straightforward: write a markdown file with a trigger, instructions, and rules, then drop it in the skills directory. The real skill is in writing clear, specific instructions that handle edge cases and produce consistent output. Start simple, test thoroughly, use memory for state, and iterate based on real usage. Your best skills will be the ones you refine over a few weeks of daily use, gradually tightening the instructions until the output is exactly what you need every time.

Skip the setup? DoneClaw deploys OpenClaw for you — $29/mo with 7-day free trial, zero configuration.

Get your own AI agent today

Persistent memory, channel integrations, unlimited usage. DoneClaw deploys and manages your OpenClaw instance so you just chat.

Get Started

Frequently asked questions

Do I need to know how to code to build skills?

No. Skills are written in plain markdown with natural language instructions. There is no programming language, no compilation, and no API to learn. If you can write clear instructions that a person would understand, you can write a skill that your AI agent will follow.

How many skills can I have active at once?

There is no hard limit on the number of skill files in your ~/.openclaw/skills/ directory. In practice, most users have 10-20 active skills. The agent only loads a skill when its trigger is used, so having many skills does not affect performance or memory usage.

Can I share my custom skills with other DoneClaw users?

Yes. Since skills are just markdown files, you can share them by posting the file content on ClawHub (the community skill library), in Discord channels, or by sending the file directly. Other users drop the file into their skills directory and it works immediately.