AI Models

Claude

Anthropic's flagship models, included with every plan.

MoClaw runs on Claude: Opus 4.7 for hard reasoning, Sonnet 4.6 for everyday speed, and Haiku 4.5 for fast, lightweight tasks. Switch between them in any chat, or let Auto pick the right tier per turn. All three are included in your monthly credits, with prompt caching and no markup. Bring your own Anthropic key for unlimited use at provider rates.

How it works

3 steps to wire up Claude, no engineering required.

  1. Pick a model in the model picker

    Top-right of any chat. Switch tiers mid-conversation; the rest of the thread keeps working with the new model.

  2. Or let MoClaw auto-route

    On Auto, MoClaw picks the right tier per turn: Haiku for chitchat, Sonnet for normal work, Opus when the task needs heavy reasoning. Set it as the default in Settings → Models.

  3. Bring your own Anthropic key (optional)

    BYOK swaps the billing source to your direct Anthropic account at provider rates with no markup. Settings → Models → BYOK.

Which Claude tier should I pick?

Pick the one that fits how you use AI.

| Model | When to use | Speed and credit cost |
| --- | --- | --- |
| Opus 4.7 | Hard reasoning, code review, agent runs, anything where quality outweighs latency | Slowest, highest credit cost |
| Sonnet 4.6 | Default for chat, drafting, search, summaries | Fast, mid credit cost |
| Haiku 4.5 | Bulk classification, real-time bots, scheduled briefs, low-stakes replies | Fastest, lowest credit cost |
| Auto | Let MoClaw pick per turn (recommended default) | Routes to the cheapest tier that meets quality |
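The per-turn routing the Auto row describes can be sketched as a simple heuristic. This is a minimal illustration only: MoClaw's actual classifier is internal, and the cue list, word-count threshold, and tier names here are all hypothetical.

```python
# Hypothetical sketch of Auto routing. The real MoClaw classifier is not
# public; cues, thresholds, and tier ids below are illustrative only.

REASONING_CUES = ("prove", "debug", "review", "plan", "analyze", "refactor")

def route_turn(prompt: str) -> str:
    """Pick the cheapest tier likely to meet quality for one turn."""
    text = prompt.lower()
    has_reasoning_cue = any(cue in text for cue in REASONING_CUES)
    long_prompt = len(text.split()) > 300

    if has_reasoning_cue and long_prompt:
        return "opus-4.7"      # heavy reasoning over a lot of context
    if has_reasoning_cue or long_prompt:
        return "sonnet-4.6"    # normal work
    return "haiku-4.5"         # chitchat and quick replies

print(route_turn("hey, what's up?"))  # haiku-4.5
```

The real classifier also weighs conversation history and quality feedback, but the shape is the same: route down by default, escalate on signals.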

Supported models

Claude Opus 4.7

Anthropic's flagship reasoning model. Best for code review, deep analysis, agentic workflows, and any task where you'd rather wait a few seconds for a better answer. 1M token context window available.

Claude Sonnet 4.6

The default. Balanced speed and intelligence for everyday chat, drafting, summarization, and most agent runs. 200K context, fast first-token latency.

Claude Haiku 4.5

Fastest and cheapest. Best for high-volume, latency-sensitive tasks: classification, simple extraction, real-time bot replies, scheduled-job summaries. 200K context.

Try saying

Real prompts you can paste into Claude.

  • On Opus, review this 800-line PR diff and call out anything risky around concurrency or auth.
  • On Sonnet, draft a 200-word announcement for our product launch in the voice of a calm, experienced founder.
  • On Haiku, classify these 500 support tickets into bug, feature request, billing, or other in a CSV.
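The Haiku bulk-classification prompt above is easy to script against a real ticket batch. A minimal sketch, assuming a hypothetical prompt format and CSV reply shape (the model call itself is left out):

```python
import csv
import io

CATEGORIES = ("bug", "feature request", "billing", "other")

def build_prompt(tickets: list[str]) -> str:
    """Builds a batch classification prompt; the format is illustrative."""
    numbered = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(tickets))
    return (
        f"Classify each ticket as one of: {', '.join(CATEGORIES)}.\n"
        "Reply as CSV with columns: ticket_id,category.\n\n" + numbered
    )

def parse_reply(reply_csv: str) -> dict[int, str]:
    """Parses the model's CSV reply into {ticket_id: category}."""
    rows = csv.reader(io.StringIO(reply_csv))
    return {int(ticket_id): category for ticket_id, category in rows}

print(parse_reply("1,bug\n2,billing"))  # {1: 'bug', 2: 'billing'}
```

Keeping the reply machine-parseable is what makes Haiku's low per-call cost pay off at 500-ticket volume.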

Step by step demo

What actually happens when you send the prompt.

Prompt 01 (5 steps)

“On Auto, plan a launch for our new pricing page and execute it: write the announcement, schedule a Twitter thread, and draft the Slack post.”

What MoClaw does

  1. Routes the planning step to Opus 4.7 (multi-step reasoning, multiple constraints).
  2. Drafts the announcement on Sonnet 4.6 (good prose, fast).
  3. Generates the 4-tweet thread on Haiku 4.5 (short, formulaic).
  4. Writes the Slack post on Sonnet, with an @channel call to action.
  5. Schedules the tweet thread for 9am tomorrow via Cron and posts the Slack message immediately.
Result

9 minutes later you have: a published Slack announcement in #marketing, a scheduled tweet thread for 9am tomorrow, and a Notion page with the long-form announcement saved as a draft. Routing detail at the bottom: '3 Sonnet calls, 1 Opus call, 4 Haiku calls — 12 credits total.'
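For the arithmetically curious, the 12-credit total in that routing detail is consistent with one hypothetical per-call cost assignment. These numbers are invented purely to make the sum work; MoClaw's real credit prices are not stated on this page.

```python
# Hypothetical per-call credit costs, chosen only so the demo's
# "3 Sonnet, 1 Opus, 4 Haiku = 12 credits" total checks out.
COST = {"opus": 3.0, "sonnet": 2.0, "haiku": 0.75}

calls = {"sonnet": 3, "opus": 1, "haiku": 4}
total = sum(COST[tier] * n for tier, n in calls.items())
print(total)  # 12.0
```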

FAQ

Quick answers about pricing, privacy, and limits.

Are all three Claude models really included on every plan?
Yes, including the free tier. The difference between plans is monthly credits and feature access (cron, cloud computer, deep research), not which models you can talk to.
How does Auto routing decide which model to use?
A small classifier looks at the turn (length, complexity, presence of reasoning cues) and picks the cheapest tier likely to meet quality. You can override per-chat or pin a default in Settings.
What about prompt caching?
MoClaw handles it automatically across all three tiers. You see the credit savings on long threads (about 90% off cached tokens) without configuring anything.
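On MoClaw the caching is automatic, but readers on BYOK who call Anthropic directly opt in per content block with `cache_control`. A minimal sketch of such a request payload; the model id and system text are placeholders:

```python
# Sketch of an Anthropic Messages API payload with prompt caching.
# Marking the long, stable system prompt cacheable lets repeated turns
# in the same thread reuse it at the discounted cached-token rate.
payload = {
    "model": "claude-sonnet-4-6",  # placeholder model id
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": "You are a drafting assistant. <long stable instructions>",
            "cache_control": {"type": "ephemeral"},  # cache this prefix
        }
    ],
    "messages": [
        {"role": "user", "content": "Draft the launch announcement."}
    ],
}
```

The stable prefix (system prompt, tool definitions) is what gets cached; the per-turn user message stays uncached.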
Can I use Opus 4.7's 1M context?
Yes, with the Opus 1M model variant. There's a per-token credit premium beyond 200K tokens, the same way Anthropic charges. Visible in the model picker.
Do I need a separate Anthropic account for BYOK?
Yes. Get an API key from console.anthropic.com and paste it into Settings → Models → BYOK. After that, every Claude call hits your account directly. See the BYOK page for details.

Try MoClaw free.

1,000 credits a month, or bring your own key for unlimited usage.

Cancel anytime