
AI Models

OpenClaw supports multiple AI providers. Choose the one that fits your needs and budget.

Anthropic

Claude 3.5 Sonnet (Recommended)

The most capable model for complex tasks. Excellent at coding, analysis, and nuanced conversations.

Input: $3.00 / 1M tokens
Output: $15.00 / 1M tokens
Context: 200K tokens

Pros

  • Best overall quality
  • Excellent coding ability
  • Strong reasoning
  • Good at following instructions

Cons

  • Higher cost
  • Slower than smaller models
Setup Guide →
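Per-token prices are easy to misjudge at scale, so it helps to do the arithmetic once. A minimal sketch, using the Sonnet prices from the table above (the function name is illustrative, not part of OpenClaw):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Cost in dollars for one request, given per-million-token prices."""
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000

# Claude 3.5 Sonnet: $3.00/1M input, $15.00/1M output.
# A 10K-token prompt with a 2K-token reply:
cost = request_cost(10_000, 2_000, 3.00, 15.00)
print(f"${cost:.3f}")  # → $0.060
```

The same function works for any model on this page; only the two price arguments change.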

Anthropic

Claude 3.5 Haiku

Fast and affordable option for simpler tasks. Great for quick responses and high-volume usage.

Input: $0.80 / 1M tokens
Output: $4.00 / 1M tokens
Context: 200K tokens

Pros

  • Very fast
  • Low cost
  • Good quality for price
  • Same 200K context window as Sonnet

Cons

  • Less capable on complex tasks
  • May miss nuances
Setup Guide →

OpenAI

GPT-4o

OpenAI's flagship multimodal model. Good all-around performance with image understanding.

Input: $2.50 / 1M tokens
Output: $10.00 / 1M tokens
Context: 128K tokens

Pros

  • Multimodal (images)
  • Fast
  • Good all-around
  • Large ecosystem

Cons

  • Smaller context than Claude
  • Variable quality
Setup Guide →

OpenAI

GPT-4o mini

Budget-friendly option from OpenAI. Decent performance at a fraction of the cost.

Input: $0.15 / 1M tokens
Output: $0.60 / 1M tokens
Context: 128K tokens

Pros

  • Very affordable
  • Decent quality
  • Fast

Cons

  • Limited capabilities
  • May struggle with complex tasks
Setup Guide →

Google

Gemini 1.5 Flash

Google's fast model with a massive context window. A free tier is available for low usage.

Input: free tier, then $0.075 / 1M tokens
Output: free tier, then $0.30 / 1M tokens
Context: 1M tokens

Pros

  • Huge context window
  • Free tier
  • Fast
  • Multimodal

Cons

  • Quality varies
  • Less tested with OpenClaw
Setup Guide →

Amazon Bedrock

Multiple Providers

Access Claude, Llama, and Mistral models through AWS. Enterprise-grade security on your existing AWS infrastructure.

Pricing: varies by model
Context: up to 200K tokens

Pros

  • AWS integration
  • Multiple models
  • Enterprise compliance
  • Data privacy

Cons

  • Requires AWS account
  • More complex setup
Setup Guide →

OpenRouter

200+ Models

Unified API for all major providers. Access Claude, GPT-4, Llama, and more with one key.

Pricing: pay-per-use, no minimums
Context: varies by model

Pros

  • One API for all
  • Easy switching
  • Auto failover
  • Price comparison

Cons

  • Slight latency overhead
  • Third-party dependency
Setup Guide →
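OpenRouter exposes an OpenAI-compatible chat completions endpoint, so switching providers is mostly a matter of changing the model string. A hedged sketch (the model slug is an example; check OpenRouter's model list for current names, and set OPENROUTER_API_KEY yourself):

```shell
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-3.5-sonnet",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

Swapping in "openai/gpt-4o" or a Llama slug changes providers without touching the rest of the request.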

Vercel AI Gateway

Edge Deployment

Run AI at the edge with automatic failover and caching. Great for Vercel deployments.

Pricing: provider pass-through, plus caching savings
Context: varies by provider

Pros

  • Edge-optimized
  • Auto failover
  • Built-in caching
  • Unified SDK

Cons

  • Requires Vercel
  • Limited to supported providers
Setup Guide →

Local (Ollama, LM Studio)

Llama 3.1 / Mistral

Run AI completely locally with no API costs. Supports Ollama, LM Studio, and more with model failover.

Pricing: free, forever
Context: 8K–128K tokens (model-dependent)

Pros

  • 100% free
  • Complete privacy
  • No internet needed
  • Model failover

Cons

  • Requires good hardware
  • Lower quality than cloud models
  • Setup complexity
Setup Guide →
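Getting a local model running with Ollama takes two commands. A sketch, assuming Ollama is installed from ollama.com (model names are examples; smaller variants need less RAM):

```shell
# Download the model weights, then run an interactive prompt.
ollama pull llama3.1
ollama run llama3.1 "Summarize this project in two sentences."

# Ollama also serves an OpenAI-compatible API on localhost,
# which tools like OpenClaw can point at instead of a cloud provider:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "Hello"}]}'
```

Because the local endpoint speaks the same protocol as the cloud APIs, moving between free-local and paid-cloud is a configuration change, not a code change.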

Our Recommendation

For most users, we recommend starting with Claude 3.5 Sonnet for the best quality, or Ollama with Llama 3.1 if you want a completely free and private setup.

You can switch models anytime in your OpenClaw configuration. Many users configure multiple models and switch based on the task.
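The exact configuration schema depends on your OpenClaw version, so treat the following as a hypothetical sketch of a multi-model setup rather than the real format (every key name and model identifier here is illustrative):

```json
{
  "models": {
    "default": "anthropic/claude-3-5-sonnet",
    "fast": "anthropic/claude-3-5-haiku",
    "local": "ollama/llama3.1"
  }
}
```

The idea is simply to name one model per role (best quality, cheap and fast, offline) and switch between them per task; consult the OpenClaw configuration docs for the actual syntax.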

Ready to Get Started?

Install OpenClaw and configure your preferred AI model.