
Free AI with Ollama

Run Moltbot (formerly Clawdbot) completely free with Ollama. No API costs, no limits, full privacy.

🆓 Why Ollama?
  • Completely Free — No API costs, no subscription fees, ever
  • Unlimited Usage — Send as many messages as you want
  • Total Privacy — All data stays on your machine, never sent to the cloud
  • Works Offline — Use Moltbot without internet after initial setup
  • Easy Setup — One command to install on macOS/Linux

Cost Comparison

| Provider | 1K Messages/mo | 10K Messages/mo |
|---|---|---|
| Ollama (Local) | $0 | $0 |
| Claude API | ~$5-10 | ~$50-100 |
| OpenAI GPT-4 | ~$10-20 | ~$100-200 |
| Google Gemini | ~$2-5 | ~$20-50 |

* The only cost with Ollama is electricity (~$2-5/month if running 24/7)
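
To see where the ~$2-5/month figure comes from, here is the arithmetic as a short sketch. The 40 W average draw and $0.15/kWh rate are assumptions for illustration; plug in your own hardware's draw and local rate:

```python
# Back-of-envelope electricity math (assumed numbers: 40 W average
# draw for a small machine, $0.15/kWh -- adjust for your setup).
watts = 40
rate_per_kwh = 0.15
kwh_per_month = watts / 1000 * 24 * 30   # ~28.8 kWh
cost = kwh_per_month * rate_per_kwh      # ~$4.32/month
print(f"~${cost:.2f}/month")
```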

Setup Instructions

1. Install Ollama

One command on macOS or Linux:

curl -fsSL https://ollama.ai/install.sh | sh

For Windows: Download from ollama.ai

2. Download a Model

Pull a model (we recommend Llama 3.1 8B):

ollama pull llama3.1:8b

This downloads about 4.7 GB, so the first pull takes a few minutes.

3. Configure Moltbot

Point Moltbot to your local Ollama:

{
  "agent": {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "baseUrl": "http://localhost:11434"
  }
}
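
A quick sanity check of the fragment above — this sketch parses the same JSON and verifies that the fields from the example are present and non-empty. The field names come straight from the snippet; since the config file's on-disk location isn't shown here, the JSON is inlined as a string:

```python
import json

# The agent block from the config above, inlined for illustration.
config_text = """
{
  "agent": {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "baseUrl": "http://localhost:11434"
  }
}
"""

config = json.loads(config_text)
agent = config["agent"]
# Every key from the example must be present and non-empty.
for key in ("provider", "model", "baseUrl"):
    assert agent.get(key), f"missing or empty: {key}"
print("config looks valid")
```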

4. Start Using!

Restart Moltbot and start chatting. Your AI now runs entirely on your machine!

moltbot restart
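
Under the hood, requests to Ollama go over its local REST API. As a sketch, this is the kind of JSON body Ollama's /api/chat endpoint accepts (endpoint and field names from Ollama's REST API; nothing is actually sent here):

```python
import json

# The request body Ollama's POST /api/chat endpoint expects.
# "stream": False asks for a single complete response instead of
# a stream of partial chunks.
payload = {
    "model": "llama3.1:8b",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
}
body = json.dumps(payload)
# To send it: POST body to http://localhost:11434/api/chat
print(body)
```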

Recommended Models

| Model | Pull tag | RAM | Size | Speed | Notes |
|---|---|---|---|---|---|
| Llama 3.2 3B | llama3.2:3b | 4 GB | 2 GB | Very Fast | Smallest and fastest; good for simple tasks |
| Llama 3.1 8B (Recommended) | llama3.1:8b | 8 GB | 4.7 GB | Fast | Best balance of speed and quality; recommended for most users |
| Mistral 7B | mistral:7b | 8 GB | 4.1 GB | Fast | Excellent for coding and technical tasks |
| Llama 3.1 70B | llama3.1:70b | 40+ GB | 40 GB | Slow | Most capable; needs powerful hardware |

Browse all models at ollama.ai/library
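
As a rough rule of thumb, pick the largest model whose RAM figure (listed above) fits your machine. A minimal sketch of that heuristic — the thresholds simply restate the figures above, not an official sizing rule:

```python
# Recommended models and their approximate RAM needs in GB,
# taken from the figures above (a heuristic, not a guarantee).
MODELS = [
    ("llama3.2:3b", 4),
    ("mistral:7b", 8),
    ("llama3.1:8b", 8),
    ("llama3.1:70b", 40),
]

def pick_model(ram_gb: int) -> str:
    """Return the largest listed model that fits in ram_gb of RAM."""
    fitting = [name for name, need in MODELS if need <= ram_gb]
    return fitting[-1] if fitting else "none (need at least 4 GB)"

print(pick_model(16))  # a 16 GB machine can run the 8B models
```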

🍎 Great on Apple Silicon

If you have a Mac with Apple Silicon (M1, M2, M3), Ollama runs extremely well. The unified memory architecture means:

  • Mac Mini M2 (16 GB) can run 8B-13B models smoothly
  • Mac Studio (64 GB+) can run 70B models
  • Responses feel nearly as fast as cloud AI
  • Silent operation and low power consumption

📊 Quality Comparison

Be aware that local models are generally less capable than frontier cloud models:

  • Llama 3.1 8B ≈ GPT-3.5 quality for most tasks
  • Llama 3.1 70B approaches GPT-4 quality
  • Cloud Claude and GPT-4 are still better for complex reasoning

For most personal assistant tasks (reminders, web browsing, simple automation), local models work great!

Free AI Running!

Now connect your messaging apps and start chatting for free.