Free AI with Ollama
Run Moltbot (formerly Clawdbot) completely free with Ollama. No API costs, no limits, full privacy.
- ✓ Completely Free — No API costs, no subscription fees, ever
- ✓ Unlimited Usage — Send as many messages as you want
- ✓ Total Privacy — All data stays on your machine, never sent to the cloud
- ✓ Works Offline — Use Moltbot without internet after initial setup
- ✓ Easy Setup — One command to install on Mac/Linux
| Provider | 1K Messages/mo | 10K Messages/mo |
|---|---|---|
| Ollama (Local) | $0 | $0 |
| Claude API | ~$5-10 | ~$50-100 |
| OpenAI GPT-4 | ~$10-20 | ~$100-200 |
| Google Gemini | ~$2-5 | ~$20-50 |
* The only cost with Ollama is electricity (~$2-5/month if running 24/7)
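To see where the table's figures come from, here is a rough back-of-the-envelope estimate. The per-message token count and per-token price below are illustrative assumptions, not published rates:

```python
# Rough monthly cost estimate for a hosted LLM API.
# All numbers here are illustrative assumptions, not real price quotes.

def monthly_cost(messages: int, tokens_per_message: int,
                 usd_per_million_tokens: float) -> float:
    """Estimated monthly spend: total tokens times the per-token price."""
    total_tokens = messages * tokens_per_message
    return total_tokens / 1_000_000 * usd_per_million_tokens

# Assume ~1,000 tokens per message round-trip and a blended $7 per
# million tokens -- roughly the ballpark of mid-tier hosted models.
print(monthly_cost(1_000, 1_000, 7.0))   # 1K messages/mo -> 7.0
print(monthly_cost(10_000, 1_000, 7.0))  # 10K messages/mo -> 70.0
```

With Ollama the per-token price is zero, so the same arithmetic always returns $0 regardless of volume.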
Setup Instructions
Install Ollama
One command on macOS or Linux:
```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

For Windows: download the installer from ollama.ai.
Download a Model
Pull a model (we recommend Llama 3.1 8B):
```bash
ollama pull llama3.1:8b
```

This downloads ~5 GB; the first pull takes a few minutes.
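Ollama also serves a local REST API on port 11434, and `GET /api/tags` lists the models you have pulled. A small sketch of checking that the download succeeded; the sample payload below follows the `{"models": [{"name": ...}]}` shape that endpoint returns:

```python
import json
from urllib.request import urlopen

def installed_models(tags_response: dict) -> list[str]:
    """Extract model names from an Ollama GET /api/tags response."""
    return [m["name"] for m in tags_response.get("models", [])]

def is_installed(tags_response: dict, model: str) -> bool:
    return model in installed_models(tags_response)

# Against a running Ollama server you would fetch the real list:
#   tags = json.load(urlopen("http://localhost:11434/api/tags"))
# Here we use a sample payload in that shape:
sample = {"models": [{"name": "llama3.1:8b", "size": 4_700_000_000}]}
print(is_installed(sample, "llama3.1:8b"))  # True
```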
Configure Moltbot
Point Moltbot to your local Ollama:
```json
{
  "agent": {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "baseUrl": "http://localhost:11434"
  }
}
```

Start Using!
Restart Moltbot and start chatting. Your AI now runs entirely on your machine!
```bash
moltbot restart
```

Recommended Models
Llama 3.2 3B
Smallest and fastest. Good for simple tasks.
`llama3.2:3b`
RAM: 4 GB
Size: 2 GB
Speed: Very Fast
Llama 3.1 8B
Recommended. Best balance of speed and quality; a good default for most users.
`llama3.1:8b`
RAM: 8 GB
Size: 4.7 GB
Speed: Fast
Mistral 7B
Excellent for coding and technical tasks.
`mistral:7b`
RAM: 8 GB
Size: 4.1 GB
Speed: Fast
Llama 3.1 70B
Most capable. Needs powerful hardware.
`llama3.1:70b`
RAM: 40+ GB
Size: 40 GB
Speed: Slow
Browse all models at ollama.ai/library
If you have a Mac with Apple Silicon (M1, M2, M3), Ollama runs extremely well. The unified memory architecture means:
- Mac Mini M2 (16GB) can run 8B-13B models smoothly
- Mac Studio (64GB+) can run 70B models
- Responses feel nearly as fast as cloud AI
- Silent operation, low power consumption
Be aware that local models are generally less capable than frontier cloud models:
- Llama 3.1 8B ≈ GPT-3.5 quality for most tasks
- Llama 3.1 70B approaches GPT-4 quality
- Cloud Claude/GPT-4 still better for complex reasoning
For most personal assistant tasks (reminders, web browsing, simple automation), local models work great!
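If you ever want to sanity-check the model without going through Moltbot, you can talk to Ollama's REST API directly: `POST /api/generate` with `"stream": false` returns a single JSON response. A minimal sketch (the request shape follows Ollama's API; the prompt text is just an example):

```python
import json
from urllib.request import Request, urlopen

def build_generate_request(model: str, prompt: str) -> Request:
    """Build a non-streaming POST /api/generate request for local Ollama."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    return Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3.1:8b", "Say hello in five words.")
# With Ollama running, send it and print the model's reply:
#   print(json.load(urlopen(req))["response"])
print(req.get_method())  # POST (urllib infers POST when data is set)
```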
Free AI Running!
Now connect your messaging apps and start chatting for free.