Ollama

How to configure OpenClaw with Ollama without breaking tool calling or local model discovery

Ollama is one of the strongest options for running local models with OpenClaw. The goal is not just getting a model to appear in a list: a correct setup keeps tool calling working, keeps model discovery clean, and gives your first useful agent a stable base.

The simplest flow

  1. Install Ollama and make sure the local runtime is active.
  2. Pull at least one model that makes sense for tools, for example:

     ollama pull gpt-oss:20b
     ollama pull llama3.3
     ollama pull qwen2.5-coder:32b
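Before going further, it helps to confirm the runtime is actually answering. A minimal probe, assuming Ollama's default bind address of localhost:11434:

```shell
# Probe Ollama's default local endpoint; the short timeout keeps this
# fast even when nothing is listening on the port.
if curl -fsS --max-time 2 http://localhost:11434/api/tags > /dev/null 2>&1; then
  echo "ollama: up"
else
  echo "ollama: down (start it with 'ollama serve')"
fi
```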

According to the official OpenClaw docs, the platform can auto-discover Ollama models when you enable the provider without defining a full explicit model configuration.

Correct setup for auto-discovery

The cleanest route is to set OLLAMA_API_KEY to any placeholder value and then let OpenClaw discover models from the local runtime.

export OLLAMA_API_KEY="ollama-local"
ollama list
openclaw models list

If models appear, OpenClaw is seeing the local provider and you can choose the one that fits your workflow.
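If you want to see exactly what there is to discover, the raw model list comes back as JSON from /api/tags. A sketch of extracting just the model names; the payload below is an assumed example written to a local file for illustration, so the shape may differ slightly from your own /api/tags output:

```shell
# Illustrative /api/tags payload (shape assumed; compare with your output)
cat > /tmp/tags.json <<'EOF'
{"models":[{"name":"llama3.3:latest"},{"name":"qwen2.5-coder:32b"}]}
EOF

# Extract just the model names from the JSON
grep -o '"name":"[^"]*"' /tmp/tags.json | sed 's/"name":"//;s/"$//'
```

Against the sample payload this prints llama3.3:latest and qwen2.5-coder:32b, one per line.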

The mistake worth avoiding

If Ollama runs on a remote or custom host, do not point OpenClaw at the OpenAI-compatible endpoint under /v1. The official docs warn that this can break tool calling: the model's tool JSON may come back as plain text instead of being executed as a real tool call.

baseUrl: "http://host:11434"

Not:

baseUrl: "http://host:11434/v1"
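Put together, a hedged sketch of the provider entry. Only models.providers.ollama and baseUrl are taken from this guide; the surrounding JSON shape is an assumption, so check the OpenClaw configuration reference for the exact file and nesting:

```json
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://host:11434"
      }
    }
  }
}
```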

If Ollama is not detected

ollama serve
curl http://localhost:11434/api/tags
openclaw models list
  • If /api/tags does not answer, the problem is with Ollama itself; fix that before looking at OpenClaw.
  • If Ollama answers but OpenClaw sees no models, check for an old explicit config under models.providers.ollama.
  • If only some models appear, the ones you want may not expose tool support well enough for your workflow.
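One way to triage that last case: recent Ollama builds print a model's capabilities in the output of ollama show. Treating that as an assumption to verify against your installed version, a quick check might look like this (the fallback message only fires when ollama is not on the PATH or nothing matches):

```shell
# Look for a 'tools' capability in the model card (output format assumed;
# verify against your Ollama version).
ollama show llama3.3 2>/dev/null | grep -i "tools" \
  || echo "ollama not available here, or the model does not list tool support"
```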