The simplest flow
- Install Ollama and make sure the local runtime is active.
- Pull at least one model that is suited to tool calling, for example:

```
ollama pull gpt-oss:20b
ollama pull llama3.3
ollama pull qwen2.5-coder:32b
```

According to the official OpenClaw docs, the platform can auto-discover Ollama models when you enable the provider, without defining a full explicit model configuration.
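Before wiring up OpenClaw, you can confirm the pulls landed by asking Ollama's local API directly. A minimal Python sketch: `installed_models` parses the `models` array that Ollama's `/api/tags` endpoint returns, and `fetch_tags` shows how you might query it (the sample payload below is illustrative, abridged from the documented response shape):

```python
import json
from urllib.request import urlopen

def installed_models(tags_json: dict) -> list:
    """Extract model names from an Ollama /api/tags response."""
    return [m["name"] for m in tags_json.get("models", [])]

def fetch_tags(base_url: str = "http://localhost:11434") -> dict:
    """Query the local Ollama runtime for its installed models."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return json.load(resp)

# Abridged sample of the /api/tags response shape:
sample = {"models": [{"name": "llama3.3:latest"},
                     {"name": "qwen2.5-coder:32b"}]}
```

With a running runtime, `installed_models(fetch_tags())` should list the models you just pulled.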
Correct setup for auto-discovery
The cleanest route is to set any value for OLLAMA_API_KEY and then let OpenClaw discover models from the local runtime.
```
export OLLAMA_API_KEY="ollama-local"
ollama list
openclaw models list
```

If models appear, OpenClaw can see the local provider and you can choose the one that fits your workflow.
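The docs say "any value" works, which suggests the key only gates whether the provider is enabled. A sketch of that gating logic, under the assumption that only the presence of a non-empty OLLAMA_API_KEY matters (this models the documented behavior, not OpenClaw's actual internals):

```python
import os

def ollama_enabled(env=None) -> bool:
    """Treat any non-empty OLLAMA_API_KEY as 'provider on' (assumed
    from the docs' 'any value' wording)."""
    env = os.environ if env is None else env
    return bool(env.get("OLLAMA_API_KEY", "").strip())
```

This is why `export OLLAMA_API_KEY="ollama-local"` is enough: the value itself is never checked against anything.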
The mistake worth avoiding
If Ollama runs on a remote or custom host, do not use the OpenAI-compatible endpoint with /v1. The official docs warn that tool calling can break there: tool JSON may come back as plain text instead of triggering real tool execution.
Use the native endpoint:

```
baseUrl: "http://host:11434"
```

Not:

```
baseUrl: "http://host:11434/v1"
```

If Ollama is not detected
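Since a stray /v1 is easy to leave behind after switching from an OpenAI-compatible setup, a small normalizer can catch it. A sketch (the function name is mine, not an OpenClaw API):

```python
def normalize_ollama_base_url(url: str) -> str:
    """Strip a trailing /v1 (and any trailing slash) so the client talks
    to the native Ollama API instead of the OpenAI-compatible shim."""
    url = url.rstrip("/")
    if url.endswith("/v1"):
        url = url[: -len("/v1")]
    return url
```

Running a configured baseUrl through a check like this before saving it avoids the silent tool-calling breakage described above.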
Run these checks in order:

```
ollama serve
curl http://localhost:11434/api/tags
openclaw models list
```

- If `/api/tags` does not answer, the issue is in Ollama, not OpenClaw.
- If Ollama answers but OpenClaw sees no models, check for an old explicit config under `models.providers.ollama`.
- If only a few models appear, the chosen model may not expose tool support well enough for the workflow you want.
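The checklist above is really a fixed triage order: runtime first, then provider config, then model choice. A sketch of that decision flow (the flags are illustrative inputs, not real OpenClaw APIs):

```python
def triage(api_tags_ok: bool, openclaw_sees_models: bool,
           has_legacy_config: bool) -> str:
    """Mirror the troubleshooting order: Ollama before OpenClaw,
    stale config before model capabilities."""
    if not api_tags_ok:
        return "fix Ollama: /api/tags is not answering"
    if not openclaw_sees_models:
        if has_legacy_config:
            return "remove the old models.providers.ollama entry"
        return "check the OpenClaw provider settings"
    return "provider looks healthy: verify the model supports tools"
```

Working top to bottom like this keeps you from debugging OpenClaw settings when the real problem is a runtime that never started.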