vLLM
vLLM can serve open-source (and some custom) models via an OpenAI-compatible HTTP API. OpenClaw can connect to vLLM using the `openai-completions` API.
OpenClaw can also auto-discover available models from vLLM when you opt in with `VLLM_API_KEY` (any value works if your server doesn’t enforce auth) and you do not define an explicit `models.providers.vllm` entry.
Quick start
- Start vLLM with its OpenAI-compatible server. It exposes the standard /v1 endpoints (e.g. `/v1/models`, `/v1/chat/completions`) and commonly listens on `http://127.0.0.1:8000/v1`.
- Opt in by setting `VLLM_API_KEY` (any value works if no auth is configured); see the quick-start sketch after this list.
- Select a model in OpenClaw, using one of the model IDs your vLLM server exposes (see Model discovery below).
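A minimal quick-start sketch, assuming a local vLLM install; the model name is a placeholder for whatever you actually serve:

```sh
# Start vLLM's OpenAI-compatible server (default port 8000).
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000

# Opt in to OpenClaw's vLLM support; any value works when the
# server does not enforce authentication.
export VLLM_API_KEY=vllm-local
```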
Model discovery (implicit provider)
When `VLLM_API_KEY` is set (or an auth profile exists) and you do not define `models.providers.vllm`, OpenClaw will query:
GET http://127.0.0.1:8000/v1/models
If you define `models.providers.vllm` explicitly, auto-discovery is skipped and you must define models manually.
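You can hit the same endpoint yourself to preview which model IDs will be discovered; the response follows the standard OpenAI list shape (the model ID shown here is a placeholder):

```sh
# List the models the vLLM server exposes.
curl http://127.0.0.1:8000/v1/models
# => {"object": "list", "data": [{"id": "Qwen/Qwen2.5-7B-Instruct", "object": "model", ...}]}
```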
Explicit configuration (manual models)
Use explicit config when:
- vLLM runs on a different host/port.
- You want to pin `contextWindow` / `maxTokens` values.
- Your server requires a real API key (or you want to control headers).
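A minimal sketch of an explicit `models.providers.vllm` entry, assuming a JSON5/JSONC-style OpenClaw config file; the key names around the provider (`baseUrl`, `apiKey`, the per-model fields) are illustrative assumptions, so check them against your OpenClaw configuration reference:

```jsonc
{
  "models": {
    "providers": {
      "vllm": {
        // Key names below are assumed for illustration.
        "api": "openai-completions",
        "baseUrl": "http://127.0.0.1:8000/v1",
        "apiKey": "YOUR_VLLM_API_KEY", // or reuse VLLM_API_KEY from the environment
        "models": [
          {
            "id": "Qwen/Qwen2.5-7B-Instruct", // replace with your served model ID
            "contextWindow": 32768,
            "maxTokens": 4096
          }
        ]
      }
    }
  }
}
```

Pinning `contextWindow` and `maxTokens` here keeps OpenClaw's limits in sync with however you launched vLLM (e.g. its `--max-model-len` setting).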
Troubleshooting
- Check that the server is reachable; a curl check is sketched after this list.
- If requests fail with auth errors, set a real `VLLM_API_KEY` that matches your server configuration, or configure the provider explicitly under `models.providers.vllm`.
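A quick reachability check; the Authorization header only matters if your server was started with an API key:

```sh
# A 200 response with a JSON model list means the server is reachable.
curl -i http://127.0.0.1:8000/v1/models \
  -H "Authorization: Bearer $VLLM_API_KEY"
```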