# LiteLLM
LiteLLM is an open-source LLM gateway that provides a unified API to 100+ model providers. Route OpenClaw through LiteLLM to get centralized cost tracking, logging, and the flexibility to switch backends without changing your OpenClaw config.

## Why use LiteLLM with OpenClaw?
- Cost tracking — See exactly what OpenClaw spends across all models
- Model routing — Switch between Claude, GPT-4, Gemini, Bedrock without config changes
- Virtual keys — Create keys with spend limits for OpenClaw
- Logging — Full request/response logs for debugging
- Fallbacks — Automatic failover if your primary provider is down
## Quick start
### Via onboarding

If OpenClaw's onboarding flow offers a custom or OpenAI-compatible provider option, select it and enter your LiteLLM base URL (`http://localhost:4000/v1`) and key when prompted.
### Manual setup
- Start LiteLLM Proxy:
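  For example, installed via pip (the proxy listens on `http://localhost:4000` by default):

  ```bash
  pip install 'litellm[proxy]'
  litellm --config config.yaml   # config.yaml as sketched under "Model routing" below
  ```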
- Point OpenClaw to LiteLLM:
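  A minimal sketch, assuming OpenClaw can be pointed at any OpenAI-compatible endpoint via environment variables; `LITELLM_API_KEY` is the key this page uses elsewhere, while the base-URL variable name is an assumption (see Configuration below):

  ```bash
  export LITELLM_API_KEY="sk-litellm-..."              # virtual key from LiteLLM
  export OPENCLAW_BASE_URL="http://localhost:4000/v1"  # hypothetical variable name
  ```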
## Configuration
### Environment variables
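A sketch of the variables on each side. `LITELLM_MASTER_KEY` and `DATABASE_URL` are standard LiteLLM proxy settings; `LITELLM_API_KEY` is the key referenced throughout this page, and the OpenClaw-side base-URL name is an assumption:

```bash
# OpenClaw side
export LITELLM_API_KEY="sk-litellm-..."              # virtual key OpenClaw sends to LiteLLM
export OPENCLAW_BASE_URL="http://localhost:4000/v1"  # hypothetical variable name

# LiteLLM proxy side
export LITELLM_MASTER_KEY="sk-master-1234"           # admin key used to mint virtual keys
export DATABASE_URL="postgresql://llm:llm@localhost:5432/litellm"  # required for virtual keys and spend tracking
```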
### Config file
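If you prefer file-based configuration, a hypothetical OpenClaw snippet (field names are illustrative, not OpenClaw's exact schema):

```json
{
  "provider": {
    "type": "openai-compatible",
    "baseUrl": "http://localhost:4000/v1",
    "apiKeyEnv": "LITELLM_API_KEY",
    "model": "claude-opus-4-6"
  }
}
```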
## Virtual keys
Create a dedicated key for OpenClaw with spend limits, and give the returned key to OpenClaw as its `LITELLM_API_KEY`:
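For example, via LiteLLM's key-management API (requires the proxy's master key and a connected database; adjust budget and models to taste):

```bash
curl -X POST 'http://localhost:4000/key/generate' \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "key_alias": "openclaw",
    "max_budget": 50.0,
    "models": ["claude-opus-4-6"]
  }'
```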
## Model routing
LiteLLM can route model requests to different backends. Configure this in your LiteLLM `config.yaml`:
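A sketch of a `config.yaml` that maps the single alias OpenClaw requests onto interchangeable backends; the provider-side model IDs are illustrative:

```yaml
model_list:
  # Primary: Anthropic API
  - model_name: claude-opus-4-6
    litellm_params:
      model: anthropic/claude-opus-4-6
      api_key: os.environ/ANTHROPIC_API_KEY
  # Same alias on Bedrock; LiteLLM load-balances across entries sharing a model_name
  - model_name: claude-opus-4-6
    litellm_params:
      model: bedrock/anthropic.claude-opus-4-6   # illustrative Bedrock model ID
      aws_region_name: us-east-1
```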
OpenClaw just keeps asking for `claude-opus-4-6`; LiteLLM handles the routing.
## Viewing usage
Check LiteLLM's dashboard (served at `/ui` on the proxy) or its API:
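For example, pulling spend logs filtered to OpenClaw's virtual key (endpoint shape per LiteLLM's spend-tracking API; verify against your version):

```bash
curl -s 'http://localhost:4000/spend/logs?api_key=sk-litellm-...' \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY"
```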
## Notes

- LiteLLM runs on `http://localhost:4000` by default
- OpenClaw connects via the OpenAI-compatible `/v1/chat/completions` endpoint
- All OpenClaw features work through LiteLLM — no limitations