A model alias is a per-VK redirect. The client sends model: "foo"; the VK's model_aliases map turns it into provider/model/deployment at dispatch time.
Aliases are one of three ways to reference a model. See Model Naming for when to use bare (gpt-5-mini) vs prefixed (openai/gpt-5-mini) vs an alias.

Two main use cases
1. Decouple client code from provider specifics
An OpenAI-SDK codebase shouldn't need to know about Azure deployment names, Bedrock inference profile ARNs, or specific Anthropic model versions. Aliases move that mapping to the VK.

2. Swap providers without client changes
Suppose you operate under the alias claude pointing at anthropic/claude-haiku-4-5-20251001. Anthropic has a regional outage, or you want to test Bedrock for cost reasons. Repoint the alias at Bedrock: model: "claude" now hits Bedrock, and you can switch back in one edit. For canary traffic, mint a second VK with the new alias while the primary stays on Anthropic.
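The swap can be sketched with a plain dict standing in for the VK's alias table. The dict shape and the Bedrock model ID below are illustrative assumptions, not the gateway's real config schema:

```python
# Illustrative sketch: repointing a VK alias from Anthropic to Bedrock.
# The dict shape is an assumption; the gateway's actual config format may differ.

def resolve_alias(model_aliases: dict[str, str], requested: str) -> str:
    """Return the provider/model the gateway would dispatch for a bare name."""
    return model_aliases.get(requested, requested)

# Primary VK: alias points at Anthropic.
aliases = {"claude": "anthropic/claude-haiku-4-5-20251001"}
assert resolve_alias(aliases, "claude") == "anthropic/claude-haiku-4-5-20251001"

# One edit: repoint the same alias at Bedrock (hypothetical Bedrock model ID).
# Clients keep sending model="claude" unchanged.
aliases["claude"] = "bedrock/anthropic.claude-haiku-4-5"
assert resolve_alias(aliases, "claude").startswith("bedrock/")
```

The client never sees the edit; only the VK's table changes.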
Resolution rules
Given a request with model: "<name>":

1. If <name> has a / in it (explicit provider/model form), aliases are bypassed and the gateway routes directly to that provider, subject to models_allowed and the VK's providers[] list.
2. Otherwise, <name> is looked up in model_aliases. If found, the mapped provider/model is used.
3. If not found in aliases, the gateway checks models_allowed. If the model is allowed, it's dispatched to the VK's primary provider.
4. If none of the above, the gateway returns 403 model_not_allowed.
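The four steps can be condensed into a small function. This is a sketch under assumptions: models_allowed patterns are treated as fnmatch-style globs, an empty allowlist is not treated as allow-all, and the providers[] check is omitted:

```python
from fnmatch import fnmatch

def resolve(name, model_aliases, models_allowed, primary_provider):
    """Return the provider/model to dispatch, or None for 403 model_not_allowed.

    Sketch only: assumes fnmatch-style globs in models_allowed and skips
    the VK's providers[] check.
    """
    def allowed(m):
        bare = m.split("/", 1)[-1]
        return any(fnmatch(bare, pat) or fnmatch(m, pat) for pat in models_allowed)

    if "/" in name:                        # step 1: explicit form bypasses aliases
        return name if allowed(name) else None
    if name in model_aliases:              # step 2: alias lookup
        target = model_aliases[name]
        return target if allowed(target) else None  # allowlist still applies
    if allowed(name):                      # step 3: bare allowed name -> primary
        return f"{primary_provider}/{name}"
    return None                            # step 4: 403 model_not_allowed
```

Note that the alias branch still runs the resolved model through the allowlist, matching the enforcement rule described below.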
Aliases win for bare names; explicit form bypasses them
If the alias map has an entry for gpt-4o and the client sends model: "gpt-4o", the alias is applied. If the client sends model: "openai/gpt-4o" explicitly, the alias is bypassed (step 1).
This lets you have an openai/ namespace that addresses OpenAI directly for tests or debugging while the alias gpt-4o redirects to Azure for production traffic.
:fallback suffix (advanced)
An alias entry can carry an optional :fallback suffix that picks a different model when a fallback provider is used: for example, naming claude-haiku-4-5-20251001 as the model to use when falling back to Anthropic. Without :fallback, the gateway would send the same model name (gpt-5-mini) to the fallback provider, which might not exist on Anthropic.
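A minimal sketch of how a :fallback entry could sit beside its plain alias, assuming a "name:fallback" key convention in the table; the alias name fast and the table shape are hypothetical:

```python
# Sketch of :fallback resolution. The "name:fallback" key convention and the
# alias name "fast" are assumptions for illustration, not the gateway's syntax.

def model_for_provider(aliases: dict[str, str], name: str, on_fallback: bool) -> str:
    if on_fallback and f"{name}:fallback" in aliases:
        # A model tailored to the fallback provider was configured.
        return aliases[f"{name}:fallback"]
    return aliases.get(name, name)  # normal resolution

aliases = {
    "fast": "openai/gpt-5-mini",
    "fast:fallback": "anthropic/claude-haiku-4-5-20251001",
}
assert model_for_provider(aliases, "fast", on_fallback=False) == "openai/gpt-5-mini"
assert model_for_provider(aliases, "fast", on_fallback=True) == "anthropic/claude-haiku-4-5-20251001"
```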
Enforcement
Aliases do not bypass models_allowed. If models_allowed: ["gpt-5-mini", "claude-haiku-4-5-20251001"] and an alias maps to openai/o3, a request using the alias returns 403 model_not_allowed: the resolved model fails the allowlist.
This matters for security: don’t grant a model via aliases that you wouldn’t grant explicitly.
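That rule can also be checked at config time instead of discovered at request time. A hypothetical lint, dead_aliases, flags alias targets the allowlist would reject (exact-match only; glob patterns are ignored for brevity):

```python
# Hypothetical config-time lint: every alias target should also pass the
# allowlist, or requests through that alias will 403. Exact matching only;
# real allowlists may contain globs, which this sketch ignores.

def dead_aliases(model_aliases: dict[str, str], models_allowed: list[str]) -> list[str]:
    """Alias names whose resolved model would be rejected at request time."""
    def allowed(target: str) -> bool:
        bare = target.split("/", 1)[-1]
        return bare in models_allowed or target in models_allowed
    return sorted(name for name, target in model_aliases.items() if not allowed(target))

aliases = {"fast": "openai/o3", "claude": "anthropic/claude-haiku-4-5-20251001"}
allow = ["gpt-5-mini", "claude-haiku-4-5-20251001"]
assert dead_aliases(aliases, allow) == ["fast"]  # "fast" would always 403
```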
Discovery via /v1/models
GET /v1/models returns three deduplicated + stable-sorted groups: (1) alias names, (2) models_allowed entries (globs included, shown verbatim), and (3) provider-type shortcuts (openai/*, anthropic/*, bedrock/*, azure/*, vertex/*, gemini/* — one per provider bound on the VK). CLIs that auto-configure completion (Codex, Cursor, openai models list) consume the whole list uniformly.
Each entry carries an owned_by field indicating the resolved provider, so UIs can render “gpt-4o (Azure)” vs “gpt-4o (OpenAI)” when both are present across different VKs.
See API: GET /v1/models for the full response shape.
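As a sketch, the three groups could be assembled like this. The function and its inputs are illustrative, not the gateway's implementation, and the owned_by field is omitted:

```python
# Sketch of /v1/models assembly: three deduplicated, stable-sorted groups
# (alias names, allowlist entries verbatim, provider-type shortcuts), then
# cross-group dedup keeping the first occurrence. Illustrative only.

def list_models(model_aliases, models_allowed, providers):
    groups = (
        sorted(set(model_aliases)),                  # (1) alias names
        sorted(set(models_allowed)),                 # (2) allowlist entries, globs verbatim
        sorted(set(f"{p}/*" for p in providers)),    # (3) provider shortcuts
    )
    seen, out = set(), []
    for group in groups:
        for entry in group:
            if entry not in seen:
                seen.add(entry)
                out.append(entry)
    return out

ids = list_models({"claude": "anthropic/claude-haiku-4-5-20251001"},
                  ["gpt-5-*"], ["openai", "anthropic"])
# ids == ["claude", "gpt-5-*", "anthropic/*", "openai/*"]
```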
Trace attributes
- langwatch.model — the final resolved model dispatched to the provider (post-alias).
- langwatch.model_source — how the model was resolved: alias (the client sent an alias that mapped), explicit_slash (the client sent provider/model), or implicit (the client sent a bare name the gateway mapped via defaults).

Filter on model_source="alias" to see which requests benefited from the VK's alias table today; filter by model to count Azure vs OpenAI dispatches.
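For example, given trace records as plain dicts (an assumed shape; the deployment name is hypothetical), the two queries look like:

```python
# Illustrative: aggregating trace records by the two attributes above.
# The dict shape and the "azure/gpt-4o-deploy" name are assumptions.
from collections import Counter

traces = [
    {"langwatch.model": "azure/gpt-4o-deploy", "langwatch.model_source": "alias"},
    {"langwatch.model": "openai/gpt-4o", "langwatch.model_source": "explicit_slash"},
    {"langwatch.model": "azure/gpt-4o-deploy", "langwatch.model_source": "alias"},
]

# "Which requests benefited from the alias table?"
aliased = [t for t in traces if t["langwatch.model_source"] == "alias"]
# "How many dispatches per resolved model?"
by_model = Counter(t["langwatch.model"] for t in traces)
assert len(aliased) == 2 and by_model["azure/gpt-4o-deploy"] == 2
```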
When NOT to use aliases
- Small projects with one provider and one model — just use the provider’s native model name.
- When the alias would be just another layer of indirection nobody remembers. Keep alias names close to the provider-native names if the redirect is simple.