The langwatch prompt command provides dependency management for AI prompts as plain YAML files, enabling you to version prompts locally with Git while synchronizing with the LangWatch platform for testing, evaluation, and team collaboration.
Installation
Install the CLI globally:
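npm install -g langwatch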
Authenticate:
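langwatch login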
Quick Start
1. Initialize Your Project
Create a new prompts project:
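langwatch prompt init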
This will create the following structure to manage your prompts:
├── prompts              # directory for your prompt files
│   └── .materialized    # where fetched remote prompts are stored
├── prompts.json # prompt dependencies
└── prompts-lock.json # lock file
2. Add Your First Prompt
Create a local prompt:
langwatch prompt create my-summarizer
Or add an existing remote prompt dependency:
langwatch prompt add agent/customer-service
Your newly created or fetched prompt will be stored as a YAML file in the prompts/ directory, and look like this:
model: openai/gpt-5
modelParameters:
temperature: 1.0
messages:
- role: system
content: You are a helpful assistant.
- role: user
content: "{{input}}"
3. Synchronize
Sync all prompts (fetch remote, push local changes):
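langwatch prompt sync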
Go to app.langwatch.ai to see your newly synced prompts.
Using the Prompts
Once you’ve created your prompts, you can use them in your application code. The LangWatch SDK provides a simple interface to fetch and compile prompts with dynamic variables.
import langwatch
from litellm import completion
# Get the latest prompt by handle
prompt = langwatch.prompts.get("my-summarizer")
# Compile prompt with variables
compiled_prompt = prompt.compile(
user_input="Summarize this article for me...",
context="Article content here"
)
# Use with LiteLLM (supports all major providers)
response = completion(
model=prompt.model,
messages=compiled_prompt.messages,
temperature=prompt.temperature,
max_tokens=prompt.max_tokens
)
print(response.choices[0].message.content)
import { LangWatch } from "langwatch";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
const langwatch = new LangWatch({
apiKey: process.env.LANGWATCH_API_KEY
});
// Get the latest prompt by handle
const prompt = await langwatch.prompts.get("my-summarizer");
// Compile prompt with variables
const compiledPrompt = prompt.compile({
user_input: "Summarize this article for me...",
context: "Article content here"
});
// Use with Vercel AI SDK
const result = await generateText({
model: openai(prompt.model.replace("openai/", "")),
messages: compiledPrompt.messages,
temperature: prompt.temperature,
maxTokens: prompt.maxTokens
});
console.log(result.text);
Working with Local and Remote Prompts
The SDK loads prompts dynamically at runtime, so you don’t need to worry about whether they’re local or remote:
- Remote prompts: Fetched directly from the LangWatch platform
- Local prompts: Also fetched from LangWatch, once sync has pushed them to the platform
For production deployments, the CLI materializes prompts locally so you have a complete snapshot of all dependencies. The SDK can automatically use these materialized prompts for guaranteed availability in offline or air-gapped environments. However, when online, the SDK always fetches the latest version from the server unless you specify a version. Learn more about guaranteed availability and offline deployments.
Loading Specific Versions
You can also load specific prompt versions instead of always using the latest:
# Get a specific version
prompt = langwatch.prompts.get("my-summarizer", version=5)
Prompt Variables
Prompts use {{variable_name}} syntax for dynamic content. When you compile a prompt, provide all required variables:
messages:
- role: system
content: |
You are a {{role}} assistant.
User: {{user_name}}
Context: {{context}}
- role: user
content: "{{user_input}}"
Then compile with all variables:
compiled = prompt.compile(
role="customer support",
user_name="John Doe",
context="Premium user since 2020",
user_input="I need help with my account"
)
The compiled prompt contains fully rendered messages ready to send to your LLM provider, with all variables replaced and the correct model configuration applied.
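For the example above, the compiled messages come out roughly like this (illustrative output):
# Inspect the rendered messages
print(compiled.messages)
# [{'role': 'system', 'content': 'You are a customer support assistant.\nUser: John Doe\nContext: Premium user since 2020'},
#  {'role': 'user', 'content': 'I need help with my account'}]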
Core Concepts
Dependency Management
The CLI uses two configuration files:
prompts.json - Declares your prompt dependencies:
{
"prompts": {
"agent/customer-service": "latest",
"shared/guidelines": "5",
"my-local-prompt": "file:./prompts/my-local-prompt.prompt.yaml"
}
}
prompts-lock.json - Tracks resolved versions and materialized file paths:
{
"lockfileVersion": 1,
"prompts": {
"agent/customer-service": {
"version": 12,
"versionId": "prompt_version_scRQwSRMIyJvoSxTqP2nR",
"materialized": "prompts/.materialized/agent/customer-service.prompt.yaml"
}
}
}
Local vs Remote Prompts
Remote Prompts (agent/customer-service@latest)
- Pulled from LangWatch platform
- Fetched and materialized locally in ./prompts/.materialized/
- Read-only locally
Local Prompts (file:./prompts/my-prompt.prompt.yaml)
- Stored as local YAML files
- Version controlled with Git
- Pushed to platform during sync for sharing and evaluation
Prompt files use the .prompt.yaml extension and follow this format:
model: openai/gpt-5
modelParameters:
temperature: 0.7
max_tokens: 1000
messages:
- role: system
content: You are a helpful assistant specializing in customer service.
- role: user
content: |
Please help the customer with their inquiry:
{{customer_message}}
This is the same structure as GitHub Prompts.
Commands Reference
langwatch prompt init
Initialize a new prompts project in the current directory.
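langwatch prompt init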
langwatch prompt add <spec> [localFile]
Add a new prompt dependency and immediately fetch/materialize it.
# Add remote prompt
langwatch prompt add shared/guidelines@latest
# Add specific version
langwatch prompt add agent/support@5
# Add local file as dependency
langwatch prompt add my-prompt ./prompts/my-prompt.prompt.yaml
Arguments:
<spec> - Prompt specification (name@version or name for latest)
[localFile] - Optional path to local YAML file to add
Behavior:
- Updates prompts.json with the new dependency
- Fetches the prompt from the server and materializes it locally
- Updates prompts-lock.json with the resolved version
langwatch prompt remove <name>
Remove a prompt dependency and clean up associated files.
langwatch prompt remove agent/support
Behavior:
- Removes the entry from prompts.json
- Removes the entry from prompts-lock.json
- Deletes the materialized file
- For local prompts: deletes the source file and warns about server state
langwatch prompt create <name>
Create a new local prompt file with default content.
langwatch prompt create my-new-prompt
Behavior:
- Creates ./prompts/<name>.prompt.yaml with template content
- Automatically adds it to prompts.json as a file: dependency
- Updates prompts-lock.json
langwatch prompt sync
Synchronize all prompts between local files and the server.
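langwatch prompt sync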
Behavior:
- Fetches remote prompts if new versions available
- Pushes local prompt changes to server
- Handles conflict resolution interactively
- Cleans up orphaned materialized files
- Reports what was synced
Conflict Resolution:
When local and remote versions have both changed:
Conflict detected for my-prompt:
Local changes: Updated system message
Remote changes: Added temperature parameter
Choose resolution:
l) Use local version (will create new version on server)
r) Use remote version (will overwrite local file)
a) Abort sync for this prompt
langwatch prompt list
Display current prompt dependencies and their status.
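langwatch prompt list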
CI/CD Integration
Integrate prompt materialization into your deployment pipeline:
.github/workflows/deploy.yml
name: Deploy with Prompts
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: "18"
- name: Install LangWatch CLI
run: npm install -g langwatch
- name: Materialize prompts
env:
LANGWATCH_API_KEY: ${{ secrets.LANGWATCH_API_KEY }}
run: langwatch prompt sync
- name: Build application
run: npm run build
- name: Deploy with materialized prompts
run: |
# Deploy application including prompts/.materialized/
# Your deployment commands here
Workflows
Team Collaboration
Setup:
- One team member initializes the project with langwatch prompt init
- Commit prompts.json and prompts-lock.json to Git
- Add prompts/.materialized to .gitignore
- Team members run langwatch prompt sync after pulling
Adding Shared Prompts:
# Add organization prompts
langwatch prompt add company/brand-guidelines@latest
langwatch prompt add legal/privacy-policy@^2
# Sync to materialize
langwatch prompt sync
Creating Local Prompts:
# Create and develop locally
langwatch prompt create feature/user-onboarding
# Edit the file
vim prompts/feature/user-onboarding.prompt.yaml
# Push to platform for evaluation
langwatch prompt sync
Version Management
Pinning Versions:
{
"prompts": {
"critical/system-prompt": "5", // Exact version
"experimental/new-feature": "latest" // Auto-update
}
}
Upgrading Dependencies:
# Edit prompts.json to change versions
# Then sync to fetch new versions
langwatch prompt sync
Rolling Back:
# Restore from Git
git checkout prompts.json prompts-lock.json
# Re-sync to match committed state
langwatch prompt sync
Development Workflow:
# Create new feature prompt
langwatch prompt create features/new-capability
# Edit and test locally
# When ready, sync to platform
langwatch prompt sync
# Platform team can now evaluate and provide feedback
Coding Assistant Integration
Since prompts are just YAML files, you can reference them directly from other tools and coding assistants.
Cursor Integration
Reference prompts in a .cursor/rules/*.mdc file:
---
description:
globs:
alwaysApply: true
---
@/prompts/.materialized/company/java-code-guidelines.prompt.yaml
Claude Code Integration
Include prompt content in Claude Code by referencing the YAML files in the prompts/.materialized directory.
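For example, a CLAUDE.md file can import a materialized prompt with Claude Code's @path syntax (using the same materialized prompt as the Cursor example above):
@prompts/.materialized/company/java-code-guidelines.prompt.yaml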