LangWatch provides a robust version control system for managing your prompts. Each prompt can have multiple versions, allowing you to track changes, experiment with different approaches, and roll back when needed.
Every prompt in LangWatch automatically maintains a version history. When you create a new prompt, it starts at version 1, and each subsequent change creates a new version with an incremented number.

Important: You cannot delete individual versions - only entire prompts can be deleted. Each update operation automatically creates a new version.
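The append-only model described above can be sketched as follows. This is a hypothetical illustration of the versioning behavior, not LangWatch's actual implementation; the `PromptHistory` class and its methods are invented for the example:

```python
# Hypothetical sketch of append-only prompt versioning:
# every update appends a new version; versions are never deleted.
class PromptHistory:
    def __init__(self, handle: str, prompt: str):
        self.handle = handle
        self.versions = [{"version": 1, "prompt": prompt}]  # starts at version 1

    def update(self, prompt: str) -> dict:
        """Create a new version with an incremented number."""
        new = {"version": len(self.versions) + 1, "prompt": prompt}
        self.versions.append(new)
        return new

    @property
    def latest(self) -> dict:
        return self.versions[-1]

history = PromptHistory("customer-support-bot", "You are a helpful agent.")
history.update("You are an expert agent.")
print(history.latest["version"])   # 2
print(len(history.versions))       # 2 -- version 1 is still retained
```

Note that updating never mutates an existing entry; earlier versions stay available for rollback.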
Prompts have two scope levels that affect version management and conflict resolution:
PROJECT scope - Prompts are accessible only within the project. Changes are isolated to your project.
ORGANIZATION scope - Prompts are shared across all projects in the organization. Changes can affect other projects and may require conflict resolution.
Scope Conflicts: When updating an organization-scoped prompt, conflicts may arise if other projects have made changes. The system provides conflict information to help you resolve the differences.
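One common way such conflicts are detected is optimistic concurrency: an update carries the version you last read, and is rejected if someone else has created a newer version in the meantime. The sketch below is an assumption about the general technique, not LangWatch's actual conflict mechanism; `update_if_unchanged`, `ConflictError`, and the store layout are all invented for illustration:

```python
# Hypothetical version-check sketch: reject an update when another
# project has already advanced the prompt past the version you read.
class ConflictError(Exception):
    pass

def update_if_unchanged(store: dict, handle: str, expected_version: int, prompt: str) -> int:
    current = store[handle]
    if current["version"] != expected_version:
        raise ConflictError(
            f"{handle}: expected version {expected_version}, "
            f"found {current['version']} -- resolve the conflict first"
        )
    store[handle] = {"version": current["version"] + 1, "prompt": prompt}
    return store[handle]["version"]

store = {"shared-bot": {"version": 3, "prompt": "..."}}
print(update_if_unchanged(store, "shared-bot", 3, "updated"))  # 4
```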
1. Click the version history icon at the bottom of the prompt editor.
2. Use the version selector to switch between versions.
3. Create new versions by making changes and saving.
LangWatch’s TypeScript SDK supports retrieving prompts and specific versions:
```typescript
import { LangWatch } from "langwatch";

// Initialize LangWatch client
const langwatch = new LangWatch({
  apiKey: process.env.LANGWATCH_API_KEY,
});

// Get a specific prompt (latest version by default)
const prompt = await langwatch.prompts.get("customer-support-bot");

// The prompt object contains version information
console.log(`Version: ${prompt.version}`);
console.log(`Version ID: ${prompt.versionId}`);

// Get a specific version of a prompt
const specificVersion = await langwatch.prompts.get("customer-support-bot", {
  version: "version_abc123",
});

// Compile with variables
const compiledPrompt = specificVersion.compile({
  user_name: "John Doe",
  user_email: "[email protected]",
  input: "I need help with my account",
});
```
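The `compile` call above fills `{{variable}}` placeholders with the values you pass in. A minimal sketch of that substitution is shown below; the real SDK implementation may differ (for example, it may support fuller template semantics), and `compile_prompt` is a name invented for this example:

```python
# Minimal {{variable}} substitution sketch: replace each placeholder
# with its value, leaving unknown placeholders untouched.
import re

def compile_prompt(template: str, variables: dict) -> str:
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

compiled = compile_prompt(
    "Hello {{user_name}}, you asked: {{input}}",
    {"user_name": "John Doe", "input": "I need help with my account"},
)
print(compiled)  # Hello John Doe, you asked: I need help with my account
```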
Use the REST API to manage prompt versions:
```bash
# Get all versions of a prompt
curl --request GET \
  --url "https://app.langwatch.ai/api/prompts/{prompt_handle}/versions" \
  --header "X-Auth-Token: your-api-key"

# Get a specific prompt (latest version)
curl --request GET \
  --url "https://app.langwatch.ai/api/prompts/{prompt_handle}" \
  --header "X-Auth-Token: your-api-key"

# Create a new version
curl --request POST \
  --url "https://app.langwatch.ai/api/prompts/{prompt_handle}/versions" \
  --header "X-Auth-Token: your-api-key" \
  --header "Content-Type: application/json" \
  --data '{
    "prompt": "Updated prompt text...",
    "model": "openai/gpt-5",
    "commitMessage": "Improved customer support prompt",
    "temperature": 0.7,
    "maxTokens": 1000,
    "responseFormat": {"type": "text"},
    "inputs": [{"identifier": "input", "type": "str"}],
    "outputs": [{"identifier": "response", "type": "str"}],
    "demonstrations": null,
    "promptingTechnique": null
  }'

# Get a specific version
curl --request GET \
  --url "https://app.langwatch.ai/api/prompts/{prompt_handle}?version=2" \
  --header "X-Auth-Token: your-api-key"
```
The SDK provides comprehensive CRUD operations for managing prompts programmatically:
Field Structure: All examples show the essential fields. Additional optional fields like temperature, maxTokens, responseFormat, inputs, outputs, demonstrations, and promptingTechnique can also be set. See the Data Model page for complete field documentation.
System Message Conflict: You cannot set both a prompt (system message) and a messages array containing a system role in the same operation. Choose one approach to avoid errors.
TypeScript SDK
Python SDK
create_prompt.ts
```typescript
// Create a new prompt with a system prompt
const prompt = await langwatch.prompts.create({
  handle: "customer-support-bot", // Required
  scope: "PROJECT", // Required
  prompt: "You are a helpful customer support agent. Help with: {{input}}", // Required
  model: "openai/gpt-4o-mini", // Required
  // Optional fields:
  temperature: 0.7, // Optional: Model temperature (0.0-2.0)
  maxTokens: 1000, // Optional: Maximum tokens to generate
  // messages: [...], // Optional: Messages array in OpenAI format { role: "system" | "user" | "assistant", content: "..." }
});

console.log(`Created prompt with handle: ${prompt.handle}`);
```
create_prompt.py
```python
# Create a new prompt
prompt = langwatch.prompts.create(
    handle="customer-support-bot",  # Required
    scope="PROJECT",  # Required
    prompt="You are a helpful customer support agent. Help with: {{input}}",  # Required
    model="openai/gpt-4o-mini",  # Required
    # Optional fields:
    temperature=0.7,  # Optional: Model temperature (0.0-2.0)
    max_tokens=1000,  # Optional: Maximum tokens to generate
    # messages=[...],  # Optional: Messages array in OpenAI format { role: "system" | "user" | "assistant", content: "..." }
)

print(f"Created prompt with handle: {prompt.handle}")
```
Modify existing prompts while maintaining version history:
System Message Conflict: The same rule applies - you cannot set both a prompt and a messages array with a system role in the same operation.
You must include at least one field to update the prompt.
TypeScript SDK
Python SDK
update_prompt.ts
```typescript
// Update prompt content (creates a new version automatically)
const updatedPrompt = await langwatch.prompts.update("customer-support-bot", {
  // All fields are optional for updates - only specify what you want to change
  prompt: "You are an expert customer support agent. Help with: {{input}}",
  model: "openai/gpt-4o", // Optional: Change the model
  temperature: 0.5, // Optional: Adjust temperature
  maxTokens: 2000, // Optional: Change max tokens
  // messages: [...], // Optional: Messages array in OpenAI format { role: "system" | "user" | "assistant", content: "..." }
});

console.log(`Updated prompt: ${updatedPrompt.handle}, New version: ${updatedPrompt.version}`);
```
update_prompt.py
```python
# Update prompt content (creates a new version automatically)
updated_prompt = langwatch.prompts.update(
    "customer-support-bot",
    # All fields are optional for updates - only specify what you want to change
    prompt="You are an expert customer support agent. Help with: {{input}}",
    model="openai/gpt-4o",  # Optional: Change the model
    temperature=0.5,  # Optional: Adjust temperature
    max_tokens=2000,  # Optional: Change max tokens
    # messages=[...],  # Optional: Messages array in OpenAI format { role: "system" | "user" | "assistant", content: "..." }
)

print(f"Updated prompt: {updated_prompt.handle}, New version: {updated_prompt.version}")
```
Permanent Deletion: Deleting a prompt removes ALL versions permanently. This action cannot be undone.
TypeScript SDK
Python SDK
delete_prompt.ts
```typescript
// Delete by handle (removes all versions)
const result = await langwatch.prompts.delete("customer-support-bot");
console.log(`Deletion result: ${result.success}`);
```
delete_prompt.py
```python
# Delete by handle (removes all versions)
result = langwatch.prompts.delete("customer-support-bot")
print(f"Deletion result: {result}")
```
- Temperature Control: Fine-tune creativity vs. consistency (0.0-2.0)
- Token Limits: Set maxTokens to control response length and costs
- Model Selection: Choose the best model for your specific use case
These advanced features are particularly powerful when combined with LangWatch's optimization studio, where you can A/B test different configurations and measure their impact on performance metrics.