Prompt Hub
Manage, version, and evaluate your AI prompts with Langfuse, the open-source LLM observability platform.
Sync Dataset to Langfuse
Push your lead data to Langfuse to run evaluations.
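Syncing can be sketched as mapping each lead row to a Langfuse dataset item and pushing it through the Python SDK's `create_dataset` / `create_dataset_item` calls. The client is passed in as a parameter here, and the lead field names (`company`, `message`, `qualified`) are hypothetical examples, not a fixed schema.

```python
# Sketch of pushing lead rows into a Langfuse dataset for evaluation.
# Assumes the Langfuse Python SDK (pip install langfuse); field names
# on the lead rows are illustrative.

def lead_to_item(lead: dict) -> dict:
    """Map one lead row to a dataset-item payload: `input` is what the
    model will see, `expected_output` is the label evals compare against."""
    return {
        "input": {"company": lead["company"], "message": lead["message"]},
        "expected_output": {"qualified": lead["qualified"]},
    }

def sync_leads(langfuse, leads, dataset_name="lead-evals"):
    """Create the dataset and add one item per lead. `langfuse` is an
    authenticated SDK client, e.g.
    Langfuse(public_key=..., secret_key=..., host="http://localhost:3000")."""
    langfuse.create_dataset(name=dataset_name)
    for lead in leads:
        langfuse.create_dataset_item(dataset_name=dataset_name, **lead_to_item(lead))
    return len(leads)
```

Keeping the row-to-item mapping in its own function makes it easy to unit-test the payload shape without a running Langfuse instance.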
Prompts
Create, version, and manage your prompt templates with full history.
Traces
Monitor LLM calls, latency, token usage, and debug issues.
Datasets
Build evaluation datasets with expected outputs for testing.
Evaluations
Run automated evals with LLM-as-a-judge or custom metrics.
Playground
Test prompts interactively with different models and parameters.
Settings
Configure API keys, team members, and project settings.
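For the Evaluations feature above, a custom metric is just a function from (model output, expected output) to a numeric score that gets recorded alongside the run. A minimal local sketch, with the function name and normalization rules chosen here for illustration rather than taken from any SDK:

```python
# Custom eval metric sketch: exact match with light normalization
# (lowercase, collapsed whitespace). A score of 1.0 means the model
# output agrees with the dataset item's expected output.

def exact_match(output: str, expected: str) -> float:
    """Return 1.0 when the normalized strings agree, else 0.0."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())
    return 1.0 if norm(output) == norm(expected) else 0.0
```

An LLM-as-a-judge metric has the same shape, except the comparison is delegated to a model call instead of a string check.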
Prompt Versioning
Every change to your prompts is tracked. Compare versions, roll back, and A/B test different prompt variations.
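Version pinning can be sketched with the SDK's `get_prompt`, which fetches a specific version or label, and the returned prompt object's `compile`, which fills in template variables. The prompt name `lead-qualifier` and its variable are hypothetical.

```python
# Sketch of resolving a labeled prompt version at call time, assuming
# the Langfuse Python SDK's get_prompt()/compile() shape.

def render_prompt(langfuse, name: str, label: str = "production", **variables) -> str:
    """Fetch the prompt version tagged `label` and fill in its variables."""
    prompt = langfuse.get_prompt(name, label=label)
    return prompt.compile(**variables)
```

Rolling back then means moving the `production` label to an earlier version; callers pick up the change without a redeploy.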
Create your first prompt
LLM Observability
Track every LLM call with detailed traces. Monitor costs, latency, and quality across all your AI features.
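Recording one LLM call can be sketched as creating a trace and attaching a generation to it, following the shape of the Langfuse v2 Python SDK (`trace()` / `generation()`). The trace name, model name, and token counts below are illustrative, and the client is passed in so the sketch stays self-contained.

```python
# Sketch of logging one LLM call as a Langfuse trace with a nested
# generation; values are illustrative, not required fields.

def log_llm_call(langfuse, user_input: str, output: str):
    trace = langfuse.trace(name="lead-chat", input=user_input, output=output)
    trace.generation(
        name="completion",
        model="gpt-4o-mini",                 # illustrative model name
        input=user_input,
        output=output,
        usage={"input": 42, "output": 12},   # illustrative token counts
    )
    return trace
```

Latency and cost then come from the timestamps and usage attached to each generation, which is what the Traces view aggregates.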
View traces
Langfuse is running locally at http://localhost:3000