Prompt Hub

Manage, version, and evaluate your AI prompts with Langfuse, an open-source LLM observability platform.

Open in New Tab

Sync Dataset to Langfuse

Push your lead data to Langfuse as a dataset to run evaluations against.

View Datasets
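Syncing leads amounts to reshaping each lead row into a Langfuse dataset item. A minimal sketch of that transform is below; the lead fields and the dataset name "leads" are hypothetical, and the commented SDK calls follow the Langfuse Python SDK's v2-style dataset API.

```python
# Sketch: shaping lead records into Langfuse dataset-item payloads.
# Field names ("email", "message") and the dataset name are assumptions.

leads = [
    {"email": "a@example.com", "message": "Pricing for 50 seats?"},
    {"email": "b@example.com", "message": "Does it support SSO?"},
]

def to_dataset_items(records):
    """Map raw lead rows to Langfuse dataset-item payloads."""
    return [
        {
            "dataset_name": "leads",              # hypothetical dataset name
            "input": {"message": r["message"]},   # what the eval will run on
            "metadata": {"email": r["email"]},    # carried along for reference
        }
        for r in records
    ]

items = to_dataset_items(leads)
print(len(items))  # -> 2

# With Langfuse running (http://localhost:3000) the push would look like:
# from langfuse import Langfuse
# langfuse = Langfuse()
# langfuse.create_dataset(name="leads")
# for item in items:
#     langfuse.create_dataset_item(**item)
```

Keeping the transform separate from the SDK calls makes it easy to unit-test the payload shape before anything touches the server.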

Prompt Versioning

Every change to your prompts is tracked. Compare versions, roll back, and A/B test different prompt variations.

Create your first prompt
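The versioning model behind this is simple: every save creates a new immutable version, and a label such as "production" points at one version and can be moved to roll back or promote. Langfuse tracks this server-side; the in-memory store below only mirrors the idea, with the real SDK calls shown in comments.

```python
# Illustrative sketch of prompt versioning with labels. This toy store is
# NOT the Langfuse implementation, just the same mental model.

class PromptStore:
    def __init__(self):
        self.versions = {}  # name -> list of prompt texts (version n = index n-1)
        self.labels = {}    # (name, label) -> version number

    def save(self, name, text, label=None):
        """Append a new immutable version; optionally point a label at it."""
        self.versions.setdefault(name, []).append(text)
        version = len(self.versions[name])
        if label:
            self.labels[(name, label)] = version
        return version

    def get(self, name, version=None, label=None):
        """Fetch by label, explicit version, or latest."""
        if label is not None:
            version = self.labels[(name, label)]
        if version is None:
            version = len(self.versions[name])
        return self.versions[name][version - 1]

store = PromptStore()
store.save("greeting", "Hello {{name}}!", label="production")   # v1, live
store.save("greeting", "Hi {{name}}, welcome!")                 # v2, draft
print(store.get("greeting", label="production"))  # -> Hello {{name}}!
store.labels[("greeting", "production")] = 2      # promote v2 (or roll back to 1)
print(store.get("greeting", label="production"))  # -> Hi {{name}}, welcome!

# Roughly equivalent Langfuse SDK calls:
# langfuse.create_prompt(name="greeting", prompt="...", labels=["production"])
# langfuse.get_prompt("greeting", label="production")
```

Because versions are append-only, rolling back is just moving the label; no prompt text is ever overwritten.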

LLM Observability

Track every LLM call with detailed traces. Monitor costs, latency, and quality across all your AI features.

View traces
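Per-call tracing boils down to wrapping each LLM call and recording its name, latency, and output size. The sketch below shows that pattern with a stand-in model call; Langfuse's `@observe` decorator does this for real calls and adds cost and token accounting in the trace UI.

```python
# Minimal sketch of what a trace captures per call. The fake_llm_call
# function is a stand-in, not a real model client.

import time

TRACES = []  # in a real setup, spans are sent to Langfuse instead

def traced(fn):
    """Record name, latency, and output size for each call."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "name": fn.__name__,
            "latency_s": time.perf_counter() - start,
            "output_chars": len(str(result)),
        })
        return result
    return wrapper

@traced
def fake_llm_call(prompt):
    # Stand-in for a real model call.
    return f"echo: {prompt}"

fake_llm_call("hello")
print(TRACES[0]["name"])  # -> fake_llm_call
```

The same decorator can wrap every AI feature in the app, so cost and latency show up in one place rather than being logged ad hoc per call site.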

Langfuse is running locally at http://localhost:3000