LangSmith provides tools for developing, debugging, and deploying LLM applications. It helps you trace requests, evaluate outputs, test prompts, and manage deployments in one place. LangSmith is framework agnostic, so you can use it with or without LangChain’s open-source libraries langchain and langgraph. Prototype locally, then move to production with integrated monitoring and evaluation to build more reliable AI systems.
LangGraph Platform is now LangSmith Deployment. For more information, check out the Changelog.

Get started

Create an account

Sign up at smith.langchain.com (no credit card required). You can log in with Google, GitHub, or email.

Create an API key

Go to your Settings page → API Keys → Create API Key. Copy the key and save it securely.
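LangSmith's SDKs read the key from environment variables, so a common next step is to export it in your shell. A minimal sketch, assuming the standard `LANGSMITH_*` variable names (replace the placeholder with your own key):

```shell
# Turn tracing on and authenticate the LangSmith SDK.
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="<your-api-key>"
```

With these set, LangSmith-aware code in the same shell session will trace automatically, with no code changes required.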
Once your account and API key are ready, choose a quickstart to begin building with LangSmith:

Observability

Gain visibility into every step your application takes to debug faster and improve reliability.

Evaluation

Measure and track quality over time to ensure your AI applications are consistent and trustworthy.

Deployment

Deploy your agents as LangGraph Servers, ready to scale in production.

Prompt Testing

Iterate on prompts with built-in versioning and collaboration to ship improvements faster.

Studio

Use a visual interface to design, test, and refine applications end-to-end.

Hosting

Host LangSmith in the cloud, in your environment, or hybrid to match your infrastructure and compliance needs.

Workflow

LangSmith combines observability, evaluation, deployment, and hosting in one integrated workflow—from local development to production.

[Diagram: how LangSmith integrates observability, evaluation, deployment, and hosting in a single workflow from development to production.]
Connect these docs programmatically to Claude, VSCode, and more via MCP for real-time answers.