
Migrating from LangSmith to TraceCraft

This guide helps you migrate from LangSmith to TraceCraft for LLM observability.

Key Differences

| Feature | LangSmith | TraceCraft |
| --- | --- | --- |
| Vendor lock-in | LangChain ecosystem | Vendor-neutral |
| Export formats | Proprietary | OTLP, JSONL, HTML |
| Local-first | Cloud-required | Works offline |
| Pricing | Per-trace pricing | Self-hosted, free |

Migration Steps

1. Install TraceCraft

pip install tracecraft
# or
uv add tracecraft

2. Replace LangSmith Tracing

Before (LangSmith):

import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "ls_..."
 
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()

After (TraceCraft):

import tracecraft
from tracecraft.adapters.langchain import TraceCraftCallbackHandler
from langchain_openai import ChatOpenAI
 
# Initialize TraceCraft
tracecraft.init(console=True, jsonl=True)
 
# Use the callback handler
handler = TraceCraftCallbackHandler()
llm = ChatOpenAI()
 
# Pass handler to invoke
result = llm.invoke("Hello", config={"callbacks": [handler]})
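Under the hood, a LangChain callback handler is simply an object whose hook methods LangChain invokes at each lifecycle event; TraceCraft's handler maps those events onto its trace model. To see roughly what the handler receives, here is a minimal stand-in (illustrative only, not the real TraceCraftCallbackHandler):

```python
# Illustrative sketch: a minimal LangChain-style callback handler that
# records LLM lifecycle events in call order. The hook names mirror
# LangChain's callback interface (on_llm_start / on_llm_end).
class RecordingHandler:
    def __init__(self):
        self.events = []  # (event_name, payload) tuples

    def on_llm_start(self, serialized, prompts, **kwargs):
        self.events.append(("llm_start", prompts))

    def on_llm_end(self, response, **kwargs):
        self.events.append(("llm_end", response))


handler = RecordingHandler()
handler.on_llm_start({}, ["Hello"])
handler.on_llm_end("Hi there!")
```

Passing such a handler via `config={"callbacks": [...]}` is what lets TraceCraft observe every LLM call without patching your code.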

3. Update Chain Invocations

Before:

chain = prompt | llm | parser
result = chain.invoke({"query": "test"})

After:

from tracecraft.core.context import run_context
from tracecraft.core.models import AgentRun
from datetime import UTC, datetime
 
handler = TraceCraftCallbackHandler()
run = AgentRun(name="my_chain", start_time=datetime.now(UTC))
 
with run_context(run):
    result = chain.invoke(
        {"query": "test"},
        config={"callbacks": [handler]}
    )

4. Export to OTLP (Optional)

If you want to send traces to Jaeger, Honeycomb, or other OTLP backends:

from tracecraft.exporters.otlp import OTLPExporter
 
otlp = OTLPExporter(
    endpoint="http://localhost:4317",
    service_name="my-app"
)
 
tracecraft.init(exporters=[otlp])

Feature Mapping

| LangSmith Feature | TraceCraft Equivalent |
| --- | --- |
| @traceable decorator | @tracecraft.trace_agent |
| Run trees | Nested Steps with parent_id |
| LangSmith Hub | N/A (use your own prompts) |
| Feedback collection | Custom attributes |
| Dataset creation | Export to JSONL |

Benefits of Migration

  1. No vendor lock-in: Export to any OTLP-compatible backend
  2. Local development: Full tracing without internet
  3. Cost savings: No per-trace charges
  4. Privacy: Your data stays on your infrastructure
  5. Framework agnostic: Works with LangChain, LlamaIndex, PydanticAI