# Configuration
TraceCraft can be configured through code, a config file, or environment variables. This guide covers all available configuration options.
## Configuration Precedence
Configuration is applied in this order (later overrides earlier):
- Default values
- Environment variables (`TRACECRAFT_*`)
- `.tracecraft/config.yaml` (project or home directory)
- Explicit parameters passed to `tracecraft.init()`
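This layering behaves like a dict merge in which later sources win (a toy sketch of the idea, not TraceCraft internals):

```python
# Toy sketch of layered precedence: later sources override earlier ones.
defaults = {"service_name": "unnamed", "console": True}
env_vars = {"service_name": "env-service"}    # from TRACECRAFT_* variables
explicit = {"service_name": "code-service"}   # from tracecraft.init(...)

# Merge in precedence order; keys in later dicts win.
config = {**defaults, **env_vars, **explicit}
print(config["service_name"])  # code-service
print(config["console"])       # True (no later layer overrides it)
```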
Example:
```bash
# Environment variable sets the service name
export TRACECRAFT_SERVICE_NAME=env-service
```

```python
# Explicit param wins — "code-service" is used, not "env-service"
tracecraft.init(service_name="code-service")
```

## Config File
The easiest way to configure TraceCraft for a project is a config file at .tracecraft/config.yaml in your project root (or ~/.tracecraft/config.yaml globally). The file is loaded automatically — no code changes required.
### Minimal Config
```yaml
# .tracecraft/config.yaml
env: development

default:
  service_name: my-agent-service
  storage:
    type: jsonl
    jsonl_path: traces/tracecraft.jsonl
  exporters:
    console: true
    jsonl: true
    # Stream traces live to `tracecraft serve --tui`
    receiver: false
    receiver_endpoint: http://localhost:4318
  instrumentation:
    # true, false, or a list like [openai, anthropic]
    auto_instrument: false
```

### Full Config with Environments
```yaml
# .tracecraft/config.yaml
env: development

default:
  service_name: my-agent-service
  storage:
    type: jsonl
    jsonl_path: traces/tracecraft.jsonl
  exporters:
    console: true
    jsonl: true
    otlp: false
    receiver: false
    receiver_endpoint: http://localhost:4318
  instrumentation:
    auto_instrument: false
  processors:
    redaction_enabled: false
    redaction_mode: mask # mask, hash, or remove
    sampling_enabled: false
    sampling_rate: 1.0
    enrichment_enabled: true

environments:
  # Development: stream live to TUI receiver
  development:
    storage:
      type: sqlite
      sqlite_path: traces/dev.db
    exporters:
      console: true
      receiver: true # run: tracecraft serve --tui
      receiver_endpoint: http://localhost:4318
    instrumentation:
      auto_instrument: true # instrument all available SDKs

  # Staging: SQLite + OTLP, selective auto-instrumentation
  staging:
    storage:
      type: sqlite
      sqlite_path: traces/staging.db
    exporters:
      console: false
      jsonl: true
      otlp: true
      otlp_endpoint: ${OTEL_EXPORTER_OTLP_ENDPOINT}
    instrumentation:
      auto_instrument:
        - openai
        - anthropic
    processors:
      redaction_enabled: true
      redaction_mode: mask

  # Production: OTLP only, no local storage, sampled
  production:
    storage:
      type: none
    exporters:
      console: false
      jsonl: false
      otlp: true
      otlp_endpoint: ${OTEL_EXPORTER_OTLP_ENDPOINT}
      otlp_headers:
        Authorization: Bearer ${OTEL_AUTH_TOKEN}
    instrumentation:
      auto_instrument: false # use decorators in production
    processors:
      redaction_enabled: true
      redaction_mode: hash
      sampling_enabled: true
      sampling_rate: 0.1 # 10% in production

  # Test: no output, no instrumentation
  test:
    storage:
      type: none
    exporters:
      console: false
      jsonl: false
      otlp: false
      receiver: false
    instrumentation:
      auto_instrument: false
```

Set the active environment via an environment variable:

```bash
export TRACECRAFT_ENV=staging
```

or in the config file:

```yaml
env: production
```

## Quick Start
### Basic Initialization
```python
import tracecraft

# Loads .tracecraft/config.yaml automatically
tracecraft.init()
```

### Common Configurations
```python
# Local development — stream live to TUI receiver
tracecraft.init(
    auto_instrument=True,
    receiver=True,
    service_name="my-agent",
)
```

```bash
tracecraft serve --tui # start receiver + TUI
```

```python
# Local development — write to file, open TUI separately
tracecraft.init(
    auto_instrument=True,
    jsonl=True,
    service_name="my-agent",
)
```

```bash
tracecraft tui
```

```python
# Production — OTLP export, no local output
tracecraft.init(
    service_name="production-agent",
    console=False,
    jsonl=False,
    exporters=[OTLPExporter(endpoint="https://otlp.example.com")],
)
```

## Configuration Options
### Service Identification
```python
tracecraft.init(
    service_name="my-agent-service", # shown in TUI and OTLP traces
)
```

Config file:

```yaml
default:
  service_name: my-agent-service
```

Environment variable:

```bash
export TRACECRAFT_SERVICE_NAME=my-service
```

### TUI Receiver Shorthand
Stream traces live to the `tracecraft serve --tui` receiver without any extra setup:

```python
# receiver=True → connect to http://localhost:4318 (default)
tracecraft.init(
    auto_instrument=True,
    receiver=True,
    service_name="my-agent",
)
```

```python
# receiver=<url> → custom receiver address
tracecraft.init(
    receiver="http://remote-host:4318",
    service_name="my-agent",
)
```

Config file:

```yaml
default:
  exporters:
    receiver: true
    receiver_endpoint: http://localhost:4318 # optional, this is the default
```

Start the receiver:

```bash
tracecraft serve --tui
```

### Auto-Instrumentation
Automatically capture all LLM calls without decorators:
```python
# Instrument all supported SDKs (OpenAI, Anthropic, LangChain, LlamaIndex)
tracecraft.init(auto_instrument=True)

# Instrument specific SDKs only
tracecraft.init(auto_instrument=["openai", "langchain"])
```

Config file:

```yaml
default:
  instrumentation:
    auto_instrument: true # all SDKs
    # or selectively:
    # auto_instrument:
    #   - openai
    #   - anthropic
```

Environment variables:

```bash
export TRACECRAFT_AUTO_INSTRUMENT=true
export TRACECRAFT_AUTO_INSTRUMENT=openai,langchain # selective
```

### Initialize Before Importing SDKs
tracecraft.init() must be called before importing OpenAI, Anthropic,
LangChain, or LlamaIndex. TraceCraft patches at import time — importing first
means the patch won’t apply.
### Console Output
```python
tracecraft.init(
    console=True, # Enable console output (default: True)
    console_verbose=False, # When True, show all attributes
)
```

Config file:

```yaml
default:
  exporters:
    console: true
```

Environment variable:

```bash
export TRACECRAFT_CONSOLE_ENABLED=true
```

### JSONL File Export
```python
tracecraft.init(
    jsonl=True, # Enable JSONL export
    jsonl_path="./my-traces/", # Output path
)
```

Config file:

```yaml
default:
  exporters:
    jsonl: true
  storage:
    type: jsonl
    jsonl_path: traces/tracecraft.jsonl
```

Environment variables:

```bash
export TRACECRAFT_JSONL_ENABLED=true
export TRACECRAFT_JSONL_PATH=./my-traces/
```

### OTLP Export
```python
tracecraft.init(
    otlp_endpoint="http://localhost:4317",
    otlp_insecure=True,
    otlp_headers={"Authorization": "Bearer token"},
)
```

Config file:

```yaml
default:
  exporters:
    otlp: true
    otlp_endpoint: ${OTEL_EXPORTER_OTLP_ENDPOINT}
    otlp_headers:
      Authorization: Bearer ${OTEL_AUTH_TOKEN}
```

Environment variables:

```bash
export TRACECRAFT_OTLP_ENDPOINT=http://localhost:4317
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer token
```

### Sampling
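Sampling keeps a configurable fraction of traces, while the keep-rules always retain errors and slow runs. A toy sketch of the decision logic (illustrative, not TraceCraft internals):

```python
import random

def should_keep(run, rate=0.1, slow_threshold_ms=5000, rng=random.random):
    # Keep-rules first: errors and slow traces bypass sampling entirely.
    if run.get("error"):
        return True
    if run.get("duration_ms", 0) > slow_threshold_ms:
        return True
    # Otherwise keep roughly `rate` of all traces.
    return rng() < rate

print(should_keep({"error": True}))                         # True — always kept
print(should_keep({"duration_ms": 9000}))                   # True — slow, always kept
print(should_keep({"duration_ms": 100}, rng=lambda: 0.99))  # False — sampled out
```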
```python
tracecraft.init(
    sampling_rate=0.1, # Sample 10% of traces
    always_keep_errors=True, # Always keep error traces
    always_keep_slow=True, # Always keep slow traces
    slow_threshold_ms=5000, # >5s is slow
)
```

Config file:

```yaml
default:
  processors:
    sampling_enabled: true
    sampling_rate: 0.1
```

Environment variables:

```bash
export TRACECRAFT_SAMPLING_RATE=0.1
export TRACECRAFT_ALWAYS_KEEP_ERRORS=true
```

### PII Redaction
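In mask mode, each match of a redaction pattern is replaced with a placeholder. A minimal regex sketch of the idea (illustrative, not TraceCraft internals; `re.IGNORECASE` is added here so the uppercase character classes also match lowercase input):

```python
import re

# Patterns like those passed to redaction_patterns below.
PATTERNS = [
    re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", re.IGNORECASE),  # emails
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSNs
]

def mask(text: str) -> str:
    # Mask mode: replace every match with a fixed placeholder.
    for pattern in PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(mask("Contact alice@example.com, SSN 123-45-6789"))
# Contact [REDACTED], SSN [REDACTED]
```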
```python
from tracecraft.core.config import RedactionConfig, RedactionMode

tracecraft.init(
    enable_pii_redaction=True,
    redaction_mode=RedactionMode.MASK, # or REMOVE, HASH
    redaction_patterns=[
        r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", # Emails
        r"\b\d{3}-\d{2}-\d{4}\b", # SSN
    ],
)
```

Config file:

```yaml
default:
  processors:
    redaction_enabled: true
    redaction_mode: mask # mask, hash, or remove
```

Environment variables:

```bash
export TRACECRAFT_REDACTION_ENABLED=true
export TRACECRAFT_REDACTION_MODE=mask
```

### Processor Order
Control the order of the processing pipeline:
```python
from tracecraft.core.config import ProcessorOrder

tracecraft.init(
    processor_order=ProcessorOrder.SAFETY, # or EFFICIENCY
)
```

- SAFETY (default): Enrich → Redact → Sample. Better for compliance.
- EFFICIENCY: Sample → Redact → Enrich. Better for high throughput.
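The practical difference shows up for runs that get sampled out. A toy pipeline (illustrative, not TraceCraft internals) that records how much work each ordering does before dropping a run:

```python
work_done = []

def enrich(run):
    work_done.append("enrich")
    return run

def redact(run):
    work_done.append("redact")
    return run

def sample(run):
    work_done.append("sample")
    return run if run["keep"] else None  # None drops the run

def run_pipeline(run, processors):
    for processor in processors:
        run = processor(run)
        if run is None:
            return None
    return run

SAFETY = [enrich, redact, sample]      # dropped runs were still redacted
EFFICIENCY = [sample, redact, enrich]  # dropped runs skip redact/enrich

run_pipeline({"keep": False}, SAFETY)
print(work_done)  # ['enrich', 'redact', 'sample'] — full work before the drop

work_done.clear()
run_pipeline({"keep": False}, EFFICIENCY)
print(work_done)  # ['sample'] — dropped immediately, no further work
```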
### Max Step Depth
Limit trace hierarchy depth:
```python
tracecraft.init(
    max_step_depth=100, # Maximum nesting level
)
```

### Custom Exporters
Create and use custom exporters:
```python
from tracecraft.exporters import BaseExporter, ConsoleExporter, JSONLExporter

# Use multiple exporters alongside built-in ones.
# MyCustomExporter stands in for your own BaseExporter subclass.
tracecraft.init(
    exporters=[
        ConsoleExporter(),
        JSONLExporter(filepath="traces.jsonl"),
        MyCustomExporter(),
    ],
)
```

### Custom Processors
Add custom processors:
```python
from tracecraft import TraceCraftRuntime, TraceCraftConfig
from tracecraft.processors.base import BaseProcessor
from tracecraft.core.models import AgentRun

class MyCustomProcessor(BaseProcessor):
    def process(self, run: AgentRun) -> AgentRun | None:
        run.metadata["custom_field"] = "value"
        return run

config = TraceCraftConfig(...)
runtime = TraceCraftRuntime(config=config)
runtime.add_processor(MyCustomProcessor())
```

## Cloud Platform Configurations
### AWS AgentCore
```python
from tracecraft.core.config import AWSAgentCoreConfig

tracecraft.init(
    aws_agentcore=AWSAgentCoreConfig(
        enabled=True,
        use_xray_propagation=True,
        session_id="conversation-123",
    )
)
```

Environment variables:

```bash
export TRACECRAFT_AWS_AGENTCORE_ENABLED=true
export TRACECRAFT_AWS_XRAY_PROPAGATION=true
export TRACECRAFT_AWS_SESSION_ID=conversation-123
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 # ADOT collector
```

### Azure AI Foundry
```python
import os

from tracecraft.core.config import AzureFoundryConfig

tracecraft.init(
    azure_foundry=AzureFoundryConfig(
        enabled=True,
        connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
        enable_content_recording=True,
        agent_name="customer-support",
        agent_id="agent-v1",
    )
)
```

Environment variables:

```bash
export TRACECRAFT_AZURE_FOUNDRY_ENABLED=true
export APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=...
```

### GCP Vertex Agent
```python
from tracecraft.core.config import GCPVertexAgentConfig

tracecraft.init(
    gcp_vertex_agent=GCPVertexAgentConfig(
        enabled=True,
        project_id="my-project",
        session_id="session-123",
        agent_name="support-agent",
        enable_content_recording=True,
    )
)
```

## Remote Storage Backends (TUI Read-Only)
The TUI can pull traces from cloud observability platforms without copying data locally. These backends are read-only — they never write to the platform.
### StorageConfig Type Values
| `type` | Description | Required Extra |
|---|---|---|
| `jsonl` | JSONL file (default) | built-in |
| `sqlite` | SQLite database | built-in |
| `mlflow` | MLflow tracking server | `tracecraft[mlflow]` |
| `none` | No local storage | built-in |
| `xray` | AWS X-Ray (read-only) | `tracecraft[storage-xray]` |
| `cloudtrace` | GCP Cloud Trace (read-only) | `tracecraft[storage-cloudtrace]` |
| `azuremonitor` | Azure Monitor (read-only) | `tracecraft[storage-azuremonitor]` |
| `datadog` | DataDog APM (read-only) | `tracecraft[storage-datadog]` |
### X-Ray Config
```yaml
default:
  storage:
    type: xray
    xray_region: us-east-1
    xray_service_name: my-bedrock-agent # optional, None = all services
    xray_lookback_hours: 1
    xray_cache_ttl_seconds: 60
    # Auth: AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_PROFILE / instance profile
```

### Cloud Trace Config
```yaml
default:
  storage:
    type: cloudtrace
    cloudtrace_project_id: my-gcp-project # or set GOOGLE_CLOUD_PROJECT
    cloudtrace_service_name: my-agent # optional
    cloudtrace_lookback_hours: 1
    cloudtrace_cache_ttl_seconds: 60
    # Auth: GOOGLE_APPLICATION_CREDENTIALS / gcloud ADC / Workload Identity
```

### Azure Monitor Config
```yaml
default:
  storage:
    type: azuremonitor
    # Never hardcode workspace_id — use AZURE_MONITOR_WORKSPACE_ID env var
    azuremonitor_workspace_id: null
    azuremonitor_service_name: my-agent # optional (cloud_RoleName)
    azuremonitor_lookback_hours: 1
    azuremonitor_cache_ttl_seconds: 60
    # Auth: DefaultAzureCredential (managed identity → az login → env vars)
```

### DataDog Config
```yaml
default:
  storage:
    type: datadog
    datadog_site: us1 # us1, us3, us5, eu1, ap1
    datadog_service: my-service # optional
    datadog_lookback_hours: 1
    datadog_cache_ttl_seconds: 60
    # Secrets: DD_API_KEY and DD_APP_KEY must be set as env vars — never in config
```

For full details on authentication, CLI usage, and troubleshooting, see the Remote Trace Sources guide.
## Next Steps
- Terminal UI Guide — Explore traces in the TUI
- Remote Trace Sources — Pull from X-Ray, Cloud Trace, Azure Monitor, DataDog
- Auto-Instrumentation — Zero-code LLM tracing
- Exporters — Export to any backend
- Processors — Configure data processing
- Deployment — Production deployment patterns