
Deployment

TraceCraft is designed to run everywhere, from a developer laptop to high-throughput production clusters on managed cloud platforms. This section covers how to configure, deploy, and operate TraceCraft in each environment.

Deployment Options

Choosing a Deployment Model

Scenario                        Recommended Guide
First production deployment     Production Configuration
Running on AWS Bedrock          AWS AgentCore
Running on Azure AI Foundry     Azure AI Foundry
Running on GCP Vertex AI        GCP Vertex Agent
Kubernetes cluster              Kubernetes
Very high trace volume          High Throughput

Quick Start: Production-Ready Config

The following configuration is a safe starting point for any production deployment. Adjust sampling_rate and exporter settings to match your environment.

import os
import tracecraft
 
tracecraft.init(
    service_name=os.getenv("SERVICE_NAME", "my-agent"),
    environment="production",
    console=False,
    otlp_endpoint=os.getenv("OTLP_ENDPOINT"),
    sampling_rate=float(os.getenv("TRACECRAFT_SAMPLING_RATE", "0.1")),
    always_keep_errors=True,
    enable_pii_redaction=True,
)

Set the corresponding environment variables in your deployment environment, for example via a .env file or your orchestrator's manifest:

SERVICE_NAME=my-agent
OTLP_ENDPOINT=https://otlp.example.com:4317
TRACECRAFT_SAMPLING_RATE=0.1
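If you ship the same baseline config across several services, the settings above can be factored into a small helper that reads the environment once and returns the keyword arguments for tracecraft.init(). This is a sketch, not part of the TraceCraft API: build_init_kwargs is a hypothetical name, and the parameter names simply mirror the quick-start config shown above.

```python
import os

def build_init_kwargs(env=None):
    """Build tracecraft.init() kwargs from environment variables.

    Hypothetical helper for illustration: falls back to the safe
    production defaults from the quick-start config when a variable
    is unset. Pass a dict for testing; defaults to os.environ.
    """
    env = os.environ if env is None else env
    return {
        "service_name": env.get("SERVICE_NAME", "my-agent"),
        "environment": "production",
        "console": False,
        "otlp_endpoint": env.get("OTLP_ENDPOINT"),
        "sampling_rate": float(env.get("TRACECRAFT_SAMPLING_RATE", "0.1")),
        "always_keep_errors": True,
        "enable_pii_redaction": True,
    }

# Usage (assuming tracecraft is installed and configured as above):
# tracecraft.init(**build_init_kwargs())
```

Keeping the environment parsing separate from the init call makes the defaults easy to unit-test without initializing the tracer.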

Next Steps

Start with Production Configuration for the foundational settings, then move to the platform-specific guide that matches your infrastructure.