# Deployment
TraceCraft is designed to run everywhere, from a developer laptop to high-throughput production clusters on managed cloud platforms. This section covers how to configure, deploy, and operate TraceCraft in each environment.
## Deployment Options

- **Production Configuration:** Baseline settings for production: sampling, PII redaction, async export, and health checks.
- **AWS AgentCore:** Deploy TraceCraft alongside AWS Bedrock AgentCore. Covers IAM roles and CloudWatch integration.
- **Azure AI Foundry:** Integrate with Azure AI Foundry agents. Includes managed identity setup and Azure Monitor integration.
- **GCP Vertex Agent:** Run TraceCraft with Vertex AI Agent Builder. Covers Workload Identity and Cloud Trace export.
- **Kubernetes:** Complete Kubernetes deployment guide: Helm values, ConfigMaps, Secrets, and sidecar patterns.
- **High Throughput:** Optimize for millions of traces per day. Covers async batching and aggressive sampling.
## Choosing a Deployment Model
| Scenario | Recommended Guide |
|---|---|
| First production deployment | Production Configuration |
| Running on AWS Bedrock | AWS AgentCore |
| Running on Azure AI Foundry | Azure AI Foundry |
| Running on GCP Vertex AI | GCP Vertex Agent |
| Kubernetes cluster | Kubernetes |
| Very high trace volume | High Throughput |
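At very high trace volume, the key idea from the High Throughput guide is to keep export off the request path. The sketch below is a generic illustration of async batching, not TraceCraft's actual exporter: spans go into an in-memory queue and a background worker flushes them in fixed-size batches.

```python
import queue
import threading

class BatchExporter:
    """Illustrative async batch exporter: buffers spans, flushes in batches."""

    def __init__(self, flush_size=100):
        self.flush_size = flush_size
        self._q = queue.Queue()
        self.batches = []  # stands in for a network export call
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def export(self, span):
        # Non-blocking from the application's point of view.
        self._q.put(span)

    def _run(self):
        batch = []
        while True:
            span = self._q.get()
            if span is None:  # shutdown sentinel: flush the partial batch
                if batch:
                    self.batches.append(batch)
                break
            batch.append(span)
            if len(batch) >= self.flush_size:
                self.batches.append(batch)
                batch = []

    def shutdown(self):
        self._q.put(None)
        self._worker.join()

exporter = BatchExporter(flush_size=2)
for i in range(5):
    exporter.export({"span_id": i})
exporter.shutdown()
print([len(b) for b in exporter.batches])  # [2, 2, 1]
```

A real exporter would also bound the queue and drop or sample on overflow, so that a slow backend cannot exhaust application memory.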
## Quick Start: Production-Ready Config

The following configuration is a safe starting point for any production deployment. Adjust `sampling_rate` and the exporter settings to match your environment.
```python
import os

import tracecraft

tracecraft.init(
    service_name=os.getenv("SERVICE_NAME", "my-agent"),
    environment="production",
    console=False,  # disable the console exporter in production
    otlp_endpoint=os.getenv("OTLP_ENDPOINT"),
    sampling_rate=float(os.getenv("TRACECRAFT_SAMPLING_RATE", "0.1")),
    always_keep_errors=True,  # error traces are kept even when sampled out
    enable_pii_redaction=True,
)
```

Set the corresponding environment variables in your deployment:

```
SERVICE_NAME=my-agent
OTLP_ENDPOINT=https://otlp.example.com:4317
TRACECRAFT_SAMPLING_RATE=0.1
```
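As a rough sizing check, head sampling at `sampling_rate=0.1` exports about one trace in ten; error traces kept by `always_keep_errors` add to this. The daily trace volume below is an assumption for illustration only:

```python
# Back-of-envelope export volume at a given head-sampling rate.
# The daily trace count is illustrative, not a TraceCraft default.
daily_traces = 2_000_000
sampling_rate = 0.1  # same value as TRACECRAFT_SAMPLING_RATE

exported = int(daily_traces * sampling_rate)
print(f"~{exported:,} traces exported per day")  # ~200,000 traces exported per day
```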
## Next Steps

Start with Production Configuration for the foundational settings, then move to the platform-specific guide that matches your infrastructure.