Instrumentation Examples
Examples of using span metrics to debug performance issues and monitor application behavior across frontend and backend services.
These examples assume you have already set up tracing in your application.
This guide provides practical examples of using span metrics to solve common monitoring and debugging challenges across your entire application stack. Each example demonstrates how to instrument both frontend and backend components, showing how they work together within a distributed trace to provide end-to-end visibility.
Challenge: Understanding bottlenecks and failures in multi-step file processing operations across client and server components.
Solution: Track the entire file processing pipeline with detailed metrics at each stage, from client-side upload preparation through server-side processing.
Frontend Instrumentation:
// Client-side file upload handling
Sentry.startSpan(
  {
    name: "Client File Upload",
    op: "file.upload.client",
    attributes: {
      // Client-side file preparation
      "file.size_bytes": 15728640, // 15MB
      "file.type": "image/jpeg",
      "file.name": "user-profile.jpg",
      // Client processing metrics
      "client.chunk_size": 1048576, // 1MB chunks
      "client.chunks_total": 15,
      "client.compression_applied": true,
      // Upload progress tracking
      "upload.chunks_sent": 15,
      "upload.retry_count": 0,
      "upload.total_time_ms": 3500,
    },
  },
  async () => {
    // Client-side upload implementation
  },
);
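In practice, the chunk-related attributes above are derived from the file rather than hard-coded. A minimal sketch of that derivation (the `planChunks` helper and the 1 MB default chunk size are illustrative assumptions, not part of the Sentry API):

```javascript
// Derive chunk attributes from the file size before starting the span.
// planChunks is a hypothetical helper; the default chunk size is an assumption.
function planChunks(fileSizeBytes, chunkSizeBytes = 1048576) {
  return {
    "client.chunk_size": chunkSizeBytes,
    "client.chunks_total": Math.ceil(fileSizeBytes / chunkSizeBytes),
  };
}

// For the 15 MB file above: 15728640 / 1048576 = 15 chunks.
const chunkAttributes = planChunks(15728640);
```

The returned object can be spread directly into the span's `attributes`, keeping the recorded metrics consistent with the upload logic that actually splits the file.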
Backend Instrumentation:
// Server-side processing
Sentry.startSpan(
  {
    name: "Server File Processing",
    op: "file.process.server",
    attributes: {
      // Server processing steps
      "processing.steps_completed": [
        "virus_scan",
        "resize",
        "compress",
        "metadata",
      ],
      "processing.output_size_bytes": 524288, // 512KB
      "processing.compression_ratio": 0.033,
      // Storage operations
      "storage.provider": "s3",
      "storage.region": "us-west-2",
      "storage.upload_time_ms": 850,
      // CDN configuration
      "cdn.provider": "cloudfront",
      "cdn.propagation_ms": 1500,
    },
  },
  async () => {
    // Server-side processing implementation
  },
);
How the Trace Works Together: The frontend span initiates the trace and handles the file upload process. It propagates the trace context to the backend through the upload request headers. The backend span continues the trace, processing the file and storing it. This creates a complete picture of the file's journey from client to CDN, allowing you to:
- Identify bottlenecks at any stage (client prep, upload, server processing, CDN propagation)
- Track end-to-end processing times and success rates
- Monitor resource usage across the stack
- Correlate client-side upload issues with server-side processing errors
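Sentry's browser SDK attaches the trace context headers to instrumented fetch/XHR requests automatically, so you normally don't write this yourself. The sketch below only illustrates the shape of that propagation: the `sentry-trace` and `baggage` header names are the real ones, but `withTraceHeaders` and the sample values are stand-ins:

```javascript
// Illustration of how trace context rides on the upload request.
// The SDK injects these headers for you in instrumented requests;
// this helper and the example values are hypothetical.
function withTraceHeaders(headers, traceData) {
  return {
    ...headers,
    "sentry-trace": traceData.sentryTrace,
    baggage: traceData.baggage,
  };
}

const uploadHeaders = withTraceHeaders(
  { "Content-Type": "application/octet-stream" },
  { sentryTrace: "abc123-def456-1", baggage: "sentry-environment=prod" },
);
```

When the server reads these headers and continues the trace, the backend span above becomes a child of the same trace as the client upload span.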
Challenge: Managing cost (token usage) and performance of LLM integrations across frontend and backend components.
Solution: Track the entire LLM interaction flow, from user input to response rendering.
Frontend Instrumentation:
// Client-side LLM interaction handling
Sentry.startSpan(
  {
    name: "LLM Client Interaction",
    op: "ai.client",
    attributes: {
      // User interaction metrics
      "input.char_count": 280,
      "input.language": "en",
      "input.type": "question",
      // UI performance
      "ui.time_to_first_token_ms": 245,
      "ui.tokens_per_second": 15,
      "ui.total_render_time_ms": 3250,
      // Stream handling
      "stream.buffer_size": 10,
      "stream.chunks_received": 42,
      "stream.rendering_mode": "markdown",
    },
  },
  async () => {
    // Client-side LLM handling
  },
);
Backend Instrumentation:
// Server-side LLM processing
Sentry.startSpan(
  {
    name: "LLM API Processing",
    op: "ai.server",
    attributes: {
      // Model configuration
      "llm.model": "gpt-4",
      "llm.temperature": 0.7,
      "llm.max_tokens": 2000,
      // Token usage metrics
      "llm.prompt_tokens": 425,
      "llm.completion_tokens": 632,
      "llm.total_tokens": 1057,
      // Performance tracking
      "llm.api_latency_ms": 2800,
      "llm.queue_time_ms": 150,
      // Cost tracking
      "llm.cost_usd": 0.076,
      "llm.rate_limit_remaining": 95,
    },
  },
  async () => {
    // Server-side LLM processing
  },
);
How the Trace Works Together: The frontend span captures the user interaction and UI rendering performance, while the backend span tracks the actual LLM API interaction. The distributed trace shows the complete flow from user input to rendered response, enabling you to:
- Analyze end-to-end response times and user experience
- Track costs and token usage patterns
- Optimize streaming performance and UI rendering
- Monitor rate limits and queue times
- Correlate user inputs with model performance
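An attribute like `llm.cost_usd` is typically derived from the token counts and per-token pricing rather than recorded by the API itself. A minimal sketch, using made-up rates (real pricing varies by model and changes over time, so treat the numbers as placeholders):

```javascript
// Estimate request cost from token usage; the rates are illustrative only,
// not a real price list.
function estimateCostUsd(promptTokens, completionTokens, rates) {
  return (
    promptTokens * rates.promptPerToken +
    completionTokens * rates.completionPerToken
  );
}

const cost = estimateCostUsd(425, 632, {
  promptPerToken: 0.00003, // assumed rate
  completionPerToken: 0.00006, // assumed rate
});
// cost can then be recorded as the "llm.cost_usd" span attribute.
```

Deriving the cost in one place keeps the attribute consistent across services and makes it easy to update when pricing changes.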
Challenge: Understanding the complete purchase flow and identifying revenue-impacting issues across the entire stack.
Solution: Track the full checkout process from cart interaction to order fulfillment.
Frontend Instrumentation:
// Client-side checkout process
Sentry.startSpan(
  {
    name: "Checkout UI Flow",
    op: "commerce.checkout.client",
    attributes: {
      // Cart interaction metrics
      "cart.items_added": 3,
      "cart.items_removed": 0,
      "cart.update_count": 2,
      // User interaction tracking
      "ui.form_completion_time_ms": 45000,
      "ui.payment_method_changes": 1,
      "ui.address_validation_retries": 0,
      // Client performance
      "client.page_load_time_ms": 850,
      "client.payment_widget_load_ms": 650,
      "client.total_interaction_time_ms": 120000,
    },
  },
  async () => {
    // Client-side checkout implementation
  },
);
Backend Instrumentation:
// Server-side order processing
Sentry.startSpan(
  {
    name: "Order Processing",
    op: "commerce.order.server",
    attributes: {
      // Order details
      "order.id": "ord_123456789",
      "order.total_amount": 159.99,
      "order.currency": "USD",
      "order.items": ["SKU123", "SKU456", "SKU789"],
      // Payment processing
      "payment.provider": "stripe",
      "payment.method": "credit_card",
      "payment.processing_time_ms": 1200,
      // Inventory checks
      "inventory.check_time_ms": 150,
      "inventory.all_available": true,
      // Fulfillment
      "fulfillment.warehouse": "WEST-01",
      "fulfillment.shipping_method": "express",
      "fulfillment.estimated_delivery": "2024-03-20",
    },
  },
  async () => {
    // Server-side order processing
  },
);
How the Trace Works Together: The frontend span tracks the user's checkout experience, while the backend span handles order processing and fulfillment. The distributed trace provides visibility into the entire purchase flow, allowing you to:
- Analyze checkout funnel performance and drop-off points
- Track payment processing success rates and timing
- Monitor inventory availability impact on conversions
- Measure end-to-end order completion times
- Identify friction points in the user experience
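Monetary attributes like `order.total_amount` stay accurate if you sum line items in integer cents and convert to dollars once at the end; repeatedly adding floating-point dollar amounts can drift. A sketch under that assumption (the helper and item prices are made up for illustration):

```javascript
// Sum line items in integer cents, then expose dollars on the span.
// orderTotal is a hypothetical helper, not part of any SDK.
function orderTotal(itemPricesCents) {
  const cents = itemPricesCents.reduce((sum, c) => sum + c, 0);
  return { cents, dollars: cents / 100 };
}

const total = orderTotal([4999, 5999, 5001]); // three hypothetical SKUs
// total.dollars (159.99) would become the "order.total_amount" attribute.
```

The single division at the end introduces at most one rounding step, so the recorded attribute matches what the payment provider charged.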
Challenge: Understanding performance and reliability of distributed job processing systems, from job creation through completion.
Solution: Comprehensive tracking of job lifecycle across scheduling, queueing, and processing stages.
Frontend Instrumentation:
// Client-side job submission and monitoring
Sentry.startSpan(
  {
    name: "Job Submission Flow",
    op: "job.client",
    attributes: {
      // Job configuration
      "job.type": "video_transcoding",
      "job.priority": "high",
      "job.estimated_duration_ms": 300000,
      // Input metrics
      "input.size_bytes": 52428800, // 50MB
      "input.format": "mp4",
      "input.segments": 5,
      // Client-side scheduling
      "schedule.requested_start": "2024-03-15T10:00:00Z",
      "schedule.deadline": "2024-03-15T11:00:00Z",
      // Progress monitoring
      "monitor.polling_interval_ms": 5000,
      "monitor.status_updates_received": 12,
      "monitor.last_progress_percent": 45,
    },
  },
  async () => {
    // Job submission and progress tracking implementation
  },
);
Backend Instrumentation:
// Server-side job processing
Sentry.startSpan(
  {
    name: "Job Processing Pipeline",
    op: "job.server",
    attributes: {
      // Queue metrics
      "queue.name": "video-processing",
      "queue.provider": "redis",
      "queue.length_at_enqueue": 23,
      "queue.wait_time_ms": 45000,
      // Worker metrics
      "worker.id": "worker-pod-123",
      "worker.current_load": 0.75,
      "worker.memory_usage_mb": 1024,
      // Processing stages
      "processing.stages_completed": [
        "download",
        "transcode",
        "thumbnail",
      ],
      "processing.stage_durations_ms": {
        download: 12000,
        transcode: 180000,
        thumbnail: 5000,
      },
      // Resource utilization
      "resource.cpu_percent": 85,
      "resource.gpu_utilization": 0.92,
      "resource.memory_peak_mb": 2048,
      // Job outcome
      "outcome.status": "completed",
      "outcome.retry_count": 0,
      "outcome.output_size_bytes": 31457280, // 30MB
    },
  },
  async () => {
    // Job processing implementation
  },
);
How the Trace Works Together: The frontend span tracks job submission and monitoring, while the backend span captures queue management and processing details. The distributed trace provides visibility into the entire job lifecycle, enabling you to:
- Monitor end-to-end job processing times and success rates
- Track queue health and worker resource utilization
- Identify bottlenecks in specific processing stages
- Analyze job scheduling efficiency and queue wait times
- Optimize resource allocation based on job characteristics
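The per-stage durations recorded above can be aggregated to spot the dominant stage before deciding where to optimize. A minimal sketch working on the same shape as the `processing.stage_durations_ms` attribute (the `summarizeStages` helper is hypothetical):

```javascript
// Compute total processing time and the slowest stage from the
// stage-duration map recorded on the span.
function summarizeStages(durationsMs) {
  const entries = Object.entries(durationsMs);
  const totalMs = entries.reduce((sum, [, ms]) => sum + ms, 0);
  // Keep the entry with the largest duration, then take its name.
  const [slowestStage] = entries.reduce((max, e) => (e[1] > max[1] ? e : max));
  return { totalMs, slowestStage };
}

const summary = summarizeStages({
  download: 12000,
  transcode: 180000,
  thumbnail: 5000,
});
// Here transcode dominates: 180000 of 197000 ms total.
```

Running this over many jobs makes it easy to confirm, for example, that transcoding (not queueing or download) is where extra worker capacity would pay off.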
For more information about implementing these examples effectively, see our Span Metrics guide which includes detailed best practices and implementation guidelines.