The initial problem was maddeningly familiar. A user reports a vague error on the frontend—a form submission fails intermittently. Our Angular application’s console logs show a generic HTTP 500 error. On the backend, our Go-Gin service logs are a torrential flood of concurrent requests. Sifting through them to find the specific API call that failed for that specific user at that specific time was an exercise in futility. We were flying blind, correlating frontend and backend events using nothing but timestamps and guesswork. This approach doesn’t scale past a toy project.
Our first thought was a simple custom X-Request-ID header. The frontend would generate a UUID, send it with the request, and the Go backend would log it. This worked, for a while. But it was a fragile, bespoke solution. What happens when a second service is called? Do we manually propagate the header? What about standards for naming and format? It was clear we were reinventing a wheel, and poorly. The real goal wasn’t just to link two systems, but to build a foundation for genuine observability. We needed to adopt a standardized context propagation model. This led us to OpenTelemetry and the W3C Trace Context specification.
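For context, here is roughly what that first, hand-rolled attempt looked like on the Go side. A minimal sketch using Gin and zerolog as in the rest of this post; the github.com/google/uuid dependency is purely illustrative:

package main

import (
    "github.com/gin-gonic/gin"
    "github.com/google/uuid"
    "github.com/rs/zerolog/log"
)

// requestIDMiddleware is the hand-rolled version of correlation: accept an
// X-Request-ID from the client (or mint one), echo it back, and log it.
// Every new service boundary needs the same copy-pasted logic.
func requestIDMiddleware() gin.HandlerFunc {
    return func(c *gin.Context) {
        reqID := c.GetHeader("X-Request-ID")
        if reqID == "" {
            reqID = uuid.NewString()
        }
        c.Header("X-Request-ID", reqID)
        log.Info().
            Str("request_id", reqID).
            Str("path", c.Request.URL.Path).
            Msg("request received")
        c.Next()
    }
}

func main() {
    r := gin.New()
    r.Use(requestIDMiddleware())
    r.POST("/api/submit", func(c *gin.Context) {
        c.JSON(200, gin.H{"status": "ok"})
    })
    if err := r.Run(":8080"); err != nil {
        log.Fatal().Err(err).Msg("server failed")
    }
}

It works for one hop, but nothing enforces the header name, its format, or its propagation to the next service down the line.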
The plan became more concrete:
- Instrument the Angular frontend to initiate a trace for user interactions and automatically inject W3C traceparent headers into all outgoing API requests.
- Instrument the Go-Gin backend to receive this traceparent header, continue the trace, and embed the trace_id and span_id into every single structured log message generated during that request’s lifecycle.
- Deploy Fluentd as a log aggregator to collect these structured logs, parse them, and prepare them for a downstream system like Elasticsearch or Loki, with the trace context preserved as first-class metadata.
This wasn’t about adding a few logs; it was about architecting a traceable data flow across the entire stack.
Part 1: Frontend Instrumentation with Angular and OpenTelemetry
The first step is to make the client browser the origin of our trace. In a real-world project, you cannot rely solely on the backend to initiate traces, as you lose all visibility into the user’s experience before the request even hits the server.
We need several OpenTelemetry packages for the browser environment. A common mistake is to grab the Node.js packages, but the web requires its own set of instrumentations.
npm install @opentelemetry/api \
@opentelemetry/sdk-trace-web \
@opentelemetry/context-zone \
@opentelemetry/instrumentation-xml-http-request \
@opentelemetry/instrumentation-fetch \
@opentelemetry/exporter-trace-otlp-http \
@opentelemetry/instrumentation \
@opentelemetry/core
The core of the setup is a dedicated TracingService in Angular. This service is responsible for initializing the OpenTelemetry provider, which orchestrates instrumentations, processors, and exporters. It must be initialized once, as early as possible in the application’s lifecycle, typically in app.module.ts.
Here is the complete tracing.service.ts:
// src/app/tracing.service.ts
import { Injectable } from '@angular/core';
import { WebTracerProvider, BatchSpanProcessor } from '@opentelemetry/sdk-trace-web';
import { W3CTraceContextPropagator } from '@opentelemetry/core';
import { ZoneContextManager } from '@opentelemetry/context-zone';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { XMLHttpRequestInstrumentation } from '@opentelemetry/instrumentation-xml-http-request';
import { FetchInstrumentation } from '@opentelemetry/instrumentation-fetch';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { environment } from '../environments/environment';
@Injectable({
providedIn: 'root',
})
export class TracingService {
private isInitialized = false;
constructor() {}
public initialize(): void {
if (this.isInitialized || !environment.production) {
  // Guard against double initialization (e.g. after a hot reload) and skip
  // tracing entirely outside production builds. Adjust this check if you
  // also want traces from local development.
  return;
}
const provider = new WebTracerProvider();
// The OTLP exporter sends trace data to a collector.
// In this example, we're focusing on context propagation, so the endpoint
// might not be a primary concern, but it's required for a complete setup.
// Point this to your OpenTelemetry Collector endpoint.
const exporter = new OTLPTraceExporter({
url: 'http://localhost:4318/v1/traces', // Replace with your OTel Collector URL
});
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
// To ensure context propagation works correctly across async operations in Angular
// we need to use the ZoneContextManager.
provider.register({
contextManager: new ZoneContextManager(),
propagator: new W3CTraceContextPropagator(),
});
// Registering instrumentations automatically patches browser APIs
// to create spans and propagate context.
registerInstrumentations({
instrumentations: [
new XMLHttpRequestInstrumentation({
  // By default the instrumentation does NOT attach the traceparent header to
  // cross-origin requests. propagateTraceHeaderCorsUrls whitelists the URLs
  // that should receive it, so our API calls carry the trace context.
  propagateTraceHeaderCorsUrls: [
    /.+/g, // Overly permissive for the demo; tighten to your API origins in production.
  ],
}),
new FetchInstrumentation({
  // The same CORS consideration applies to fetch-based requests.
  propagateTraceHeaderCorsUrls: [/.+/g],
}),
],
});
this.isInitialized = true;
console.log('OpenTelemetry Tracing Initialized for Web');
}
}
This service is then initialized in the root AppComponent or AppModule constructor.
// src/app/app.component.ts
import { Component, OnInit } from '@angular/core';
import { TracingService } from './tracing.service';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css'],
})
export class AppComponent implements OnInit {
title = 'client-app';
constructor(private tracingService: TracingService) {}
ngOnInit() {
this.tracingService.initialize();
}
}
With registerInstrumentations, OpenTelemetry automatically patches XMLHttpRequest and fetch. When Angular’s HttpClient makes a request, the instrumentation intercepts it, creates a span, and injects the traceparent header. This header contains the trace ID, parent span ID, and sampling flags, following the W3C standard. A typical traceparent header looks like 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01.
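To make that format concrete, here is a small standalone Go sketch that splits a traceparent value into its four W3C-defined fields. The propagators on both ends do this for us in the real stack; the snippet exists only to show what the header carries:

package main

import (
    "errors"
    "fmt"
    "strings"
)

// splitTraceparent breaks a W3C traceparent value into its four dash-separated
// fields: version, trace-id (16 bytes hex), parent-id (8 bytes hex), and flags.
func splitTraceparent(h string) (version, traceID, parentID, flags string, err error) {
    parts := strings.Split(h, "-")
    if len(parts) != 4 || len(parts[1]) != 32 || len(parts[2]) != 16 {
        return "", "", "", "", errors.New("malformed traceparent header")
    }
    return parts[0], parts[1], parts[2], parts[3], nil
}

func main() {
    v, tid, pid, fl, err := splitTraceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
    if err != nil {
        panic(err)
    }
    // A flags value of 01 means the upstream decided to sample this trace.
    fmt.Printf("version=%s trace_id=%s parent_span_id=%s flags=%s\n", v, tid, pid, fl)
}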
Part 2: Backend Instrumentation with Go-Gin
The next step is to receive and process this context on the server. Our backend is a Go application using the Gin framework. The goal is to extract the traceparent header, continue the trace, and, most critically, enrich all our logs with the trace and span IDs. Structured logging is non-negotiable for this to work; we’ll use zerolog for its performance and fluent API.
First, the necessary Go packages:
go get go.opentelemetry.io/otel
go get go.opentelemetry.io/otel/trace
go get go.opentelemetry.io/otel/sdk
go get go.opentelemetry.io/otel/exporters/stdout/stdouttrace
go get go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin
go get go.opentelemetry.io/otel/propagation
go get github.com/gin-gonic/gin
go get github.com/rs/zerolog/log
The core of the backend implementation is the Gin middleware chain, which is responsible for three things:
- Extracting the trace context from incoming HTTP headers.
- Starting a new child span for the server-side processing.
- Injecting a context-aware logger into the Gin context, so that every subsequent handler can log with the trace information automatically.
The stock otelgin middleware covers the first two; a small custom middleware layers the trace-aware logger on top. Here is the complete main.go:
// main.go
package main
import (
"context"
"os"
"github.com/gin-gonic/gin"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
// For demo purposes, we use a stdout exporter. In production, this would be
// an OTLP exporter pointing to a collector.
"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
)
// A custom context key to store our logger.
type loggerKey struct{}
// initTracer initializes the OpenTelemetry tracer provider.
func initTracer() (*sdktrace.TracerProvider, error) {
// This exporter writes traces to stdout. It's useful for debugging
// but should be replaced with an OTLP exporter in production.
exporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
if err != nil {
return nil, err
}
tp := sdktrace.NewTracerProvider(
sdktrace.WithBatcher(exporter),
// In a real project, you'd configure sampling based on traffic.
// AlwaysSample is not for production use.
sdktrace.WithSampler(sdktrace.AlwaysSample()),
)
otel.SetTracerProvider(tp)
otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(propagation.TraceContext{}, propagation.Baggage{}))
return tp, nil
}
// tracingMiddleware creates a gin.HandlerFunc that injects a trace-aware logger.
func tracingMiddleware() gin.HandlerFunc {
    return func(c *gin.Context) {
        // The otelgin middleware has already run at this point (see the
        // middleware order in main), so a server-side span is available in
        // the request context.
        ctx := c.Request.Context()
        span := trace.SpanFromContext(ctx)
        if span.SpanContext().IsValid() {
            // Create a child logger with trace_id and span_id fields.
            // This is the critical step for correlating logs.
            logger := log.With().
                Str("trace_id", span.SpanContext().TraceID().String()).
                Str("span_id", span.SpanContext().SpanID().String()).
                Logger()
            // Store the logger in the request context so handlers can retrieve it.
            c.Request = c.Request.WithContext(context.WithValue(ctx, loggerKey{}, &logger))
        }
        c.Next()
    }
}
// GetLogger retrieves the trace-aware logger from the context.
// If no logger is found, it returns the global logger.
func GetLogger(ctx context.Context) *zerolog.Logger {
if logger, ok := ctx.Value(loggerKey{}).(*zerolog.Logger); ok {
return logger
}
return &log.Logger
}
func main() {
// Configure zerolog for structured JSON output.
zerolog.TimeFieldFormat = zerolog.TimeFormatUnix
log.Logger = zerolog.New(os.Stdout).With().Timestamp().Logger()
tp, err := initTracer()
if err != nil {
log.Fatal().Err(err).Msg("Failed to initialize tracer")
}
defer func() {
if err := tp.Shutdown(context.Background()); err != nil {
log.Error().Err(err).Msg("Error shutting down tracer provider")
}
}()
r := gin.New()
// The pitfall here is middleware order. otelgin must run first so that a span
// already exists in the request context by the time tracingMiddleware reads it.
r.Use(otelgin.Middleware("my-gin-server"))
r.Use(tracingMiddleware())
r.Use(gin.Recovery())
// Example API endpoint
r.POST("/api/submit", func(c *gin.Context) {
// Retrieve our context-aware logger.
logger := GetLogger(c.Request.Context())
logger.Info().Msg("Received submission request")
var jsonData map[string]interface{}
if err := c.ShouldBindJSON(&jsonData); err != nil {
logger.Error().Err(err).Msg("Failed to bind JSON")
c.JSON(400, gin.H{"error": "Invalid JSON"})
return
}
logger.Info().Interface("data", jsonData).Msg("Processing data")
// Simulate some work
// Create a child span to trace this specific operation.
_, childSpan := otel.Tracer("submitHandler").Start(c.Request.Context(), "process-data-span")
defer childSpan.End()
// In a real application, you would pass the context to other functions.
// For example: processData(c.Request.Context(), jsonData)
c.JSON(200, gin.H{"status": "ok"})
})
log.Info().Msg("Starting server on port 8080")
if err := r.Run(":8080"); err != nil {
log.Fatal().Err(err).Msg("Failed to start server")
}
}
When a request comes in from our Angular app with the traceparent header, the otelgin middleware automatically extracts it and creates a server-side span that is a child of the frontend span. Our custom tracingMiddleware then retrieves this span’s context (trace_id and span_id) and creates a new zerolog instance with these fields baked in. Any log message created using this logger will now automatically contain the trace context.
A log line from this service will look like this on stdout:
{"level":"info","trace_id":"4bf92f3577b34da6a3ce929d0e0e4736","span_id":"8a3a5e3c8c0a4e3c","time":1667884800,"message":"Received submission request"}
This is exactly what we need: self-contained, machine-readable logs enriched with correlation IDs.
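To keep that correlation intact deeper in the call stack, downstream functions should accept the request context and derive both their logger and any child spans from it. Here is a sketch of the kind of helper the handler comment alludes to, assuming it lives in the same package as main.go above; processData and saveToDatabase are hypothetical names:

// processData is a hypothetical downstream function. Because it receives the
// request context, both its logger and its child span stay attached to the
// same trace as the handler that called it.
func processData(ctx context.Context, data map[string]interface{}) error {
    // Carries the same trace_id / span_id fields as the handler's logs.
    logger := GetLogger(ctx)

    // Start a child span for this unit of work and hand its context to
    // anything called from here, so the trace keeps extending downward.
    childCtx, span := otel.Tracer("submitHandler").Start(ctx, "process-data")
    defer span.End()

    logger.Info().Int("field_count", len(data)).Msg("processing submission")

    _ = childCtx // e.g. saveToDatabase(childCtx, data) in a real service
    return nil
}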
Part 3: Aggregation with Fluentd
The final piece of the puzzle is collecting these logs. While the Go application logs to stdout, in a containerized environment (like Docker or Kubernetes) this output is captured by the container runtime. We can have Fluentd collect these container logs (here via Docker’s fluentd logging driver), parse them, and forward them.
For a local development setup, we can use Docker Compose.
docker-compose.yml:
version: '3.8'
services:
go-api:
build:
context: ./backend # Assuming your Go app is in a 'backend' directory
dockerfile: Dockerfile
ports:
- "8080:8080"
logging:
# Use the fluentd log driver
driver: "fluentd"
options:
fluentd-address: localhost:24224
tag: "docker.go-api"
fluentd:
image: fluent/fluentd:v1.16-1
ports:
- "24224:24224"
- "24224:24224/udp"
volumes:
- ./fluentd/conf:/fluentd/etc
Now, the fluent.conf file is where the magic happens. Fluentd receives each event from the Docker logging driver as an already-structured record: our application’s output sits as a string in the log field, alongside container metadata such as container_id and container_name. Our job is to detect when that log string is itself JSON, parse it, and promote its fields into the record.
fluentd/conf/fluent.conf:
# This input plugin listens for logs from the Docker logging driver.
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
# This match block processes logs with the tag 'docker.go-api'.
# Note: rewrite_tag_filter comes from the fluent-plugin-rewrite-tag-filter gem,
# which is not bundled with the stock fluentd image and needs to be installed
# (for example, via a small custom image).
<match docker.go-api>
  @type rewrite_tag_filter
  <rule>
    # Only retag if the 'log' field looks like a JSON object.
    key log
    pattern /^{.*}$/
    tag parsed.go-api.${tag}
  </rule>
  # If the log field is not JSON, the rule above won't match and this one will.
  # We keep these records under a separate tag and dump them for debugging.
  <rule>
    key log
    pattern /.+/
    tag unparsed.${tag}
  </rule>
</match>
# This block handles the logs we've identified as JSON. filter_parser ships with
# Fluentd itself: it parses the string stored under key_name as JSON and merges
# the resulting fields into the record.
<filter parsed.go-api.**>
  @type parser
  # The field containing the JSON string from our Go app.
  key_name log
  # Keep the original fields from the Docker log driver.
  reserve_data true
  <parse>
    @type json
  </parse>
</filter>

# The final destination. For this example, we'll just print to Fluentd's stdout.
# In a production system, this would be an elasticsearch, loki, or other output plugin.
<match parsed.**>
  @type stdout
</match>

<match unparsed.**>
  @type stdout
</match>
The flow inside Fluentd is:
- in_forward receives the log from Docker. The record looks roughly like {"log": "{\"level\":\"info\",...}", "container_id": "...", ...}.
- The rewrite_tag_filter block inspects the log field. If it looks like a JSON object (/^{.*}$/), it retags the record to parsed.go-api... and sends it on; anything else is retagged as unparsed....
- The parser filter on parsed.go-api.** parses the string content of the log field as JSON. This “unpacks” our structured Go log, promoting level, trace_id, span_id, etc., to top-level fields in the Fluentd record.
- Finally, the parsed.** and unparsed.** match blocks print the resulting records to standard out.
The final output from Fluentd for a single log message would be a clean, structured record:
parsed.go-api.docker.go-api: {"container_id":"...", "container_name":"...", "source":"stdout", "level":"info", "trace_id":"4bf92f3577b34da6a3ce929d0e0e4736", "span_id":"8a3a5e3c8c0a4e3c", "time":1667884800, "message":"Processing data", "data":{"foo":"bar"}}
Now we have a log record in our aggregation layer that contains not only the application message but also the full trace context and Docker metadata. When this is sent to a system like Elasticsearch, we can search for all logs with trace_id: "4bf92f3577b34da6a3ce929d0e0e4736" and instantly see the entire lifecycle of that user’s request, from the browser to the server and back.
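As a quick illustration of that payoff, the lookup can be a simple match query on the trace_id field. A sketch using Go’s standard library, assuming Elasticsearch on localhost:9200 and a hypothetical fluentd-* index pattern; adjust both to match your actual output plugin configuration:

package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Find every log record, from any service, that carries this trace_id.
    query := []byte(`{"query":{"match":{"trace_id":"4bf92f3577b34da6a3ce929d0e0e4736"}}}`)

    resp, err := http.Post("http://localhost:9200/fluentd-*/_search", "application/json", bytes.NewReader(query))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body)) // every log line from that user's request, in one place
}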
sequenceDiagram
    participant Angular as Angular Frontend
    participant GoGin as Go-Gin API
    participant Fluentd as Fluentd Aggregator
    participant Backend as Log Backend (e.g., Loki)
    Angular->>+GoGin: POST /api/submit (with 'traceparent' header)
    GoGin->>GoGin: Middleware extracts trace context
    GoGin->>GoGin: Creates child span
    GoGin->>GoGin: Injects logger with trace_id into context
    GoGin->>Fluentd: Writes structured JSON log to stdout
    Fluentd->>Fluentd: Parses container log
    Fluentd->>Fluentd: Parses application JSON log
    GoGin-->>-Angular: 200 OK
    Fluentd->>+Backend: Forwards enriched log record
    Backend-->>-Fluentd: Ack
This architecture solves the initial problem of disconnected logs. The core principle is maintaining and propagating context across every boundary. By leveraging OpenTelemetry for a standardized context format and structured logging for machine-readability, we create a powerful foundation for debugging and analysis in a distributed system.
The current setup, however, is only a partial implementation of observability. We have log correlation, but we are not yet sending the actual trace span data to a proper tracing backend like Jaeger or Honeycomb: the Angular OTLP exporter points at a placeholder endpoint, and the Go service only writes its spans to stdout. A production implementation would stand up an OpenTelemetry Collector to receive both logs and traces, correlate them, and forward them to the appropriate backends. Furthermore, we are sampling 100% of traces, which is unsustainable under load. A robust sampling strategy, potentially tail-based sampling performed at the collector level, would be a necessary next step to manage cost and performance in a real-world environment.
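As a sketch of what that next step could look like on the Go side, initTracer might be replaced by something along these lines, assuming the go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp module is added and a collector is listening on localhost:4318:

// A production-leaning variant of initTracer: ship spans to an OpenTelemetry
// Collector over OTLP/HTTP and sample a fraction of root traces instead of all
// of them. Requires this import in addition to the ones in main.go:
//   "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
func initProductionTracer(ctx context.Context) (*sdktrace.TracerProvider, error) {
    exporter, err := otlptracehttp.New(ctx,
        otlptracehttp.WithEndpoint("localhost:4318"), // collector address (host:port, no scheme)
        otlptracehttp.WithInsecure(),                 // plain HTTP; configure TLS in production
    )
    if err != nil {
        return nil, err
    }

    tp := sdktrace.NewTracerProvider(
        sdktrace.WithBatcher(exporter),
        // Respect the sampling decision already made upstream (the browser set
        // the sampled flag); for new root traces, keep roughly 10%.
        sdktrace.WithSampler(sdktrace.ParentBased(sdktrace.TraceIDRatioBased(0.1))),
    )
    otel.SetTracerProvider(tp)
    otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
        propagation.TraceContext{}, propagation.Baggage{},
    ))
    return tp, nil
}

ParentBased keeps the frontend’s sampling decision authoritative for the whole request chain, while TraceIDRatioBased throttles traces that originate at the backend itself.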