The initial problem was a familiar form of operational blindness. Our stack is modern—an Astro frontend for content, a Rust backend API for performance-critical data fetching, and Dgraph for storing our complex, interconnected data model. But when an error occurred, our observability was fragmented. We’d get a JavaScript error in Sentry from a user’s browser, and fifteen minutes later, a separate, context-free panic alert from the Rust service. Correlating the two was manual, time-consuming guesswork. We were flying blind, unable to answer the simple question: did this specific user action on the frontend cause this specific crash on the backend? This disconnect rendered our error tracking nearly useless for any complex, cross-boundary issue.
The goal was to establish a continuous trace, a single thread of context that followed a user’s request from their browser, through our Astro static shell, into the Rust API, down to the Dgraph query, and back. A failure at any point in this chain should be attached to this single, unified trace. In Sentry, this would manifest as a single, cohesive transaction, where a frontend error and a backend error are not just two isolated events, but parent and child in a causal relationship.
Our technology choices were deliberate. Astro provides incredible frontend performance through its islands architecture. Rust, with Actix-web, gives us a memory-safe, high-throughput API layer to handle Dgraph's graph-structured responses. Dgraph itself is non-negotiable for our data shape. The challenge was not to replace these components, but to weave an observability fabric through them using Sentry, and to automate the entire build, symbolication, and release process with GitLab CI/CD. The key mechanism is propagating Sentry's trace context headers, sentry-trace and the W3C-style baggage header, across the network boundary.
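Concretely, the sentry-trace value is a compact string of the form traceid-spanid, with an optional trailing sampled flag: a 32-character hex trace id, a 16-character hex parent span id, and an optional 0 or 1. A minimal, stdlib-only Rust sketch of parsing it (illustrative only; it is not the SDK's actual parser, and the struct name is ours):

```rust
/// Parsed form of a `sentry-trace` header: "traceid-spanid[-sampled]".
#[derive(Debug, PartialEq)]
struct SentryTrace {
    trace_id: String,
    span_id: String,
    sampled: Option<bool>,
}

fn parse_sentry_trace(header: &str) -> Option<SentryTrace> {
    let mut parts = header.trim().split('-');
    let trace_id = parts.next()?.to_string();
    let span_id = parts.next()?.to_string();
    // 32 hex chars for the trace id, 16 for the parent span id.
    if trace_id.len() != 32 || span_id.len() != 16 {
        return None;
    }
    let sampled = match parts.next() {
        Some("1") => Some(true),
        Some("0") => Some(false),
        None => None,
        Some(_) => return None,
    };
    Some(SentryTrace { trace_id, span_id, sampled })
}

fn main() {
    let t = parse_sentry_trace("771a43a4192642f0b136d5159a501700-ee47fc086673fa24-1").unwrap();
    println!("trace_id={} sampled={:?}", t.trace_id, t.sampled);
}
```

Anything that survives this parse can be used to seed a child transaction on the backend, which is exactly what our middleware below does via the SDK.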
The Rust API: Intercepting and Continuing the Trace
The first step was to make the Rust backend aware of traces initiated by the frontend. A vanilla sentry-actix integration is straightforward but insufficient: it captures panics but knows nothing of the transactions they belong to. The critical piece is a custom Actix-web middleware that inspects incoming HTTP requests for the sentry-trace header.
Here is the dependency setup in Cargo.toml. We need sentry for the core SDK, sentry-actix for the Actix integration, and actix-web itself. The Dgraph client, dgraph-tonic, is also essential.
# Cargo.toml
[dependencies]
actix-web = "4"
sentry = { version = "0.32.2", features = ["backtrace", "contexts", "debug-images", "panic", "tracing"] }
sentry-actix = "0.32.2"
tokio = { version = "1", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
dgraph-tonic = { version = "0.10.0", features = ["dgraph-1-1", "acl"] }
The core logic resides in a custom middleware. If the sentry-trace header is present, we parse it to create a transaction context that links the new backend transaction to the parent transaction started by the Sentry SDK in the user's browser. If the header is missing, we start a new, untethered transaction. This ensures every request is traced, whether it originates from our instrumented frontend or an external client.
// src/main.rs
use actix_web::{web, App, HttpRequest, HttpResponse, HttpServer, Responder};
use dgraph_tonic::{Client, Query};
use sentry::{Hub, SentryFutureExt};
use sentry_actix::Sentry;
use std::sync::Arc;

// A simple middleware that creates a new Hub for each request.
// This is crucial for isolating request data in a concurrent environment.
struct SentryHubMiddleware;

impl<S, B> actix_web::dev::Transform<S, actix_web::dev::ServiceRequest> for SentryHubMiddleware
where
    S: actix_web::dev::Service<
        actix_web::dev::ServiceRequest,
        Response = actix_web::dev::ServiceResponse<B>,
        Error = actix_web::Error,
    >,
    S::Future: 'static,
    B: 'static,
{
    type Response = actix_web::dev::ServiceResponse<B>;
    type Error = actix_web::Error;
    type Transform = SentryHubMiddlewareService<S>;
    type InitError = ();
    type Future = std::future::Ready<Result<Self::Transform, Self::InitError>>;

    fn new_transform(&self, service: S) -> Self::Future {
        std::future::ready(Ok(SentryHubMiddlewareService { service }))
    }
}

pub struct SentryHubMiddlewareService<S> {
    service: S,
}

impl<S, B> actix_web::dev::Service<actix_web::dev::ServiceRequest> for SentryHubMiddlewareService<S>
where
    S: actix_web::dev::Service<
        actix_web::dev::ServiceRequest,
        Response = actix_web::dev::ServiceResponse<B>,
        Error = actix_web::Error,
    >,
    S::Future: 'static,
    B: 'static,
{
    type Response = actix_web::dev::ServiceResponse<B>;
    type Error = actix_web::Error;
    // The request future is boxed so we can bind the per-request Hub to it.
    type Future =
        std::pin::Pin<Box<dyn std::future::Future<Output = Result<Self::Response, Self::Error>>>>;

    actix_web::dev::forward_ready!(service);

    fn call(&self, req: actix_web::dev::ServiceRequest) -> Self::Future {
        // Create a new Hub for this request. This isolates transactions and scopes.
        let hub = Arc::new(Hub::new_from_top(Hub::current()));

        let name = format!("{} {}", req.method(), req.path());
        let sentry_trace = req
            .headers()
            .get("sentry-trace")
            .and_then(|h| h.to_str().ok())
            .map(str::to_owned);

        let transaction_context = match sentry_trace {
            // If the header exists, continue the trace started by the frontend.
            Some(header) => sentry::TransactionContext::continue_from_headers(
                &name,
                "http.server",
                std::iter::once(("sentry-trace", header.as_str())),
            ),
            // Otherwise, start a new, untethered transaction.
            None => sentry::TransactionContext::new(&name, "http.server"),
        };

        let fut = self.service.call(req);
        // Bind the hub for the entire lifetime of the request future, not just
        // the synchronous part of `call`, so the handler sees the right scope.
        Box::pin(
            async move {
                let transaction = sentry::start_transaction(transaction_context);
                sentry::configure_scope(|scope| {
                    scope.set_span(Some(transaction.clone().into()));
                });
                let res = fut.await;
                transaction.finish();
                res
            }
            .bind_hub(hub),
        )
    }
}
#[derive(serde::Deserialize, serde::Serialize)]
struct GraphData {
    name: String,
    // other fields
}

#[derive(serde::Deserialize, serde::Serialize)]
struct QueryResult {
    all: Vec<GraphData>,
}

async fn get_graph_data(_req: HttpRequest, dgraph_client: web::Data<Client>) -> impl Responder {
    // The transaction was attached to the current scope by our middleware.
    let transaction = sentry::configure_scope(|scope| scope.get_span());

    // Create a child span for the database operation. This is what isolates the
    // Dgraph query time in the Sentry UI. If no transaction is active (e.g. the
    // middleware is not mounted), we simply skip span creation.
    let query_span =
        transaction.map(|tx| tx.start_child("db.graphql.query", "Query for graph data"));

    let query = r#"{
        all(func: has(name)) {
            name
        }
    }"#;

    let mut txn = dgraph_client.new_read_only_txn();
    match txn.query(query).await {
        Ok(res) => {
            if let Some(span) = query_span {
                span.set_status(sentry::protocol::SpanStatus::Ok);
                span.finish();
            }
            match serde_json::from_slice::<QueryResult>(&res.json) {
                Ok(data) => HttpResponse::Ok().json(data),
                Err(e) => {
                    sentry::capture_error(&e);
                    HttpResponse::InternalServerError().body("Failed to parse Dgraph response")
                }
            }
        }
        Err(e) => {
            // Important: tag the span with an error status before finishing it.
            if let Some(span) = query_span {
                span.set_status(sentry::protocol::SpanStatus::InternalError);
                span.finish();
            }
            sentry::capture_error(&e);
            HttpResponse::InternalServerError().body(format!("Dgraph query failed: {}", e))
        }
    }
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Sentry is initialized from environment variables: SENTRY_DSN, SENTRY_RELEASE.
    let _guard = sentry::init(sentry::ClientOptions {
        traces_sample_rate: 1.0, // Sample all transactions in development
        ..Default::default()
    });

    let dgraph_endpoint =
        std::env::var("DGRAPH_ENDPOINT").unwrap_or_else(|_| "http://localhost:9080".to_string());
    let dgraph_client = Client::new(dgraph_endpoint).expect("Failed to create Dgraph client");

    println!("Server running at http://127.0.0.1:8080/");
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(dgraph_client.clone()))
            .wrap(SentryHubMiddleware) // Our custom middleware
            .wrap(Sentry::new()) // The default sentry-actix middleware for panic handling
            .route("/api/data", web::get().to(get_graph_data))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
A common mistake here is failing to create a new Hub for each request. In a multi-threaded server like Actix-web, Hub::current() is thread-local. Without explicitly creating and binding a new hub for the request's scope, you risk context bleeding between concurrent requests, where one request's user data could be attached to another request's error report. The SentryHubMiddleware prevents this by binding a fresh hub to every incoming request's future.
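To see why thread-local context bleeds without per-request isolation, consider this stdlib-only simulation. The thread-local CURRENT_USER is a hypothetical stand-in for the Hub's scope, not Sentry's actual API:

```rust
use std::cell::RefCell;

thread_local! {
    // Stand-in for Hub::current()'s scope: one mutable slot per OS thread.
    static CURRENT_USER: RefCell<Option<String>> = RefCell::new(None);
}

// Request A authenticates and tags the thread-local context, but never clears it.
fn handle_authenticated_request(user: &str) {
    CURRENT_USER.with(|u| *u.borrow_mut() = Some(user.to_string()));
}

// Request B fails and builds an error report from whatever context is current.
fn build_error_report() -> Option<String> {
    CURRENT_USER.with(|u| u.borrow().clone())
}

fn main() {
    // Both requests happen to be scheduled on the same worker thread.
    handle_authenticated_request("alice");
    // Request B never set a user, yet its "error report" carries alice's identity.
    let leaked = build_error_report();
    println!("report user: {:?}", leaked);
}
```

Creating a fresh hub per request, as the middleware does, is the equivalent of giving each request its own CURRENT_USER slot instead of sharing the thread's.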
Inside the get_graph_data handler, we create a child span specifically for the Dgraph query. This is crucial for performance monitoring: in the Sentry UI, it appears as a distinct bar in the transaction waterfall, allowing us to measure database latency precisely and separate it from business logic execution time.
The Astro Frontend: Initiating and Propagating the Trace
On the frontend, the Sentry Astro SDK handles most of the heavy lifting. The key is configuring it to inject the sentry-trace header into outgoing fetch requests destined for our Rust API. This is done via the tracePropagationTargets option in the Sentry client configuration.
Here’s the setup for Sentry in an Astro project.
// sentry.client.config.ts
import * as Sentry from "@sentry/astro";

Sentry.init({
  dsn: import.meta.env.PUBLIC_SENTRY_DSN,
  integrations: [
    new Sentry.BrowserTracing({
      // Set 'tracePropagationTargets' to control for which URLs
      // distributed tracing should be enabled.
      tracePropagationTargets: ["localhost", /^https:\/\/your-api\.com/],
    }),
    new Sentry.Replay(),
  ],
  // Performance Monitoring
  tracesSampleRate: 1.0,
  // Session Replay
  replaysSessionSampleRate: 0.1,
  replaysOnErrorSampleRate: 1.0,
  release: import.meta.env.PUBLIC_SENTRY_RELEASE,
  environment: import.meta.env.PUBLIC_SENTRY_ENVIRONMENT,
});
sentry.server.config.ts will have a similar, but simpler, configuration for server-side rendering. The tracePropagationTargets array is the critical link. It tells the Sentry SDK: "If a fetch request is made to a URL matching one of these patterns, attach the trace context headers." Without it, the frontend transaction would terminate at the browser, and the backend would see an isolated, parentless request.
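Conceptually, the matching is a substring or pattern check against the outgoing request URL. A rough, stdlib-only Rust sketch of the idea (illustrative only; the real check runs in the browser SDK and also supports regex entries):

```rust
// Decide whether an outgoing request should carry trace headers, by checking
// the URL against a list of propagation targets (substring match, as string
// entries in tracePropagationTargets behave).
fn should_propagate_trace(url: &str, targets: &[&str]) -> bool {
    targets.iter().any(|target| url.contains(target))
}

fn main() {
    let targets = ["localhost", "https://your-api.com"];
    // Matches the "localhost" entry: trace headers attached.
    println!("{}", should_propagate_trace("http://localhost:8080/api/data", &targets)); // true
    // Third-party origin: no trace headers, avoiding CORS issues and header leaks.
    println!("{}", should_propagate_trace("https://third-party.example/script.js", &targets)); // false
}
```

The third-party case is why the option exists at all: blindly attaching sentry-trace to every request would leak trace ids to external origins and can trip CORS preflight failures.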
A sample Astro component making the API call is now fully instrumented without any special code.
---
// src/components/DataFetcher.astro
---
<div id="data-container">
  <button id="fetch-button">Fetch Graph Data</button>
  <pre id="output"></pre>
</div>

<script>
  import * as Sentry from "@sentry/astro";

  document.getElementById('fetch-button')?.addEventListener('click', async () => {
    const outputEl = document.getElementById('output');
    if (!outputEl) return;
    outputEl.textContent = 'Fetching...';
    try {
      // Because `tracePropagationTargets` is configured, Sentry automatically adds
      // `sentry-trace` and `baggage` headers to this request.
      const response = await fetch('/api/data');
      if (!response.ok) {
        throw new Error(`HTTP error! status: ${response.status}`);
      }
      const data = await response.json();
      outputEl.textContent = JSON.stringify(data, null, 2);
    } catch (error) {
      // This captured error will be linked to the transaction in Sentry.
      Sentry.captureException(error);
      outputEl.textContent = `Error: ${error instanceof Error ? error.message : String(error)}`;
    }
  });
</script>
When a user clicks the button, the Sentry SDK starts a transaction and automatically wraps the fetch call in a child span. The sentry-trace header is added, and when it hits our Rust middleware, the backend transaction is correctly linked. If the fetch fails or the Dgraph query in the backend panics, the resulting Sentry event will contain the full, end-to-end trace context.
The CI/CD Pipeline: Automation and Symbolication
Having code instrumentation is only half the battle. For Sentry to be truly useful in a production environment, it needs two things: releases and debugging symbols. Releases allow you to track when errors are introduced or resolved. Debugging symbols (source maps for JavaScript, DWARF/PDB files for Rust) are what transform minified, cryptic stack traces into readable file names and line numbers.
GitLab CI/CD is the perfect place to automate this. The pipeline is a multi-stage process that handles both the frontend and backend artifacts, communicates with the Sentry API via sentry-cli, and ensures everything is associated with the same release version.
A pitfall in designing this pipeline is treating the frontend and backend as completely separate builds. They must be coordinated under a single Sentry release to maintain the link between them. We use the commit SHA as a stable and unique release identifier.
Here is the complete .gitlab-ci.yml that orchestrates this process:
# .gitlab-ci.yml
variables:
  # Use the commit SHA for a unique and traceable release name.
  # This ensures frontend and backend artifacts are tied to the same release.
  SENTRY_RELEASE: $CI_COMMIT_SHORT_SHA
  # SENTRY_AUTH_TOKEN, SENTRY_ORG, and SENTRY_PROJECT should be configured
  # in GitLab's CI/CD variables for security.
  # SENTRY_DSN is needed at build time by Astro.
  # SENTRY_URL is for self-hosted Sentry instances.

stages:
  - setup
  - build
  - finalize_release
  - deploy

# This job runs first to create the release in Sentry.
# Subsequent jobs will associate artifacts with this release.
create_sentry_release:
  stage: setup
  image: getsentry/sentry-cli
  script:
    - echo "Creating Sentry release ${SENTRY_RELEASE}..."
    - sentry-cli releases new "$SENTRY_RELEASE"
    - sentry-cli releases set-commits "$SENTRY_RELEASE" --auto

# Job for building the Rust backend.
build_backend:
  stage: build
  image: rust:1.73
  before_script:
    # Install sentry-cli for uploading debug symbols
    - curl -sL https://sentry.io/get-cli/ | bash
  script:
    - echo "Building Rust binary with debug info..."
    # Build in release mode for performance, but keep debug info. Cargo's default
    # release profile disables debug info, so we override it here (equivalent to
    # setting `debug = true` under [profile.release] in Cargo.toml).
    - CARGO_PROFILE_RELEASE_DEBUG=true cargo build --release
    - echo "Uploading Rust debug symbols to Sentry..."
    # sentry-cli finds the executable and its embedded debug symbols.
    # Debug files are keyed by build ID, so no release flag is needed here.
    - sentry-cli debug-files upload ./target/release/your_binary_name
  artifacts:
    paths:
      - ./target/release/your_binary_name
    expire_in: 1 day

# Job for building the Astro frontend.
build_frontend:
  stage: build
  image: node:18
  before_script:
    - npm install
    # Install sentry-cli for uploading source maps
    - npm install @sentry/cli
  script:
    - echo "Building Astro frontend..."
    # We must pass the release version to the Astro build process.
    # The Astro code (sentry.client.config.ts) reads this env var.
    - PUBLIC_SENTRY_RELEASE="$SENTRY_RELEASE" npm run build
    - echo "Uploading JavaScript source maps to Sentry..."
    # The command points to the build output directory and includes the JS files
    # and their maps. The --url-prefix `~/` tells Sentry to strip the domain
    # from stack trace frames.
    - ./node_modules/.bin/sentry-cli releases files "$SENTRY_RELEASE" upload-sourcemaps ./dist --url-prefix '~/'
  artifacts:
    paths:
      - ./dist
    expire_in: 1 day

# Once all artifacts are uploaded, finalize the release.
# This marks it as "complete" in Sentry and can trigger notifications.
finalize_sentry_release:
  stage: finalize_release
  image: getsentry/sentry-cli
  script:
    - echo "Finalizing Sentry release ${SENTRY_RELEASE}..."
    - sentry-cli releases finalize "$SENTRY_RELEASE"

# A placeholder for the actual deployment job.
# This would take the artifacts from the build jobs and deploy them.
deploy_to_production:
  stage: deploy
  script:
    - echo "Deploying backend binary and frontend assets..."
    # Your deployment script here
  needs:
    - build_backend
    - build_frontend
This CI pipeline is now the backbone of our observability. A git push triggers a cascade: Sentry is notified of a new release; the Rust binary is compiled with debug information, which is then uploaded; the Astro site is built with source maps, which are also uploaded; and finally, the release is marked as complete. When an error comes in from a deployed version, Sentry can map any address in a native crash report back to our original Rust source code, and any minified function name in a JavaScript stack trace back to our Astro/TypeScript component code.
sequenceDiagram
    participant UserBrowser as User's Browser (Astro)
    participant RustAPI as Rust API (Actix-web)
    participant DgraphDB as Dgraph
    participant Sentry
    UserBrowser->>UserBrowser: User Action (e.g., button click)
    Sentry-->>UserBrowser: Starts Transaction T1 (trace_id: A)
    UserBrowser->>RustAPI: GET /api/data (Header: sentry-trace=A-...)
    activate RustAPI
    RustAPI->>RustAPI: Middleware parses sentry-trace
    Sentry-->>RustAPI: Starts Transaction T2 (parent: T1, trace_id: A)
    RustAPI->>RustAPI: Start Span S1 (db.query)
    RustAPI->>DgraphDB: Execute Graph Query
    activate DgraphDB
    DgraphDB-->>RustAPI: Return Query Result
    deactivate DgraphDB
    RustAPI->>RustAPI: Finish Span S1
    RustAPI-->>UserBrowser: 200 OK (JSON Data)
    deactivate RustAPI
    Sentry-->>UserBrowser: Finishes Transaction T1
The diagram visualizes the flow. The sentry-trace header is the baton passed from the frontend to the backend, ensuring Sentry understands this as a single, continuous operation.
The current implementation provides a robust, end-to-end trace, but it is not without limitations. The trace context stops at our Rust application; we have no visibility into Dgraph's internal query execution path, as Dgraph does not natively propagate our trace context through its query engine. True database-level observability would require instrumentation within the database itself. Furthermore, we are only using the sentry-trace header. The complementary baggage header offers a powerful way to propagate business context, such as feature flags, A/B test groups, or user IDs, from the frontend to the backend, enriching error reports and traces. Future iterations should focus on leveraging baggage to add this application-specific context. Finally, our CI pipeline can become a bottleneck as the Rust project grows; optimizing the debug symbol upload process, perhaps with a dedicated symbol server or more aggressive caching, will be necessary to maintain fast build cycles.
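As a starting point for that future work: the baggage header is a comma-separated list of key=value entries, with Sentry's own entries prefixed sentry-. A minimal, stdlib-only Rust sketch of pulling them into a map on the backend (percent-decoding and quoting rules omitted for brevity; the function name is ours, not an SDK API):

```rust
use std::collections::HashMap;

// Parse a W3C-style baggage header ("k1=v1,k2=v2,...") into a map.
// Malformed entries without an '=' are skipped rather than rejected.
fn parse_baggage(header: &str) -> HashMap<String, String> {
    header
        .split(',')
        .filter_map(|entry| {
            let (key, value) = entry.trim().split_once('=')?;
            Some((key.trim().to_string(), value.trim().to_string()))
        })
        .collect()
}

fn main() {
    let baggage = "sentry-environment=production,sentry-release=abc123,sentry-user_id=42";
    let entries = parse_baggage(baggage);
    // These values could be attached to the Sentry scope as tags or context,
    // tying backend errors to the exact frontend feature-flag state.
    println!("{:?}", entries.get("sentry-release"));
}
```

Once parsed, entries like sentry-user_id could be set as tags on the request's scope in the middleware, so that a backend panic carries the same business context the frontend saw.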