The core pain point was a bottleneck in our CI/CD pipeline: end-to-end testing for frontend canary releases was a manual, brittle process. Our Cypress test suite was hardcoded to a single staging URL. Validating a new canary version meant a developer had to manually update test configurations, redeploy the test runner, or worse, have a separate, parallel test suite just for canaries. This introduced delays and inconsistencies. We needed a system where the test environment itself could dynamically adapt to the current deployment strategy, querying a single source of truth to decide which application version to target.
The initial concept was to decouple the test runner’s configuration from its codebase. Instead of a static baseUrl in cypress.config.js, the test runner should fetch its target environment at runtime from a reliable, distributed configuration store. This would allow us to change the test target for the entire E2E suite by updating a single key-value pair, without touching the test code or its deployment definition. Docker provides the necessary environment isolation, Cypress is our testing tool, and for the configuration store, etcd was the logical choice. In a real-world project, using a simple file or database for this kind of critical operational configuration is a recipe for race conditions and scalability issues. etcd provides the consistency and watch mechanisms required for a robust implementation.
This post documents the build process for this dynamic testing harness. We will construct a multi-container Docker environment running two distinct versions of a frontend application (simulating a blue/green or canary deployment), an etcd instance for configuration, and a Cypress runner that dynamically configures itself by querying etcd before executing the test suite.
System Architecture and Component Setup
The entire system will be orchestrated using Docker Compose. The architecture consists of the following components:
- frontend-blue: A Docker container serving version 1.0.0 of our React application.
- frontend-green: A Docker container serving version 1.1.0 (the canary candidate) of the same application.
- etcd: A single-node etcd cluster to store our dynamic configuration.
- cypress-runner: A container with Cypress and its dependencies, including the etcd3 Node.js client, which will execute our E2E tests.
- Orchestration scripts: Shell and Node.js scripts to manage the test flow and interact with etcd.
The data flow for a test run is as follows:
sequenceDiagram
    participant Operator as Developer/CI Script
    participant etcd as etcd Store
    participant CypressRunner as Cypress Runner
    participant BlueApp as Frontend v1.0.0
    participant GreenApp as Frontend v1.1.0
    Operator->>etcd: SET key 'config/frontend/target' to 'blue'
    Operator->>CypressRunner: docker-compose run cypress-runner
    CypressRunner->>etcd: GET key 'config/frontend/target'
    etcd-->>CypressRunner: Return value 'blue'
    CypressRunner->>CypressRunner: Dynamically set baseUrl to http://frontend-blue:80
    CypressRunner->>BlueApp: Execute tests against v1.0.0
    BlueApp-->>CypressRunner: Test results
    Operator->>etcd: SET key 'config/frontend/target' to 'green'
    Operator->>CypressRunner: docker-compose run cypress-runner
    CypressRunner->>etcd: GET key 'config/frontend/target'
    etcd-->>CypressRunner: Return value 'green'
    CypressRunner->>CypressRunner: Dynamically set baseUrl to http://frontend-green:80
    CypressRunner->>GreenApp: Execute tests against v1.1.0
    GreenApp-->>CypressRunner: Test results
Let’s begin by defining the project structure.
.
├── docker-compose.yml
├── frontend-app/
│   ├── public/
│   ├── src/
│   │   ├── App.css
│   │   ├── App.js            # Main React component
│   │   └── index.js
│   ├── Dockerfile
│   └── package.json
├── cypress-tests/
│   ├── cypress/
│   │   ├── e2e/
│   │   │   └── smoke.cy.js
│   │   └── support/
│   ├── cypress.config.js     # Core of the dynamic logic
│   ├── Dockerfile
│   └── package.json
└── scripts/
    ├── run-e2e.sh
    └── set-target.js
Step 1: Containerizing the Frontend Application
We need two versions of our application to simulate a canary deployment. The difference will be a simple text change in the main App.js component.

frontend-app/src/App.js (for version 1.0.0):
import './App.css';

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <h1 data-testid="app-version-header">Frontend Application Version 1.0.0 (Blue)</h1>
        <p>This is the stable, production release.</p>
      </header>
    </div>
  );
}

export default App;
For version 1.1.0, we’ll simply change the h1 tag content to "Frontend Application Version 1.1.0 (Green)".
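One wrinkle: as we will see in the next step, docker-compose builds both frontend images from the same ./frontend-app context, so a hardcoded header string would come out identical in both images unless the file is edited between builds. Since the Dockerfile that follows sets REACT_APP_VERSION from the APP_VERSION_TAG build argument, and Create React App inlines REACT_APP_* variables at build time, an alternative is to derive the header from that variable. A minimal sketch (the exact JSX and the isCanary naming are illustrative, not from the original source):

import './App.css';

// Create React App inlines REACT_APP_* variables at build time, so this value
// is baked into the static bundle produced by each docker build.
const version = process.env.REACT_APP_VERSION || '1.0.0';
const isCanary = version !== '1.0.0';

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <h1 data-testid="app-version-header">
          {`Frontend Application Version ${version} (${isCanary ? 'Green' : 'Blue'})`}
        </h1>
        {!isCanary && <p>This is the stable, production release.</p>}
      </header>
    </div>
  );
}

export default App;

With this variant, the blue and green images differ only by their build argument, and no manual edit between builds is needed.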
The Dockerfile for the frontend will use a multi-stage build for efficiency. A common mistake is to ship the entire Node.js development environment into production. We use a builder stage and then copy the static assets to a lightweight nginx server.

frontend-app/Dockerfile:
# ---- Stage 1: Build ----
# Use a specific Node version for reproducibility
FROM node:18-alpine AS builder
WORKDIR /app
# Copy package files and install dependencies
# This layer is cached unless package.json or package-lock.json changes
COPY package*.json ./
RUN npm install
# Copy the rest of the application source code
COPY . .
# Argument to control which version we are building
ARG APP_VERSION_TAG=1.0.0
ENV REACT_APP_VERSION=$APP_VERSION_TAG
# Build the production-ready static files
RUN npm run build
# ---- Stage 2: Production ----
# Use a lightweight Nginx server
FROM nginx:1.23-alpine
# Copy the built static files from the builder stage
COPY --from=builder /app/build /usr/share/nginx/html
# Expose port 80 for Nginx
EXPOSE 80
# The default Nginx command will start the server
CMD ["nginx", "-g", "daemon off;"]
This Dockerfile is generic. We will build two distinct images from it, with the version selected by the APP_VERSION_TAG build argument that docker-compose passes for each frontend service in the next step.
Step 2: Setting Up the Docker Compose Environment
The docker-compose.yml file is the heart of our local testing environment. It defines and connects all the services.

docker-compose.yml:
version: '3.8'

services:
  etcd:
    image: bitnami/etcd:3.5
    environment:
      - ALLOW_NONE_AUTHENTICATION=yes
      - ETCD_ADVERTISE_CLIENT_URLS=http://etcd:2379
    ports:
      - "2379:2379"
    volumes:
      - etcd_data:/bitnami/etcd

  frontend-blue:
    build:
      context: ./frontend-app
      args:
        APP_VERSION_TAG: 1.0.0   # Build v1.0.0
    image: frontend-app:1.0.0
    container_name: frontend-blue
    ports:
      - "3001:80"                # Expose on host port 3001

  frontend-green:
    build:
      context: ./frontend-app
      args:
        APP_VERSION_TAG: 1.1.0   # Build v1.1.0
    image: frontend-app:1.1.0
    container_name: frontend-green
    ports:
      - "3002:80"                # Expose on host port 3002

  cypress-runner:
    build:
      context: ./cypress-tests
    container_name: cypress-runner
    # Depend on services it needs to test
    depends_on:
      - etcd
      - frontend-blue
      - frontend-green
    # Environment variables for configuration
    environment:
      - CYPRESS_ETCD_ENDPOINTS=http://etcd:2379
      - CYPRESS_TARGET_CONFIG_KEY=config/frontend/target
    volumes:
      # Mount test reports and videos out of the container
      - ./cypress-tests/cypress/reports:/app/cypress/reports
      - ./cypress-tests/cypress/videos:/app/cypress/videos
      - ./cypress-tests/cypress/screenshots:/app/cypress/screenshots

volumes:
  etcd_data:
A key detail here is that cypress-runner does not expose any ports. It’s an ephemeral container designed to run a task and exit. We also pass configuration like the etcd endpoint and the target key via environment variables, a best practice for containerized applications.
Step 3: Creating a Dynamic Cypress Configuration
This is where the core logic resides. We need to modify the Cypress setup to be etcd-aware. First, we install the necessary dependencies in the cypress-tests directory.
cd cypress-tests
npm init -y
npm install cypress etcd3
Next, we create a Dockerfile for the Cypress runner.

cypress-tests/Dockerfile:
# Use an official Cypress base image
FROM cypress/browsers:node18.12.0-chrome107
WORKDIR /app
# Copy package files and install dependencies
COPY package*.json ./
RUN npm install
# Copy the rest of the Cypress configuration and tests
COPY . .
# No CMD or ENTRYPOINT is defined here: the command to execute (npx cypress run)
# is supplied explicitly by `docker-compose run` in the orchestration script.
Now, the most critical piece: cypress.config.js. We will use the setupNodeEvents function to run code in the Node.js process before any browser is launched. This is the perfect place to fetch our configuration.

cypress-tests/cypress.config.js:
const { defineConfig } = require('cypress');
const { Etcd3 } = require('etcd3');

// Configuration mapping for our application versions
const targetMap = {
  blue: 'http://frontend-blue:80', // Service name from docker-compose
  green: 'http://frontend-green:80',
};

/**
 * Asynchronously fetches the target configuration from etcd.
 * @returns {Promise<string>} The configured baseUrl.
 */
async function getBaseUrlFromEtcd() {
  const etcdEndpoints = process.env.CYPRESS_ETCD_ENDPOINTS;
  const configKey = process.env.CYPRESS_TARGET_CONFIG_KEY;
  if (!etcdEndpoints || !configKey) {
    throw new Error('CYPRESS_ETCD_ENDPOINTS and CYPRESS_TARGET_CONFIG_KEY must be set in the environment.');
  }

  console.log(`[ETCD] Connecting to ${etcdEndpoints}...`);
  const client = new Etcd3({ hosts: etcdEndpoints });

  try {
    const targetValue = await client.get(configKey).string();
    console.log(`[ETCD] Fetched target key '${configKey}': '${targetValue}'`);

    if (!targetValue) {
      // In a real-world project, you'd fail loudly here.
      // A missing configuration key is a critical failure.
      console.error(`[ETCD] Error: Configuration key '${configKey}' not found in etcd.`);
      throw new Error(`Configuration key not found: ${configKey}`);
    }

    const baseUrl = targetMap[targetValue];
    if (!baseUrl) {
      console.error(`[ETCD] Error: Unknown target value '${targetValue}'. Valid targets are: ${Object.keys(targetMap).join(', ')}.`);
      throw new Error(`Invalid target value from etcd: ${targetValue}`);
    }

    console.log(`[Cypress] Setting baseUrl to: ${baseUrl}`);
    return baseUrl;
  } catch (err) {
    console.error('[ETCD] Failed to fetch configuration from etcd.', err);
    // Propagate the error to fail the test run immediately.
    // Don't default to a fallback URL, as that can lead to tests passing against the wrong environment.
    throw err;
  } finally {
    // It's good practice to close the client connection, although it might not be strictly necessary
    // for a short-lived process like the Cypress config setup.
    await client.close();
  }
}
module.exports = defineConfig({
  e2e: {
    async setupNodeEvents(on, config) {
      // Fetch the dynamic baseUrl and merge it into the Cypress config object.
      // This config object is then returned and used by Cypress.
      const baseUrl = await getBaseUrlFromEtcd();
      config.baseUrl = baseUrl;
      // Make sure to return the config object as it might have been modified by the plugin.
      return config;
    },
    video: true,
    videosFolder: 'cypress/videos',
    screenshotsFolder: 'cypress/screenshots',
    reporter: 'junit',
    reporterOptions: {
      mochaFile: 'cypress/reports/junit-[hash].xml',
    },
  },
});
This configuration includes robust error handling. A common mistake is to provide a default baseUrl if the etcd lookup fails. This is dangerous: it can lead to a “false green” pipeline, where tests pass because they ran against an old, stable version of the application, completely missing the canary. The system must fail fast and loud if its configuration source is unavailable or contains invalid data.
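If the etcd container itself is occasionally slow to come up, a bounded retry preserves the fail-fast stance while tolerating transient connection errors. Here is a hypothetical wrapper around the getBaseUrlFromEtcd function above; note that it still never falls back to a default URL:

// Hypothetical retry wrapper; the attempt count and delay are arbitrary choices.
async function getBaseUrlWithRetry(attempts = 3, delayMs = 2000) {
  let lastError;
  for (let attempt = 1; attempt <= attempts; attempt += 1) {
    try {
      // getBaseUrlFromEtcd creates and closes its own client, so retries are safe.
      return await getBaseUrlFromEtcd();
    } catch (err) {
      lastError = err;
      console.warn(`[ETCD] Attempt ${attempt}/${attempts} failed: ${err.message}`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  // Still no valid configuration: fail the run loudly rather than guessing a target.
  throw lastError;
}

setupNodeEvents would then call getBaseUrlWithRetry() instead of getBaseUrlFromEtcd(). A missing or invalid key is also retried here before failing; if that distinction matters, the wrapper could inspect the error and rethrow those cases immediately.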
The Cypress test itself is straightforward; its purpose is to verify which version it’s hitting.
cypress-tests/cypress/e2e/smoke.cy.js:
describe('Frontend Application Smoke Test', () => {
  it('loads the homepage and correctly displays the version header', () => {
    // cy.visit('/') will use the dynamic baseUrl set in cypress.config.js
    cy.visit('/');

    // We don't know which version we are targeting in the test code itself,
    // and that is the entire point. The test is agnostic.
    // It just checks for a header and ensures it contains expected text.
    cy.get('[data-testid="app-version-header"]')
      .should('be.visible')
      .and('include.text', 'Frontend Application Version');
  });

  it('conditionally checks for version-specific text', () => {
    // This demonstrates a more advanced pattern where a test can adapt
    // based on the environment it detects.
    cy.visit('/');

    cy.get('[data-testid="app-version-header"]').invoke('text').then((headerText) => {
      if (headerText.includes('1.0.0')) {
        cy.log('Detected Blue Version (1.0.0)');
        cy.contains('This is the stable, production release.').should('be.visible');
      } else if (headerText.includes('1.1.0')) {
        cy.log('Detected Green Version (1.1.0)');
        // In a real test, you would assert on a feature/text specific to the canary.
        // For now, we just confirm we didn't find the old text.
        cy.contains('This is the stable, production release.').should('not.exist');
      } else {
        throw new Error('Could not determine application version from header text!');
      }
    });
  });
});
Step 4: Orchestration and Execution
Finally, we need scripts to tie everything together. First, a Node.js script to set the target in etcd.

scripts/set-target.js:
const { Etcd3 } = require('etcd3');

const target = process.argv[2];
const validTargets = ['blue', 'green'];

if (!target || !validTargets.includes(target)) {
  console.error(`Usage: node set-target.js <blue|green>`);
  process.exit(1);
}

const client = new Etcd3({ hosts: 'http://localhost:2379' });
const configKey = 'config/frontend/target';

async function main() {
  try {
    console.log(`Setting '${configKey}' to '${target}' in etcd...`);
    await client.put(configKey).value(target);
    console.log('Successfully set the target.');
  } catch (err) {
    console.error('Failed to communicate with etcd.', err);
    process.exit(1);
  } finally {
    // Close the gRPC connection so the script can exit promptly.
    await client.close();
  }
}

main();
Note that this script connects to localhost:2379 because we’ll run it from the host machine, where docker-compose has exposed the port. That also means the etcd3 package must be available to the host’s Node.js process, for example via a quick npm install etcd3 at the repository root.
Now, the main orchestration shell script.
scripts/run-e2e.sh:
#!/bin/bash
set -e # Exit immediately if a command exits with a non-zero status.
TARGET_VERSION=$1
if [ -z "$TARGET_VERSION" ]; then
  echo "Usage: ./run-e2e.sh <blue|green>"
  exit 1
fi
echo "---- Building Docker images if they don't exist ----"
docker-compose build
echo "---- Starting services in detached mode ----"
docker-compose up -d etcd frontend-blue frontend-green
# Wait a moment for services to be ready
echo "---- Waiting for services to initialize... ----"
sleep 5
echo "---- Setting E2E test target to '$TARGET_VERSION' in etcd ----"
node ./scripts/set-target.js "$TARGET_VERSION"
echo "---- Running Cypress tests against '$TARGET_VERSION' target ----"
# Run the cypress runner. It will connect to the other services via the Docker network.
# We add --rm to automatically remove the container after it exits.
docker-compose run --rm cypress-runner npx cypress run
echo "---- E2E test run complete. Tearing down services. ----"
docker-compose down
echo "---- Done. ----"
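The sleep 5 is the weakest part of this script: it wastes time when the stack is already up and can be too short when it is not. One option, sketched here as a hypothetical scripts/wait-for-etcd.js (assuming Node 18+ so fetch and AbortSignal.timeout are built in), is to poll etcd’s /health endpoint on the host-mapped port until it responds:

// Hypothetical scripts/wait-for-etcd.js: poll etcd's /health endpoint on the
// host-mapped client port until it answers, instead of sleeping blindly.
const DEADLINE_MS = 30000; // overall budget for the stack to come up

async function waitForEtcd() {
  const deadline = Date.now() + DEADLINE_MS;
  while (Date.now() < deadline) {
    try {
      const res = await fetch('http://localhost:2379/health', {
        signal: AbortSignal.timeout(2000), // bound each attempt
      });
      if (res.ok) {
        console.log('etcd is ready.');
        return;
      }
    } catch {
      // Connection refused or attempt timed out: etcd is not up yet.
    }
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  console.error('Timed out waiting for etcd.');
  process.exit(1);
}

waitForEtcd();

run-e2e.sh could then run node ./scripts/wait-for-etcd.js in place of the sleep; the same polling pattern works for the two frontends on host ports 3001 and 3002.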
To execute the entire flow:
1. Test the blue version:

   ./scripts/run-e2e.sh blue

   You will see logs from cypress.config.js confirming it fetched blue and set the baseUrl to http://frontend-blue:80. The tests will pass.

2. Test the green/canary version:

   ./scripts/run-e2e.sh green

   This time, the logs will show the baseUrl being set to http://frontend-green:80, and Cypress will execute the same test suite against the new version of the application.
This solution successfully decouples the test logic from the environment configuration. The CI/CD pipeline’s only responsibility is to call the orchestration script with the correct parameter (blue or green), which it can determine from the deployment context (e.g., a Git branch or a deployment tool’s state).
Limitations and Future Iterations
The current implementation tests each application version in isolation by directly targeting its container. A true canary testing scenario involves a load balancer splitting traffic between versions. The next evolution of this system would be to introduce a programmable reverse proxy (like Nginx with Lua, or Traefik) into the docker-compose setup. This proxy would also query etcd to determine its routing rules (e.g., 90% of traffic to blue, 10% to green). The Cypress baseUrl would then be fixed to the proxy’s address. The tests would need to become more sophisticated, potentially setting specific request headers to force routing to the canary for certain tests, or running statistical checks to verify the traffic split over many requests.
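For example, if the proxy were configured to route any request carrying a particular header to the green deployment, an individual test could opt in per visit. A small sketch under that assumption (the x-canary header name and the proxy behaviour are hypothetical, not something the current setup implements):

it('exercises the canary explicitly via a routing header', () => {
  // Headers passed to cy.visit() are sent with the initial document request;
  // the (hypothetical) proxy would use this one to pin the session to green.
  cy.visit('/', { headers: { 'x-canary': 'always' } });
  cy.get('[data-testid="app-version-header"]').should('include.text', '1.1.0');
});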
Furthermore, etcd’s watch feature is currently unused. A more advanced system could have a long-running “test coordinator” service that watches for changes to the config/frontend/target key in etcd and automatically triggers the appropriate E2E test suite. This moves from a command-driven model to a more reactive, event-driven testing architecture, which is far more powerful in a dynamic, continuously deployed environment.
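A sketch of what such a coordinator could look like (this service is hypothetical and not part of the repository above): the etcd3 watch builder subscribes to the key and re-runs the Cypress container whenever the value changes, assuming the rest of the stack is already running via docker-compose up -d.

const { Etcd3 } = require('etcd3');
const { execFile } = require('child_process');

// ETCD_ENDPOINTS is a hypothetical env var; default to the host-mapped port.
const client = new Etcd3({ hosts: process.env.ETCD_ENDPOINTS || 'http://localhost:2379' });
const configKey = 'config/frontend/target';

async function main() {
  // The watch builder keeps a gRPC stream open and emits one event per update.
  const watcher = await client.watch().key(configKey).create();
  console.log(`Watching '${configKey}' for changes...`);

  watcher.on('put', (kv) => {
    const target = kv.value.toString();
    console.log(`Target changed to '${target}', triggering E2E run...`);
    // Re-use the existing runner service; etcd, frontend-blue and frontend-green
    // are assumed to be up already (docker-compose up -d).
    execFile('docker-compose', ['run', '--rm', 'cypress-runner', 'npx', 'cypress', 'run'],
      (err, stdout, stderr) => {
        if (err) {
          console.error(`E2E run against '${target}' failed:\n${stderr}`);
          return;
        }
        console.log(stdout);
      });
  });
}

main().catch((err) => {
  console.error('Watcher failed to start.', err);
  process.exit(1);
});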