Our CI bill was the first indicator of a serious architectural problem. A one-line documentation change in our fastify-api-gateway service was triggering a full 20-minute build and test cycle for the java-user-service, followed by a full redeployment of both services to our Kubernetes staging environment. The feedback loop for developers was glacial, and the wasted compute was becoming unjustifiable. This was the direct result of a monolithic CI process applied to a burgeoning polyglot microservices monorepo, and it was unsustainable.
Our initial, naive approach was to use CircleCI’s when clauses with git commands piped into them. The config grew into an unmanageable mess of complex shell logic embedded directly within YAML, making it brittle and nearly impossible to debug. Onboarding a new service meant a painful refactoring of the entire pipeline.
The turning point was embracing CircleCI’s dynamic configuration feature, which allows a pipeline to generate and then execute a new configuration based on custom logic. This let us decouple the decision-making from the pipeline definition itself. The strategy was to create a preliminary setup job that would inspect the git changes, determine which microservices were affected, and then construct a tailored config.yml on the fly containing only the necessary jobs.
Project Structure: The Foundation for Filtering
Effective path filtering starts with a sane directory structure. Our monorepo was organized to create clear boundaries between services and shared infrastructure code.
.
├── .circleci/
│   ├── config.yml               # Initial static config
│   ├── generate_config.sh       # The dynamic config generation script
│   └── templates/
│       ├── base.yml             # Shared header for the generated config
│       ├── fastify_pipeline.yml # YAML fragment for the Fastify service
│       ├── java_pipeline.yml    # YAML fragment for the Java service
│       └── deploy_pipeline.yml  # YAML fragment for K8s deployment
├── services/
│   ├── fastify-api-gateway/
│   │   ├── src/
│   │   ├── test/
│   │   ├── Dockerfile
│   │   ├── package.json
│   │   └── .eslintrc.js
│   └── java-user-service/
│       ├── src/
│       ├── pom.xml
│       └── Dockerfile
└── k8s-manifests/
    ├── api-gateway-deployment.yaml
    └── user-service-deployment.yaml
This layout is critical. Any change inside services/fastify-api-gateway/ should only trigger the Fastify pipeline. A change in k8s-manifests/ should trigger a deployment, but not necessarily a rebuild of the services if their source code is untouched.
The Initial Static Configuration
The entry point is a minimal .circleci/config.yml. Its sole purpose is to check out the code and execute our generation script; the setup: true key marks it as a setup configuration so CircleCI will accept the continuation into the generated pipeline. It doesn’t define any build, test, or deploy jobs itself.
# .circleci/config.yml
version: 2.1

# Marking this config as "setup" is required for dynamic configuration:
# it allows the pipeline to continue into a generated config.
setup: true

# Orbs provide packaged reusable configuration.
# The path-filtering orb offers built-in change detection, though here the
# decision logic lives in our own script.
# The continuation orb is used to trigger the dynamically generated config.
orbs:
  path-filtering: circleci/path-filtering@1.0.0
  continuation: circleci/continuation@1.0.0

# The main workflow that starts on every commit.
workflows:
  generate-and-run:
    jobs:
      - generate-config:
          # Shared org-level variables (registry credentials, etc.) come from
          # this context; the continuation itself is authorised by the
          # continuation key CircleCI injects into setup workflows.
          context: org-global-context
The magic happens in the generate-config job, which we define next.
# .circleci/config.yml (continued)
jobs:
  generate-config:
    docker:
      - image: cimg/base:2023.08
    steps:
      - checkout
      - run:
          name: "Generate Dynamic Configuration"
          command: |
            # Make the script executable and run it
            chmod +x .circleci/generate_config.sh
            .circleci/generate_config.sh
      # The continuation orb takes the generated config and starts a new pipeline.
      - continuation/continue:
          configuration_path: .circleci/generated_config.yml
This setup is clean and delegates all the complexity to our generate_config.sh script.
The Dynamic Configuration Generation Script
This shell script is the brain of the operation. It uses git to find changed files and builds the generated_config.yml by concatenating predefined YAML templates.
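The script builds on a small base template that isn’t shown elsewhere, so here is a minimal sketch of what .circleci/templates/base.yml might contain; in this setup it only needs the config header, with any orbs shared by the fragments declared alongside it.
# .circleci/templates/base.yml (minimal sketch)
# Header for the generated configuration; generate_config.sh appends the
# jobs and workflows sections after this. Orbs shared by the job fragments
# would also be declared here.
version: 2.1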
#!/bin/bash
# .circleci/generate_config.sh
# Exit on error
set -e
# The base of the dynamically generated config file.
# It includes orb definitions and a placeholder for jobs.
BASE_CONFIG=".circleci/templates/base.yml"
GENERATED_CONFIG=".circleci/generated_config.yml"
# Copy the base template to start
cp $BASE_CONFIG $GENERATED_CONFIG
echo "Detecting changed services..."
# Determine which files changed relative to main. The three-dot syntax
# compares HEAD against the merge-base of main and HEAD, so changes that
# landed on main independently don't show up as ours. This assumes the
# main ref is available (and reasonably current) in the build's clone.
CHANGED_FILES=$( (git diff --name-only main...HEAD) || true )
if [ -z "$CHANGED_FILES" ]; then
  echo "No changes detected; emitting a no-op pipeline."
  # The continuation step still needs a valid config, and CircleCI requires
  # every workflow to contain at least one job, so we emit a single no-op.
  cat >> "$GENERATED_CONFIG" <<'EOF'
jobs:
  no-op:
    docker:
      - image: cimg/base:2023.08
    steps:
      - run: echo "No relevant changes detected; nothing to build."
workflows:
  build-and-deploy:
    jobs:
      - no-op
EOF
  exit 0
fi
echo "Changed files:"
echo "$CHANGED_FILES"
# Flags to track which pipelines to trigger
RUN_FASTIFY_PIPELINE=false
RUN_JAVA_PIPELINE=false
RUN_DEPLOY_PIPELINE=false
# Logic to determine which pipelines to run
if echo "$CHANGED_FILES" | grep -q "^services/fastify-api-gateway/"; then
echo "Changes detected in fastify-api-gateway service."
RUN_FASTIFY_PIPELINE=true
RUN_DEPLOY_PIPELINE=true # A service rebuild requires a deployment
fi
if echo "$CHANGED_FILES" | grep -q "^services/java-user-service/"; then
echo "Changes detected in java-user-service service."
RUN_JAVA_PIPELINE=true
RUN_DEPLOY_PIPELINE=true # A service rebuild requires a deployment
fi
if echo "$CHANGED_FILES" | grep -q "^k8s-manifests/"; then
echo "Changes detected in k8s-manifests."
RUN_DEPLOY_PIPELINE=true
fi
# Append the job definitions and the workflow wiring to the generated config
{
  echo "jobs:"
  if [ "$RUN_FASTIFY_PIPELINE" = true ]; then cat .circleci/templates/fastify_pipeline.yml; fi
  if [ "$RUN_JAVA_PIPELINE" = true ]; then cat .circleci/templates/java_pipeline.yml; fi
  # The deploy job is always included if the flag is set, but its execution
  # depends on the upstream build jobs succeeding.
  if [ "$RUN_DEPLOY_PIPELINE" = true ]; then
    cat .circleci/templates/deploy_pipeline.yml
    # Tell the deploy job which services were rebuilt in this pipeline, so it
    # only rolls those deployments onto new image tags.
    echo "    environment:"
    echo "      GATEWAY_REBUILT: \"$RUN_FASTIFY_PIPELINE\""
    echo "      USER_SVC_REBUILT: \"$RUN_JAVA_PIPELINE\""
  fi

  echo ""
  echo "workflows:"
  echo "  build-and-deploy:"
  echo "    jobs:"
  if [ "$RUN_FASTIFY_PIPELINE" = true ]; then
    echo "      - build-fastify"
  fi
  if [ "$RUN_JAVA_PIPELINE" = true ]; then
    echo "      - build-java-user-service"
  fi
  if [ "$RUN_DEPLOY_PIPELINE" = true ]; then
    # The deploy job waits for whichever build jobs were triggered
    deploy_requires=""
    if [ "$RUN_FASTIFY_PIPELINE" = true ]; then
      deploy_requires+="build-fastify"
    fi
    if [ "$RUN_JAVA_PIPELINE" = true ]; then
      # Add a comma if a previous requirement exists
      [ -n "$deploy_requires" ] && deploy_requires+=", "
      deploy_requires+="build-java-user-service"
    fi
    echo "      - approve-deployment:"
    echo "          type: approval"
    # Ensure deployment runs only after any triggered builds
    if [ -n "$deploy_requires" ]; then
      echo "          requires: [$deploy_requires]"
    fi
    echo "      - deploy-to-k8s:"
    echo "          requires: [approve-deployment]"
  fi
} >> "$GENERATED_CONFIG"
echo "Generated configuration:"
cat "$GENERATED_CONFIG"
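Before wiring the script into CI, its output can be sanity-checked locally; assuming the CircleCI CLI is installed, a quick validation looks like this:
# Run the generator locally and validate the result with the CircleCI CLI.
bash .circleci/generate_config.sh
circleci config validate .circleci/generated_config.yml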
This script is now the single source of truth for our pipeline’s logic. Adding a new service means adding a new template file and another if block here, a far more maintainable approach.
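For example, onboarding a hypothetical go-billing-service (the name and template are illustrative, not part of the original setup) would boil down to one more guarded block:
# Hypothetical addition to .circleci/generate_config.sh for a new service.
RUN_BILLING_PIPELINE=false
if echo "$CHANGED_FILES" | grep -q "^services/go-billing-service/"; then
    echo "Changes detected in go-billing-service."
    RUN_BILLING_PIPELINE=true
    RUN_DEPLOY_PIPELINE=true # A service rebuild requires a deployment
fi
# ...plus, in the assembly block: cat its billing_pipeline.yml template under
# "jobs:" and echo "      - build-billing" under the workflow's job list.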
The Fastify Service Pipeline Fragment
Our API gateway is a Node.js service using Fastify. Its quality gates include linting with ESLint, unit testing with tap, and building a Docker image.
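The lint and test steps below call npm run lint and npm test, so they assume the service’s package.json wires those scripts up roughly as follows (a sketch; the file’s contents aren’t shown in the repo listing above):
{
  "scripts": {
    "lint": "eslint src test",
    "test": "tap test/**/*.test.js"
  }
}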
Here’s the pipeline fragment, .circleci/templates/fastify_pipeline.yml. This is not a complete CircleCI config, but a snippet appended by our script; note that the job is indented by two spaces so it nests correctly under the jobs: key of the generated file.
# .circleci/templates/fastify_pipeline.yml
  build-fastify:
    docker:
      - image: cimg/node:18.17
    working_directory: ~/project/services/fastify-api-gateway
    steps:
      - checkout:
          path: ~/project
      - restore_cache:
          keys:
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Fall back to the latest cache if no exact match is found
            - v1-npm-deps-
      - run:
          name: "Install Dependencies"
          command: npm install
      - save_cache:
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - node_modules
      - run:
          name: "Run ESLint"
          command: npm run lint
      - run:
          name: "Run Unit Tests"
          command: npm test
      - setup_remote_docker:
          version: 20.10.18
      - run:
          name: "Build and Push Docker Image"
          command: |
            # Tag with the commit SHA; CIRCLE_SHA1 is a built-in variable that
            # is identical in every job of the pipeline, so the deploy job can
            # reconstruct the exact same tag.
            TAG="v1.${CIRCLE_SHA1:0:7}"
            # Docker Hub username stored as an environment variable
            docker build -t $DOCKER_USERNAME/fastify-api-gateway:$TAG .
            echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
            docker push $DOCKER_USERNAME/fastify-api-gateway:$TAG
To make this runnable, the Fastify service needs a production-grade structure.
// services/fastify-api-gateway/src/index.js
const fastify = require('fastify')({
logger: {
level: 'info',
transport: {
target: 'pino-pretty',
},
},
});
fastify.get('/health', async (request, reply) => {
// A real health check would query downstream services or databases.
return { status: 'ok', timestamp: new Date().toISOString() };
});
fastify.get('/api/v1/user/:id', async (request, reply) => {
const { id } = request.params;
if (isNaN(parseInt(id, 10))) {
reply.code(400);
return { error: 'Invalid user ID format' };
}
// In a real application, this would call the java-user-service.
// We simulate a successful response for demonstration.
return { id, name: `User ${id}`, fetchedAt: new Date().toISOString() };
});
const start = async () => {
try {
const port = process.env.PORT || 3000;
await fastify.listen({ port, host: '0.0.0.0' });
} catch (err) {
fastify.log.error(err);
process.exit(1);
}
};
start();
The corresponding unit test ensures our logic is sound.
// services/fastify-api-gateway/test/health.test.js
const { test } = require('tap');
const fastify = require('fastify')();
// Manually register the route for isolated testing
fastify.get('/health', async () => ({ status: 'ok' }));
test('GET /health route', async (t) => {
const response = await fastify.inject({
method: 'GET',
url: '/health',
});
t.equal(response.statusCode, 200, 'returns a status code of 200');
t.type(JSON.parse(response.payload).status, 'string', 'status is a string');
t.equal(JSON.parse(response.payload).status, 'ok', 'status is ok');
});
The ESLint configuration enforces code style and prevents common errors.
// services/fastify-api-gateway/.eslintrc.js
module.exports = {
env: {
commonjs: true,
es2021: true,
node: true,
},
extends: 'eslint:recommended',
parserOptions: {
ecmaVersion: 12,
},
rules: {
'no-console': 'warn',
'indent': ['error', 2],
'quotes': ['error', 'single'],
'semi': ['error', 'always'],
},
};
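The build job also assumes a Dockerfile at the service root; its contents aren’t shown in the original layout, but a minimal sketch could be:
# services/fastify-api-gateway/Dockerfile (assumed contents)
FROM node:18-alpine
WORKDIR /app
# Install production dependencies first so the layer stays cacheable
COPY package*.json ./
RUN npm ci --omit=dev
COPY src ./src
EXPOSE 3000
CMD ["node", "src/index.js"]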
The JPA/Hibernate Service Pipeline Fragment
The user service is a standard Java Spring Boot application using JPA/Hibernate for persistence. Its pipeline fragment is similar but uses Maven for building and testing.
# .circleci/templates/java_pipeline.yml
  build-java-user-service:
    docker:
      - image: cimg/openjdk:17.0
    working_directory: ~/project/services/java-user-service
    steps:
      - checkout:
          path: ~/project
      - restore_cache:
          keys:
            - v1-maven-deps-{{ checksum "pom.xml" }}
            - v1-maven-deps-
      - run:
          name: "Download Maven Dependencies"
          command: mvn dependency:go-offline
      - save_cache:
          key: v1-maven-deps-{{ checksum "pom.xml" }}
          paths:
            - ~/.m2
      - run:
          name: "Run Tests with Maven"
          command: mvn clean verify
      - setup_remote_docker:
          version: 20.10.18
      - run:
          name: "Build and Push Docker Image"
          command: |
            # Same commit-based tagging scheme as the gateway build, so the
            # deploy job can derive the tag without cross-job state.
            TAG="v1.${CIRCLE_SHA1:0:7}"
            docker build -t $DOCKER_USERNAME/java-user-service:$TAG .
            echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
            docker push $DOCKER_USERNAME/java-user-service:$TAG
The Java code includes a standard JPA entity and repository.
// services/java-user-service/src/main/java/com/example/users/User.java
package com.example.users;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.GeneratedValue;
@Entity
public class User {
@Id
@GeneratedValue
private Long id;
private String name;
private String email;
// Getters and setters omitted for brevity
}
The repository test uses Spring Boot’s @DataJpaTest, which provides an in-memory H2 database, avoiding the need for a live database during CI.
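The test calls userRepository.findByEmail, so it assumes a Spring Data repository along these lines (not shown in the original; the derived query needs no hand-written implementation):
// services/java-user-service/src/main/java/com/example/users/UserRepository.java (assumed)
package com.example.users;
import org.springframework.data.jpa.repository.JpaRepository;
public interface UserRepository extends JpaRepository<User, Long> {
    // Spring Data derives the query from the method name
    User findByEmail(String email);
}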
// services/java-user-service/src/test/java/com/example/users/UserRepositoryTest.java
package com.example.users;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.autoconfigure.orm.jpa.TestEntityManager;
import static org.assertj.core.api.Assertions.assertThat;
@DataJpaTest
public class UserRepositoryTest {
@Autowired
private TestEntityManager entityManager;
@Autowired
private UserRepository userRepository;
@Test
public void whenFindByEmail_thenReturnUser() {
// given
User testUser = new User();
testUser.setName("Test User");
testUser.setEmail("test@example.com");
entityManager.persist(testUser);
entityManager.flush();
// when
User found = userRepository.findByEmail(testUser.getEmail());
// then
assertThat(found.getName()).isEqualTo(testUser.getName());
}
}
The Kubernetes Deployment Fragment
Finally, the deployment job ties everything together. It uses kubectl to apply manifests and update the image tags for the services that were rebuilt.
# .circleci/templates/deploy_pipeline.yml
  deploy-to-k8s:
    docker:
      - image: cimg/base:2023.08 # kubectl is installed in a step below
    steps:
      - checkout
      - run:
          name: "Install kubectl"
          command: |
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
      - run:
          name: "Configure kubectl"
          command: |
            # Decode the kubeconfig stored in a CircleCI environment variable.
            # Exported variables do not survive between steps, so persist
            # KUBECONFIG through BASH_ENV, which is sourced before every step.
            echo $KUBECONFIG_DATA | base64 -d > kubeconfig
            echo "export KUBECONFIG=$PWD/kubeconfig" >> $BASH_ENV
            kubectl version --client
      - run:
          name: "Deploy Services to Kubernetes"
          command: |
            # Apply the manifests first so changes made directly to them land,
            # then roll the rebuilt services onto their new image tags.
            kubectl apply -f k8s-manifests/ -n my-namespace
            # The tag scheme matches the build jobs; GATEWAY_REBUILT and
            # USER_SVC_REBUILT are injected by generate_config.sh.
            TAG="v1.${CIRCLE_SHA1:0:7}"
            if [ "$GATEWAY_REBUILT" = "true" ]; then
              echo "Updating API Gateway deployment with tag: $TAG"
              kubectl set image deployment/api-gateway-deployment api-gateway=$DOCKER_USERNAME/fastify-api-gateway:$TAG -n my-namespace
            fi
            if [ "$USER_SVC_REBUILT" = "true" ]; then
              echo "Updating User Service deployment with tag: $TAG"
              kubectl set image deployment/user-service-deployment user-service=$DOCKER_USERNAME/java-user-service:$TAG -n my-namespace
            fi
This ensures that only the relevant deployments are updated with new image tags, while still applying any changes made directly to the manifest files.
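For reference, the manifests themselves are ordinary Deployment specs; a trimmed sketch of k8s-manifests/api-gateway-deployment.yaml (assumed, since the original doesn’t show it, with a placeholder registry and tag) could look like this, with the image tag later bumped by kubectl set image:
# k8s-manifests/api-gateway-deployment.yaml (trimmed sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway-deployment
  namespace: my-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
        - name: api-gateway   # container name targeted by kubectl set image
          image: registry.example.com/fastify-api-gateway:v1.0   # placeholder tag
          ports:
            - containerPort: 3000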
The resulting workflow is now intelligent and context-aware.
graph TD
    A[Git Push] --> B{CircleCI Trigger};
    B --> C[Run generate_config.sh];
    C --> D{Analyze git diff};
    D --> E1[Changes in Fastify?];
    D --> E2[Changes in Java?];
    D --> E3[Changes in K8s?];
    E1 -- Yes --> F1[Append fastify_pipeline.yml];
    E2 -- Yes --> F2[Append java_pipeline.yml];
    E1 -- Yes --> G{Set Deploy Flag};
    E2 -- Yes --> G;
    E3 -- Yes --> G;
    G --> H[Append deploy_pipeline.yml];
    subgraph "Dynamic Pipeline Execution"
        F1 --> I1(Build Fastify Image);
        F2 --> I2(Build Java Image);
        I1 --> J(Approval);
        I2 --> J;
        H --> J;
        J --> K(Deploy to K8s);
    end
The average pipeline time for documentation or minor frontend changes dropped from over 20 minutes to under 3. Our CI costs were cut by more than half, but more importantly, the developer feedback loop was restored to a reasonable state.
This approach is not without its own complexities. The git diff logic assumes the main ref is present and current in the build’s clone, which long-running feature branches make harder to guarantee; and although the three-dot main...HEAD syntax already diffs against the merge-base of the branch, it yields an empty diff for commits pushed directly to main, where comparing against the revision of the last successful pipeline would be more appropriate. As more services are added, the shell script for configuration generation, while effective, risks becoming a new source of technical debt. Migrating this logic to a more structured language like Python or Go could be a necessary future step. Furthermore, this system does not yet address inter-service dependencies within the monorepo; a change to a shared library requires a more sophisticated dependency graph analysis to trigger rebuilds of all consumers, a problem better solved by dedicated build systems like Bazel or Nx.