Our internal security mandate was uncompromising: all service-to-service communication must use mutual TLS (mTLS), with no exceptions. This clashed directly with our goal of rapid, automated deployments for even minor internal tools. The project in question was a seemingly trivial CV generation service—a simple Node.js application with a React front-end to render team member profiles into PDFs. The technical pain point wasn’t the application itself, but the operational overhead of managing certificate lifecycles in a CI/CD environment. Manually issuing, distributing, and rotating certificates for each deployment was a non-starter, as it would cripple developer velocity and introduce significant risk of human error. The core problem became clear: how do we automate the entire lifecycle of short-lived mTLS certificates as an integral part of the application deployment process itself?
Our initial concept was to make the infrastructure automation tool, Chef, responsible for more than just package installation and service configuration. It needed to become the orchestrator of identity. The plan was to leverage HashiCorp Vault’s PKI secrets engine as a dynamic Certificate Authority (CA). Chef, on each convergence run, would authenticate the node to Vault, request a new, short-lived certificate for the CV generation service, place the credentials on the filesystem with strict permissions, and then start the application. This approach would tightly couple the service’s identity to its deployment. For the application’s build process, we chose esbuild. Our previous projects were bogged down by slow Webpack builds in the CI pipeline, and esbuild’s performance promised to keep our deployment cycles fast, ensuring that the security overhead didn’t translate into developer friction.
The technology selection was driven by pragmatism. Chef was our established configuration management tool, and its ecosystem has mature integrations for Vault. Using Vault as a CA was the logical choice for dynamic secret management, avoiding the complexities of building a custom PKI solution. The decision for mTLS was policy-driven, a cornerstone of our move toward a zero-trust network model where identity is proven, not assumed. The CV generator was simply a useful, low-risk testbed for this pattern. It had two distinct components—a web frontend server and a PDF generation API—making it a perfect candidate for demonstrating secure inter-service communication.
Vault PKI Backend Initialization
Before Chef can do anything, Vault must be configured to act as our internal Certificate Authority. This is a one-time setup performed by a Vault administrator. In a real-world project, this process itself would be automated with Terraform, but for clarity, here are the raw Vault CLI commands.
First, we enable the PKI secrets engine at a dedicated path, pki_internal. We’ll tune the backend’s maximum lease TTL to 87600 hours (10 years) to accommodate the root certificate.
# Enable the PKI secrets engine at a dedicated path
$ vault secrets enable -path=pki_internal pki

# Set the max lease TTL for certificates generated by this backend
# This is for the CA itself, not the leaf certificates
$ vault secrets tune -max-lease-ttl=87600h pki_internal
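A quick read-back confirms the mount exists and that the tuned TTL took effect. This is just a manual sanity check, not part of the automation:

# Confirm the mount and its tuned maximum lease TTL
$ vault secrets list -detailed | grep pki_internal
$ vault read sys/mounts/pki_internal/tune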
Next, we generate the root certificate. The Common Name (CN) is critical for trust. We save the CA certificate to a file so clients can use it to verify server certificates.
# Generate the root certificate and save the CA cert to a file
$ vault write -field=certificate pki_internal/root/generate/internal \
    common_name="internal.my-company.com" \
    ttl=87600h > internal_ca.crt
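Before going further, it’s worth inspecting the root we just wrote to disk; a quick look with openssl (assuming it is available on the admin workstation) confirms the CN and the ten-year validity window:

# Inspect the subject, issuer, and validity of the new root CA
$ openssl x509 -in internal_ca.crt -noout -subject -issuer -dates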
Now, we configure the URLs for the Certificate Revocation List (CRL) and the issuing certificate endpoints. These are embedded in the certificates Vault generates, allowing clients to validate the entire chain.
# Configure the CRL and issuing certificate endpoints
$ vault write pki_internal/config/urls \
    issuing_certificates="$VAULT_ADDR/v1/pki_internal/ca" \
    crl_distribution_points="$VAULT_ADDR/v1/pki_internal/crl"
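Reading the configuration back shows the exact URLs that will be embedded in every certificate this backend issues:

# Verify the issuing-certificate and CRL endpoints
$ vault read pki_internal/config/urls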
The most critical part is creating a role. A role in Vault’s PKI engine defines the parameters for certificate creation. We’ll create a role named cv-generator-service that our Chef clients will use. A common mistake here is being too permissive. We lock it down tightly: only allow CNs under cv-generator.service.my-company.com, enforce a short TTL of 72 hours, and disallow the bare domain itself.
# Create a role that defines certificate parameters
$ vault write pki_internal/roles/cv-generator-service \
    allowed_domains="cv-generator.service.my-company.com" \
    allow_subdomains=true \
    allow_bare_domains=false \
    max_ttl="72h" \
    require_cn=true \
    generate_lease=true
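Before handing this to Chef, a manual test issuance is a cheap way to confirm the role’s constraints behave as intended. A sketch, run with an admin token (or any token allowed to write to the issue endpoint):

# Issue a throwaway certificate to exercise the role's constraints
$ vault write pki_internal/issue/cv-generator-service \
    common_name="web.cv-generator.service.my-company.com" \
    ttl="24h"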
Finally, to allow Chef to authenticate, we’ll use the AppRole auth method. This is more secure than distributing long-lived static tokens to every node.
# Enable AppRole auth method
$ vault auth enable approle

# Create a policy that grants access to the PKI role
$ vault policy write chef-pki-policy - <<EOF
path "pki_internal/issue/cv-generator-service" {
  capabilities = ["create", "update"]
}
EOF

# Create an AppRole for Chef nodes
$ vault write auth/approle/role/chef-nodes \
    secret_id_ttl="10m" \
    token_num_uses="10" \
    token_ttl="20m" \
    token_max_ttl="30m" \
    policies="chef-pki-policy"
# Retrieve the RoleID (public)
$ vault read auth/approle/role/chef-nodes/role-id
# Generate a SecretID (private, to be delivered securely to Chef)
$ vault write -f auth/approle/role/chef-nodes/secret-id
The RoleID and SecretID are the credentials our Chef nodes will use. The RoleID can be stored directly in the Chef node attributes, but the SecretID must be handled securely, typically injected at bootstrap time.
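Under the hood, the login Chef performs with these credentials is simply an exchange of RoleID and SecretID for a short-lived token. Response wrapping is one way to make SecretID delivery safer: the node receives a single-use wrapping token rather than the SecretID itself. A sketch of both, with placeholder values:

# What the node does at converge time: trade RoleID + SecretID for a token
$ vault write auth/approle/login \
    role_id="<role-id>" \
    secret_id="<secret-id>"

# Safer delivery: hand the node a response-wrapped SecretID instead,
# which can be unwrapped exactly once within the wrap TTL
$ vault write -wrap-ttl=10m -f auth/approle/role/chef-nodes/secret-id
$ vault unwrap <wrapping-token>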
The Chef Cookbook Implementation
Our Chef cookbook is where the automation logic resides. It’s broken into several recipes, each with a distinct responsibility.
vault_auth Recipe: Authenticating with Vault
This recipe’s job is to use the node’s RoleID and the securely provided SecretID to obtain a Vault token. We use the chef-vault_approle community cookbook for this. The token is written to a file for other recipes to use. The pitfall here is file permissions; this token is sensitive and must only be readable by the root user.
# cookbooks/cv_generator/recipes/vault_auth.rb

# Ensure the run directory for vault exists
directory '/var/run/vault' do
  owner 'root'
  group 'root'
  mode '0700'
  action :create
end

# Use the chef-vault_approle LWRP to perform AppRole login
# Node attributes would contain:
#   node.default['cv_generator']['vault']['role_id'] = '...'
#   node.default['cv_generator']['vault']['secret_id'] = '...' (from a secure source)
chef_vault_approle node['cv_generator']['vault']['role_id'] do
  secret_id node['cv_generator']['vault']['secret_id']
  path '/var/run/vault/vault-token' # The token will be written here
  # vault_addr and other vault options can be specified here
end
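After a converge, a quick manual check from a root shell on the node confirms the token landed with the expected permissions and is actually usable (this is not part of the cookbook itself):

# The directory is root-only (0700); the token file lives inside it
$ ls -l /var/run/vault/vault-token
# Look up the token to confirm its policies and remaining TTL
$ VAULT_TOKEN=$(cat /var/run/vault/vault-token) vault token lookup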
certificate_manager Recipe: Issuing mTLS Certificates
This is the heart of the solution. This recipe reads the Vault token and uses the vault_pki_cert resource from the hashicorp-vault cookbook to request a certificate. It dynamically sets the Common Name based on a node attribute, allowing the same recipe to be used for both the web and api nodes of our service.
# cookbooks/cv_generator/recipes/certificate_manager.rb

# Define service user and paths
app_user = 'cvgen'
app_group = 'cvgen'
cert_dir = '/etc/cv-generator/ssl'
cert_path = "#{cert_dir}/service.crt"
key_path = "#{cert_dir}/service.key"
chain_path = "#{cert_dir}/ca.pem"

# Create user and group for the application
group app_group

user app_user do
  group app_group
  system true
  shell '/bin/false'
end

# Create the directory for SSL certificates
directory cert_dir do
  owner 'root'
  group app_group
  mode '0750'
  recursive true
  action :create
end

# Request the certificate from Vault
# A common mistake is not specifying the full path to the token file
# or not ensuring the vault_auth recipe runs first.
vault_pki_cert 'cv-generator-cert' do
  # e.g., node.default['cv_generator']['common_name'] = 'web.cv-generator.service.my-company.com'
  common_name node['cv_generator']['common_name']
  role_name 'cv-generator-service'
  pki_mount 'pki_internal'
  token_path '/var/run/vault/vault-token'
  # The cookbook handles writing these files out for us
  certificate_path cert_path
  private_key_path key_path
  chain_path chain_path
  # The notification restarts the service whenever a new cert is issued.
  action :create
  notifies :restart, 'systemd_unit[cv-generator.service]', :delayed
end

# A critical security step: set strict permissions on the private key.
# Root owns it; the application group gets read-only access.
file key_path do
  owner 'root'
  group app_group
  mode '0640'
end
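Once this recipe has converged, the usual openssl checks (run as root, since the key is not world-readable) confirm that the leaf chains to our internal CA, matches its private key, and carries the expected short lifetime. A sketch, assuming the Vault role’s default RSA keys:

# Verify the leaf certificate against the CA chain written by Chef
$ openssl verify -CAfile /etc/cv-generator/ssl/ca.pem /etc/cv-generator/ssl/service.crt
# Confirm the certificate and private key belong together
$ openssl x509 -noout -modulus -in /etc/cv-generator/ssl/service.crt | openssl md5
$ openssl rsa -noout -modulus -in /etc/cv-generator/ssl/service.key | openssl md5
# Check the 72-hour expiry window
$ openssl x509 -noout -enddate -in /etc/cv-generator/ssl/service.crt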
build_deploy Recipe: Building with esbuild and Deploying the App
This recipe handles the application code. It checks out the source from git, installs dependencies, runs the esbuild bundling step, and sets up the systemd service.
# cookbooks/cv_generator/recipes/build_deploy.rb

app_dir = '/opt/cv-generator'
app_user = 'cvgen'
app_group = 'cvgen'

# Ensure the checkout location exists and is owned by the service user,
# so the build step (run as that user) can write its output.
directory app_dir do
  owner app_user
  group app_group
  mode '0755'
  recursive true
end

# Sync application code from a repository
git app_dir do
  repository 'https://git.my-company.com/team/cv-generator.git'
  revision 'main'
  user app_user
  group app_group
  action :sync
  notifies :run, 'execute[npm_install]', :immediately
  notifies :run, 'execute[esbuild_bundle]', :immediately
end

# Install Node.js dependencies. devDependencies (esbuild) are required
# for the build step, so we deliberately avoid --production here.
execute 'npm_install' do
  command 'npm install'
  cwd app_dir
  user 'root' # npm may need root to build certain native modules
  action :nothing # Only run when notified by git checkout
end

# Run the esbuild bundler
execute 'esbuild_bundle' do
  command 'npm run build' # Assumes a build script in package.json
  cwd app_dir
  user app_user
  environment('NODE_ENV' => 'production')
  action :nothing # Only run when notified
  notifies :restart, 'systemd_unit[cv-generator.service]', :delayed
end

# Configure the systemd service
systemd_unit 'cv-generator.service' do
  content({
    Unit: {
      Description: 'CV Generator Service',
      After: 'network.target',
    },
    Service: {
      Type: 'simple',
      User: app_user,
      Group: app_group,
      WorkingDirectory: app_dir,
      ExecStart: '/usr/bin/node server.js',
      Restart: 'on-failure',
      # Allow the unprivileged service user to bind to port 443
      AmbientCapabilities: 'CAP_NET_BIND_SERVICE',
      Environment: [
        'NODE_ENV=production',
        # These attributes must match the paths written by certificate_manager.rb
        "CERT_PATH=#{node['cv_generator']['cert_path']}",
        "KEY_PATH=#{node['cv_generator']['key_path']}",
        "CA_PATH=#{node['cv_generator']['chain_path']}"
      ],
    },
    Install: {
      WantedBy: 'multi-user.target',
    },
  })
  action [:create, :enable, :start]
end
This recipe is idempotent: the npm install and esbuild commands only run when the git repository has changed, keeping subsequent Chef runs fast.
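On the node, standard systemd tooling is enough to confirm that the unit was rendered and that the service picked up its environment after a converge; a few illustrative commands:

# Check service state and recent logs
$ systemctl status cv-generator.service
$ journalctl -u cv-generator.service --since "15 minutes ago"
# Confirm the environment (certificate paths) passed to Node.js
$ systemctl show cv-generator.service -p Environment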
The Application and Build Tooling
esbuild Configuration
The package.json file defines the build script that Chef executes.
// package.json
{
  "name": "cv-generator",
  "version": "1.0.0",
  "scripts": {
    "build": "node build.js"
  },
  "dependencies": {
    "express": "^4.18.2",
    "react": "^18.2.0",
    "react-dom": "^18.2.0"
  },
  "devDependencies": {
    "esbuild": "^0.19.5"
  }
}
The build.js script leverages esbuild’s JavaScript API for more control. In a real-world project, this would be more complex, handling CSS, images, and code splitting.
// build.js
const esbuild = require('esbuild');
const path = require('path');

// A simple esbuild script for bundling the React frontend.
// The key advantage is its speed, which keeps CI/CD pipelines fast.
esbuild.build({
  entryPoints: ['src/client/index.js'],
  bundle: true,
  minify: true,
  sourcemap: true,
  platform: 'browser',
  outfile: 'public/bundle.js',
  logLevel: 'info',
}).catch((err) => {
  console.error('Build failed:', err);
  process.exit(1);
});
Node.js Server with mTLS
The Node.js server code is where the certificates managed by Chef are finally used. We use the built-in https module to create a server that both presents its own certificate and demands one from any client making a connection.
// server.js
const https = require('https');
const fs = require('fs');
const express = require('express');
const path = require('path');

// Environment variables are passed from the systemd unit file
const PORT = 443;
const CERT_PATH = process.env.CERT_PATH;
const KEY_PATH = process.env.KEY_PATH;
const CA_PATH = process.env.CA_PATH;

if (!CERT_PATH || !KEY_PATH || !CA_PATH) {
  console.error('FATAL: Certificate, Key, or CA path not provided in environment.');
  process.exit(1);
}

const app = express();

// Serve the static assets bundled by esbuild
app.use(express.static(path.join(__dirname, 'public')));

// Simple API endpoint for demonstration
app.get('/api/health', (req, res) => {
  // We can inspect the client's certificate
  const clientCert = req.socket.getPeerCertificate();
  if (req.socket.authorized) {
    console.log(`Received authorized request from client with CN: ${clientCert.subject.CN}`);
    res.status(200).json({ status: 'ok', client: clientCert.subject.CN });
  } else {
    console.warn(`Received unauthorized request: ${req.socket.authorizationError}`);
    res.status(401).json({ status: 'error', reason: 'Client certificate not authorized.' });
  }
});

const httpsOptions = {
  key: fs.readFileSync(KEY_PATH),
  cert: fs.readFileSync(CERT_PATH),
  ca: [fs.readFileSync(CA_PATH)], // The CA cert to trust for client certs
  requestCert: true, // Request a certificate from the client
  rejectUnauthorized: true, // Reject connections if the client cert is not signed by our CA
};

const server = https.createServer(httpsOptions, app);

server.listen(PORT, () => {
  console.log(`CV Generator listening on port ${PORT} with mTLS enabled.`);
});

server.on('tlsClientError', (err, tlsSocket) => {
  console.error(`TLS Client Error: ${err.message}`, {
    remoteAddress: tlsSocket.remoteAddress,
    authorizationError: tlsSocket.authorizationError
  });
});
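To exercise the handshake end to end, the caller has to present a certificate issued by the same internal CA. curl is enough to demonstrate this; the client certificate paths below are illustrative and would come from another node’s certificate_manager run or a manual issuance against the same Vault role:

# An authorized call: present a client certificate signed by the internal CA
$ curl --cert client.crt --key client.key \
       --cacert internal_ca.crt \
       https://web.cv-generator.service.my-company.com/api/health
# Without a client certificate, the TLS handshake itself is rejected
# (rejectUnauthorized: true means the request never reaches Express)
$ curl --cacert internal_ca.crt \
       https://web.cv-generator.service.my-company.com/api/health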
The flow is now complete. A developer pushes a code change, and the CI/CD pipeline triggers a chef-client run on the target node. Chef authenticates to Vault, fetches a fresh 72-hour certificate, syncs the code, runs the blazing-fast esbuild bundle, and restarts the Node.js service, which immediately begins serving traffic over mTLS with its new identity. The process is fully automated, secure, and fast.
sequenceDiagram
    participant CICD as CI/CD Pipeline
    participant Chef as Chef Client
    participant Vault as HashiCorp Vault
    participant Node as Node.js Host
    participant App as CV Generator App
    CICD->>+Chef: Trigger chef-client run
    Chef->>Chef: Run recipes: vault_auth
    Chef->>+Vault: Authenticate with AppRole
    Vault-->>-Chef: Grant Vault Token
    Chef->>Chef: Run recipes: certificate_manager
    Chef->>+Vault: Request certificate for CN=web.cv...
    Vault-->>-Chef: Issue short-lived certificate
    Chef->>+Node: Write key, cert, ca.pem to disk
    Node-->>-Chef: Filesystem updated
    Chef->>Chef: Run recipes: build_deploy
    Chef->>Node: Sync git repository
    Chef->>Node: Run 'npm install' & 'npm run build' (esbuild)
    Chef->>+Node: Configure and restart systemd service
    Node->>+App: Start process with new certs
    App-->>-Node: App running
    Node-->>-Chef: Service restarted
    Chef-->>-CICD: Chef run complete
The true success of this pattern is not the CV generator itself, but the elimination of manual security tasks. We’ve shifted certificate management left, making it a declarative part of our infrastructure code. Developers can focus on building features, confident that the deployed application will conform to our security policies automatically.
This solution, however, is not without its own set of considerations. Certificate renewal is currently coupled to a Chef convergence: if a node doesn’t run Chef for more than 72 hours, its certificate will expire. A more advanced implementation might use a dedicated agent like consul-template or a custom daemon to watch certificate expiry and renew from Vault directly, decoupling it from the main deployment workflow. Furthermore, the initial delivery of the Vault SecretID to a new node remains a critical bootstrap problem; in a production environment, this would be handled by a trusted orchestrator or baked into a golden machine image. The esbuild configuration presented here is minimal; a large-scale front-end would require a more robust setup to handle CSS Modules, asset fingerprinting, and dynamic imports, adding complexity that needs to be managed within the build script. Finally, while this pattern works well for a handful of services, managing Vault roles and Chef policies at scale requires its own layer of automation and governance to prevent sprawl.