The CI pipeline for our monorepo was grinding to a halt. A significant portion of the build time, often exceeding two minutes, was consumed by a fragile chain of npm scripts dedicated to processing GraphQL assets. This process involved traversing directories, linting individual .graphql schema files, concatenating them using cat, validating the combined blob with another tool, and finally formatting it with a standalone Prettier command. Each step introduced I/O overhead, process startup costs, and a potential point of failure. The developer feedback loop was broken, and build costs were escalating. We needed a solution that was not just faster, but fundamentally more efficient and atomic.
The initial thought was to find a pre-existing tool, but nothing quite fit our specific needs. We required schema validation, concatenation of dozens of fragments into a single service schema, and strict formatting, all within a single, fast operation. This led to the concept of leveraging a modern build tool’s plugin architecture. While Webpack and Rollup have powerful plugin systems, their primary focus on JavaScript module bundling felt like using a sledgehammer to crack a nut. esbuild stood out. Its core is written in Go, enabling exceptional performance through parallelism, making it an ideal candidate for a task that was becoming a critical bottleneck. The decision was made: we would build a custom esbuild plugin to replace the entire GraphQL asset pipeline.
The plugin’s responsibilities would be clear:
- Intercept any imports of .graphql files.
- Read and parse each file’s content into a GraphQL Abstract Syntax Tree (AST).
- Collect all parsed ASTs from the build’s entry points.
- At the end of the build, merge all collected ASTs into a single, valid schema AST.
- Convert the merged AST back into a string (the bundled schema).
- Programmatically format this string using Prettier.
- Write the final, formatted schema to a specified output file.
This approach would consolidate multiple processes into a single, in-memory operation, drastically reducing I/O and overhead. The esbuild plugin API provides the necessary hooks (onResolve, onLoad, onEnd) to orchestrate this entire flow seamlessly.
Phase 1: Project Scaffolding and Dependencies
Before writing the plugin, a minimal project structure is necessary to test and develop it. In a real-world project, this would be part of a larger monorepo package.
Directory Structure:
.
├── dist/                      # Output directory for bundled assets
├── package.json
├── schemas/                   # Source GraphQL schema fragments
│   ├── base.graphql
│   ├── user.graphql
│   └── product.graphql
├── src/                       # Source code for build scripts and entry points
│   ├── build.ts
│   ├── index.ts
│   └── plugins/
│       └── graphql-bundler.ts
└── tsconfig.json
Dependencies:
The package.json defines the core tools for this task. We need esbuild for the build runner, graphql for its powerful AST parsing and printing capabilities, and prettier for programmatic formatting. We’ll use TypeScript for better code organization and safety.
{
  "name": "esbuild-graphql-bundler",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "build": "ts-node src/build.ts"
  },
  "devDependencies": {
    "@types/node": "^18.17.1",
    "esbuild": "^0.19.2",
    "graphql": "^16.8.0",
    "prettier": "^3.0.3",
    "ts-node": "^10.9.1",
    "typescript": "^5.2.2"
  }
}
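The tsconfig.json referenced in the directory layout is not spelled out by the pipeline itself; a minimal sketch that works with ts-node for the build script might look like the following (the exact compiler options are an assumption, not something the plugin depends on).
// tsconfig.json (illustrative; any strict Node-targeting config works)
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "CommonJS",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}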
The entry point for our asset bundling process, src/index.ts, will simply import the GraphQL fragments. esbuild will use this file to discover the dependency graph.
// src/index.ts
// This file serves as the entry point for esbuild to discover our .graphql files.
// It doesn't need to execute; its static import graph is what matters.
import '../schemas/base.graphql';
import '../schemas/user.graphql';
import '../schemas/product.graphql';
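One optional addition, not required by esbuild itself, is an ambient module declaration so the TypeScript compiler and editors do not flag the .graphql imports as unresolvable; the file name src/graphql.d.ts and the exported shape below are hypothetical conveniences.
// src/graphql.d.ts (hypothetical helper)
// Tells TypeScript that importing a .graphql path is legal. The imports in
// index.ts are side-effect only, so the declared export shape is never used;
// esbuild resolves the real files through the plugin's onResolve/onLoad hooks.
declare module '*.graphql' {
  const content: string;
  export default content;
}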
Phase 2: The Core esbuild Plugin Structure
An esbuild plugin is an object with a name and a setup function. The setup function is called once per build and receives the build object, which provides access to hooks for different stages of the build process.
Our plugin, src/plugins/graphql-bundler.ts, starts with this skeleton. We will use a shared array, definitions, to collect the AST nodes from each .graphql file we process.
// src/plugins/graphql-bundler.ts
import type { Plugin } from 'esbuild';
import * as fs from 'fs/promises';
import * as path from 'path';
import { parse, print, DefinitionNode, DocumentNode, Kind } from 'graphql';

// Plugin options to specify where the final bundled schema should be written.
export interface GraphQLBundlerPluginOptions {
  outfile: string;
}

export const graphqlBundlerPlugin = (options: GraphQLBundlerPluginOptions): Plugin => {
  return {
    name: 'graphql-bundler',
    setup(build) {
      // Shared state to collect definitions from all resolved .graphql files.
      const definitions: DefinitionNode[] = [];

      // Hook into the resolution phase for files ending in .graphql
      build.onResolve({ filter: /\.graphql$/ }, (args) => {
        // The business logic for resolving file paths will go here.
        // For now, we just acknowledge we can handle it.
        return {
          path: path.resolve(args.resolveDir, args.path),
          namespace: 'graphql-file', // Use a custom namespace to claim these files.
        };
      });

      // Hook into the loading phase for files in our custom namespace
      build.onLoad({ filter: /.*/, namespace: 'graphql-file' }, async (args) => {
        // The logic for reading, parsing, and validating schemas will go here.
        const contents = await fs.readFile(args.path, 'utf8');
        return {
          contents,
          loader: 'text', // Treat it as raw text for now.
        };
      });

      // Hook into the end of the build process to assemble the final schema.
      build.onEnd(async (result) => {
        // The logic for merging, formatting, and writing the final file will go here.
        if (result.errors.length > 0) {
          console.error('Build failed with errors, skipping GraphQL bundling.');
          return;
        }
        console.log(`Collected ${definitions.length} GraphQL definitions.`);
      });
    },
  };
};
This structure outlines the three key stages of our plugin’s lifecycle: resolving paths, loading and processing content, and finalizing the output.
Phase 3: Deep Dive into Plugin Implementation
Now we flesh out the logic within each hook. The core challenge is managing the state (definitions) correctly across multiple onLoad calls and then processing it in onEnd.
The flow of data inside our plugin can be visualized as follows:
graph TD
  A[esbuild starts] --> B{src/index.ts};
  B -- import './schemas/user.graphql' --> C[onResolve];
  C -- path, namespace --> D[onLoad];
  D -- reads & parses file --> E{GraphQL AST};
  E -- stores definitions --> F[Plugin's shared 'definitions' array];
  B -- import './schemas/product.graphql' --> C2[onResolve];
  C2 -- path, namespace --> D2[onLoad];
  D2 -- reads & parses file --> E2{GraphQL AST};
  E2 -- stores definitions --> F;
  A -- all files processed --> G[onEnd];
  G -- reads from 'definitions' array --> H{Merge ASTs};
  H -- print() --> I[Formatted Schema String];
  I -- Prettier.format() --> J[Final String];
  J -- writeFile() --> K[dist/schema.graphql];
Implementing onResolve and onLoad
The onResolve hook is straightforward. It confirms that our plugin can handle .graphql files and provides an absolute path. Using a custom namespace (graphql-file) is a common esbuild pattern to ensure that the onLoad hook only operates on files that onResolve has explicitly marked.
The onLoad hook is where the heavy lifting happens. For each resolved .graphql file, we must:
- Read its contents from the filesystem.
- Attempt to parse it using graphql.parse().
- Handle errors carefully. A pitfall here is that if graphql.parse() fails, it throws an exception. We must catch this and convert it into esbuild’s error format. This ensures that a single malformed GraphQL file fails the entire build with a clear, actionable error message pointing to the right file and line.
- If parsing is successful, add the parsed definitions to our shared definitions array.
- Crucially, return a contents field. Even though we are producing a side effect (the final bundled file), esbuild expects each loaded module to have content. Returning trivial content with the js loader effectively tells esbuild to treat this import as an empty JavaScript module, preventing it from breaking the build chain.
Here is the enhanced implementation:
// src/plugins/graphql-bundler.ts (continued)
// ... inside setup(build) ...
build.onResolve({ filter: /\.graphql$/ }, (args) => {
  return {
    path: path.resolve(args.resolveDir, args.path),
    namespace: 'graphql-file',
  };
});

build.onLoad({ filter: /.*/, namespace: 'graphql-file' }, async (args) => {
  try {
    const source = await fs.readFile(args.path, 'utf8');
    const document = parse(source);
    // A common mistake is to not validate the AST structure.
    // We only care about top-level definitions.
    document.definitions.forEach((def) => {
      definitions.push(def);
    });
    // This module resolves to empty JS content. The real output is a side effect.
    return {
      contents: '/* GraphQL module processed by graphql-bundler */',
      loader: 'js',
    };
  } catch (e: any) {
    // Convert GraphQL parse errors into esbuild's error format.
    // This provides proper error reporting in the console.
    const errors = [{
      text: e.message,
      location: {
        file: args.path,
        // The graphql library provides location info in its errors.
        line: e.locations?.[0]?.line,
        // GraphQL error columns are 1-based; esbuild expects 0-based columns.
        column: e.locations?.[0]?.column != null ? e.locations[0].column - 1 : undefined,
        lineText: e.source?.body.split(/\r\n|\r|\n/g)[e.locations?.[0]?.line - 1] ?? '',
      },
    }];
    return { errors };
  }
});
Implementing onEnd with Merging and Formatting
The onEnd hook is the final step. It executes only after esbuild has processed all files and encountered no blocking errors in the previous phases.
- Create the Merged Document: We construct a new DocumentNode AST object, using Kind.DOCUMENT and providing our collected definitions. This is the canonical representation of our entire schema.
- Print to String: graphql.print() converts this AST object back into a well-formatted GraphQL schema string.
- Programmatic Prettier: Instead of shelling out to the Prettier CLI, we use its Node.js API. We first try to resolve a Prettier configuration file relative to the project’s root. This ensures our programmatic formatting respects the project’s established code style. If no config is found, we fall back to sensible defaults. A key detail is specifying the parser: 'graphql' so Prettier knows how to handle the syntax.
- Write to Disk: Finally, we write the formatted string to the output file specified in the plugin options. We must ensure the output directory exists before writing.
Here is the complete onEnd implementation, including robust Prettier integration:
// src/plugins/graphql-bundler.ts (final part)
import * as prettier from 'prettier';

// ... inside setup(build) ...
build.onEnd(async (result) => {
  if (result.errors.length > 0) {
    // If there were errors during onLoad (e.g., syntax errors),
    // we should not proceed with bundling.
    return;
  }
  if (definitions.length === 0) {
    // No GraphQL files were found or processed.
    return;
  }

  // 1. Create the final Document AST
  const mergedDocument: DocumentNode = {
    kind: Kind.DOCUMENT,
    definitions,
  };

  // 2. Print the AST to a string
  const printedSchema = print(mergedDocument);

  // 3. Format the string with Prettier
  let formattedSchema = printedSchema;
  try {
    const prettierConfig = await prettier.resolveConfig(process.cwd());
    formattedSchema = await prettier.format(printedSchema, {
      ...prettierConfig,
      parser: 'graphql',
    });
  } catch (e) {
    console.warn('Could not resolve Prettier config. Using default formatting.');
    // Fallback formatting if config resolution fails.
    formattedSchema = await prettier.format(printedSchema, { parser: 'graphql' });
  }

  // 4. Write the final file to disk
  try {
    const outputDir = path.dirname(options.outfile);
    await fs.mkdir(outputDir, { recursive: true });
    await fs.writeFile(options.outfile, formattedSchema);
    console.log(`✅ GraphQL schema successfully bundled to ${options.outfile}`);
  } catch (e) {
    console.error('Failed to write bundled GraphQL schema:', e);
    // Add a fully-formed error message to the result to signal build failure.
    result.errors.push({
      id: '',
      pluginName: 'graphql-bundler',
      text: `Failed to write to ${options.outfile}`,
      location: null,
      notes: [],
      detail: e,
    });
  }
});
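One optional hardening step that is not part of the implementation above: before printing, the merged document can be run through the graphql package's own schema construction and validation, so that an incoherent combination of fragments (a field referencing an undefined type, for example) fails the build instead of being written to disk. A sketch of how this could slot into onEnd between steps 1 and 2, using buildASTSchema and validateSchema from the graphql package already in our dependencies:
// Optional semantic check (sketch). buildASTSchema throws if the SDL itself is
// invalid (e.g. duplicate or unknown types), while validateSchema reports
// problems with the constructed schema. Either way we bail out before writing.
import { buildASTSchema, validateSchema } from 'graphql';

try {
  const schema = buildASTSchema(mergedDocument);
  const schemaErrors = validateSchema(schema);
  if (schemaErrors.length > 0) {
    schemaErrors.forEach((err) => console.error(`GraphQL schema error: ${err.message}`));
    return; // skip writing a broken schema.graphql
  }
} catch (e: any) {
  console.error(`GraphQL schema error: ${e.message}`);
  return;
}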
Phase 4: Tying It All Together
With the plugin fully implemented, we create the build script src/build.ts that configures and runs esbuild.
// src/build.ts
import * as esbuild from 'esbuild';
import * as path from 'path';
import { graphqlBundlerPlugin } from './plugins/graphql-bundler';

async function runBuild() {
  console.time('GraphQL asset build');
  try {
    const result = await esbuild.build({
      entryPoints: [path.resolve(__dirname, 'index.ts')],
      bundle: true,
      platform: 'node',
      outfile: path.resolve(__dirname, '../dist/app.js'), // esbuild needs an outfile
      plugins: [
        graphqlBundlerPlugin({
          // This is our custom option, telling the plugin where to put the schema.
          outfile: path.resolve(__dirname, '../dist/schema.graphql'),
        }),
      ],
      // We log errors ourselves, so disable esbuild's default logging.
      logLevel: 'silent',
    });

    if (result.errors.length > 0 || result.warnings.length > 0) {
      console.error('Build failed.');
      result.errors.forEach((err) => console.error(err));
      result.warnings.forEach((warn) => console.warn(warn));
      process.exit(1);
    }
  } catch (e) {
    console.error('An unexpected error occurred during the build:', e);
    process.exit(1);
  } finally {
    console.timeEnd('GraphQL asset build');
  }
}

runBuild();
Let’s populate our sample schema files:
# schemas/base.graphql
"Specifies the entry points for queries."
type Query {
  _service: String
}

# schemas/user.graphql
type User {
  id: ID!
  username: String!
  email: String
}

extend type Query {
  "Fetches a user by their ID."
  user(id: ID!): User
}

# schemas/product.graphql
type Product {
  id: ID!
  name: String!
  price: Float!
}

extend type Query {
  "Fetches a product by its ID."
  product(id: ID!): Product
}
Running npm run build now executes our entire pipeline. The console output shows success, and the dist/ directory contains our desired artifacts.
$ npm run build
> esbuild-graphql-bundler@1.0.0 build
> ts-node src/build.ts
✅ GraphQL schema successfully bundled to /path/to/project/dist/schema.graphql
GraphQL asset build: 125.432ms
The output file, dist/schema.graphql, is a single, perfectly formatted schema:
"Specifies the entry points for queries."
type Query {
  _service: String
  "Fetches a user by their ID."
  user(id: ID!): User
  "Fetches a product by its ID."
  product(id: ID!): Product
}

type User {
  id: ID!
  username: String!
  email: String
}

type Product {
  id: ID!
  name: String!
  price: Float!
}
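For completeness, the artifact is plain SDL, so a service can consume it directly at startup; a hypothetical consumer using graphql's buildSchema (the path mirrors the outfile option passed to the plugin):
// Hypothetical consumer of the bundled artifact.
import { readFileSync } from 'fs';
import { buildSchema } from 'graphql';

const sdl = readFileSync('dist/schema.graphql', 'utf8');
export const schema = buildSchema(sdl); // GraphQLSchema built from the bundled file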
The result is a build process that completes in milliseconds, down from minutes. This is a transformative improvement for CI performance and local development loops.
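Because the entire pipeline now lives inside one esbuild invocation, local development can also reuse esbuild's context/watch API from the 0.19 release pinned above. The script below is an illustrative sketch, not part of the original setup; note that the plugin's shared definitions array lives in setup(), so a watch-mode variant of the plugin would also need to reset that array at the start of each rebuild (for example in a build.onStart hook) to avoid accumulating duplicates across rebuilds.
// src/watch.ts (hypothetical companion to build.ts)
import * as esbuild from 'esbuild';
import * as path from 'path';
import { graphqlBundlerPlugin } from './plugins/graphql-bundler';

async function runWatch() {
  // context() accepts the same options as build() and returns a handle that
  // rebuilds incrementally whenever a watched source file changes.
  const ctx = await esbuild.context({
    entryPoints: [path.resolve(__dirname, 'index.ts')],
    bundle: true,
    platform: 'node',
    outfile: path.resolve(__dirname, '../dist/app.js'),
    plugins: [
      graphqlBundlerPlugin({
        outfile: path.resolve(__dirname, '../dist/schema.graphql'),
      }),
    ],
    logLevel: 'info',
  });

  await ctx.watch();
  console.log('Watching GraphQL schema fragments for changes...');
}

runWatch();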
The primary limitation of this implementation is its lack of support for source maps. If the graphql.parse function throws an error, our logging points to the correct original file, but a more advanced system might integrate with esbuild’s source map capabilities to allow for even richer debugging. Furthermore, this bundler performs syntactic validation but does not handle more complex semantic validation specific to federated schemas, such as checking for @key directives or resolving entity relationships. That would require a more sophisticated approach, potentially integrating libraries like those from the Apollo Federation toolkit directly within the onEnd hook. For our immediate goal of accelerating the CI pipeline for schema fragment bundling, this focused, high-performance tool proved to be the correct architectural choice.