Our core application, a complex data visualization platform built on Next.js, was grinding to a halt. Not in production, but in development. The Hot Module Replacement (HMR) cycle for our main dashboard, which renders a deeply nested graph structure from our Dgraph backend, was averaging 15 to 25 seconds. A full server restart took nearly two minutes. This wasn’t just an annoyance; it was a direct drain on engineering velocity, turning trivial CSS tweaks into coffee breaks. The culprit was a mature, but byzantine, Webpack configuration accumulated over three years, laden with custom loaders, complex aliasing, and a codegen step tightly coupled to the build process.
A full rewrite was a non-starter. The application’s production stability, built upon a battle-tested Webpack setup, was too valuable to discard. The initial concept was radical yet pragmatic: could we surgically replace the development server with Vite for its near-instantaneous HMR, while retaining the existing Webpack pipeline for production builds? This would create a hybrid build system—a technical debt in its own right, but one that promised to solve our immediate, crippling pain point. Concurrently, we recognized a parallel user-facing problem: our application’s reliance on constant Dgraph queries made it brittle on unstable networks. We decided to tackle this by implementing a robust Service Worker caching layer, a task that would benefit from the faster iteration cycle our new dev environment would provide.
The decision to maintain Webpack for production was rooted in risk management. Our `next.config.js` and associated Webpack chain contained critical, nuanced optimizations for code splitting, asset handling, and environment-specific transformations that had been refined over dozens of production releases. Migrating this complexity to Vite’s Rollup-based production builder would be a painstaking and high-risk project. In a real-world project, minimizing production risk often outweighs the elegance of a single, unified build tool. Vite’s primary value proposition for us was its development speed, stemming from its no-bundle, native ESM approach. We would leverage that for `dev` and accept the overhead of maintaining two configurations for `build`.
Phase 1: Architecting the Hybrid Development Server
The core challenge was to make Vite serve a Next.js application without being the primary framework orchestrator. Next.js expects to control the server. Our solution was to run two servers in parallel during development: Vite’s dev server would handle all our React component and asset requests, while the standard `next dev` server would run in the background to handle API routes, server-side props, and initial page rendering. We then used Vite’s server proxy to route requests accordingly.
First, we set up our `vite.config.ts`. This configuration is critical for replicating the Next.js environment, including React Fast Refresh, path aliases, and environment variable handling.
```ts
// vite.config.ts
import { defineConfig, loadEnv } from 'vite';
import react from '@vitejs/plugin-react';
import path from 'path';

export default defineConfig(({ mode }) => {
  // Mimic Next.js's process.env handling for NEXT_PUBLIC_ variables
  const env = loadEnv(mode, process.cwd(), '');
  const processEnv: Record<string, string> = {};
  for (const key in env) {
    if (key.startsWith('NEXT_PUBLIC_')) {
      processEnv[`process.env.${key}`] = JSON.stringify(env[key]);
    }
  }

  return {
    plugins: [
      // Enables HMR and React-specific transformations
      react(),
    ],
    // Define a root outside the .next directory to avoid conflicts
    root: 'src',
    // The public directory lives at the project root, one level above `src`
    publicDir: '../public',
    server: {
      port: 3000,
      // The magic happens here: proxying to the real Next.js server
      proxy: {
        // Proxy API routes
        '/api': {
          target: 'http://localhost:3001',
          changeOrigin: true,
        },
        // Proxy Next.js specific internals and SSR pages
        '/_next': {
          target: 'http://localhost:3001',
          changeOrigin: true,
        },
        // Proxy any page route that doesn't resolve to a static asset.
        // This regex matches paths without a file extension.
        '^/[^.]*$': {
          target: 'http://localhost:3001',
          changeOrigin: true,
          configure: (proxy, options) => {
            proxy.on('proxyReq', (proxyReq, req, res) => {
              // Advertise an HTML Accept header so the Next.js router
              // serves the full document for the initial page load
              proxyReq.setHeader(
                'Accept',
                'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
              );
            });
          },
        },
      },
    },
    resolve: {
      // Replicate tsconfig.json paths for module resolution
      alias: {
        '@': path.resolve(__dirname, './src'),
      },
    },
    // This is crucial for Vite to expose env vars in the same way Next.js does
    define: processEnv,
    build: {
      // We explicitly DO NOT use Vite's build for production;
      // this outDir is just a placeholder.
      outDir: '../.next-vite-build',
      // Ensure Vite doesn't try to clear an existing output directory
      emptyOutDir: false,
    },
  };
});
```
To manage this dual-server setup, we created a custom startup script using `concurrently`.
```json
// package.json (scripts section)
"scripts": {
  "dev:next": "next dev -p 3001",
  "dev:vite": "vite",
  "dev": "concurrently \"npm:dev:next\" \"npm:dev:vite\""
}
```
This setup means a developer runs `npm run dev`. Vite starts on port 3000, and Next.js starts on port 3001. When the browser hits `localhost:3000`, Vite serves the initial `index.html` (which we need to create inside `/src`). Vite then intercepts requests for JavaScript modules (`.tsx`, `.ts`) and provides them on the fly. Any request it can’t handle, like `/api/graphql` or an SSR page request, is proxied to the Next.js server on port 3001.
Here’s a diagram of the development request flow:
```mermaid
sequenceDiagram
    participant Browser
    participant Vite as Vite Dev Server (Port 3000)
    participant Next as Next.js Server (Port 3001)
    Browser->>Vite: GET /dashboard
    Note right of Vite: Initial request is a page route
    Vite->>Next: Proxy GET /dashboard (SSR)
    Next-->>Vite: Returns HTML shell
    Vite-->>Browser: Returns HTML shell
    Browser->>Vite: GET /@vite/client
    Vite-->>Browser: Vite client script for HMR
    Browser->>Vite: GET /components/Chart.tsx
    Vite-->>Browser: Transpiled Chart.tsx module
    Browser->>Vite: GET /api/graphql
    Note right of Vite: Path matches proxy rule
    Vite->>Next: Proxy GET /api/graphql
    Next-->>Vite: GraphQL JSON response
    Vite-->>Browser: GraphQL JSON response
```
A common mistake here is mishandling Next.js-specific features like `next/image` or `next/router`. The proxy setup is coarse and can fail on complex internal routing. The solution was to create mock implementations for development under Vite. For example, we created a `src/lib/next-mocks.ts` file that provided a basic implementation of `next/image` that simply rendered a standard `<img>` tag, and used `vite.config.ts` aliasing to swap it in during dev. This is a trade-off: we lose some framework-specific features in the Vite environment, but gain immense speed.
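As an illustration, a dev-only stand-in for `next/router` might look like the following sketch. The exported surface, the fallback behavior, and the alias shown in the comment are assumptions for illustration, not our actual mock file:

```typescript
// src/lib/next-mocks.ts (hypothetical sketch)
// A dev-only stand-in for next/router's useRouter, backed by the browser's
// History API. It covers only the small surface our components touch.
// In vite.config.ts it would be aliased in for development, e.g.:
//   resolve: { alias: { 'next/router': path.resolve(__dirname, './src/lib/next-mocks.ts') } }
type MockRouter = {
  pathname: string;
  query: Record<string, string>;
  push: (url: string) => Promise<boolean>;
};

export function useRouter(): MockRouter {
  // Fall back to a neutral location when no window exists (e.g. in tests)
  const loc =
    typeof window !== 'undefined'
      ? window.location
      : { pathname: '/', search: '' };
  // Flatten the query string into a plain object
  const query: Record<string, string> = {};
  new URLSearchParams(loc.search).forEach((value, key) => {
    query[key] = value;
  });
  return {
    pathname: loc.pathname,
    query,
    push: async (url) => {
      if (typeof window !== 'undefined') {
        window.history.pushState({}, '', url);
      }
      return true;
    },
  };
}
```

Because components import `next/router` by name, the alias swap is invisible to application code: nothing changes between the Vite and Webpack environments except which module resolves.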
Phase 2: Decoupling Dgraph Codegen
Our build process was entangled with a GraphQL codegen step that generated TypeScript types from our Dgraph schema. This was previously run via a Webpack plugin, contributing to the slow startup time. In the new world, this needed to be an independent, file-system-aware process.
We used `@graphql-codegen/cli` and `chokidar` to create a standalone watcher.
```ts
// scripts/codegen-watcher.ts
import { exec } from 'child_process';
import chokidar from 'chokidar';
import path from 'path';

const SCHEMA_PATH = path.resolve(__dirname, '../src/graphql/schema.graphql');
const CODEGEN_CONFIG_PATH = path.resolve(__dirname, '../codegen.yml');

// A simple debounce to prevent rapid-fire executions
let timeoutId: NodeJS.Timeout | null = null;
const DEBOUNCE_MS = 500;

const runCodegen = () => {
  console.log('Schema change detected. Running GraphQL codegen...');
  const command = `graphql-codegen --config ${CODEGEN_CONFIG_PATH}`;
  const child = exec(command);
  child.stdout?.on('data', (data) => {
    process.stdout.write(data);
  });
  child.stderr?.on('data', (data) => {
    process.stderr.write(`CODEGEN_ERROR: ${data}`);
  });
  child.on('close', (code) => {
    if (code === 0) {
      console.log('GraphQL codegen completed successfully.');
    } else {
      console.error(`GraphQL codegen exited with code ${code}.`);
    }
  });
};

const watcher = chokidar.watch(SCHEMA_PATH, {
  persistent: true,
  ignoreInitial: true,
});

console.log(`Watching for changes in ${SCHEMA_PATH}...`);

watcher.on('all', (event, filePath) => {
  console.log(`Event '${event}' detected on path '${filePath}'`);
  if (timeoutId) {
    clearTimeout(timeoutId);
  }
  timeoutId = setTimeout(runCodegen, DEBOUNCE_MS);
});

runCodegen(); // Run once on startup
```
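For reference, the `codegen.yml` the watcher points at might look something like the fragment below; the plugin list and output path are illustrative assumptions, not our exact configuration:

```yaml
# codegen.yml (illustrative sketch)
schema: src/graphql/schema.graphql
documents: 'src/**/*.graphql'
generates:
  src/generated/graphql.ts:
    plugins:
      - typescript
      - typescript-operations
```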
We added this watcher to our `package.json` dev scripts:
```json
"scripts": {
  "dev:next": "next dev -p 3001",
  "dev:vite": "vite",
  "dev:codegen": "ts-node-dev --respawn scripts/codegen-watcher.ts",
  "dev": "concurrently \"npm:dev:next\" \"npm:dev:vite\" \"npm:dev:codegen\""
}
```
Now, whenever a developer modifies the `schema.graphql` file, the watcher automatically regenerates the TypeScript types. Vite’s file watcher picks up the changes in the generated type files and triggers a near-instant HMR update in the browser. This decoupling was a massive win for both performance and architectural cleanliness.
Phase 3: A Resilient Service Worker Caching Layer
With development velocity restored, we turned to the user-facing performance issue. We decided to use a Service Worker to implement a `StaleWhileRevalidate` caching strategy for our Dgraph API endpoint. This means the app first serves data from the cache for instant UI rendering, then sends a request to the network in the background to update the cache with fresh data.
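Stripped of Service Worker plumbing, the strategy itself is small enough to sketch framework-free. In this hypothetical sketch, a `Map` stands in for the Cache API and `fetcher` stands in for the network request:

```typescript
// Framework-free sketch of stale-while-revalidate (illustrative only):
// answer from the cache immediately when possible, and refresh the cache
// in the background with the network result.
type Fetcher<T> = () => Promise<T>;

export async function staleWhileRevalidate<T>(
  cache: Map<string, T>,
  key: string,
  fetcher: Fetcher<T>
): Promise<{ value: T; fromCache: boolean; revalidated: Promise<void> }> {
  const cached = cache.get(key);
  // Always kick off the network request; it refreshes the cache when it lands
  const revalidated = fetcher().then((fresh) => {
    cache.set(key, fresh);
  });
  if (cached !== undefined) {
    // Cache hit: answer instantly, let revalidation finish in the background
    return { value: cached, fromCache: true, revalidated };
  }
  // Cache miss: we have to wait for the network this one time
  await revalidated;
  return { value: cache.get(key) as T, fromCache: false, revalidated };
}
```

The trade-off is visible in the return value: a cache hit is fast but potentially stale, which is exactly why the UI must tolerate data being replaced a moment later.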
Since our production build still used Webpack, we could leverage `workbox-webpack-plugin`.
```js
// next.config.js
const { GenerateSW } = require('workbox-webpack-plugin');

module.exports = {
  // ... other next config
  webpack: (config, { isServer, dev }) => {
    if (!isServer && !dev) {
      config.plugins.push(
        new GenerateSW({
          clientsClaim: true,
          skipWaiting: true,
          // Emit sw.js into public/ so it is served from the site root
          // (swDest is resolved relative to the webpack output directory)
          swDest: '../public/sw.js',
          // Define runtime caching rules
          runtimeCaching: [
            {
              // Match our Dgraph GraphQL endpoint
              urlPattern: ({ url }) => url.pathname === '/api/graphql',
              // Use StaleWhileRevalidate for fast responses
              handler: 'StaleWhileRevalidate',
              options: {
                // Use a custom cache name
                cacheName: 'dgraph-api-cache',
                // Only cache successful (or opaque) responses
                cacheableResponse: {
                  statuses: [0, 200],
                },
                // Expire cached entries after 1 day
                expiration: {
                  maxAgeSeconds: 60 * 60 * 24,
                },
              },
            },
          ],
        })
      );
    }
    return config;
  },
};
```
Next, we needed to register the service worker in our application’s entry point.
```tsx
// src/pages/_app.tsx
import { useEffect } from 'react';
import type { AppProps } from 'next/app';

function MyApp({ Component, pageProps }: AppProps) {
  useEffect(() => {
    if ('serviceWorker' in navigator && process.env.NODE_ENV === 'production') {
      window.addEventListener('load', function () {
        navigator.serviceWorker.register('/sw.js').then(
          function (registration) {
            console.log('Service Worker registration successful with scope: ', registration.scope);
          },
          function (err) {
            console.error('Service Worker registration failed: ', err);
          }
        );
      });
    }
  }, []);

  return <Component {...pageProps} />;
}

export default MyApp;
```
This setup works for production, but what about our Vite dev environment? We don’t run Webpack there, so the plugin is useless. For development, we needed a way to test the service worker logic. The solution was to write a manual service worker and serve it using a custom Vite plugin.
```js
// public/dev-sw.js
// A simplified version of the Workbox logic for development testing
const CACHE_NAME = 'dgraph-api-cache-dev';
const API_URL = '/api/graphql';

self.addEventListener('fetch', (event) => {
  // Note: the Cache API can only store GET requests, so we skip other methods
  if (event.request.method === 'GET' && event.request.url.includes(API_URL)) {
    event.respondWith(
      caches.open(CACHE_NAME).then((cache) =>
        cache.match(event.request).then((cachedResponse) => {
          const fetchPromise = fetch(event.request)
            .then((networkResponse) => {
              // Important: clone the response, as a body can be consumed only once
              if (networkResponse.ok) {
                cache.put(event.request, networkResponse.clone());
              }
              return networkResponse;
            })
            // Swallow network errors (e.g. offline) when a cached copy exists
            .catch(() => cachedResponse);
          // Return the cached response immediately, revalidating in the background
          return cachedResponse || fetchPromise;
        })
      )
    );
  }
});
```
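The custom Vite plugin that serves this worker might look roughly like the sketch below. Since our `publicDir` already exposes `public/` at the site root, the plugin’s main job in this sketch is attaching the `Service-Worker-Allowed` header; the file name and loose `any` typings are assumptions made so the sketch stands alone without Vite’s type definitions:

```typescript
// vite-plugin-dev-sw.ts (hypothetical sketch)
import fs from 'fs';
import path from 'path';

// Serves public/dev-sw.js with the headers a root-scoped Service Worker needs.
export function devServiceWorkerPlugin() {
  return {
    name: 'dev-service-worker',
    configureServer(server: any) {
      server.middlewares.use((req: any, res: any, next: () => void) => {
        if (req.url === '/dev-sw.js') {
          res.setHeader('Content-Type', 'application/javascript');
          // Let the worker control the whole origin, not just its own path
          res.setHeader('Service-Worker-Allowed', '/');
          res.end(fs.readFileSync(path.resolve('public/dev-sw.js')));
          return;
        }
        // Anything else falls through to Vite's normal handling
        next();
      });
    },
  };
}
```

The plugin would then be added to the `plugins` array in `vite.config.ts` alongside `react()`.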
We then updated `_app.tsx` to register this development-specific worker.
```tsx
// src/pages/_app.tsx (updated useEffect)
useEffect(() => {
  const swUrl = process.env.NODE_ENV === 'production' ? '/sw.js' : '/dev-sw.js';
  if ('serviceWorker' in navigator) {
    window.addEventListener('load', function () {
      navigator.serviceWorker.register(swUrl).then(
        // ... success/error handling
      );
    });
  }
}, []);
```
Phase 4: Testing the Hybrid Architecture
Testing this setup required a multi-pronged approach.
- Unit Testing Service Worker Logic: We used `jest-service-worker-mock` to test the caching logic in isolation, ensuring our fetch handlers behaved correctly under various network conditions.
```js
// src/sw.test.js
import 'jest-service-worker-mock';
import { StaleWhileRevalidate } from 'workbox-strategies';

// This is more conceptual, as testing the webpack-plugin-generated code is tricky.
// A better approach is testing the caching strategy class itself.
describe('StaleWhileRevalidate Strategy', () => {
  it('should return cached response and revalidate', async () => {
    const request = new Request('/api/graphql');
    const cache = await caches.open('dgraph-api-cache');
    const cachedResponse = new Response(JSON.stringify({ data: 'cached' }), {
      headers: { 'Content-Type': 'application/json' },
    });
    await cache.put(request, cachedResponse.clone());

    // Mock the network response
    const networkResponse = new Response(JSON.stringify({ data: 'fresh' }), {
      headers: { 'Content-Type': 'application/json' },
    });
    global.fetch = jest.fn().mockResolvedValue(networkResponse.clone());

    const strategy = new StaleWhileRevalidate({ cacheName: 'dgraph-api-cache' });
    const handler = strategy.handle({ request });

    // The handler should resolve with the cached response first
    const initialResponse = await handler;
    const initialData = await initialResponse.json();
    expect(initialData.data).toBe('cached');

    // Wait for the background fetch/revalidation to complete
    await new Promise((resolve) => setTimeout(resolve, 0));

    // Verify the cache has been updated
    const updatedCachedResponse = await cache.match(request);
    const updatedData = await updatedCachedResponse.json();
    expect(updatedData.data).toBe('fresh');
  });
});
```
- Integration Testing with Playwright: We wrote end-to-end tests to verify the development proxy. A test would navigate to a page, assert that component assets were served by Vite (checking response headers), and then make a fetch to an API route, asserting that the response came from the Next.js server. For the Service Worker, we used Playwright’s network interception APIs to simulate being offline and verify that the UI correctly rendered stale data from the cache.
The final result was a dramatic improvement. HMR updates that once took 20 seconds now completed in under 800ms. The development team was unshackled. In production, the service worker caching significantly improved perceived performance, especially for users on mobile or unstable connections. The dashboard now loads instantly with cached data, refreshing moments later with live data from Dgraph.
This hybrid architecture is not without its own technical debt. We now have two build configurations to maintain, and any new dependency or build-time feature must be evaluated for compatibility with both Vite and Webpack. The mock layer for Next.js features in the Vite environment is a point of potential divergence and requires discipline to maintain. The long-term goal is to eventually migrate the production build to Vite once its ecosystem matures and we can confidently replicate our existing optimizations. Furthermore, our current Service Worker cache invalidation strategy is naive; it doesn’t intelligently handle GraphQL mutations. A future iteration will need to implement a more sophisticated system where mutations can post messages to the Service Worker to precisely invalidate or update relevant cached queries, preventing stale data from persisting until the cache expires.