The front-end performance cliff was steep and unforgiving. We were feeding a real-time analytics dashboard with a high-frequency data stream from an Elixir and Phoenix backend. The Phoenix Channels implementation was flawless, handling tens of thousands of concurrent connections with predictable, low latency. The problem was on the client. For each of the thousands of messages received per second, the browser’s JavaScript engine had to parse the payload, run a series of complex aggregations, and update the UI. The main thread was perpetually blocked, resulting in a frozen interface and an unacceptable user experience.
Our initial attempt using Web Workers to offload the computation provided only marginal relief. The core issue remained: we were performing CPU-bound work in an environment not optimized for it, and the serialization and transfer costs of moving data between the main thread and the worker often negated the benefits of concurrency. A more fundamental shift was necessary: rewrite the entire client-side aggregation logic in a systems-level language and compile it to WebAssembly. Rust was the obvious choice, given its performance, lack of a garbage collector, and first-class WASM tooling. This post-mortem documents the journey of building that pipeline, from the Rust core to the Elixir backend, and, critically, how we used Vitest to ensure the complex TypeScript-to-WASM boundary was robust and reliable.
The Technical Pain Point: A Drowning Main Thread
The data structure pushed from our Phoenix Channel was an array of market tick objects.
[
{"symbol": "BTCUSD", "price": 60010.5, "volume": 0.5, "timestamp": 1677610001},
{"symbol": "ETHUSD", "price": 4005.2, "volume": 12.2, "timestamp": 1677610002},
// ... thousands more entries
]
The client needed to calculate, for each incoming batch, several statistical measures: volume-weighted average price (VWAP), total volume, price volatility (standard deviation), and a count of ticks per symbol, all within a 1-second rolling window. The pure JavaScript implementation looked something like this, and it was the source of our performance bottleneck.
// The OLD, slow, pure JavaScript implementation
function processTicks(ticks: any[]): Record<string, any> {
const bySymbol = ticks.reduce((acc, tick) => {
if (!acc[tick.symbol]) {
acc[tick.symbol] = [];
}
acc[tick.symbol].push(tick);
return acc;
}, {});
const aggregates: Record<string, any> = {};
for (const symbol in bySymbol) {
const symbolTicks = bySymbol[symbol];
let totalPriceVolume = 0;
let totalVolume = 0;
const prices: number[] = [];
symbolTicks.forEach(t => {
totalPriceVolume += t.price * t.volume;
totalVolume += t.volume;
prices.push(t.price);
});
const vwap = totalVolume > 0 ? totalPriceVolume / totalVolume : 0;
const meanPrice = prices.reduce((a, b) => a + b, 0) / prices.length;
const variance = prices.map(p => Math.pow(p - meanPrice, 2)).reduce((a, b) => a + b, 0) / prices.length;
const stdDev = Math.sqrt(variance);
aggregates[symbol] = {
vwap,
totalVolume,
stdDev,
count: symbolTicks.length,
};
}
return aggregates;
}
With payloads of 50,000+ ticks, this function could take hundreds of milliseconds, completely saturating the CPU and freezing any attempt at user interaction.
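To make the expected numbers concrete, here is the same math worked through by hand on a minimal two-tick batch. This is a standalone TypeScript sketch, independent of the service code; the same figures are asserted in the unit tests later on.

```typescript
// Standalone sanity check of the aggregation math on a tiny batch.
// Two BTCUSD ticks: (price 60000, volume 1) and (price 60002, volume 1).
const ticks = [
  { price: 60000, volume: 1 },
  { price: 60002, volume: 1 },
];

const totalVolume = ticks.reduce((sum, t) => sum + t.volume, 0); // 2
const totalPriceVolume = ticks.reduce((s, t) => s + t.price * t.volume, 0); // 120002
const vwap = totalPriceVolume / totalVolume; // 60001

const mean = ticks.reduce((s, t) => s + t.price, 0) / ticks.length; // 60001
// Population variance: mean of squared deviations from the mean price.
const variance =
  ticks.reduce((s, t) => s + (t.price - mean) ** 2, 0) / ticks.length; // 1
const stdDev = Math.sqrt(variance); // 1

console.log({ vwap, totalVolume, stdDev, count: ticks.length });
// { vwap: 60001, totalVolume: 2, stdDev: 1, count: 2 }
```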
The Rust Core: Performance and Safety at the Edge
The core idea was to replace processTicks with a Rust equivalent running in WASM. The Rust implementation needed to be fast and, critically, avoid unnecessary data copying across the JS/WASM boundary. A common mistake is to JSON.parse in JavaScript and then pass a large object structure into WASM. This is inefficient. A much better approach is to pass the raw JSON string directly into the WASM module and let Rust’s highly optimized serde_json handle the parsing.
First, let’s define our data structures in src/lib.rs.
// file: rust-aggregator/src/lib.rs
use wasm_bindgen::prelude::*;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
// This macro makes the Rust code accessible from JavaScript.
#[wasm_bindgen]
extern "C" {
// We can import browser APIs like console.log for debugging.
#[wasm_bindgen(js_namespace = console)]
fn log(s: &str);
}
// Define the input data structure.
// `Deserialize` allows us to parse this from JSON.
#[derive(Deserialize, Debug, Clone)]
struct Tick {
symbol: String,
price: f64,
volume: f64,
// We don't need the timestamp for this calculation,
// so we can omit it to save parsing time.
}
// Define the output structure.
// `Serialize` allows us to convert this back into a JS object.
#[derive(Serialize, Debug)]
// Serialize fields as camelCase (`totalVolume`, `stdDev`) so the output
// matches the TypeScript `AggregationResult` interface on the JS side.
#[serde(rename_all = "camelCase")]
struct Aggregate {
vwap: f64,
total_volume: f64,
std_dev: f64,
count: usize,
}
/// This is the main exported function. It accepts a raw JSON string
/// and returns another raw JSON string with the aggregated results.
/// In a real-world project, error handling is critical. We use Result
/// to propagate parsing errors back to the JavaScript caller.
#[wasm_bindgen]
pub fn process_ticks_from_json(json_string: &str) -> Result<String, JsValue> {
// The `?` operator here is powerful. If `serde_json::from_str` fails,
// it will immediately return an error `JsValue` that JS can catch.
let ticks: Vec<Tick> = serde_json::from_str(json_string)
.map_err(|e| JsValue::from_str(&format!("JSON parsing failed: {}", e)))?;
// Group ticks by symbol using a HashMap for efficiency.
let mut grouped_by_symbol: HashMap<String, Vec<Tick>> = HashMap::new();
for tick in ticks {
grouped_by_symbol.entry(tick.symbol.clone()).or_default().push(tick);
}
// Create a map to hold the final results.
let mut results: HashMap<String, Aggregate> = HashMap::new();
for (symbol, symbol_ticks) in grouped_by_symbol {
let mut total_price_volume = 0.0;
let mut total_volume = 0.0;
let prices: Vec<f64> = symbol_ticks.iter().map(|t| t.price).collect();
for tick in &symbol_ticks {
total_price_volume += tick.price * tick.volume;
total_volume += tick.volume;
}
let vwap = if total_volume > 0.0 { total_price_volume / total_volume } else { 0.0 };
let count = prices.len();
let mean_price = prices.iter().sum::<f64>() / (count as f64);
let variance = prices.iter().map(|p| {
let diff = p - mean_price;
diff * diff
}).sum::<f64>() / (count as f64);
let std_dev = variance.sqrt();
results.insert(symbol, Aggregate {
vwap,
total_volume,
std_dev,
count,
});
}
// Serialize the final HashMap back to a JSON string.
serde_json::to_string(&results)
.map_err(|e| JsValue::from_str(&format!("Result serialization failed: {}", e)))
}
To build this, our Cargo.toml is configured for WASM output.
# file: rust-aggregator/Cargo.toml
[package]
name = "rust-aggregator"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
wasm-bindgen = "0.2"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
We compile this using wasm-pack:
wasm-pack build --target web --out-dir ../app/src/wasm
This command generates the .wasm binary, a JavaScript glue file, and TypeScript definitions (.d.ts), which are invaluable for type-safe integration.
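For orientation, the generated rust_aggregator.d.ts exposes roughly the following surface. This is a paraphrase of typical wasm-bindgen output, which varies by version; treat it as a sketch, not the literal file.

```typescript
// Approximate shape of app/src/wasm/rust_aggregator.d.ts (sketch, not verbatim).

/** Loads and compiles the .wasm binary; must resolve before any other call. */
export default function init(
  module_or_path?: RequestInfo | URL | WebAssembly.Module
): Promise<unknown>;

/** Mirror of the Rust export: raw JSON in, aggregated JSON out; throws on parse failure. */
export function process_ticks_from_json(json_string: string): string;
```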
The Elixir Backend: A High-Frequency Data Source
The Phoenix backend’s job is to reliably broadcast data to all connected clients. We’ll simulate a market data feed using a simple GenServer that periodically pushes a large batch of mock data through the market:data channel.
# file: my_app/lib/my_app_web/channels/market_channel.ex
defmodule MyAppWeb.MarketChannel do
use MyAppWeb, :channel
def join("market:data", _payload, socket) do
# Start the data broadcaster for the first user who joins.
# In a real app, this would likely be a globally managed process.
if connected_users() == 0 do
MyApp.DataBroadcaster.start_link(self())
end
{:ok, socket}
end
# Helper to check number of subscribers. A more robust solution would use Phoenix.Tracker.
defp connected_users() do
case Phoenix.PubSub.subscribers(MyApp.PubSub, "market:data") do
list when is_list(list) -> length(list)
_ -> 0
end
end
end
The DataBroadcaster GenServer simulates the feed.
# file: my_app/lib/my_app/data_broadcaster.ex
defmodule MyApp.DataBroadcaster do
use GenServer
def start_link(channel_pid) do
GenServer.start_link(__MODULE__, channel_pid, name: __MODULE__)
end
@impl true
def init(channel_pid) do
# Send the first batch immediately, then schedule subsequent broadcasts.
send_data_batch(channel_pid)
Process.send_after(self(), :broadcast, 1000) # Broadcast every second
{:ok, %{channel_pid: channel_pid}}
end
@impl true
def handle_info(:broadcast, state) do
send_data_batch(state.channel_pid)
Process.send_after(self(), :broadcast, 1000)
{:noreply, state}
end
defp send_data_batch(_channel_pid) do
# Generate a large payload of mock tick data.
# In a production system, this data would come from an external source.
payload = generate_mock_ticks(50_000)
# We broadcast directly to the topic. All channel processes subscribed
# to "market:data" will push this to their respective clients.
MyAppWeb.Endpoint.broadcast!(
"market:data",
"new_data",
%{ticks: payload}
)
end
defp generate_mock_ticks(count) do
symbols = ["BTCUSD", "ETHUSD", "SOLUSD", "ADAUSD"]
for _ <- 1..count do
%{
symbol: Enum.random(symbols),
price: :rand.uniform() * 50000 + 10000,
volume: :rand.uniform() * 10,
timestamp: DateTime.to_unix(DateTime.utc_now())
}
end
end
end
This simple setup provides the high-throughput firehose our client needs to handle.
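When iterating on the front end without a running backend, a small TypeScript counterpart of generate_mock_ticks/1 can produce the same payload shape locally. This is a hypothetical helper of our own (not part of the service code), mirroring the Elixir ranges above.

```typescript
// Hypothetical client-side stand-in for the Elixir generate_mock_ticks/1,
// useful for exercising the WASM path without a running Phoenix backend.
interface Tick {
  symbol: string;
  price: number;
  volume: number;
  timestamp: number;
}

const SYMBOLS = ['BTCUSD', 'ETHUSD', 'SOLUSD', 'ADAUSD'];

function generateMockTicks(count: number): Tick[] {
  return Array.from({ length: count }, () => ({
    symbol: SYMBOLS[Math.floor(Math.random() * SYMBOLS.length)],
    // Match the Elixir ranges: price in [10000, 60000), volume in [0, 10).
    price: Math.random() * 50000 + 10000,
    volume: Math.random() * 10,
    timestamp: Math.floor(Date.now() / 1000),
  }));
}

// Example: a 50k-tick batch, serialized exactly as the channel payload would be.
const batch = generateMockTicks(50_000);
const jsonString = JSON.stringify(batch);
console.log(batch.length, jsonString.length > 0);
```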
TypeScript Integration: Bridging Two Worlds
The TypeScript layer is the glue. It’s responsible for loading the WASM module, managing the WebSocket connection, and orchestrating the data flow between them.
// file: app/src/services/market-data-service.ts
import init, { process_ticks_from_json } from '../wasm/rust_aggregator';
import { Socket } from 'phoenix';
// Represents the structure of the final aggregated data.
export interface AggregationResult {
[symbol: string]: {
vwap: number;
totalVolume: number;
stdDev: number;
count: number;
};
}
class MarketDataService {
private socket: Socket;
private channel: any;
private onDataCallback?: (data: AggregationResult) => void;
private wasmInitialized = false;
constructor() {
this.socket = new Socket('/socket', { params: {} });
}
// Asynchronously initialize the WASM module.
// This must be called before any other methods.
public async initialize(): Promise<void> {
try {
await init(); // This loads and compiles the .wasm file.
this.wasmInitialized = true;
console.log('WASM module initialized successfully.');
} catch (err) {
console.error('Failed to initialize WASM module:', err);
// In a real app, we should have a fallback or show an error state.
}
}
public connect(onData: (data: AggregationResult) => void) {
if (!this.wasmInitialized) {
throw new Error('WASM module not initialized. Call initialize() first.');
}
this.onDataCallback = onData;
this.socket.connect();
this.channel = this.socket.channel('market:data', {});
this.channel.on('new_data', (payload: { ticks: any[] }) => {
this.handleNewData(payload);
});
this.channel.join()
.receive('ok', () => console.log('Joined Phoenix Channel successfully'))
.receive('error', (resp: any) => console.error('Unable to join', resp));
}
private handleNewData(payload: { ticks: any[] }) {
if (!this.onDataCallback) return;
// This is the performance-critical path.
// 1. We stringify the array of tick objects. This seems counter-intuitive,
// but it prepares the data for our optimized Rust function.
const jsonString = JSON.stringify(payload.ticks);
try {
// 2. We pass the entire raw string to WASM. All heavy parsing and
// computation happens inside compiled Rust code. Note that this still
// runs synchronously on the calling thread; the win comes from Rust
// being far faster than the equivalent JavaScript, not from concurrency.
const resultString = process_ticks_from_json(jsonString);
// 3. The result is a much smaller, aggregated JSON string.
// Parsing this is extremely fast.
const aggregatedData: AggregationResult = JSON.parse(resultString);
// 4. Send the processed data to the UI layer.
this.onDataCallback(aggregatedData);
} catch (e) {
// The Rust function can throw an error (as a JsValue).
// We catch it here to prevent the application from crashing.
console.error('Error processing ticks in WASM:', e);
}
}
public disconnect() {
this.channel?.leave();
this.socket.disconnect();
}
}
export const marketDataService = new MarketDataService();
Rigorous Testing with Vitest
This complex interaction is fragile. A change in the Rust data structure, an error in the JSON serialization, or a problem with the WASM module’s memory could break the entire application. We need robust tests. Vitest is perfect for this.
Our vitest.config.ts needs to be configured to handle WASM.
// file: vitest.config.ts
import { defineConfig } from 'vitest/config';
export default defineConfig({
test: {
environment: 'jsdom',
// We need to configure the server to correctly serve .wasm files during tests.
server: {
deps: {
inline: [/@vite-pwa/],
},
},
// Loading the .wasm module itself relies on the surrounding Vite pipeline
// (e.g. a WASM plugin in the shared Vite config); Vitest inherits that setup.
},
});
Unit Testing the WASM Function
First, we test the process_ticks_from_json function in isolation to validate its logic.
// file: app/src/services/market-data-service.test.ts
import { describe, it, expect, beforeAll } from 'vitest';
import init, { process_ticks_from_json } from '../wasm/rust_aggregator';
import type { AggregationResult } from './market-data-service';
// Load the WASM module once before all tests in this file.
beforeAll(async () => {
await init();
});
describe('WASM Aggregator Logic', () => {
it('should correctly aggregate a simple batch of ticks', () => {
const ticks = [
{ symbol: 'BTCUSD', price: 60000, volume: 1 },
{ symbol: 'BTCUSD', price: 60002, volume: 1 },
{ symbol: 'ETHUSD', price: 4000, volume: 10 },
];
const jsonString = JSON.stringify(ticks);
const resultString = process_ticks_from_json(jsonString);
const result: AggregationResult = JSON.parse(resultString);
expect(result.BTCUSD).toBeDefined();
expect(result.BTCUSD.count).toBe(2);
expect(result.BTCUSD.totalVolume).toBe(2);
expect(result.BTCUSD.vwap).toBeCloseTo(60001);
expect(result.BTCUSD.stdDev).toBeCloseTo(1);
expect(result.ETHUSD).toBeDefined();
expect(result.ETHUSD.count).toBe(1);
expect(result.ETHUSD.totalVolume).toBe(10);
expect(result.ETHUSD.vwap).toBeCloseTo(4000);
});
it('should handle empty input gracefully', () => {
const jsonString = JSON.stringify([]);
const resultString = process_ticks_from_json(jsonString);
const result = JSON.parse(resultString);
expect(Object.keys(result).length).toBe(0);
});
it('should throw an error for malformed JSON', () => {
const malformedJson = '[{"symbol": "BTCUSD"'; // Intentionally broken
// The wasm-bindgen wrapper converts Rust panics/errors into JS exceptions.
expect(() => process_ticks_from_json(malformedJson)).toThrow();
});
});
Integration Testing with a Mocked WebSocket
The most critical test involves verifying the MarketDataService as a whole. We don’t want to connect to a real Elixir backend during our unit tests. Instead, we can use Vitest’s mocking capabilities to create a fake Phoenix Channel and simulate incoming messages.
// file: app/src/services/market-data-service.integration.test.ts
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { marketDataService } from './market-data-service';
// Mock the entire 'phoenix' library.
vi.mock('phoenix', () => {
// Create a mock channel class that lets us control its behavior.
const mockChannel = {
on: vi.fn(),
join: vi.fn().mockReturnThis(),
receive: vi.fn().mockReturnThis(),
leave: vi.fn(),
};
// Create a mock socket class.
const mockSocket = {
connect: vi.fn(),
disconnect: vi.fn(),
channel: vi.fn().mockReturnValue(mockChannel),
};
return { Socket: vi.fn(() => mockSocket) };
});
// Import the mocked modules to access their mock functions.
import { Socket } from 'phoenix';
describe('MarketDataService Integration with Mocked Socket', () => {
let mockChannel: any;
beforeEach(async () => {
// Vitest automatically hoists vi.mock, so Socket is already the mock version.
const MockedSocket = vi.mocked(Socket);
MockedSocket.mockClear();
// Re-initialize the service to use the new mock socket instance.
// (This part might require refactoring the service to be instantiated in tests)
// For this example, let's assume we can grab the channel from the mock.
const mockSocketInstance = new Socket('/socket', {});
mockChannel = mockSocketInstance.channel('market:data', {});
// We need our WASM module loaded for these tests as well.
await marketDataService.initialize();
});
afterEach(() => {
vi.restoreAllMocks();
});
it('should connect and process a message from the mocked channel', () => {
const onDataHandler = vi.fn();
marketDataService.connect(onDataHandler);
// Find the 'new_data' callback registered by our service.
const onNewDataCallback = mockChannel.on.mock.calls.find(
(call: any) => call[0] === 'new_data'
)[1];
expect(onNewDataCallback).toBeInstanceOf(Function);
// Simulate a message from the backend.
const testPayload = {
ticks: [
{ symbol: 'TEST', price: 100, volume: 5 },
{ symbol: 'TEST', price: 102, volume: 5 },
],
};
onNewDataCallback(testPayload);
// Assert that our data handler was called with the correctly processed data.
expect(onDataHandler).toHaveBeenCalledOnce();
const result = onDataHandler.mock.calls[0][0];
expect(result.TEST).toBeDefined();
expect(result.TEST.count).toBe(2);
expect(result.TEST.vwap).toBe(101);
});
});
This test proves that our service correctly wires up the channel, listens for the right event, invokes the WASM function, and passes the result to the callback, all without any network dependency.
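The one non-obvious move in that test is digging the registered handler back out of the mock’s call log. The pattern works independently of Vitest; stripped to its essentials with a hand-rolled recorder (purely illustrative, not part of the real test suite):

```typescript
// Hand-rolled stand-in for vi.fn() illustrating the "recover the callback
// from mock.calls" pattern used in the integration test above.
type AnyFn = (...args: any[]) => any;

function makeRecorder() {
  const calls: any[][] = [];
  const fn: AnyFn & { calls: any[][] } = Object.assign(
    (...args: any[]) => { calls.push(args); },
    { calls },
  );
  return fn;
}

// A fake channel records every `on(event, handler)` registration...
const on = makeRecorder();
on('new_data', (payload: { n: number }) => payload.n * 2);
on('other', () => null);

// ...so the test can later find the handler for the event it cares about
// and invoke it directly, simulating a message from the backend.
const entry = on.calls.find((call) => call[0] === 'new_data')!;
const handler = entry[1] as (p: { n: number }) => number;
console.log(handler({ n: 21 })); // 42
```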
Final Architecture and Lingering Considerations
The final data flow is efficient and robust.
graph TD
A[Elixir/Phoenix Backend] -- Pushes large JSON payload --> B(Phoenix Channel);
B -- WebSocket --> C{TypeScript Client};
C -- Passes raw JSON string --> D[Rust/WASM Module];
D -- Fast parsing & aggregation --> D;
D -- Returns small aggregated JSON string --> C;
C -- Parses result & updates UI State --> E[DOM/UI];
This architecture successfully moved the performance bottleneck from JavaScript to compiled, highly optimized Rust, executed within the browser’s sandboxed WASM runtime. The UI became responsive again, even under heavy data load.
However, this solution is not without its own set of trade-offs. The initial download and compilation of the .wasm file adds to the application’s startup latency. For our dashboard, this was an acceptable one-time cost. Furthermore, debugging WASM can be more challenging than JavaScript, though browser developer tools are rapidly improving in this area. A panic within the Rust code is unrecoverable from the JavaScript side and will terminate the WASM instance, necessitating robust error handling at the boundary, which we implemented using Result in Rust and try...catch in TypeScript.
The next evolutionary step would be to explore WebAssembly threads and shared memory (SharedArrayBuffer). This would allow us to parallelize the aggregation logic within Rust using libraries like Rayon and eliminate the final data serialization step between the main thread and the WASM module, pushing performance even further. That, however, introduces significant complexity around synchronization and state management, a challenge for another day.