The initial requirement seemed straightforward: display a list of warehouse inventory records in a React Native application. The problem was the data source—a legacy Oracle 12c database housing millions of records in a table that was the centerpiece of a monolithic ERP. The table lacked mobile-friendly indices, and direct queries with any meaningful filtering took seconds to return. Our first attempt, a simple paginated REST API, was a catastrophic failure in user acceptance testing. The mobile app felt sluggish, consumed excessive battery life through constant network requests, and was entirely useless without a stable network connection. The core technical pain point was clear: we could not bridge the impedance mismatch between a high-latency, monolithic relational database and a resource-constrained mobile client using a conventional request-response pattern.
We needed to shift from a “fetch-on-demand” model to a “synchronize-and-query-local” architecture. The concept was to maintain a local, read-only replica of the relevant inventory data on the mobile device itself. This would provide instantaneous data access for the UI and inherent offline capability. The challenge, however, was engineering a robust and efficient pipeline to keep this local replica consistent with the master Oracle database without overwhelming either the server or the client.
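Concretely, everything that follows reduces to a single delta contract between the sync service and the client. The sketch below shows the shape of that payload; the field names match the implementation presented later in this article, while the example values are purely illustrative.
// Sketch of the delta contract between the sync service and the mobile client.
// GET /sync?lastSyncTimestamp=<ISO-8601 timestamp of the client's last successful sync>
const exampleSyncResponse = {
  changes: {
    created: [/* full rows inserted since lastSyncTimestamp */],
    updated: [/* full rows updated since lastSyncTimestamp */],
    deleted: [/* primary keys of rows deleted since lastSyncTimestamp */],
  },
  // The client persists this value and sends it back on its next sync request.
  newTimestamp: '2023-01-01T00:00:00.000Z', // illustrative value
};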
Our technology selection process was dictated by pragmatism and the constraints of a production environment. For the mobile-side database, we chose WatermelonDB over other options like Realm or raw SQLite. Its primary advantage is its design philosophy: it is built specifically for React applications and operates on a lazy-loading principle. It doesn’t load all query results into memory; instead, it observes a query and loads only the records needed to render it. This is critical for performance when dealing with potentially tens of thousands of local records. For the intermediary service, we settled on a standard Node.js application using the node-oracledb
driver. This allowed our frontend-focused team to leverage their JavaScript expertise while providing the necessary asynchronous I/O capabilities to handle database connections efficiently. The final, and most contentious, decision was the synchronization mechanism. A true Change Data Capture (CDC) stream using Oracle GoldenGate was deemed too operationally complex and expensive for this project. Instead, we opted for a more direct, application-level solution: a trigger-based journaling table within the Oracle database itself. This provided an auditable log of all data changes that our sync service could consume.
The Oracle Foundation: A Trigger-Based Changelog
In a real-world project, you often don’t have the luxury of altering core ERP tables. Our first step was to create a parallel INVENTORY_CHANGELOG
table. This table would serve as a high-performance, indexed source for our synchronization service, preventing it from ever needing to perform a full table scan on the massive INVENTORY
table.
The structure of the changelog is critical. It must capture the primary key of the affected row, the type of operation (Insert, Update, Delete), and a timestamp. A sequence-based version number is often superior to a timestamp for avoiding clock skew issues, but a TIMESTAMP WITH TIME ZONE
was deemed sufficient for our consistency requirements.
-- DDL for the changelog table
CREATE TABLE INVENTORY_CHANGELOG (
LOG_ID NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
INVENTORY_ID NUMBER NOT NULL,
OPERATION_TYPE VARCHAR2(1) NOT NULL, -- 'I' for Insert, 'U' for Update, 'D' for Delete
CHANGE_TIMESTAMP TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
PROCESSED_FLAG NUMBER(1) DEFAULT 0 NOT NULL, -- For potential batch processing/cleanup
CONSTRAINT PK_INVENTORY_CHANGELOG PRIMARY KEY (LOG_ID),
CONSTRAINT CHK_OPERATION_TYPE CHECK (OPERATION_TYPE IN ('I', 'U', 'D'))
);
-- Create an index for efficient querying by the sync service
CREATE INDEX IDX_CHANGELOG_TIMESTAMP ON INVENTORY_CHANGELOG(CHANGE_TIMESTAMP);
Next, we implemented a PL/SQL trigger on the source INVENTORY
table. The pitfall here is performance. A poorly written trigger can introduce significant latency into every transaction against the source table. Therefore, the trigger logic must be as lean as possible, doing nothing more than inserting a record into our new changelog table.
-- The trigger that populates our changelog
CREATE OR REPLACE TRIGGER TRG_INVENTORY_AUDIT
AFTER INSERT OR UPDATE OR DELETE ON INVENTORY
FOR EACH ROW
DECLARE
v_op_type VARCHAR2(1);
BEGIN
IF INSERTING THEN
v_op_type := 'I';
INSERT INTO INVENTORY_CHANGELOG (INVENTORY_ID, OPERATION_TYPE)
VALUES (:NEW.ID, v_op_type);
ELSIF UPDATING THEN
v_op_type := 'U';
INSERT INTO INVENTORY_CHANGELOG (INVENTORY_ID, OPERATION_TYPE)
VALUES (:NEW.ID, v_op_type);
ELSIF DELETING THEN
v_op_type := 'D';
INSERT INTO INVENTORY_CHANGELOG (INVENTORY_ID, OPERATION_TYPE)
VALUES (:OLD.ID, v_op_type);
END IF;
EXCEPTION
WHEN OTHERS THEN
-- In a production system, this must go to a proper logging/alerting mechanism.
-- Forcing the transaction to fail might be desired in some cases.
RAISE_APPLICATION_ERROR(-20001, 'Failed to write to inventory changelog: ' || SQLERRM);
END;
/
A common mistake is to put complex logic inside the trigger. Our trigger is simple, but in cases where an update might not change relevant data, teams are tempted to add IF :NEW.COLUMN != :OLD.COLUMN THEN...
logic. This should be avoided. The sync service, not the database trigger, should be responsible for determining if a change is meaningful. The trigger’s only job is to record that a transaction occurred.
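To make that division of responsibility concrete, here is a hedged sketch (jumping ahead to the service side for a moment) of how the sync service could collapse multiple changelog rows for the same inventory ID into one effective operation before building its response. The helper name and shapes are illustrative and not part of the controller shown later, and edge cases such as delete-then-reinsert for a row the client already holds would need additional care.
// Illustrative helper (hypothetical, not used verbatim in the controller below).
// Rows are assumed to arrive ordered by CHANGE_TIMESTAMP ascending, as the sync query returns them.
function collapseChanges(changelogRows) {
  const latestById = new Map();
  for (const row of changelogRows) {
    const previous = latestById.get(row.INVENTORY_ID);
    // A row that was inserted and deleted again within the same window never needs to reach the client.
    if (previous && previous.OPERATION_TYPE === 'I' && row.OPERATION_TYPE === 'D') {
      latestById.delete(row.INVENTORY_ID);
      continue;
    }
    // Preserve 'I' while the row is still new to the client; otherwise the latest operation wins.
    const operationType =
      previous && previous.OPERATION_TYPE === 'I' && row.OPERATION_TYPE === 'U'
        ? 'I'
        : row.OPERATION_TYPE;
    latestById.set(row.INVENTORY_ID, { ...row, OPERATION_TYPE: operationType });
  }
  return [...latestById.values()];
}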
The Middleware: A High-Throughput Synchronization Service
The Node.js service acts as the crucial bridge. Its primary responsibility is to expose an endpoint that allows the React Native client to fetch all data changes since its last successful sync. Managing the Oracle database connection pool correctly is paramount for performance and stability.
// src/database/oracle-connector.js
const oracledb = require('oracledb');
const dbConfig = require('../config/db-config');
// On Windows, the 64-bit Oracle Instant Client is required.
// On other platforms, this may not be necessary if the client is in the system library path.
// A common pitfall is forgetting this initialization step, leading to cryptic errors.
if (process.platform === 'win32') {
try {
oracledb.initOracleClient({ libDir: 'C:\\oracle\\instantclient_21_3' });
} catch (err) {
console.error('FATAL: Failed to initialize the Oracle Client libraries. Check libDir and the Instant Client installation.');
console.error(err);
process.exit(1);
}
}
// Production-grade configuration for the connection pool.
// These values need to be tuned based on expected load.
const poolConfig = {
user: dbConfig.user,
password: dbConfig.password,
connectString: dbConfig.connectString,
poolMin: 4,
poolMax: 10,
poolIncrement: 1,
poolTimeout: 60,
};
async function initialize() {
try {
await oracledb.createPool(poolConfig);
console.log('OracleDB connection pool started successfully.');
} catch (err) {
console.error('FATAL: Failed to initialize OracleDB connection pool.', err);
process.exit(1); // Fail fast if the database connection can't be established.
}
}
async function execute(statement, binds = [], opts = {}) {
let connection;
opts.outFormat = oracledb.OUT_FORMAT_OBJECT; // Always fetch as objects.
try {
connection = await oracledb.getConnection();
const result = await connection.execute(statement, binds, opts);
return result;
} catch (err) {
console.error('Database execution error:', err);
// Here you would add more sophisticated error handling, e.g., mapping Oracle error codes.
throw err;
} finally {
if (connection) {
try {
await connection.close(); // Release the connection back to the pool.
} catch (err) {
console.error('Error closing OracleDB connection:', err);
}
}
}
}
module.exports = { initialize, execute };
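One thing the module above leaves out is shutdown. Draining the pool lets in-flight statements finish instead of dying with the process; a small sketch of a close function that could be exported alongside initialize and execute:
// Graceful shutdown: drain the default pool, giving busy connections up to 10 seconds to finish.
async function close() {
  try {
    await oracledb.getPool().close(10);
    console.log('OracleDB connection pool closed.');
  } catch (err) {
    console.error('Error closing OracleDB connection pool:', err);
  }
}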
With the connection logic established, we can build the sync endpoint. This endpoint is not a simple data fetcher. It needs to handle the lastSyncTimestamp
from the client and fetch both the list of changes and the full data for new/updated records in an efficient manner. A naive implementation might query the changelog, then loop through the IDs to fetch each record individually. This would result in an “N+1 query” problem, which is disastrous for performance. The correct approach is to fetch all changes, collect the IDs, and then perform a single IN
query to get all the necessary data.
// src/controllers/sync-controller.js
const { execute } = require('../database/oracle-connector');
// A logger instance (e.g., Winston) should be used in a real project.
const logger = console;
const BATCH_SIZE = 500; // Limit the number of changes sent in one response.
async function handleSync(req, res) {
// The client MUST send its last known sync timestamp in ISO 8601 format.
// We use a default '1970' timestamp for the initial sync.
const lastSyncTimestamp = req.query.lastSyncTimestamp || '1970-01-01T00:00:00.000Z';
const clientTimezone = req.query.timezone || 'UTC'; // Important for timestamp conversion.
try {
// Step 1: Fetch the batch of changes from the changelog.
const changelogQuery = `
SELECT LOG_ID, INVENTORY_ID, OPERATION_TYPE, CHANGE_TIMESTAMP
FROM INVENTORY_CHANGELOG
WHERE CHANGE_TIMESTAMP > TO_TIMESTAMP_TZ(:lastSyncTimestamp, 'YYYY-MM-DD"T"HH24:MI:SS.FF3"Z"')
ORDER BY CHANGE_TIMESTAMP ASC
FETCH FIRST :batchSize ROWS ONLY
`;
const changelogResult = await execute(changelogQuery, {
lastSyncTimestamp,
batchSize: BATCH_SIZE
});
const changes = changelogResult.rows;
if (changes.length === 0) {
return res.status(200).json({
changes: {},
newTimestamp: new Date().toISOString()
});
}
const created = [];
const updated = [];
const deleted = [];
const idsToFetch = [];
changes.forEach(change => {
if (change.OPERATION_TYPE === 'I' || change.OPERATION_TYPE === 'U') {
idsToFetch.push(change.INVENTORY_ID);
} else if (change.OPERATION_TYPE === 'D') {
deleted.push(change.INVENTORY_ID);
}
});
// Step 2: If there are creates or updates, fetch the full records in a single batch query.
if (idsToFetch.length > 0) {
const uniqueIds = [...new Set(idsToFetch)]; // Ensure we don't fetch the same ID twice.
const inventoryQuery = `
SELECT ID, SKU, PRODUCT_NAME, QUANTITY, LOCATION, LAST_MODIFIED_DATE
FROM INVENTORY
WHERE ID IN (${uniqueIds.map((_, i) => `:${i+1}`).join(',')})
`;
const inventoryResult = await execute(inventoryQuery, uniqueIds);
const inventoryMap = new Map();
inventoryResult.rows.forEach(row => inventoryMap.set(row.ID, row));
// Step 3: Map the full data back to the original changes.
changes.forEach(change => {
if ((change.OPERATION_TYPE === 'I' || change.OPERATION_TYPE === 'U') && inventoryMap.has(change.INVENTORY_ID)) {
const record = inventoryMap.get(change.INVENTORY_ID);
// Distinguish creates from updates for the client-side logic based on the
// operation type recorded in the changelog. If an item was created and then
// updated within the same batch, both entries carry the same freshly fetched
// row, so the client ends up with the current state either way.
if (change.OPERATION_TYPE === 'I') {
created.push(record);
} else {
updated.push(record);
}
}
});
}
// The new timestamp should be the timestamp of the LAST record processed in this batch.
const newTimestamp = changes[changes.length - 1].CHANGE_TIMESTAMP.toISOString();
res.status(200).json({
changes: { created, updated, deleted },
newTimestamp
});
} catch (error) {
logger.error(`Sync failed for lastSyncTimestamp: ${lastSyncTimestamp}`, error);
res.status(500).json({ message: 'Internal Server Error during synchronization.' });
}
}
module.exports = { handleSync };
This implementation includes the critical optimizations: changes are batched, created and updated records are fetched in a single query, and the client’s timestamp is advanced only to the last change actually processed in the batch, so the next request picks up exactly where this one left off.
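For completeness, here is how the pieces could be wired together at startup. The original service doesn’t prescribe a web framework, so this sketch assumes Express and port 3000; only the initialize and handleSync imports come from the modules shown above.
// src/server.js -- illustrative wiring, assuming Express.
const express = require('express');
const { initialize } = require('./database/oracle-connector');
const { handleSync } = require('./controllers/sync-controller');

async function start() {
  await initialize(); // fail fast if the connection pool cannot be created
  const app = express();
  app.get('/sync', handleSync);
  app.listen(3000, () => console.log('Sync service listening on port 3000'));
}

start();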
The React Native Client: Efficient Local Database Synchronization
On the client side, we need a robust service to manage the synchronization process and apply changes to the local WatermelonDB instance.
First, we define the WatermelonDB schema. This must mirror the structure of the data we receive from our API.
// src/db/schema.js
import { appSchema, tableSchema } from '@nozbe/watermelondb';
export default appSchema({
version: 1,
tables: [
tableSchema({
name: 'inventory_items',
columns: [
{ name: 'sku', type: 'string', isIndexed: true },
{ name: 'product_name', type: 'string' },
{ name: 'quantity', type: 'number' },
{ name: 'location', type: 'string', isOptional: true },
{ name: 'last_modified_date', type: 'number' }, // Store dates as UNIX timestamps
],
}),
],
});
Next, the model definition maps the schema to an object that we can interact with.
// src/db/models/InventoryItem.js
import { Model } from '@nozbe/watermelondb';
import { field, text, date } from '@nozbe/watermelondb/decorators';
export default class InventoryItem extends Model {
static table = 'inventory_items';
@text('sku') sku;
@text('product_name') productName;
@field('quantity') quantity;
@text('location') location;
@date('last_modified_date') lastModifiedDate;
}
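The schema and model come together in the database instance that the rest of the client imports as ../db. Below is a minimal sketch of that module, assuming it lives at src/db/index.js and keeping the adapter options to the essentials.
// src/db/index.js -- minimal WatermelonDB setup (sketch).
import { Database } from '@nozbe/watermelondb';
import SQLiteAdapter from '@nozbe/watermelondb/adapters/sqlite';
import schema from './schema';
import InventoryItem from './models/InventoryItem';

const adapter = new SQLiteAdapter({
  schema,
  onSetUpError: error => {
    // The local database failed to load; surface this to your error reporting.
    console.error('WatermelonDB setup failed:', error);
  },
});

export const database = new Database({
  adapter,
  modelClasses: [InventoryItem],
});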
The core of the client-side logic resides in a SyncService
. This service is responsible for calling the API, retrieving changes, and applying them to the local database in a transactional manner. A common mistake is to process each change individually, which involves significant overhead. WatermelonDB’s database.batch()
method is designed to perform a large number of database operations in a single, efficient transaction.
// src/services/SyncService.js
import { Q } from '@nozbe/watermelondb';
import { database } from '../db'; // Your initialized WatermelonDB instance
import apiClient from './apiClient'; // An axios or fetch wrapper for your API
import InventoryItem from '../db/models/InventoryItem';
import AsyncStorage from '@react-native-async-storage/async-storage';
const LAST_SYNC_TIMESTAMP_KEY = 'last_sync_timestamp';
// A simple persistence layer for the sync timestamp, backed here by AsyncStorage.
const storage = {
get: async (key) => AsyncStorage.getItem(key),
set: async (key, value) => AsyncStorage.setItem(key, value),
};
export async function synchronize() {
try {
const lastSyncTimestamp = await storage.get(LAST_SYNC_TIMESTAMP_KEY) || '1970-01-01T00:00:00.000Z';
const response = await apiClient.get('/sync', {
params: { lastSyncTimestamp }
});
const { changes, newTimestamp } = response.data;
if (!changes || (!changes.created?.length && !changes.updated?.length && !changes.deleted?.length)) {
console.log('Sync complete. No new changes.');
await storage.set(LAST_SYNC_TIMESTAMP_KEY, newTimestamp);
return;
}
const inventoryItemsCollection = database.collections.get('inventory_items');
// Prepare records for the batch operation.
const recordsToCreate = changes.created.map(item => {
return inventoryItemsCollection.prepareCreate(record => {
record._raw.id = item.ID.toString(); // Use Oracle ID as WatermelonDB ID
record.sku = item.SKU;
record.productName = item.PRODUCT_NAME;
record.quantity = item.QUANTITY;
record.location = item.LOCATION;
record.lastModifiedDate = new Date(item.LAST_MODIFIED_DATE);
});
});
// For updates, we first need to find the local record.
const idsToUpdate = changes.updated.map(item => item.ID.toString());
const localItemsToUpdate = await inventoryItemsCollection.query(Q.where('id', Q.oneOf(idsToUpdate))).fetch();
const localItemsMap = new Map(localItemsToUpdate.map(item => [item.id, item]));
const recordsToUpdate = changes.updated
.filter(item => localItemsMap.has(item.ID.toString()))
.map(item => {
const localItem = localItemsMap.get(item.ID.toString());
return localItem.prepareUpdate(record => {
record.sku = item.SKU;
record.productName = item.PRODUCT_NAME;
record.quantity = item.QUANTITY;
record.location = item.LOCATION;
record.lastModifiedDate = new Date(item.LAST_MODIFIED_DATE);
});
});
// For deletes, we find the local records to destroy.
const idsToDelete = changes.deleted.map(id => id.toString());
const localItemsToDelete = await inventoryItemsCollection.query(Q.where('id', Q.oneOf(idsToDelete))).fetch();
const recordsToDelete = localItemsToDelete.map(item => item.prepareDestroyPermanently());
// Execute all changes in a single database batch transaction.
await database.batch(
...recordsToCreate,
...recordsToUpdate,
...recordsToDelete
);
// ONLY if the batch succeeds, we update our sync timestamp.
await storage.set(LAST_SYNC_TIMESTAMP_KEY, newTimestamp);
console.log(`Sync successful. Applied ${changes.created.length} creates, ${changes.updated.length} updates, ${changes.deleted.length} deletes.`);
} catch (error) {
// In a real-world project, implement retry logic with exponential backoff.
console.error('Synchronization failed:', error);
}
}
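How often synchronize() runs is an application-level decision. The sketch below assumes we sync on startup and whenever the app returns to the foreground; the file name is illustrative, and an interval-based schedule would slot in the same way.
// src/services/syncScheduler.js -- illustrative scheduling, not part of the SyncService itself.
import { AppState } from 'react-native';
import { synchronize } from './SyncService';

export function startSyncScheduler() {
  synchronize(); // initial sync on startup
  // Re-sync whenever the app comes back to the foreground.
  const subscription = AppState.addEventListener('change', state => {
    if (state === 'active') {
      synchronize();
    }
  });
  // Returns a cleanup function, e.g. to call on logout.
  return () => subscription.remove();
}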
Finally, the UI component can now be incredibly simple and performant. We use @shopify/flash-list for virtualization and withObservables from @nozbe/with-observables to connect the component reactively to the WatermelonDB query.
// src/components/InventoryList.js
import React from 'react';
import { View, Text, StyleSheet } from 'react-native';
import { FlashList } from '@shopify/flash-list';
import withObservables from '@nozbe/with-observables';
import { withDatabase } from '@nozbe/watermelondb/DatabaseProvider';
import { Q } from '@nozbe/watermelondb';
const renderItem = ({ item }) => (
<View style={styles.item}>
<Text style={styles.title}>{item.productName}</Text>
<Text>SKU: {item.sku} | Quantity: {item.quantity}</Text>
</View>
);
const InventoryListComponent = ({ items }) => {
return (
<FlashList
data={items}
renderItem={renderItem}
keyExtractor={item => item.id}
estimatedItemSize={60}
/>
);
};
// This is the magic. withObservables connects the component to the database.
// Whenever the underlying data for this query changes, the component re-renders automatically.
const enhance = withObservables(['search'], ({ database, search }) => ({
items: database.collections
.get('inventory_items')
.query(
// Example of a searchable query
Q.where('product_name', Q.like(`%${Q.sanitizeLikeString(search || '')}%`))
)
.observe(),
}));
// withDatabase injects the database prop from the DatabaseProvider higher in the component tree.
export const InventoryList = withDatabase(enhance(InventoryListComponent));
const styles = StyleSheet.create({
item: { padding: 12, borderBottomWidth: StyleSheet.hairlineWidth },
title: { fontWeight: 'bold' },
});
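Hooking this into the app root is then a matter of providing the database instance and a search term. A sketch, assuming a top-level App component and the database module from src/db:
// App.js -- illustrative usage.
import React, { useState } from 'react';
import { View, TextInput } from 'react-native';
import DatabaseProvider from '@nozbe/watermelondb/DatabaseProvider';
import { database } from './src/db';
import { InventoryList } from './src/components/InventoryList';

export default function App() {
  const [search, setSearch] = useState('');
  return (
    <DatabaseProvider database={database}>
      <View style={{ flex: 1 }}>
        <TextInput placeholder="Search inventory..." value={search} onChangeText={setSearch} />
        <InventoryList search={search} />
      </View>
    </DatabaseProvider>
  );
}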
The final architecture is represented by the following flow:
graph TD
    subgraph Oracle Database
        A[INVENTORY Table] -- PL/SQL Trigger --> B(INVENTORY_CHANGELOG);
    end
    subgraph Node.js Middleware
        C[Sync Service API] -- SELECT ... FROM --> B;
    end
    subgraph React Native App
        D[SyncAgent] -- GET /sync?lastSyncTimestamp=... --> C;
        E[WatermelonDB] -- Batch Updates --> F[Local SQLite DB];
        G[InventoryList UI] -- Observes Query --> E;
        D -- Writes Changes --> E;
    end
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style C fill:#ccf,stroke:#333,stroke-width:2px
    style G fill:#9f9,stroke:#333,stroke-width:2px
This design successfully decoupled the mobile client from the legacy backend. The UI is now powered by a local database, providing a fast, responsive, and offline-capable user experience. The load on the Oracle database is minimized, as the sync service queries a small, indexed changelog table instead of the primary data table.
The primary limitation of this trigger-based approach is that it constitutes a form of sophisticated polling. While far more efficient than client-side polling, it still introduces a level of latency dictated by how frequently the client’s SyncAgent
runs. For true real-time requirements, this would need to be replaced with a streaming solution using technologies like Debezium connecting to Kafka, which could then push updates to clients via WebSockets. Furthermore, this architecture is designed for one-way data flow (Oracle to client). Implementing a bidirectional sync that allows mobile clients to write back to Oracle would introduce immense complexity around conflict resolution and would require a completely different strategy, likely involving Command Query Responsibility Segregation (CQRS) patterns and a more robust queuing system on the backend. The initial bootstrap for a new client, which needs to pull the entire dataset, also requires a separate, optimized API endpoint not covered here to avoid timing out the standard sync mechanism.