Automating Pinia Store Generation from Oracle Schema Metadata Using Ansible Playbooks


Synchronization drift between a legacy Oracle database schema and the Pinia-managed state of a modern Vue.js frontend is a persistent source of bugs and development friction. A change to a VARCHAR2 length, a new NOT NULL constraint, or an added column on the backend requires a developer to manually trace and update the corresponding TypeScript interfaces, validation rules, and initial state values on the frontend. This manual process is not just tedious; it is fragile, error-prone, and scales poorly as the number of tables and developers grows. On a project with over 200 Oracle tables feeding the UI, this “death by a thousand paper cuts” became a significant drain on productivity.

Our initial concept was to eliminate the manual step entirely. The Oracle database’s data dictionary is the ultimate source of truth for the data structure. Therefore, we should be able to programmatically introspect this schema and generate the necessary Pinia store boilerplate. This approach guarantees that the frontend state model is always a direct reflection of the backend database structure, enforced automatically within our CI/CD pipeline.

The choice of tooling for this task was critical. A custom Node.js or Python script was an option, but it would require significant boilerplate for handling database connections, command-line argument parsing, file templating, and secret management. We settled on Ansible. While typically used for infrastructure automation, its core strengths are a perfect match for this use case. It is agentless, has robust modules for interacting with Oracle databases, manages secrets with Ansible Vault, and, most importantly, possesses an extremely powerful Jinja2 templating engine. Using Ansible allows us to define this entire code generation process declaratively in a playbook, which can be version-controlled and executed seamlessly within any CI runner.

The overall architecture of the solution is a multi-stage process orchestrated by a single Ansible playbook.

graph TD
    subgraph "CI/CD Pipeline Runner"
        A[Ansible Playbook Execution] --> B{Connect to Oracle};
        B --> C[Execute Metadata SQL Query];
        C --> D{Register Raw JSON Output};
        D --> E[Transform & Group Data];
        E --> F{Process Jinja2 Template};
        F --> G[Generate Pinia .ts Files];
    end

    subgraph "Oracle Database"
        H[Data Dictionary Views<br/>USER_TABLES<br/>USER_TAB_COLUMNS<br/>USER_CONSTRAINTS]
    end

    subgraph "Frontend Codebase"
        I[src/stores/generated/]
    end

    C -- Reads from --> H;
    G -- Writes to --> I;

Environment Preparation and Ansible Configuration

Before writing the playbook, the execution environment needs the correct dependencies. Ansible requires a Python environment with the cx_Oracle library to communicate with the database, and the community.oracle collection provides the necessary modules. Note that cx_Oracle itself depends on the Oracle Client libraries (for example, Oracle Instant Client) being installed on the machine that runs the playbook.

First, the Ansible collection is specified in requirements.yml:

# requirements.yml
collections:
  - name: community.oracle
    version: 1.8.0

And the Python dependency in requirements.txt:

# requirements.txt
ansible
cx_Oracle==8.3.0

Installation is straightforward:

pip install -r requirements.txt
ansible-galaxy collection install -r requirements.yml

A crucial aspect for production use is managing database credentials securely. We use Ansible Vault to encrypt a file containing the credentials.

# Create an encrypted file for secrets
ansible-vault create secrets.yml

Inside secrets.yml, we define the connection variables:

# secrets.yml (Encrypted)
oracle_user: "your_user"
oracle_password: "your_password"
oracle_dsn: "your_host:1521/your_service_name"

The ansible.cfg file configures the inventory path and default behaviors.

# ansible.cfg
[defaults]
inventory = inventory
host_key_checking = False
# Optional: Path to custom filter plugins
filter_plugins = ./plugins/filter

Our inventory file is simple, as we are running the playbook locally against a remote database.

# inventory
[local]
localhost ansible_connection=local
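
With the configuration in place, the playbook built up in the following sections (saved as generate_stores.yml) can be run like this. Both flags are standard ansible-playbook options; the .vault_pass file name is an arbitrary choice for non-interactive runs:

# Run locally, prompting for the Ansible Vault password
ansible-playbook generate_stores.yml --ask-vault-pass

# Run non-interactively, e.g. inside a CI job
ansible-playbook generate_stores.yml --vault-password-file .vault_pass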

Stage 1: Introspecting the Oracle Schema

The core of the process is a single, comprehensive SQL query that extracts all necessary metadata from Oracle’s data dictionary views. A common mistake is to run multiple small queries and try to stitch the data together in Ansible. This is inefficient and complex. A single query that pre-joins all required information is far superior.

This query joins USER_TAB_COLUMNS with USER_COL_COMMENTS for documentation and performs a series of LEFT JOINs against constraint views (USER_CONSTRAINTS and USER_CONS_COLUMNS) to identify primary keys and non-nullable columns. We explicitly filter for a predefined set of tables to avoid generating stores for system or irrelevant tables.

-- oracle_metadata_query.sql
SELECT
    utc.table_name,
    utc.column_name,
    utc.data_type,
    utc.data_length,
    utc.data_precision,
    utc.data_scale,
    utc.nullable,
    ucc.comments AS column_comment,
    pk.constraint_type AS is_primary_key
FROM
    user_tab_columns utc
LEFT JOIN
    user_col_comments ucc ON utc.table_name = ucc.table_name AND utc.column_name = ucc.column_name
LEFT JOIN (
    SELECT
        uc.table_name,
        ucc.column_name,
        uc.constraint_type
    FROM
        user_constraints uc
    JOIN
        user_cons_columns ucc ON uc.constraint_name = ucc.constraint_name
    WHERE
        uc.constraint_type = 'P'
) pk ON utc.table_name = pk.table_name AND utc.column_name = pk.column_name
WHERE
    utc.table_name IN ('EMPLOYEES', 'DEPARTMENTS', 'JOB_HISTORY') -- Target specific tables
ORDER BY
    utc.table_name,
    utc.column_id;

The Ansible playbook executes this query using the community.oracle.oracle_sql module. The result is registered into a variable, oracle_schema_raw.

# generate_stores.yml
- name: Generate Pinia Stores from Oracle Schema
  hosts: localhost
  gather_facts: no
  vars_files:
    - secrets.yml

  tasks:
    - name: Ensure output directory exists
      ansible.builtin.file:
        path: "../frontend/src/stores/generated"
        state: directory
        mode: '0755'

    - name: Read Oracle metadata SQL query from file
      ansible.builtin.slurp:
        src: oracle_metadata_query.sql
      register: sql_query_content

    - name: Fetch schema metadata from Oracle Database
      community.oracle.oracle_sql:
        username: "{{ oracle_user }}"
        password: "{{ oracle_password }}"
        dsn: "{{ oracle_dsn }}"
        sql: "{{ sql_query_content.content | b64decode }}"
      register: oracle_schema_raw
      # no_log keeps the connection credentials out of task output and CI logs
      no_log: true

    - name: DEBUG | Display raw output from Oracle
      ansible.builtin.debug:
        var: oracle_schema_raw.msg.rows[0]
      when: oracle_schema_raw.msg.rows is defined and oracle_schema_raw.msg.rows | length > 0

The output from the oracle_sql module is a flat list of dictionaries, where each dictionary represents a column. This format is difficult to work with in a template that needs to iterate over tables.

// Sample raw output in oracle_schema_raw.msg.rows
[
    {
        "TABLE_NAME": "DEPARTMENTS",
        "COLUMN_NAME": "DEPARTMENT_ID",
        "DATA_TYPE": "NUMBER",
        "NULLABLE": "N",
        "IS_PRIMARY_KEY": "P",
        "COLUMN_COMMENT": "Primary key of departments table."
    },
    {
        "TABLE_NAME": "DEPARTMENTS",
        "COLUMN_NAME": "DEPARTMENT_NAME",
        "DATA_TYPE": "VARCHAR2",
        "NULLABLE": "N",
        "IS_PRIMARY_KEY": null,
        "COLUMN_COMMENT": "A not null column that shows name of a department."
    },
    // ... more columns for DEPARTMENTS, then EMPLOYEES, etc.
]

Stage 2: Transforming Data for Templating

This is the most critical and often overlooked step. We must transform the flat list into a nested structure grouped by TABLE_NAME, which makes the Jinja2 template dramatically simpler and more readable. Jinja2’s built-in groupby filter, available in any Ansible expression, is perfect for this.

We add a task that processes oracle_schema_raw.msg.rows and creates a new variable, grouped_schema.

# tasks section of generate_stores.yml continues...

- name: Transform flat schema list into a structure grouped by table name
  ansible.builtin.set_fact:
    grouped_schema: "{{ oracle_schema_raw.msg.rows | groupby('TABLE_NAME') }}"
    
- name: DEBUG | Display transformed and grouped schema structure
  ansible.builtin.debug:
    var: grouped_schema

The grouped_schema variable now has a much more usable structure for our template:

# A representation of the grouped_schema variable
[
    ('DEPARTMENTS', [
        {'TABLE_NAME': 'DEPARTMENTS', 'COLUMN_NAME': 'DEPARTMENT_ID', ...},
        {'TABLE_NAME': 'DEPARTMENTS', 'COLUMN_NAME': 'DEPARTMENT_NAME', ...}
    ]),
    ('EMPLOYEES', [
        {'TABLE_NAME': 'EMPLOYEES', 'COLUMN_NAME': 'EMPLOYEE_ID', ...},
        {'TABLE_NAME': 'EMPLOYEES', 'COLUMN_NAME': 'FIRST_NAME', ...}
    ])
]

Stage 3: The Jinja2 Template for Pinia Store Generation

With the data properly structured, we can create the Jinja2 template (pinia_store.ts.j2). The template task in Stage 4 loops over grouped_schema and renders this template once per table, producing a complete, type-safe Pinia store file for each.

To handle the mapping from Oracle data types to TypeScript types, we could use a complex set of if/elif/else blocks, but a cleaner approach is a custom Ansible filter plugin.

Create the file ./plugins/filter/oracle_to_ts.py:

# ./plugins/filter/oracle_to_ts.py
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.errors import AnsibleError

def to_typescript_type(oracle_type, data_scale):
    """Converts Oracle data type to a TypeScript type."""
    oracle_type_upper = oracle_type.upper()
    
    if 'CHAR' in oracle_type_upper or 'CLOB' in oracle_type_upper:
        return 'string'
    if oracle_type_upper == 'DATE' or 'TIMESTAMP' in oracle_type_upper:
        # Dates usually arrive as ISO strings over the wire; nullability is
        # appended by the template based on the column's NULLABLE flag.
        return 'Date | string'
    if oracle_type_upper == 'NUMBER':
        # data_scale could distinguish integers from decimals, but TypeScript
        # only has a single numeric type, so both map to `number`.
        return 'number'
    if oracle_type_upper == 'BLOB':
        return 'any'  # Or a more specific type like Uint8Array if handled

    return 'any'  # Default fallback

class FilterModule(object):
    def filters(self):
        return {
            'to_ts_type': to_typescript_type,
        }
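
Before wiring the filter into the store template, it can be sanity-checked with a throwaway debug task. This assumes the filter_plugins path configured in ansible.cfg, so the plugin is picked up automatically; the expected results are shown as comments:

# Quick sanity check for the custom type-mapping filter
- name: DEBUG | Exercise the to_ts_type filter
  ansible.builtin.debug:
    msg:
      - "{{ 'VARCHAR2' | to_ts_type(none) }}"  # -> string
      - "{{ 'NUMBER' | to_ts_type(2) }}"       # -> number
      - "{{ 'DATE' | to_ts_type(none) }}"      # -> Date | string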

Now we can use this to_ts_type filter in our Jinja2 template. The template also derives a PascalCase name from each table name (e.g., JOB_HISTORY becomes JobHistory, used for the IJobHistory interface and the useJobHistoryStore factory) and a camelCase store ID (jobHistory).

Here is the complete pinia_store.ts.j2 template:

// pinia_store.ts.j2
{#-
    One file is generated per table: the Ansible `template` task (Stage 4) loops
    over `grouped_schema` and renders this template once per (table_name, columns)
    pair, which is exposed inside the template as the loop variable `item`.
-#}
{% set table_name = item[0] %}
{% set columns = item[1] %}
{% set pascal_name = table_name | lower | replace('_', ' ') | title | replace(' ', '') %}
{% set camel_name = (pascal_name[:1] | lower) ~ pascal_name[1:] %}
/**
 * @file This file was automatically generated by an Ansible playbook.
 * DO NOT MODIFY IT BY HAND.
 * Any changes will be overwritten during the next build.
 *
 * This store represents the structure of the {{ table_name }} table in the Oracle database.
 */
import { defineStore } from 'pinia';

// Helper function to get the initial state.
// This is useful for resetting the store without a page refresh.
const getInitialState = (): {{ pascal_name }}State => ({
  isLoading: false,
  error: null,
  record: {
  {%- for column in columns %}
    {{ column.COLUMN_NAME | lower }}: null, // {{ column.DATA_TYPE }}
  {%- endfor %}
  },
  records: [],
});

/**
 * Interface representing a single record from the {{ table_name }} table.
 */
export interface I{{ pascal_name }} {
{%- for column in columns %}
  /**
   * {{ (column.COLUMN_COMMENT or 'No comment available.') | wordwrap(60, break_long_words=False) | indent(3) }}
   * DB Type: {{ column.DATA_TYPE }}
   * Nullable: {{ 'Yes' if column.NULLABLE == 'Y' else 'No' }}
   * Primary Key: {{ 'Yes' if column.IS_PRIMARY_KEY == 'P' else 'No' }}
   */
  {{ column.COLUMN_NAME | lower }}: {{ column.DATA_TYPE | to_ts_type(column.DATA_SCALE) }}{{ ' | null' if column.NULLABLE == 'Y' }};
{%- endfor %}
}

/**
 * Defines the state structure for the {{ camel_name }} store.
 */
export interface {{ pascal_name }}State {
  isLoading: boolean;
  error: string | null;
  record: Partial<I{{ pascal_name }}>;
  records: I{{ pascal_name }}[];
}

export const use{{ pascal_name }}Store = defineStore({
  id: '{{ camel_name }}',

  state: (): {{ pascal_name }}State => getInitialState(),

  getters: {
    /**
     * Checks if the store is currently in a loading state.
     */
    loading: (state): boolean => state.isLoading,
    
    /**
     * Gets the current error message, if any.
     */
    errorMessage: (state): string | null => state.error,

    /**
     * Returns a single record, ensuring it's not null.
     */
    currentRecord: (state): Partial<I{{ pascal_name }}> => state.record,
    
    /**
     * Returns the list of all records.
     */
    allRecords: (state): I{{ pascal_name }}[] => state.records,
  },

  actions: {
    /**
     * Placeholder action to fetch all records.
     * In a real application, this would make an API call.
     */
    async fetchAll() {
      this.isLoading = true;
      this.error = null;
      try {
        // const response = await api.get('/{{ table_name | lower | replace('_', '-') }}');
        // this.records = response.data;
        console.log('Fetching all records for {{ table_name }}...');
        // This is where you would populate this.records from an API
      } catch (e: any) {
        this.error = e.message || 'An unknown error occurred.';
      } finally {
        this.isLoading = false;
      }
    },

    /**
     * Placeholder action to fetch a single record by its primary key.
     */
    async fetchById(id: number | string) {
      this.isLoading = true;
      this.error = null;
      try {
        // const response = await api.get(`/{{ table_name | lower | replace('_', '-') }}/${id}`);
        // this.record = response.data;
        console.log(`Fetching record with id ${id} for {{ table_name }}...`);
      } catch (e: any) {
        this.error = e.message || 'An unknown error occurred.';
      } finally {
        this.isLoading = false;
      }
    },

    /**
     * Resets the store to its initial state.
     */
    reset() {
      Object.assign(this, getInitialState());
    },
  },
});

Stage 4: Orchestrating File Generation

The final task in the playbook uses the template module. A key detail is that the loop over grouped_schema happens at the task level, generating a separate file for each table; inside the template, the loop variable item is available, so item[0] is the table name and item[1] is its column list. The dest path is dynamically constructed using the table name.

# tasks section of generate_stores.yml continues...

- name: Generate Pinia store files from template
  ansible.builtin.template:
    src: templates/pinia_store.ts.j2
    dest: "../frontend/src/stores/generated/{{ item[0] | lower }}.store.ts"
    mode: '0644'
  loop: "{{ grouped_schema }}"
  loop_control:
    label: "{{ item[0] }}" # Provides clearer logging output

The loop_control with label makes the Ansible output much cleaner, showing which file is being generated:

TASK [Generate Pinia store files from template] *************************************
changed: [localhost] => (item=DEPARTMENTS)
changed: [localhost] => (item=EMPLOYEES)
changed: [localhost] => (item=JOB_HISTORY)

The final result in the frontend directory src/stores/generated/ will be files like departments.store.ts, employees.store.ts, etc. Each file contains a fully typed Pinia store.

For example, employees.store.ts would look like this:

/**
 * @file This file was automatically generated by an Ansible playbook.
 * DO NOT MODIFY IT BY HAND.
 * Any changes will be overwritten during the next build.
 *
 * This store represents the structure of the EMPLOYEES table in the Oracle database.
 */
import { defineStore } from 'pinia';
// ...
export interface IEmployees {
  /**
   * Primary key of employees table.
   * DB Type: NUMBER
   * Nullable: No
   * Primary Key: Yes
   */
  employee_id: number;
  /**
   * First name of the employee. A not null column.
   * DB Type: VARCHAR2
   * Nullable: No
   * Primary Key: No
   */
  first_name: string;
  // ... and so on for all columns
}
// ... rest of the store code

This entire process is wrapped into a CI job that runs before the frontend build step. A change pushed by a DBA to the Oracle schema will automatically trigger the pipeline, regenerate the Pinia stores, and the subsequent frontend build will fail if there are any breaking changes, catching integration issues long before they reach production.
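
As a minimal sketch, a GitLab CI job for this step could look like the following. The job name, stage, image, and artifact path are assumptions to adapt to your pipeline, and ANSIBLE_VAULT_PASSWORD is assumed to be a masked CI/CD variable:

# .gitlab-ci.yml (illustrative sketch)
generate-pinia-stores:
  stage: build
  image: python:3.11
  before_script:
    - pip install -r requirements.txt
    - ansible-galaxy collection install -r requirements.yml
  script:
    - echo "$ANSIBLE_VAULT_PASSWORD" > .vault_pass
    - ansible-playbook generate_stores.yml --vault-password-file .vault_pass
  artifacts:
    paths:
      - frontend/src/stores/generated/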

The primary limitation of this approach is its inability to comprehend complex business logic or relationships between tables. The generated code is a structural scaffold, not a complete data access layer. It cannot automatically infer that EMPLOYEES.DEPARTMENT_ID is a foreign key to DEPARTMENTS.DEPARTMENT_ID and generate actions to fetch related data. This logic must be added manually in a separate, non-generated file that extends or composes the generated store. Furthermore, this process is tightly coupled to the Oracle data dictionary. A migration to another database vendor would require a complete rewrite of the SQL query and potentially the type-mapping logic. Future iterations could involve parsing foreign key constraints to scaffold basic relational actions or using table naming conventions to generate different types of stores (e.g., simple key-value stores for lookup tables prefixed with LKP_).
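
As a pointer for that future work, the foreign-key relationships needed for relational scaffolding live in the same dictionary views. A query along these lines (constraint_type 'R' marks referential constraints) would expose them to the existing templating pipeline:

-- Sketch: list foreign-key columns and the tables they reference
SELECT
    uc.table_name,
    ucc.column_name,
    rc.table_name AS referenced_table
FROM
    user_constraints uc
JOIN
    user_cons_columns ucc ON uc.constraint_name = ucc.constraint_name
JOIN
    user_constraints rc ON uc.r_constraint_name = rc.constraint_name
WHERE
    uc.constraint_type = 'R'
ORDER BY
    uc.table_name,
    ucc.column_name;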

