Automating Dynamic Android UI Components with a Terraform-Managed Jupyter and Styled-Components Pipeline


Our Android team’s velocity for data-intensive UI features was grinding to a halt. The core of the problem was the feedback loop for our data science team. They developed complex user segmentation and behavior models in Jupyter Notebooks, which needed to be translated into dynamic visualizations within the Android application. Every minor adjustment to a chart’s logic or data source required a full-blown native development cycle: a data scientist would finalize a model in Python, an Android engineer would painstakingly reimplement the visualization logic in Kotlin using a native charting library, and only then could we build an APK for testing. This process took days, completely stifling experimentation and rapid iteration.

The initial, naive proposal was to just embed a WebView. This is a common pattern, but it almost always results in a disjointed user experience. The web content feels alien, disconnected from the app’s native theme—colors are wrong, fonts don’t match, and dark mode transitions are jarring. Our core challenge was therefore not just to render web content, but to build a system where components rendered in a WebView were visually indistinguishable from their native counterparts, and where the entire data-to-visualization pipeline could be automated and managed as infrastructure.

We landed on a combination of technologies that, on the surface, seem entirely unrelated. The final architecture uses Terraform to manage a serverless backend that executes Jupyter Notebook logic on-demand. This backend serves data to a lightweight React application, which uses Styled-components for dynamic, native-aware theming. This React application is then rendered within a carefully configured Android WebView, creating a seamless bridge between the data science and mobile development worlds. In a real-world project, connecting such disparate ecosystems is fraught with peril, but the payoff in development velocity justified the architectural complexity.

The Infrastructure Foundation: Terraform for a Serverless Notebook Executor

Manual infrastructure management was a non-starter. This system needed to be reproducible, scalable, and version-controlled. Terraform was the obvious choice for defining the entire backend stack as code. The core component is an AWS Lambda function that can execute Python code, triggered by an API Gateway endpoint.

The first step is defining the Lambda function itself and the necessary IAM roles. A common mistake is to grant overly permissive roles; here, the Lambda only needs permissions to execute and write logs to CloudWatch.

# main.tf

provider "aws" {
  region = "us-east-1"
}

data "aws_caller_identity" "current" {}
data "aws_iam_policy_document" "lambda_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "notebook_executor_role" {
  name               = "notebook-executor-lambda-role"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
}

# Policy for logging to CloudWatch
data "aws_iam_policy_document" "lambda_logging" {
  statement {
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
    ]
    # Scope to this function's log group rather than a blanket wildcard
    resources = ["arn:aws:logs:us-east-1:${data.aws_caller_identity.current.account_id}:log-group:/aws/lambda/JupyterVisualizationExecutor:*"]
  }
}

resource "aws_iam_role_policy" "notebook_executor_logging" {
  name   = "notebook-executor-logging-policy"
  role   = aws_iam_role.notebook_executor_role.id
  policy = data.aws_iam_policy_document.lambda_logging.json
}

# Package the Python source code and the notebook
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "${path.module}/src"
  output_path = "${path.module}/build/lambda_package.zip"
}

# The Lambda function resource
resource "aws_lambda_function" "notebook_executor" {
  function_name    = "JupyterVisualizationExecutor"
  filename         = data.archive_file.lambda_zip.output_path
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  
  handler = "handler.execute_notebook"
  runtime = "python3.9"
  role    = aws_iam_role.notebook_executor_role.arn

  timeout     = 30
  memory_size = 512

  environment {
    variables = {
      NOTEBOOK_PATH = "user_cohort_visual.ipynb"
    }
  }
}

Next, we expose this Lambda function via API Gateway. We’re using the simpler HTTP API for its lower cost and easier configuration compared to a REST API. For a production system, you would add more robust authentication, like an authorizer or API key requirements.

# api_gateway.tf

resource "aws_apigatewayv2_api" "http_api" {
  name          = "notebook-api"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.http_api.id
  name        = "$default"
  auto_deploy = true
}

resource "aws_apigatewayv2_integration" "lambda_integration" {
  api_id           = aws_apigatewayv2_api.http_api.id
  integration_type = "AWS_PROXY"
  integration_uri  = aws_lambda_function.notebook_executor.invoke_arn
}

resource "aws_apigatewayv2_route" "post_data" {
  api_id    = aws_apigatewayv2_api.http_api.id
  route_key = "POST /visualize"
  target    = "integrations/${aws_apigatewayv2_integration.lambda_integration.id}"
}

resource "aws_lambda_permission" "api_gateway_permission" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.notebook_executor.function_name
  principal     = "apigateway.amazonaws.com"

  source_arn = "${aws_apigatewayv2_api.http_api.execution_arn}/*/*"
}

output "api_endpoint" {
  value = aws_apigatewayv2_api.http_api.api_endpoint
}
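The authentication mentioned above can live in the same Terraform stack. A sketch of a JWT authorizer follows; the audience and issuer values are placeholders, and the existing aws_apigatewayv2_route.post_data would additionally need authorization_type = "JWT" and authorizer_id = aws_apigatewayv2_authorizer.jwt.id to enforce it:

```hcl
# Sketch: a JWT authorizer for the HTTP API (e.g., backed by a Cognito
# user pool). The audience and issuer values below are placeholders.
resource "aws_apigatewayv2_authorizer" "jwt" {
  api_id           = aws_apigatewayv2_api.http_api.id
  authorizer_type  = "JWT"
  identity_sources = ["$request.header.Authorization"]
  name             = "notebook-api-jwt"

  jwt_configuration {
    audience = ["example-app-client-id"]
    issuer   = "https://cognito-idp.us-east-1.amazonaws.com/example-user-pool-id"
  }
}
```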

This Terraform setup provides a fully managed, version-controlled infrastructure. A terraform apply deploys or updates the entire backend, fitting perfectly into a CI/CD pipeline.

The Backend Logic: Executing Notebooks with Papermill

The Lambda function’s job is to act as a harness for the Jupyter Notebook. The pitfall here is trying to reinvent the wheel. Instead of parsing notebook files manually, we use Papermill, a library designed specifically for parameterizing and executing notebooks.

Our Lambda handler receives an event from API Gateway, extracts parameters, passes them to Papermill, executes the notebook, and returns the output. The key is that Papermill writes the executed notebook to a temporary location and returns a notebook object whose cell outputs we can inspect. We tag the cell that produces the final payload with results so the handler can identify it.

The directory structure for the Lambda source code (src/) would be:

src/
├── handler.py
├── requirements.txt
└── user_cohort_visual.ipynb

The handler.py orchestrates the execution. It includes robust error handling and structured logging, which are critical for debugging in a serverless environment. One packaging caveat: archive_file only zips the source tree, so the dependencies in requirements.txt (Papermill and its transitive dependencies) must be vendored into src/ at build time (for example, pip install -r requirements.txt -t src/) or shipped as a Lambda layer.

# src/handler.py

import json
import logging
import os
import tempfile
from io import StringIO

import papermill as pm

# Configure structured logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def execute_notebook(event, context):
    """
    AWS Lambda handler to execute a Jupyter Notebook via Papermill.
    It expects a JSON body with 'params' for the notebook.
    """
    try:
        body = json.loads(event.get("body", "{}"))
        params = body.get("params", {})
        
        # A simple validation, in a real project this would be more robust
        # e.g., using Pydantic.
        if "user_id" not in params:
             return {
                "statusCode": 400,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps({"error": "Missing 'user_id' in params"})
            }

        logger.info(f"Executing notebook for user_id: {params['user_id']}")

        source_notebook_path = os.environ.get("NOTEBOOK_PATH", "user_cohort_visual.ipynb")
        
        # Papermill needs writable storage, so we use the /tmp directory in Lambda
        with tempfile.NamedTemporaryFile(suffix=".ipynb", delete=False) as tmp_output:
            output_notebook_path = tmp_output.name

        # Use an in-memory buffer to capture stdout/stderr from the notebook execution
        stdout_buffer = StringIO()
        stderr_buffer = StringIO()
        
        # Execute the notebook with Papermill's default engine
        executed_nb = pm.execute_notebook(
            source_notebook_path,
            output_notebook_path,
            parameters=params,
            stdout_file=stdout_buffer,
            stderr_file=stderr_buffer,
            log_output=True,
        )

        # Check for errors during execution
        execution_errors = []
        for cell in executed_nb.cells:
            if cell.cell_type == 'code' and cell.outputs:
                for output in cell.outputs:
                    if output.output_type == 'error':
                        execution_errors.append({
                            "error_name": output.ename,
                            "error_value": output.evalue,
                            "traceback": "\n".join(output.traceback)
                        })
        
        if execution_errors:
            logger.error("Errors found during notebook execution.")
            logger.error(json.dumps(execution_errors))
            return {
                "statusCode": 500,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps({"error": "Notebook execution failed", "details": execution_errors})
            }

        # Extract the output of the cell tagged 'results'. (Papermill's old
        # pm.read_notebook API was removed, so we scan the executed notebook
        # object returned by execute_notebook instead.)
        result_data_str = "".join(
            out.get("text", "")
            for cell in executed_nb.cells
            if "results" in cell.get("metadata", {}).get("tags", [])
            for out in cell.get("outputs", [])
            if out.get("output_type") == "stream"
        )

        # The notebook cell prints a JSON string; parse it back into a dictionary.
        result_data = json.loads(result_data_str)

        logger.info("Notebook execution successful.")
        
        return {
            "statusCode": 200,
            "headers": {
                "Content-Type": "application/json",
                "Access-Control-Allow-Origin": "*" # For local development, tighten in prod
            },
            "body": json.dumps(result_data)
        }

    except Exception as e:
        logger.exception("An unexpected error occurred in the handler.")
        return {
            "statusCode": 500,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"error": str(e)})
        }
    finally:
        # Clean up the temporary file
        if 'output_notebook_path' in locals() and os.path.exists(output_notebook_path):
            os.remove(output_notebook_path)

The data scientist’s notebook, user_cohort_visual.ipynb, is now just a standard notebook with two special considerations:

  1. A cell at the top is tagged with parameters in the cell metadata. Papermill injects the parameters here.
  2. The final cell that produces the output is tagged with results.

A simplified example notebook:

{
 "cells": [
  {
   "cell_type": "code",
   "metadata": {
    "tags": [
     "parameters"
    ]
   },
   "source": [
    "# This cell is tagged as 'parameters'\n",
    "user_id = 'default-user'"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import pandas as pd\n",
    "import json\n",
    "\n",
    "# In a real scenario, this would fetch data from a database or data lake\n",
    "def get_user_data(uid):\n",
    "    # Mock data generation based on user_id\n",
    "    if 'test' in uid:\n",
    "        return {'labels': ['Q1', 'Q2', 'Q3', 'Q4'], 'values': [10, 45, 25, 80]}\n",
    "    return {'labels': ['Q1', 'Q2', 'Q3', 'Q4'], 'values': [20, 30, 60, 50]}\n",
    "\n",
    "data = get_user_data(user_id)"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "tags": [
     "results"
    ]
   },
   "source": [
    "# This cell is tagged as 'results'\n",
    "# The output of this cell will be captured by the Lambda handler\n",
    "print(json.dumps(data))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}

This setup empowers data scientists. They can work in their familiar environment, and as long as they adhere to the tagged cell convention, their logic can be directly integrated and deployed.
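Because the whole pipeline hinges on those two tags, it is worth failing fast in CI when they are missing. A minimal, stdlib-only check might look like this (ci_check_tags.py is a hypothetical helper, not part of the Lambda package; an .ipynb file is just JSON):

```python
# ci_check_tags.py -- hypothetical CI guard: reject notebooks that are
# missing the 'parameters' or 'results' cell tags the handler relies on.

REQUIRED_TAGS = {"parameters", "results"}

def missing_tags(notebook_json):
    """Return the set of required tags not found on any cell in the notebook."""
    found = set()
    for cell in notebook_json.get("cells", []):
        found.update(cell.get("metadata", {}).get("tags", []))
    return REQUIRED_TAGS - found

# A notebook with only a parameters cell should be rejected:
partial_nb = {"cells": [{"cell_type": "code",
                         "metadata": {"tags": ["parameters"]},
                         "source": []}]}
print(missing_tags(partial_nb))  # {'results'}
```

In CI, this check would run against user_cohort_visual.ipynb before terraform apply repackages the Lambda, turning a runtime 500 into a failed build.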

The Dynamic Frontend: React with Styled-Components

The frontend is a lightweight React app designed to do two things: fetch data from our new API and render it using a theme provided by the native Android host. This is where styled-components becomes critical. Its ThemeProvider context allows us to inject a theme object at the top level of the app, which all styled components below can access.

First, we define a default theme to allow standalone browser development, but the real theme will come from Android.

// src/theme.js
export const defaultTheme = {
  colors: {
    primary: '#6200EE',
    background: '#FFFFFF',
    text: '#000000',
    surface: '#F5F5F5',
    error: '#B00020',
  },
  typography: {
    fontFamily: 'sans-serif',
    titleSize: '20px',
    bodySize: '14px',
  },
  spacing: {
    small: '8px',
    medium: '16px',
    large: '24px',
  },
};

The main application component sets up the theme provider and handles the communication bridge with Android. We’ll define a global function on the window object that Android can call to update the theme.

// src/App.js
import React, { useState, useEffect } from 'react';
import { ThemeProvider } from 'styled-components';
import { defaultTheme } from './theme';
import Visualization from './Visualization';

function App() {
  const [theme, setTheme] = useState(defaultTheme);

  useEffect(() => {
    // This is the bridge. Android will call this function.
    window.setNativeTheme = (themeString) => {
      try {
        const nativeTheme = JSON.parse(themeString);
        console.log("Native theme received:", nativeTheme);
        // Merge each nested section individually: a shallow spread would
        // replace 'colors' wholesale, dropping defaults the native side omits.
        setTheme(currentTheme => ({
          ...currentTheme,
          ...nativeTheme,
          colors: { ...currentTheme.colors, ...nativeTheme.colors },
          typography: { ...currentTheme.typography, ...nativeTheme.typography },
          spacing: { ...currentTheme.spacing, ...nativeTheme.spacing },
        }));
      } catch (error) {
        console.error("Failed to parse native theme:", error);
      }
    };
    
    // Announce to the native host that the web view is ready to receive the theme
    if (window.AndroidBridge && typeof window.AndroidBridge.onWebViewReady === 'function') {
      window.AndroidBridge.onWebViewReady();
    }

    // Cleanup function
    return () => {
      delete window.setNativeTheme;
    };
  }, []);

  return (
    <ThemeProvider theme={theme}>
      <Visualization />
    </ThemeProvider>
  );
}

export default App;

The Visualization component is responsible for fetching data and displaying it. It uses styled-components to consume the theme provided by ThemeProvider. This is the crucial link: the styles are not hardcoded; they are derived from the theme state.

// src/Visualization.js
import React, { useState, useEffect } from 'react';
import styled, { keyframes } from 'styled-components';

const Card = styled.div`
  background-color: ${props => props.theme.colors.surface};
  padding: ${props => props.theme.spacing.medium};
  border-radius: 8px;
  box-shadow: 0 2px 4px rgba(0,0,0,0.1);
  font-family: ${props => props.theme.typography.fontFamily};
  color: ${props => props.theme.colors.text};
`;

const Title = styled.h2`
  font-size: ${props => props.theme.typography.titleSize};
  margin-top: 0;
  margin-bottom: ${props => props.theme.spacing.medium};
`;

const BarChartContainer = styled.div`
  display: flex;
  justify-content: space-around;
  align-items: flex-end;
  height: 200px;
  border-left: 1px solid ${props => props.theme.colors.text};
  border-bottom: 1px solid ${props => props.theme.colors.text};
  padding-left: ${props => props.theme.spacing.small};
`;

const grow = keyframes`
  from { transform: scaleY(0); }
  to { transform: scaleY(1); }
`;

const Bar = styled.div`
  width: 20%;
  background-color: ${props => props.theme.colors.primary};
  height: ${props => props.height}%;
  transform-origin: bottom;
  animation: ${grow} 0.5s ease-out;
`;

const ErrorMessage = styled.p`
  color: ${props => props.theme.colors.error};
`;


function Visualization() {
  const [data, setData] = useState(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);

  useEffect(() => {
    const fetchData = async () => {
      try {
        setLoading(true);
        setError(null);
        // The API endpoint comes from the Terraform output
        const API_ENDPOINT = 'YOUR_API_GATEWAY_ENDPOINT_HERE'; 

        const response = await fetch(`${API_ENDPOINT}/visualize`, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({ params: { user_id: 'android-test-user-123' } }),
        });

        if (!response.ok) {
          throw new Error(`API request failed with status ${response.status}`);
        }

        const result = await response.json();
        setData(result);
      } catch (err) {
        setError(err.message);
      } finally {
        setLoading(false);
      }
    };

    fetchData();
  }, []);

  if (loading) return <Card><Title>Loading Visualization...</Title></Card>;
  if (error) return <Card><ErrorMessage>Error: {error}</ErrorMessage></Card>;
  if (!data) return null;

  const maxValue = Math.max(...data.values);

  return (
    <Card>
      <Title>User Activity (Q1-Q4)</Title>
      <BarChartContainer>
        {data.values.map((value, index) => (
          <Bar key={data.labels[index]} height={(value / maxValue) * 100} />
        ))}
      </BarChartContainer>
    </Card>
  );
}

export default Visualization;

The Native Android Integration

The final piece is the Android host. We need a WebView configured to communicate with our React app. The most critical part is setting up a JavascriptInterface to expose native Kotlin methods to JavaScript, and a way to call JavaScript functions from Kotlin.

A common mistake here is to expose too much of the native application to the JavaScript context, which can be a security risk. Our interface is minimal: it only provides a single method for the web view to signal its readiness.

// Android/app/src/main/java/com/example/dynamicui/WebViewActivity.kt

package com.example.dynamicui

import android.annotation.SuppressLint
import android.content.Context
import android.os.Bundle
import android.util.TypedValue
import android.webkit.JavascriptInterface
import android.webkit.WebChromeClient
import android.webkit.WebView
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat
import org.json.JSONObject

class WebViewActivity : AppCompatActivity() {

    private lateinit var webView: WebView

    @SuppressLint("SetJavaScriptEnabled", "AddJavascriptInterface")
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_webview)

        webView = findViewById(R.id.dynamic_webview)
        
        webView.settings.javaScriptEnabled = true
        // For debugging in Chrome via chrome://inspect (debug builds only)
        if (BuildConfig.DEBUG) {
            WebView.setWebContentsDebuggingEnabled(true)
        }
        
        webView.webChromeClient = WebChromeClient()
        
        // Add the communication bridge
        webView.addJavascriptInterface(WebAppInterface(this), "AndroidBridge")

        // In a real app, this would be a URL to your deployed React app (e.g., on S3/CloudFront)
        webView.loadUrl("http://10.0.2.2:3000") // 10.0.2.2 is the host machine for Android emulator
    }

    private fun injectTheme() {
        val themeJson = buildThemeJson()
        // evaluateJavascript executes raw JS; the "javascript:" prefix is only
        // needed for loadUrl. The theme string was escaped in buildThemeJson.
        val script = "window.setNativeTheme('$themeJson');"
        webView.post {
            webView.evaluateJavascript(script, null)
        }
    }
    
    private fun buildThemeJson(): String {
        // This function extracts colors, dimensions, etc., from the Android app's theme.
        // This is the core of the native-to-web theme propagation.
        val outValue = TypedValue()

        // Resolve a theme color attribute, handling both resource references
        // and literal color values (resourceId is 0 for literals).
        fun resolveColor(attr: Int): String {
            theme.resolveAttribute(attr, outValue, true)
            val color = if (outValue.resourceId != 0) {
                ContextCompat.getColor(this, outValue.resourceId)
            } else {
                outValue.data
            }
            return String.format("#%06X", 0xFFFFFF and color)
        }

        val colorPrimary = resolveColor(com.google.android.material.R.attr.colorPrimary)
        val colorText = resolveColor(com.google.android.material.R.attr.colorOnSurface)
        val colorSurface = resolveColor(com.google.android.material.R.attr.colorSurface)
        
        // Similarly for fonts, spacing, etc.
        val json = JSONObject().apply {
            put("colors", JSONObject().apply {
                put("primary", colorPrimary)
                put("text", colorText)
                put("surface", colorSurface)
            })
            // Add other theme properties like typography and spacing here
        }
        
        return json.toString().replace("'", "\\'")
    }

    // This interface is exposed to the WebView's JavaScript context.
    inner class WebAppInterface(private val context: Context) {
        @JavascriptInterface
        fun onWebViewReady() {
            // This is called from the React app's useEffect.
            // It's the signal to inject the theme.
            injectTheme()
        }
    }
}
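One caveat on the injection above: escaping only single quotes covers the common case, but a backslash or newline inside a theme value would still break the script string. A more robust pattern is to encode the JSON payload a second time, yielding a valid JavaScript string literal; on the Kotlin side, JSONObject.quote achieves the same effect. A Python sketch of the idea:

```python
import json

# A theme value containing characters that naive quoting would mishandle.
theme = {"colors": {"primary": "#6200EE", "note": "it's\nmulti-line"}}

# First encoding: the theme as JSON text -- this is what setNativeTheme parses.
payload = json.dumps(theme)

# Second encoding: wrap that text as a JavaScript string literal, which
# escapes quotes, backslashes, and newlines correctly.
script = "window.setNativeTheme(%s);" % json.dumps(payload)

print(script)
```

Double-encoding sidesteps an entire class of injection bugs because the JSON encoder, not a hand-rolled replace, owns the escaping rules.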

This flow ensures that the theme is injected only after the React app has mounted and established its communication bridge.

sequenceDiagram
    participant AndroidApp as Android App
    participant WebView as WebView (React)
    participant APIGateway as API Gateway
    participant Lambda as Lambda Executor

    AndroidApp->>WebView: Loads React App URL
    WebView-->>AndroidApp: `AndroidBridge.onWebViewReady()`
    AndroidApp->>AndroidApp: buildThemeJson()
    AndroidApp->>WebView: `evaluateJavascript('window.setNativeTheme(...)')`
    WebView->>WebView: Applies native theme via Styled-components
    
    WebView->>APIGateway: POST /visualize (fetch data)
    APIGateway->>Lambda: Invokes function with payload
    Lambda->>Lambda: Papermill executes notebook
    Lambda-->>APIGateway: Returns JSON data
    APIGateway-->>WebView: Forwards JSON response
    WebView->>WebView: Renders visualization with data and native theme

The result is powerful. A data scientist can commit a change to a Jupyter Notebook in a Git repository. A CI/CD pipeline triggers a terraform apply, updating the Lambda function. The next time an engineer opens a debug build of the Android app, the WebView component automatically fetches the new logic and renders a visualization that is perfectly styled to match the app’s native light or dark theme, without any changes to the Android codebase.

This system isn’t without its limitations. The cold start latency on the Lambda function can be a noticeable delay for the first visualization request. For a system serving live user traffic, this would be unacceptable; a provisioned concurrency model or even a container-based service like Fargate would be necessary. The communication bridge using JavascriptInterface is also functionally effective but somewhat brittle; a more formalized contract using something like Protocol Buffers serialized to strings could provide better type safety and forward compatibility. Finally, the complexity of this architecture is significant, and it’s only justifiable in organizations where the bottleneck between data science and mobile development is a severe and costly problem.
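As a sketch of the provisioned concurrency mitigation, the warm pool can be declared alongside the function in the same Terraform stack; this assumes publish = true is set on aws_lambda_function.notebook_executor, since provisioned concurrency attaches to a published version or alias:

```hcl
# Sketch: keep two execution environments warm to absorb cold starts.
# Requires the function to publish versions (publish = true).
resource "aws_lambda_provisioned_concurrency_config" "executor_warm" {
  function_name                     = aws_lambda_function.notebook_executor.function_name
  qualifier                         = aws_lambda_function.notebook_executor.version
  provisioned_concurrent_executions = 2
}
```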

