The mandate was clear and non-negotiable: zero-trust networking for all internal services. Every connection, including those to our data stores—MariaDB for transactional records and Solr for search indexing—required mutual TLS (mTLS) authentication. The certificates had to be short-lived and dynamically issued by a central PKI authority. Our existing development workflow, which relied heavily on Test-Driven Development (TDD) with simple, unsecured database connections in test environments, was immediately obsolete. The core problem was stark: how do you write a failing test for a database connection that requires a complex, runtime-generated cryptographic identity to even attempt a handshake?
Our first impulse was to mock the entire security layer. This was quickly dismissed. Mocking an SSLContext or a TLS handshake provides a false sense of security. It tests that your code calls the right methods, not that it can actually negotiate a secure channel with a real, configured server. A passing test would mean nothing in the face of a production deployment that fails with an SSLHandshakeException. We needed to test the real thing. This meant our unit and integration tests had to orchestrate a complete, miniature production environment on the fly: a certificate authority, a MariaDB server configured for mTLS, a SolrCloud instance configured for mTLS, and our application itself.
The foundation for this approach became Testcontainers. It’s the only sane way to manage complex, stateful dependencies like this directly from a JUnit 5 test lifecycle. The goal was to write a single test, shouldSaveAndIndexDocument(), and have it fail initially due to a TLS error. We would then incrementally build the infrastructure and application code to make it pass, proving our entire secure data access stack worked before writing any business logic.
The architecture of our test environment looked like this:
graph TD
  subgraph "JUnit Test Runner"
    A[Test Class]
  end
  subgraph "Docker Network via Testcontainers"
    B[App Code Under Test]
    C[Vault Container]
    D[MariaDB Container]
    E[Solr Container]
  end
  A -- Manages --> C
  A -- Manages --> D
  A -- Manages --> E
  A -- Invokes --> B
  B -- 1. Requests Cert --> C
  C -- 2. Issues Cert --> B
  B -- 3. mTLS Connect --> D
  B -- 4. mTLS Connect --> E
Our first step was defining the test harness. This involved declaring containers for HashiCorp Vault, MariaDB, and Solr. A critical detail in a real-world project is ensuring all containers can communicate. Testcontainers handles this elegantly with a shared Network.
// SecureDataAccessLayerIT.java
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.MariaDBContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;

import java.io.IOException;
import java.time.Duration;

@Testcontainers
class SecureDataAccessLayerIT {

    private static final Network network = Network.newNetwork();

    // Vault will be our Certificate Authority
    @Container
    private static final GenericContainer<?> vaultContainer =
            new GenericContainer<>(DockerImageName.parse("hashicorp/vault:1.15"))
                    .withNetwork(network)
                    .withNetworkAliases("vault")
                    .withCopyFileToContainer(
                            MountableFile.forClasspathResource("vault/config.json"),
                            "/vault/config/local.json")
                    .withCopyFileToContainer(
                            MountableFile.forClasspathResource("vault/setup-pki.sh"),
                            "/usr/local/bin/setup-pki.sh")
                    .withCapAdd("IPC_LOCK")
                    .withExposedPorts(8200)
                    .withCommand("server -config /vault/config/local.json")
                    .withEnv("VAULT_DEV_ROOT_TOKEN_ID", "root-token")
                    .waitingFor(Wait.forHttp("/v1/sys/health")
                            .forStatusCode(200)
                            .withStartupTimeout(Duration.ofSeconds(30)));

    // MariaDB needs to be configured for mTLS
    @Container
    private static final MariaDBContainer<?> mariaDbContainer =
            new MariaDBContainer<>(DockerImageName.parse("mariadb:10.11"))
                    .withNetwork(network)
                    .withNetworkAliases("mariadb")
                    .withDatabaseName("testdb")
                    .withUsername("user")
                    .withPassword("password")
                    .withCopyFileToContainer(
                            MountableFile.forClasspathResource("mariadb/my.cnf"),
                            "/etc/mysql/conf.d/custom.cnf")
                    .withCommand("--ssl-ca=/etc/mysql/certs/ca.pem",
                            "--ssl-cert=/etc/mysql/certs/server.pem",
                            "--ssl-key=/etc/mysql/certs/server-key.pem",
                            "--require_secure_transport=ON")
                    // MariaDB 10.11 logs "mariadbd", not "mysqld", so don't anchor the
                    // pattern on the process name. The entrypoint also runs a temporary
                    // server during first-time initialization, hence waiting for the
                    // second occurrence of the ready line.
                    .waitingFor(Wait.forLogMessage(".*ready for connections.*", 2));

    // Solr also needs to be configured for mTLS
    @Container
    private static final GenericContainer<?> solrContainer =
            new GenericContainer<>(DockerImageName.parse("solr:9.4"))
                    .withNetwork(network)
                    .withNetworkAliases("solr")
                    .withExposedPorts(8983)
                    .withCopyFileToContainer(
                            MountableFile.forClasspathResource("solr/solr.in.sh"),
                            "/etc/default/solr.in.sh")
                    // The image's entrypoint execs its arguments, so name the launcher
                    // explicitly: solr-foreground stays in the foreground, -c enables
                    // SolrCloud mode.
                    .withCommand("solr-foreground", "-c")
                    .waitingFor(Wait.forHttp("/solr/admin/collections?action=CLUSTERSTATUS")
                            .forStatusCode(200)
                            .withStartupTimeout(Duration.ofSeconds(60)));

    @BeforeAll
    static void setup() throws IOException, InterruptedException {
        // This is where the magic happens: we bootstrap the entire PKI infrastructure
        // and configure the databases before any test runs.

        // Step 1: Initialize Vault PKI engine
        var vaultSetupResult = vaultContainer.execInContainer("sh", "/usr/local/bin/setup-pki.sh");
        if (vaultSetupResult.getExitCode() != 0) {
            throw new RuntimeException("Failed to setup Vault PKI: " + vaultSetupResult.getStderr());
        }

        // Step 2: Generate and sign server certificates for MariaDB and Solr
        generateAndInstallServerCerts("mariadb", "/etc/mysql/certs");
        generateAndInstallServerCerts("solr", "/var/solr/etc/certs");

        // NOTE: the server certs must exist before mariadbd and Solr will come up with
        // the TLS flags above, so a production-grade harness starts Vault first
        // (outside @Container), issues the server certs, and only then starts the
        // database containers. The sequencing here is simplified for readability.
        System.out.println("Infrastructure setup complete. Containers are running with mTLS enabled.");
    }

    private static void generateAndInstallServerCerts(String serviceName, String certPath) throws IOException, InterruptedException {
        // This helper would use the vault CLI inside the container to issue certs.
        // For brevity, the implementation is omitted, but it would:
        // 1. vaultContainer.execInContainer("vault", "write", "pki/issue/...", "common_name=" + serviceName, ...);
        // 2. Extract the certs from the JSON output.
        // 3. Copy the certs into the respective MariaDB/Solr containers.
        // A common pitfall is forgetting to set Subject Alternative Names (SANs) for DNS names and IPs.
    }

    @Test
    void shouldFailWithoutClientCertificate() {
        // The first "Red" test. We instantiate our class and try to use it.
        // This will fail because the client has no certificate to present.
        // The expected outcome is an SSLHandshakeException.
    }

    @Test
    void shouldSaveAndIndexDocumentWithDynamicClientCertificate() {
        // The final "Green" test we are working towards.
    }
}
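A note on that MariaDB wait strategy, because it is easy to get wrong: MariaDB 10.11 logs its ready line from mariadbd, not mysqld, so a wait pattern anchored on the legacy process name never matches and the startup wait simply times out. A process-name-agnostic pattern is safer, and it is cheap to sanity-check against representative log lines in isolation:

```java
import java.util.regex.Pattern;

public class ReadinessPatternCheck {
    // Process-name-agnostic readiness pattern: matches both the modern
    // mariadbd banner and the legacy mysqld one.
    static final Pattern READY = Pattern.compile(".*ready for connections.*");

    public static boolean matches(String logLine) {
        return READY.matcher(logLine).matches();
    }

    public static void main(String[] args) {
        System.out.println(matches("2024-01-01  0:00:00 0 [Note] mariadbd: ready for connections."));  // true
        System.out.println(matches("mysqld: ready for connections. Version: '10.11.6-MariaDB'"));       // true
        System.out.println(matches("[Note] Starting MariaDB 10.11.6"));                                 // false
    }
}
```

Remember also that the official image's entrypoint starts a throwaway server during first-time initialization, so the ready line appears twice; waiting for the second occurrence avoids declaring readiness too early.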
The setup is complex. We’re not just starting containers; we’re injecting custom configuration files and startup scripts. The setup-pki.sh script is the heart of the CA setup. It runs inside the Vault container and uses the Vault CLI to initialize the PKI secrets engine.
#!/bin/sh
# vault/setup-pki.sh
set -e
# Use the root token for setup
export VAULT_TOKEN=root-token
export VAULT_ADDR=http://127.0.0.1:8200
# Enable the PKI secrets engine
vault secrets enable pki
# Set a long TTL for the CA itself
vault secrets tune -max-lease-ttl=87600h pki
# Generate the root certificate. In production, this would be an intermediate CA.
vault write -field=certificate pki/root/generate/internal \
common_name="test.local" ttl=87600h > ca.pem
# Configure the CRL and issuing endpoint URLs
vault write pki/config/urls \
issuing_certificates="${VAULT_ADDR}/v1/pki/ca" \
crl_distribution_points="${VAULT_ADDR}/v1/pki/crl"
# Create a role that our application will use to request client certificates.
# The key constraints here are short TTLs and enforcing specific names.
# allow_bare_domains is required so a CN exactly equal to the allowed domain
# (app.client.test.local) can be issued; without it, Vault rejects the request.
vault write pki/roles/app-client \
    allowed_domains="app.client.test.local" \
    allow_bare_domains=true \
    allow_subdomains=true \
    max_ttl="1h" \
    key_type="rsa" \
    key_bits="2048"
echo "Vault PKI engine configured."
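That 1h max_ttl on the client role has an operational consequence: the application must renew its certificate well before it expires. The renewal threshold below, two-thirds of the lifetime, is a common heuristic of ours rather than anything Vault enforces; a minimal sketch of the scheduling math:

```java
import java.time.Duration;
import java.time.Instant;

public class CertRenewalSchedule {
    // Renew after two-thirds of the certificate lifetime has elapsed,
    // leaving a comfortable margin before the hard expiry.
    static final double RENEWAL_FRACTION = 2.0 / 3.0;

    public static Instant renewAt(Instant issuedAt, Duration ttl) {
        long renewAfterSeconds = (long) (ttl.getSeconds() * RENEWAL_FRACTION);
        return issuedAt.plusSeconds(renewAfterSeconds);
    }

    public static void main(String[] args) {
        Instant issued = Instant.parse("2024-01-01T00:00:00Z");
        // With the role's 1h max_ttl, renewal is due 40 minutes after issuance.
        System.out.println(renewAt(issued, Duration.ofHours(1))); // 2024-01-01T00:40:00Z
    }
}
```

In a long-running service, a background task would re-run the certificate request at that instant and swap in the fresh key material.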
Simultaneously, we must configure MariaDB and Solr to require client certificates. A common mistake is to enable SSL but not client certificate verification, which negates the “mutual” in mTLS. MariaDB makes this especially easy to miss: require_secure_transport only forces TLS, while client-certificate verification is enforced per account, e.g. ALTER USER 'user'@'%' REQUIRE X509.
For MariaDB, the my.cnf is straightforward:
# mariadb/my.cnf
[mysqld]
require_secure_transport = ON
ssl_ca = /etc/mysql/certs/ca.pem
ssl_cert = /etc/mysql/certs/server.pem
ssl_key = /etc/mysql/certs/server-key.pem
For Solr, this is handled via system properties in solr.in.sh:
# solr/solr.in.sh
SOLR_SSL_ENABLED=true
SOLR_SSL_KEY_STORE=/var/solr/etc/certs/solr-keystore.p12
SOLR_SSL_KEY_STORE_PASSWORD=secret
SOLR_SSL_TRUST_STORE=/var/solr/etc/certs/solr-truststore.p12
SOLR_SSL_TRUST_STORE_PASSWORD=secret
SOLR_SSL_NEED_CLIENT_AUTH=true   # 'need' enforces client certificates (the "mutual" in mTLS)
SOLR_SSL_WANT_CLIENT_AUTH=false  # 'want' merely requests a cert; only relevant when NEED is false
With the server-side infrastructure defined, we can turn to the application code. The core component is a CertificateManager responsible for interacting with Vault at runtime to fetch a client certificate. This component is what makes the whole system dynamic.
// CertificateManager.java
import com.bettercloud.vault.Vault;
import com.bettercloud.vault.VaultConfig;
import com.bettercloud.vault.VaultException;
import org.bouncycastle.asn1.x500.X500Name;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
import org.bouncycastle.pkcs.PKCS10CertificationRequest;
import org.bouncycastle.pkcs.PKCS10CertificationRequestBuilder;
import org.bouncycastle.pkcs.jcajce.JcaPKCS10CertificationRequestBuilder;

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Base64;

public class CertificateManager {

    private final Vault vault;
    private final String pkiRole;

    public CertificateManager(String vaultAddr, String vaultToken, String pkiRole) throws VaultException {
        final VaultConfig config = new VaultConfig()
                .address(vaultAddr)
                .token(vaultToken)
                .build();
        this.vault = new Vault(config);
        this.pkiRole = pkiRole;
    }

    public ClientCertificateMaterials generateClientCertificates() throws Exception {
        // Step 1: Generate a new private/public key pair locally.
        // The private key should never leave the client.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair keyPair = kpg.generateKeyPair();
        PrivateKey privateKey = keyPair.getPrivate();

        // Step 2: Create a Certificate Signing Request (CSR)
        PKCS10CertificationRequestBuilder p10Builder = new JcaPKCS10CertificationRequestBuilder(
                new X500Name("CN=app.client.test.local"), keyPair.getPublic());
        JcaContentSignerBuilder csBuilder = new JcaContentSignerBuilder("SHA256withRSA");
        ContentSigner signer = csBuilder.build(privateKey);
        PKCS10CertificationRequest csr = p10Builder.build(signer);

        // Step 3: Send the CSR to Vault to be signed by our CA
        String csrPem = toPem(csr.getEncoded());
        String commonName = "app.client.test.local";
        final var signingResponse = vault.logical()
                .write(String.format("pki/sign/%s", pkiRole),
                        java.util.Map.of("csr", csrPem, "common_name", commonName));

        // Step 4: Extract the certificate chain and CA from the response.
        // A pitfall is not handling the full chain correctly.
        String certPem = signingResponse.getData().get("certificate");
        String caChainPem = signingResponse.getData().get("issuing_ca");
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate clientCert = (X509Certificate) cf.generateCertificate(
                new java.io.ByteArrayInputStream(certPem.getBytes()));
        X509Certificate caCert = (X509Certificate) cf.generateCertificate(
                new java.io.ByteArrayInputStream(caChainPem.getBytes()));
        return new ClientCertificateMaterials(privateKey, clientCert, caCert);
    }

    // Helper to PEM-encode the CSR. PEM base64 is conventionally wrapped at 64
    // columns, and some parsers reject a single unbroken line; the MIME encoder
    // handles the wrapping.
    private String toPem(byte[] derEncoded) {
        String base64 = Base64.getMimeEncoder(64, "\n".getBytes()).encodeToString(derEncoded);
        return "-----BEGIN CERTIFICATE REQUEST-----\n" + base64 + "\n-----END CERTIFICATE REQUEST-----\n";
    }

    public record ClientCertificateMaterials(PrivateKey privateKey, X509Certificate clientCert, X509Certificate caCert) {}
}
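Step 1 of generateClientCertificates is worth internalizing: the key pair is generated locally with plain JCA, and only the CSR (the public key plus the requested name) ever crosses the wire to Vault. That step is easy to verify in isolation:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

public class KeyPairSanityCheck {
    // Generates the same kind of key pair CertificateManager uses. The private
    // half stays in this process; only the public half goes into the CSR.
    public static KeyPair generate() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        return kpg.generateKeyPair();
    }

    public static void main(String[] args) throws Exception {
        KeyPair kp = generate();
        RSAPublicKey pub = (RSAPublicKey) kp.getPublic();
        // The Vault role pins key_bits=2048, so the modulus must be exactly 2048 bits.
        System.out.println(pub.getModulus().bitLength()); // prints 2048
    }
}
```

If the role's key_bits and the client's KeyPairGenerator size ever drift apart, Vault rejects the CSR, so a check like this makes the coupling explicit.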
Now we need a way to use these dynamically generated materials. We’ll create an SslContextFactory that builds an in-memory KeyStore (for our identity) and TrustStore (for trusting the server’s CA). One caveat up front: not every client library accepts an SSLContext directly, and JDBC drivers in particular expect file-backed keystores, so the same materials sometimes have to be persisted to temporary PKCS#12 files.
// SslContextFactory.java
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.security.KeyStore;
import java.security.cert.X509Certificate;

public class SslContextFactory {

    public static SSLContext createSslContext(CertificateManager.ClientCertificateMaterials materials) throws Exception {
        // In-memory KeyStore for our client certificate and private key
        KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
        keyStore.load(null, null); // Initialize empty keystore
        char[] password = "changeit".toCharArray();
        keyStore.setKeyEntry("client-key", materials.privateKey(), password, new X509Certificate[]{materials.clientCert()});
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);

        // In-memory TrustStore for the CA certificate
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        trustStore.load(null, null);
        trustStore.setCertificateEntry("ca-cert", materials.caCert());
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        // Build and return the SSLContext
        SSLContext sslContext = SSLContext.getInstance("TLSv1.3");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return sslContext;
    }
}
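The factory pins TLSv1.3, which any JDK from 11 onward provides; when tests run on older or exotic base images it is worth confirming the JVM can actually construct that context, since getInstance throws NoSuchAlgorithmException when it cannot:

```java
import javax.net.ssl.SSLContext;

public class TlsSupportCheck {
    // Confirms the JVM can construct the TLSv1.3 context that the factory
    // requests; a NoSuchAlgorithmException here means the runtime is too old.
    public static String negotiableProtocol() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLSv1.3");
        return ctx.getProtocol();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(negotiableProtocol()); // prints TLSv1.3
    }
}
```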
Finally, we can implement the SecureDataAccessLayer. This class orchestrates fetching the certificate on startup and configuring the database and Solr clients with the resulting key material. This is the “Green” phase of TDD—making the test pass.
// SecureDataAccessLayer.java
import com.mysql.cj.jdbc.MysqlDataSource;
import org.apache.solr.client.solrj.impl.Http2SolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.MapSolrParams;

import javax.sql.DataSource;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Map;

public class SecureDataAccessLayer {

    private static final char[] STORE_PASSWORD = "changeit".toCharArray();

    private final DataSource dataSource;
    private final Http2SolrClient solrClient;

    public SecureDataAccessLayer(String vaultAddr, String vaultToken) {
        try {
            // 1. On startup, get client certificates
            CertificateManager certManager = new CertificateManager(vaultAddr, vaultToken, "app-client");
            var materials = certManager.generateClientCertificates();

            // Neither Connector/J nor SolrJ accepts a raw SSLContext, so the dynamic
            // key material is persisted to temporary PKCS#12 stores that both clients
            // can reference. (The in-memory SslContextFactory remains the right tool
            // for clients that do take an SSLContext, such as java.net.http.)
            Path keyStorePath = persist(keyStoreOf(materials), "client-keystore");
            Path trustStorePath = persist(trustStoreOf(materials), "client-truststore");

            // 2. Configure MariaDB DataSource with mTLS via Connector/J's standard
            // keystore connection properties. This is the critical part for the driver.
            MysqlDataSource mysqlDataSource = new MysqlDataSource();
            mysqlDataSource.setURL("jdbc:mysql://mariadb:3306/testdb"
                    + "?sslMode=VERIFY_CA"
                    + "&clientCertificateKeyStoreUrl=file:" + keyStorePath
                    + "&clientCertificateKeyStorePassword=changeit"
                    + "&trustCertificateKeyStoreUrl=file:" + trustStorePath
                    + "&trustCertificateKeyStorePassword=changeit");
            mysqlDataSource.setUser("user");
            mysqlDataSource.setPassword("password");
            this.dataSource = mysqlDataSource;

            // 3. Configure the Solr client. SolrJ picks its TLS material up from the
            // standard javax.net.ssl system properties, the mechanism the Solr
            // documentation prescribes for clients.
            System.setProperty("javax.net.ssl.keyStore", keyStorePath.toString());
            System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
            System.setProperty("javax.net.ssl.trustStore", trustStorePath.toString());
            System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
            this.solrClient = new Http2SolrClient.Builder("https://solr:8983/solr").build();
        } catch (Exception e) {
            // In a real-world project, this fatal error on startup must be handled gracefully.
            throw new RuntimeException("Failed to initialize secure data access layer", e);
        }
    }

    private static KeyStore keyStoreOf(CertificateManager.ClientCertificateMaterials materials) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, null);
        ks.setKeyEntry("client-key", materials.privateKey(), STORE_PASSWORD,
                new X509Certificate[]{materials.clientCert()});
        return ks;
    }

    private static KeyStore trustStoreOf(CertificateManager.ClientCertificateMaterials materials) throws Exception {
        KeyStore ts = KeyStore.getInstance("PKCS12");
        ts.load(null, null);
        ts.setCertificateEntry("ca-cert", materials.caCert());
        return ts;
    }

    private static Path persist(KeyStore store, String prefix) throws Exception {
        Path path = Files.createTempFile(prefix, ".p12");
        try (OutputStream out = Files.newOutputStream(path)) {
            store.store(out, STORE_PASSWORD);
        }
        return path;
    }

    public String getDocumentById(int id) throws Exception {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT content FROM documents WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    return rs.getString("content");
                }
            }
        }
        return null;
    }

    public long searchDocuments(String query) throws Exception {
        final Map<String, String> queryParamMap = Map.of("q", "content:" + query, "rows", "0");
        final MapSolrParams queryParams = new MapSolrParams(queryParamMap);
        final QueryResponse response = solrClient.query("documents", queryParams);
        return response.getResults().getNumFound();
    }
}
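One caveat in searchDocuments: it concatenates the caller's string straight into "content:" + query, so Lucene syntax characters in user input will change the query's meaning. SolrJ ships ClientUtils.escapeQueryChars for exactly this; here is a minimal pure-Java approximation of the idea (an illustration, not the library implementation — prefer the real method in production code):

```java
public class SolrQueryEscaper {
    // Backslash-escapes the Lucene/Solr query syntax characters, in the spirit
    // of SolrJ's ClientUtils.escapeQueryChars (simplified: the library version
    // also escapes all whitespace, not just plain spaces).
    public static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if ("\\+-!():^[]\"{}~*?|&;/ ".indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("foo:bar")); // prints foo\:bar
    }
}
```

Without escaping, a query value like "a OR content:secret" silently widens the search, which is a functional bug even before it is a security one.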
By wiring this implementation into our SecureDataAccessLayerIT test, the shouldSaveAndIndexDocument... test would finally pass. We’ve proven, in an automated and repeatable fashion, that our application can generate a cryptographic identity, establish a mutually authenticated TLS channel with MariaDB and Solr, and perform data operations. This TDD approach forced us to solve the hardest problem—secure infrastructure integration—first.
The primary limitation of this test setup is its execution speed. Spinning up four containers and running PKI initialization scripts takes time, potentially slowing down CI/CD pipelines. For larger test suites, adopting the singleton container pattern where containers are started once for the entire suite is a necessary optimization. Furthermore, this implementation doesn’t address certificate revocation (via CRLs or OCSP), which is a critical security feature for any production deployment. The “secret zero” problem—how the application securely obtains its initial Vault token—is also punted; in production, this would be solved using a trusted orchestrator mechanism like Kubernetes Service Account JWTs or cloud provider IAM roles.
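The singleton container pattern mentioned above amounts to starting the containers from a static initializer shared by every test class, so JVM class loading, not the per-class JUnit lifecycle, guarantees exactly one startup (Testcontainers' Ryuk sidecar reaps the containers when the JVM exits). Its essence, with a stand-in object in place of a real container, is just the eager holder idiom:

```java
public class SingletonHolder {
    // Counts how many times the expensive resource was initialized.
    static int initCount = 0;

    // Stand-in for an expensive shared resource such as a started container.
    static final class Resource {
        Resource() { initCount++; } // expensive startup happens here, exactly once
    }

    // Class initialization runs this once, before any caller touches the holder,
    // no matter how many test classes reference it.
    private static final Resource INSTANCE = new Resource();

    public static Resource instance() { return INSTANCE; }

    public static void main(String[] args) {
        Resource a = instance();
        Resource b = instance();
        System.out.println(a == b);    // prints true
        System.out.println(initCount); // prints 1
    }
}
```

In the real suite, Resource would be the shared Network plus the Vault, MariaDB, and Solr containers, started with explicit start() calls instead of the @Container annotation.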