Build Once, Deploy Everywhere: Re-Engineering CI/CD
Re-engineering CI/CD for a regulated UK bank's 25+ microservice platform: parallel deploys, immutable artefacts, Helm, GitOps, fix-forward branching.
The setup
In 2023, a major UK retail bank was partway through building a new digital platform — a suite of more than twenty-five Spring Boot microservices intended to underpin a regulated financial product with strict availability and audit requirements. The platform was being developed by a product-focused engineering team under time pressure, and the bank's existing shared delivery infrastructure was offered as a ready-made solution. The reasoning was sound on paper: the tooling was already licensed, already running, already integrated with the bank's security and compliance checks. Using it would avoid weeks of pipeline engineering and let the team focus on the services themselves.
That decision held for several months. Then it started to cost more than it saved.
The problem
The shared continuous delivery pipeline had five sequential stages and took just over an hour to run a single service through a single environment. Build and package steps were not promoting artefacts forward — they were rebuilding from source at each stage, which meant the same compilation and packaging work happened three or four times per run. This was not a minor inefficiency. With twenty-five services in the product and multiple environments to traverse before a release reached production, a full deployment set consumed the better part of a working day. On a day when a release had to be rolled back and rerun, that became two days.
The pipeline was strictly sequential. Only one service could build or deploy at a time. There was no concurrency, no batching, and no mechanism to run stages in parallel across services. The underlying Jenkins infrastructure was shared across multiple delivery teams at the bank, which introduced resource contention severe enough to cause regular outages. When the tooling went down, every team stopped. Incidents were frequent enough that they had stopped feeling like incidents.
Deployment to regulated test-and-live environments used UrbanCode Deploy, with a single application configured to handle all microservices. The queue was global: each service waited for the one before it regardless of dependency. On a typical release day, the first service might deploy in minutes while the last waited two or three hours simply because of its position in the sequence.
At the Kubernetes layer, the team was using JKube for resource management. JKube worked adequately for initial deployments but had a persistent problem with resource deletion — objects that should have been removed during a redeploy would sometimes persist, creating state drift that was difficult to diagnose. More structurally, JKube embedded Kubernetes configuration inside the application's pom.xml and source tree. This meant configuration and code were version-controlled together, which made it difficult to deploy the same image with different configuration across environments — a prerequisite for any serious build-once-deploy-everywhere approach.
There was no hotfix deployment path. The pipeline was fix-forward only, which in a regulated environment with a strict change management process meant that a critical defect in production required a full release cycle rather than a targeted patch. Branch creation in the repository required a manual pipeline step rather than direct Git operations. There was no automated pathway for deploying to higher environments; every promotion required human intervention at each gate.
Specific numbers that crystallised the problem: the CI cycle for a single service was running at over thirty minutes end-to-end. A full environment deployment across the service estate was taking three to four hours. With four target environments in the regulated estate, a complete release cycle before a major cutover was measured in days.
The approach
Three principles shaped the design. First, modularity: every pipeline stage would be a discrete, reusable component callable from any pipeline configuration. This made individual stages testable in isolation and composable without duplication. Second, parallelism at every layer where the work permitted it — across services during deployment, across test types during CI, across environments during progressive release. Third, immutable artefacts: a container image built during the CI run would be tagged semantically, stored in Nexus, and promoted through environments unchanged. No rebuilding. No environment-specific image variants.
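The article does not name the orchestrator that replaced the shared Jenkins estate, so the sketch below uses GitHub-Actions-style YAML purely to illustrate the modularity principle: each stage is a discrete component with declared inputs, and a service pipeline composes stages rather than copying them. Every name in it is hypothetical.

```yaml
# Hypothetical reusable stage: pipeline-library/.github/workflows/build-image.yml
# (GitHub-Actions-style syntax for illustration only; the article does not
# name the replacement CI tool.)
name: build-image
on:
  workflow_call:
    inputs:
      service:
        required: true
        type: string
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build and tag the image exactly once; later stages promote this
      # immutable artefact rather than rebuilding from source.
      - run: docker build -t registry.example.bank/${{ inputs.service }}:${{ github.sha }} .

# A service pipeline then composes stages instead of duplicating them:
# jobs:
#   image:
#     uses: org/pipeline-library/.github/workflows/build-image.yml@main
#     with:
#       service: payments-api
```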
Helm replaced JKube for Kubernetes resource management. The decision was straightforward: Helm handles resource deletion reliably, its chart structure separates configuration from source code cleanly, and the chart and values files can be targeted at a different CD tool — Harness, Spinnaker — without touching the application itself. Environment-specific values files sit in a separate repository from the application source, version-controlled independently, and deployable in any combination.
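A minimal sketch of that separation, with all file names and values assumed rather than taken from the article: one chart per service, environment values in a separate configuration repository, and the immutable image tag injected at deploy time.

```yaml
# config-repo/payments-api/values-sit.yaml  (hypothetical)
replicaCount: 2
resources:
  requests:
    cpu: 250m
    memory: 512Mi

# config-repo/payments-api/values-prod.yaml  (hypothetical)
# Same chart, same image; only configuration differs.
replicaCount: 6
resources:
  requests:
    cpu: "1"
    memory: 1Gi

# The image tag lives in neither file; the deployment pipeline injects it
# from the release manifest, e.g.:
#   helm upgrade --install payments-api ./chart \
#     -f config-repo/payments-api/values-prod.yaml \
#     --set image.tag=1.4.2
```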
For deployment to regulated environments, we adopted a GitOps model using a dedicated release repository. The repository holds structured YAML manifests describing which version of each service should run in each environment, along with any dependent infrastructure components. A merge to the release repository is the deployment trigger. The repository is the source of truth; what is in the repository is what is deployed.
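The article describes the manifests' role rather than their schema, so the field names below are assumptions; the shape is the point: one file per environment, pinning a version for every service and for its infrastructure prerequisites.

```yaml
# release-repo/environments/sit.yaml  (hypothetical schema)
environment: sit
infrastructure:
  vault-agent: 1.12.0        # prerequisites deploy before the service layer
services:
  payments-api:
    version: 1.4.2
  accounts-api:
    version: 2.0.1
  statements-api:
    version: 1.1.0
```

A PR that bumps payments-api to 1.4.3 and is merged by the release manager is the entire deployment action, and the file's Git history doubles as the audit trail.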
The implementation
The CI pipeline triggers on pull request creation. From that point, the automated sequence runs without human input: repository clone, linting, unit tests, static analysis via SonarQube DC, code build, Docker image build, image vulnerability scan via Aqua Scan and dependency scan via NexusIQ, integration tests. Security checks run as early as possible — image scanning happens before integration tests, not after. If the PR build is clean, reviewers are notified automatically via CODEOWNERS assignments.
On merge to a main, release, or hotfix branch, the pipeline extends: the service is deployed to the continuous integration environment and dynamic application security testing runs against the deployed instance. On success, the image is tagged with a semantic version and published to the Nexus container registry, the repository is tagged to match, and a changelog entry is generated from commit messages. The entire post-merge flow for a single service completes in approximately ten minutes.
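Sketched as pipeline configuration, again in GitHub-Actions-style YAML for illustration (stage names and make targets are hypothetical), the PR and post-merge flows look roughly like this, with the scan stages deliberately ahead of integration tests:

```yaml
# Hypothetical CI definition; the real tooling is not named in the article.
name: ci
on:
  pull_request:                              # full verification on every PR
  push:
    branches: [main, release/**, hotfix/**]  # hotfix branches are first-class
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make lint unit-tests            # fastest feedback first
      - run: make sonar-scan                 # static analysis (SonarQube)
      - run: make build docker-image         # compile and package once
      - run: make image-scan dependency-scan # Aqua / NexusIQ, before integration tests
      - run: make integration-tests
  publish:
    if: github.event_name == 'push'          # post-merge branches only
    needs: verify
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make deploy-ci-env dast-scan    # deploy, then DAST against the live instance
      - run: make semver-tag push-image changelog  # tag, publish to Nexus, changelog
```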
The CD pipeline is triggered by a pull request to the release repository. The release owner or a developer updates the relevant environment manifest — service name, version, dependent services, any configuration overrides — and raises the PR. The release manager reviews and merges. A pipeline reads the manifest and deploys all services in parallel, resolving prerequisite ordering (Vault agent and shared infrastructure components deploy first) before the microservice layer. End-to-end tests run in the target environment on completion.
Parallel deployment was achieved by giving each microservice its own component in the deployment pipeline rather than queuing through a shared application. With twenty-five services deploying concurrently rather than sequentially, the ceiling on deployment time is determined by the slowest individual service, not the sum of all of them.
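A minimal sketch of that fan-out, assuming the pipeline expands the release manifest into one deploy job per service; the service list and deploy script are hypothetical:

```yaml
# Hypothetical deployment fan-out; GitHub-Actions-style matrix for illustration.
jobs:
  prerequisites:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh vault-agent --env sit   # shared infrastructure first
  services:
    needs: prerequisites
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # In practice this list is generated from the release manifest.
        service: [payments-api, accounts-api, statements-api]
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh ${{ matrix.service }} --env sit
```

Wall-clock time is then bounded by the slowest matrix entry, which is why adding services to the manifest did not meaningfully extend deployment time.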
The branching strategy was modernised alongside the pipeline. Teams migrated from master to main as the primary branch. Hotfix and bugfix branches are first-class pipeline targets: they build, scan, and deploy independently of the main release line, which makes fixing forward fast. A critical production defect now gets a targeted patch rather than the full release cycle the inherited system demanded.
HashiCorp Vault is the secrets store. The pipeline pulls secrets at runtime; nothing sensitive is stored in the repository or baked into images. Database schema migrations run via Flyway, deployed as Kubernetes jobs. The previous pipeline did not support migrations at all, which had forced a manual workaround that added risk to every release.
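A hedged sketch of such a migration job follows; the service name, database details, and image version are assumptions, and the credentials are read from a Kubernetes Secret for brevity where the described setup would source them via the Vault agent. Declaring the Job as a Helm pre-upgrade hook, which Helm supports natively, makes the migration a gating step of the release rather than a side process:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: payments-api-flyway          # hypothetical service name
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade   # run before the app rolls out
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 0                    # fail fast; a failed migration halts the release
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: flyway
          image: flyway/flyway:10    # version is an assumption
          args: ["migrate"]
          env:
            - name: FLYWAY_URL
              value: jdbc:postgresql://payments-db:5432/payments
            - name: FLYWAY_USER
              valueFrom:
                secretKeyRef: { name: payments-db-creds, key: username }
            - name: FLYWAY_PASSWORD
              valueFrom:
                secretKeyRef: { name: payments-db-creds, key: password }
          volumeMounts:
            - name: migrations
              mountPath: /flyway/sql
      volumes:
        - name: migrations
          configMap:
            name: payments-api-migrations   # SQL files shipped alongside the chart
```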
What happened in production
Over the first three months of operation, a representative picture emerged.
CI cycle time for a single service dropped from over thirty minutes to approximately ten. Post-merge deployment to the CI environment — from merge event to running containers — was consistently under fifteen minutes. Full environment deployment across the service estate dropped from three to four hours to under one hour, and remained below that ceiling as new services were added. The parallelism ceiling held: adding a new service to the release manifest did not meaningfully extend deployment time.
The shift-left security scanning surfaced vulnerabilities during the PR phase that had previously been caught — if at all — during late-stage compliance checks. In the first month, NexusIQ flagged seven dependency vulnerabilities in PR builds before code reached the integration environment. Two of these would likely have become release-blocking findings under the old approach.
The outage rate attributable to shared infrastructure dropped to zero, because the product team was no longer sharing infrastructure.
The pattern, generalised
The core lesson here is not specific to CI/CD or to regulated financial services. It is about the cost of inherited infrastructure.
Adopting shared tooling to avoid upfront investment is rational. It becomes irrational when the constraints embedded in that tooling start shaping product decisions — when teams are sequencing work around pipeline bottlenecks, delaying releases because of queue contention, or accepting architectural compromises because the deployment model can't support what they actually need. At that point, the cost of the shared tooling is no longer zero. It is the accumulated friction of every decision made around it.
The questions worth asking before inheriting any shared pipeline are: can we modify it when we need to, without a multi-team approval cycle? Can we scale it independently? Does it support the deployment patterns the product actually requires, or will we be working around it? If the honest answer to any of those is no, the calculus changes.
The GitOps approach to regulated environment deployments — a release repository as the single source of truth, deployment triggered by a merge, all services deploying in parallel from a structured manifest — is a pattern that travels well. It works at any scale where parallel deployment is valuable and where there is an audit requirement around what was deployed, when, and by whom. The Git history provides that audit log automatically. The release manager's approval is recorded in the PR merge event. There is no separate deployment log to maintain.
Build once, deploy everywhere is not a slogan. It is a constraint that forces clarity about what an "environment" actually means in your system. When you can only build the image once, you are forced to externalise all configuration. When configuration is externalised, environment differences are explicit and auditable. The discipline of immutable artefacts is, in practice, the discipline of treating configuration as a first-class concern.
Rohit Varshney
Principal Engineer
Principal Engineer at Bugni Labs, currently engaged at a UK challenger bank providing Google Cloud consulting. A decade-plus focus on continuous integration and delivery, cloud platform engineering, containers, SecOps, and Big Data. Previously led cloud engineering at Lloyds Bank via Publicis Sapient, designing migrations to GCP and delivering CI/CD automation for hundreds of microservices in PSD2 / Open-Banking workloads.