LEGO-Style Terraform: Building a Greenfield GCP Platform
How we built a regulated UK retail bank's GCP platform with LEGO-style Terraform: independent modules, per-module CI, InSpec policy gates, three-tier pipelines.
The brief
In 2021, a UK retail bank came to us with a greenfield opportunity: stand up their entire Google Cloud Platform (GCP) estate from scratch. No legacy infrastructure, no inherited technical debt — just a blank canvas and a clear mandate to do it right.
The challenge wasn't simply "deploy some cloud resources." It was to build a platform that multiple delivery streams and value teams could consume safely, consistently, and independently. The infrastructure code itself had to be as well-engineered as any production application: composable, testable, version-controlled, and deployable through automated pipelines.
The approach we settled on was what we call LEGO-style Terraform.
The problem with "just writing Terraform"
Anyone who has managed Terraform at scale knows it starts simple and gets complicated fast. A monorepo, one set of files per environment — this collapses under its own weight as teams multiply. You end up with duplication everywhere, no shared standards, broken pipelines nobody owns, and a growing fear of change. For a regulated financial institution, these aren't just engineering inconveniences — they're compliance risks.
We also deliberately diverged from Google's own Cloud Security Foundations guide, which assumes a monorepo structure. Instead, each module lives in its own codebase and is referenced by git tags. A module can be versioned and released independently, without any impact on other modules or consuming templates. Introducing a new environment shouldn't touch anything that already exists — and with this structure, it doesn't.
The LEGO approach: modules, templates, and layers
Like LEGO bricks, every component in this system is standardised, composable, reusable, and safe to hand off. We structured the IaC estate into three layers.
Certified modules are the lowest-level Terraform modules for individual GCP services: Cloud Storage, Cloud SQL, Pub/Sub, VPC networks, IAM roles, GKE clusters, and more. Each module encodes the bank's security and compliance standards by default — Shared VPC configuration, Private Google Access, flow log retention, firewall baseline. Modules are versioned independently and consumed by pinning to a specific git tag. No silent drift, no surprise breaking changes.
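Consuming a certified module then looks roughly like this sketch — the repository URL, module name, and variables are illustrative, not the bank's actual code:

```hcl
# Pin to an exact release tag of the certified Cloud Storage module.
# Upgrading is an explicit, reviewed change to the ?ref= value.
module "audit_bucket" {
  source = "git::https://gitlab.example.bank/certified-modules/gcs-bucket.git?ref=v1.4.0"

  project_id = var.project_id
  name       = "audit-logs"
  location   = "europe-west2" # approved UK region

  # Compliance defaults (uniform bucket-level access, CMEK, retention)
  # live inside the module; consumers cannot silently weaken them.
}
```

Because the `ref` is a tag rather than a branch, the consuming template keeps working exactly as tested until someone deliberately bumps the version.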
Service templates combine multiple certified modules into a concrete, reusable infrastructure pattern. The template repository holds environment-specific variable files (dev, sit, uat, pre, prd) driving the same code across all environments — parity by construction, not convention. Application and value stream teams consume templates by filling in a variables file and raising a merge request. They don't write Terraform.
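The per-environment variable files that drive a template can be as small as this sketch (project names, variables, and values are hypothetical):

```hcl
# environments/dev.tfvars — one file per environment, same template code.
environment    = "dev"
project_id     = "bank-payments-dev"
gke_node_count = 2
sql_tier       = "db-custom-2-8192"

# environments/prd.tfvars carries the same variables with production
# values (e.g. gke_node_count = 6, a larger sql_tier). Only the
# numbers differ; the Terraform that consumes them is identical.
```

Environment parity falls out of the structure: a drift between dev and prd can only exist as a visible diff between two small variables files.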
Platform orchestration sits at the top: the Terraform that describes the organisation structure, folder hierarchy, shared networking, and centralised services. This layer is owned exclusively by the platform team. It was the first thing the bootstrap process created — and the last thing anyone touches manually.
Organisation structure: production-first by design
The platform adopts a production-first policy approach: start from a strict production posture and grant exceptions down into lower environments on a risk-acceptance basis, never the other way around.
A central control-plane folder (CCM) holds hub-and-spoke networking, automation runners, and security tooling. Business unit folders for each legal entity sit alongside it, each air-gapped from the others. Third-level folders provide environment separation, inheriting policies from their parents. Organisation policies are enforced at Day 0 — no external IPs on VMs, no public Cloud SQL, service account key creation disabled by default, resources locked to approved UK regions. All exceptions are tracked, reviewed, and source-controlled in GitLab.
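A Day-0 organisation policy can be expressed in Terraform along these lines — a minimal sketch using the `google_org_policy_policy` resource, with the org ID and resource names as placeholders:

```hcl
# Deny external IPs on all VM instances, org-wide, inherited by every
# folder and project beneath the organisation node.
resource "google_org_policy_policy" "no_external_ips" {
  name   = "organizations/${var.org_id}/policies/compute.vmExternalIpAccess"
  parent = "organizations/${var.org_id}"

  spec {
    rules {
      deny_all = "TRUE"
    }
  }
}

# Lock resource creation to approved UK regions.
resource "google_org_policy_policy" "uk_regions_only" {
  name   = "organizations/${var.org_id}/policies/gcp.resourceLocations"
  parent = "organizations/${var.org_id}"

  spec {
    rules {
      values {
        allowed_values = ["in:europe-west2-locations"] # London
      }
    }
  }
}
```

Because these resources live in the platform-orchestration layer, any exception is itself a merge request against this code — which is what makes the exception trail auditable.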
Three pipelines, three responsibilities
The deployment model uses three distinct pipeline tiers with clearly bounded scope.
Foundation pipelines handle privileged activities: creating projects, configuring folder structures, managing IAM, deploying Shared VPCs, configuring firewalls, and setting up VPC Service Controls. Infrastructure pipelines deploy the resources application teams consume — GKE clusters, Cloud SQL, Pub/Sub, Cloud Storage, logging, and monitoring. Application pipelines build, scan, and deploy container images to GKE, running SAST, DAST, dependency scanning, and secret detection via GitLab Ultimate.
A change to a storage bucket cannot touch a VPC. A GKE update cannot affect org policies. Failures are isolated, ownership is clear, and blast radius is bounded.
Deploying to any environment above dev requires a tagged release with explicit deployment approval — not just a merge. GitLab's protected environments enforce this. Nothing promotes automatically.
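In GitLab CI terms, the tag-plus-approval rule might look like this fragment (job names, variables, and file paths are assumptions for illustration; the `prd` environment is additionally marked protected in GitLab's settings so only approved users can trigger it):

```yaml
# Illustrative .gitlab-ci.yml fragment: production deploys run only
# from a tagged release, and only after a manual approval in the UI.
deploy_prd:
  stage: deploy
  script:
    - terraform init -backend-config="bucket=${TF_STATE_BUCKET_PRD}"
    - terraform apply -var-file=environments/prd.tfvars -auto-approve
  environment:
    name: prd
  rules:
    - if: '$CI_COMMIT_TAG'  # no branch pipelines can reach this job
      when: manual          # explicit human approval, not just a merge
```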
Quality gates at every layer
Every module and template has its own dedicated CI pipeline — not a shared mega-pipeline. It fires on every merge request and runs three gates before a merge is permitted.
The first stage checks formatting and lints with GCP-specific rules, catching deprecated arguments and invalid configurations before they reach a real environment.
The second stage runs InSpec compliance tests — executable representations of the bank's security policy. An InSpec profile for a storage bucket verifies uniform bucket-level access, public access blocking, retention policy, and approved region. These tests don't check what Terraform plans to do; they verify what the configuration would produce.
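A control of that shape, using the `inspec-gcp` resource pack, might read like this — bucket name, thresholds, and control ID are placeholders, not the bank's actual profile:

```ruby
# Illustrative InSpec control: executable security policy for a bucket.
control 'gcs-bucket-baseline' do
  impact 1.0
  title 'Storage buckets meet the security baseline'

  describe google_storage_bucket(name: 'audit-logs') do
    it { should exist }
    its('location') { should cmp 'EUROPE-WEST2' } # approved UK region
    its('iam_configuration.uniform_bucket_level_access.enabled') { should cmp true }
    its('retention_policy.retention_period') { should be >= 2_592_000 } # >= 30 days
  end
end
```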
The third stage performs a live deployment to a dedicated GCP test project. The module actually deploys, InSpec controls run against live resources, and the environment is torn down. This catches what no linter can: API quota issues, permission misconfigurations, dependency ordering problems. The main branch of every module is, by definition, deployable.
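Put together, a module's CI pipeline could be sketched as the following GitLab CI skeleton. Tool choices (`tflint`) and job names are assumptions about a plausible setup, not a record of the bank's exact configuration:

```yaml
# Three gates, in order: static checks, compliance tests, live deploy.
stages: [validate, compliance, integration]

lint:
  stage: validate
  script:
    - terraform fmt -check -recursive
    - terraform validate
    - tflint --recursive # GCP-specific rules, deprecated arguments

compliance:
  stage: compliance
  script:
    - inspec exec profiles/baseline # policy checks against the config

integration:
  stage: integration
  script:
    - terraform apply -auto-approve            # deploy to the test project
    - inspec exec profiles/baseline -t gcp://  # verify the live resources
  after_script:
    - terraform destroy -auto-approve          # always tear down
```

The `after_script` teardown runs even when the InSpec step fails, which keeps the dedicated test project clean between pipeline runs.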
State is managed remotely in GCS, with CMEK-encrypted buckets per environment. The backend configuration is injected at pipeline runtime — the same module code runs across every environment without modification.
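A partial backend configuration makes this runtime injection possible — the committed code names no environment, only a prefix (names here are illustrative):

```hcl
# backend.tf — no bucket is hard-coded; the pipeline supplies it.
terraform {
  backend "gcs" {
    prefix = "modules/gcs-bucket"
  }
}
```

The pipeline then initialises with something like `terraform init -backend-config="bucket=tf-state-${ENV}"`, so identical module code binds to a different CMEK-encrypted state bucket in each environment.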
What the LEGO approach actually delivered
For the platform team, module changes were validated automatically in under 20 minutes, with full confidence that InSpec controls passed and the module deployed cleanly — no manual testing, no "deploy to staging and see what happens."
For application and value stream teams, consuming the platform meant filling in a variables file, not writing Terraform. The platform team became a supplier of tested, safe bricks — not a bottleneck every team queued behind.
For compliance and audit, every resource was deployed through a reviewed merge request, via an automated pipeline, from a versioned module, with a git tag marking every deployment. Demonstrating compliance became a matter of pointing at a repo, not reconstructing events from memory.
What we'd do the same — and push further
The foundation — separate repos per module, mandatory CI pipelines, InSpec testing, live test deployments, production-first policy — we'd replicate on any greenfield IaC project of similar scale without hesitation.
If starting today, we'd push cost estimation into the pipeline earlier, invest sooner in a self-service template catalogue, and bring in OPA or Checkov alongside InSpec for faster policy-as-code feedback in the inner development loop.
The LEGO metaphor has limits — composing modules into templates requires careful interface design, and bricks don't always fit exactly as expected. But the core insight holds: if the bricks are solid, the structures you build with them will be too.
Bugni Labs