Architecture

Brownfield Cloud: Why Remediation Matters More Than Innovation

Plate · Issue — · April 27, 2026

Almost every company I run an AWS assessment with today has the same story: at some point between 2018 and 2022, the cloud journey began. One account first, then another for a new application, then one for dev, one for prod. Maybe a few more for pilot projects that were never shut down. Today there are 15, 20, 30 accounts. The original owners are no longer at the company. Documentation has gaps. And the CISO only learned that some of the accounts existed six months after the first incident.

That is not failure. That is the normal evolutionary curve of cloud environments. But very few organisations have an explicit strategy for getting this situation under control without rebuilding the entire infrastructure from scratch.

Greenfield is the industry's preferred narrative. Clean-slate architecture, everything right from the start, landing zone from zero. Nice theory. In practice, greenfield in 2026 is the exception. The rest of us live in brownfield. And brownfield has different rules.

I want to describe three patterns I encounter repeatedly in assessments. No single company has all three. But I have not yet seen a company in a brownfield situation that has none of them.

Pattern 1: Account Sprawl Without Attribution

The most common pattern. Nobody can say off the top of their head which account belongs to which product or team. Cost allocation tags exist, but not consistently. Some accounts have no tags at all. Cost Explorer shows a number, but assigning it to teams or projects is guesswork.
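A tag-gap audit like this can be scripted against the Organizations API. The sketch below assumes boto3, credentials in the management account, and a hypothetical tag convention (owner, product, environment); the required keys are my illustration, not an AWS default.

```python
"""Sketch: audit an AWS Organization for accounts missing cost-allocation
tags. The required tag keys below are a hypothetical convention."""

REQUIRED_TAGS = {"owner", "product", "environment"}  # illustrative, not an AWS default


def missing_required_tags(tags: dict[str, str], required: set[str] = REQUIRED_TAGS) -> set[str]:
    """Return the required tag keys that are absent or empty on an account."""
    present = {k.lower() for k, v in tags.items() if v.strip()}
    return {k for k in required if k not in present}


def audit_organization() -> dict[str, set[str]]:
    """Walk all accounts and report their tag gaps (needs boto3 + credentials)."""
    import boto3  # imported lazily so the pure helper stays testable offline

    org = boto3.client("organizations")
    gaps: dict[str, set[str]] = {}
    for page in org.get_paginator("list_accounts").paginate():
        for account in page["Accounts"]:
            tags = {
                t["Key"]: t["Value"]
                for t in org.list_tags_for_resource(ResourceId=account["Id"])["Tags"]
            }
            gap = missing_required_tags(tags)
            if gap:
                gaps[account["Id"]] = gap
    return gaps


if __name__ == "__main__":
    for account_id, gap in audit_organization().items():
        print(f"{account_id}: missing {sorted(gap)}")
```

Running this once per week and pushing the result to the team that owns governance turns the attribution question from guesswork into a tracked backlog.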

Why is this a security problem and not just a financial one? Because scoping is security-critical.

When a penetration test or security incident response process is running, the first question is: what is the scope? Which systems belong together? Where are the blast radius boundaries? Without account attribution, there is no reliable answer. Incident response becomes detective work inside your own infrastructure.

In an assessment I ran earlier this year, it took three days for the team to compile a complete list of accounts with their workloads. Three days. In an incident, that would be a disaster.

The root cause is structural: accounts were created individually over years, without a central governance framework. AWS Organizations was not activated from the start. Or it was activated, but OUs were never structured meaningfully.

Pattern 2: IAM Debt from the Pilot Phase

Every company has them: service accounts with AdministratorAccess left over from the pilot phase. At the time, someone wanted to test a Lambda function quickly. The easiest solution was a role with full permissions. It worked. The service went to production. The role was never adjusted.

That is not human failure. That is the logical consequence of a context where speed came before governance.

The problem is that these roles today often carry critical workloads. The Lambda function from 2020 with AdministratorAccess now runs in production, processes customer data, and has network access to internal systems.

IAM debt is one of the most underestimated attack vectors. Not because it is hard to find, but because it looks so normal. A role with AdministratorAccess does not trigger an alert. It just sits there until someone actively looks for it.

A concrete example from an assessment: I found an IAM role that had not been used in three years. No last-used activity. But it had full access to an S3 bucket containing historical customer data. Nobody at the company could say why this role existed.

That is the blast radius attackers look for. Not the carefully configured, well-documented roles. The forgotten ones.

Pattern 3: SCP Gaps at the Org Level

Service Control Policies are the most powerful tool AWS Organizations offers: preventive guardrails that operate at the level of the entire organisation or individual Organizational Units. No account can deviate from them, regardless of what local IAM policies are set.

In practice, I see two extreme states.

The first: SCPs do not exist or exist only as symbolic one-liners. The org level is open, and each team configures what it wants. Security baseline requirements (no public S3 buckets, no disabling CloudTrail, no creating IAM roles with admin access without an approval process) are left to each team to enforce, with varying success.

The second: SCPs were set at some point during a burst of governance activism and are so restrictive that every team needs an exception ticket for every new requirement. The SCP regime becomes a bottleneck. Teams start looking for workarounds. Governance becomes an obstacle course that gets bypassed.

Both are a problem. An SCP regime that is not followed in practice is not a governance instrument. It is paper-tiger compliance.
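A workable middle ground starts with a small, uncontroversial deny set. The following is a minimal SCP sketch covering two of the baseline requirements mentioned above; the statement names are illustrative, and real deployments usually add condition-based exemptions for a break-glass role.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCloudTrailTampering",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail"
      ],
      "Resource": "*"
    },
    {
      "Sid": "DenyAccountPublicAccessBlockChanges",
      "Effect": "Deny",
      "Action": "s3:PutAccountPublicAccessBlock",
      "Resource": "*"
    }
  ]
}
```

A deny list this short is hard to argue with, rarely needs exception tickets, and gives the organisation a habit of org-level guardrails that can then grow deliberately.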

Why Brownfield Remediation Is the Underrated Discipline

When I talk to cloud owners about these three patterns, the same reaction often follows: "Yes, we know about this. But we do not have time to address it. There are strategic initiatives coming up."

That is the problem.

IAM debt compounds. An unpatched service account today is a potentially compromised account tomorrow. An account without attribution today is an unresolved scope area in your next NIS2 or C5 audit. An SCP gap today is the blast-radius entry point that the next penetration test will find.

The answer is not to rebuild everything. That is neither economically nor practically viable. The answer is structured remediation using the tools AWS provides:

AWS Control Tower enrollment of existing accounts into a clean multi-account structure. This is possible without migrating workloads. Control Tower brings centralised logging, OUs with a sensible hierarchy, and a starting point for consistent SCPs.

C5 Conformance Pack in AWS Config for automated compliance checks against the C5 criteria catalogue. This identifies technical deviations continuously, not just at audit time.

Security Hub CSPM for consolidated findings across all accounts. Anyone with 20 accounts who searches for security findings in each one individually has no oversight. Security Hub provides it.
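The Security Hub step can be sketched with boto3. This assumes cross-region aggregation and delegated administration are already configured; the severity filter values are an illustrative choice.

```python
"""Sketch: pull active high-severity findings from a Security Hub
aggregation region and group them by account."""

from collections import Counter


def count_by_account(findings: list[dict]) -> Counter:
    """Group Security Hub findings by the account they belong to."""
    return Counter(f["AwsAccountId"] for f in findings)


def fetch_high_severity_findings() -> list[dict]:
    """Query Security Hub for active HIGH/CRITICAL findings
    (needs boto3 and credentials in the delegated admin account)."""
    import boto3  # lazy import keeps count_by_account testable offline

    hub = boto3.client("securityhub")
    filters = {
        "SeverityLabel": [
            {"Value": "HIGH", "Comparison": "EQUALS"},
            {"Value": "CRITICAL", "Comparison": "EQUALS"},
        ],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    }
    findings: list[dict] = []
    for page in hub.get_paginator("get_findings").paginate(Filters=filters):
        findings.extend(page["Findings"])
    return findings


if __name__ == "__main__":
    for account_id, n in count_by_account(fetch_high_severity_findings()).most_common():
        print(f"{account_id}: {n} open high-severity findings")
```

A per-account count like this is also a useful remediation metric: it shows whether the brownfield effort is actually shrinking the finding backlog account by account.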

With the Cloud Governance Accelerator (CGA), I apply this approach to brownfield accounts: surgically, without complete re-platforming, with the goal of establishing C5 and NIS2 readiness in historically grown environments. This is not a greenfield project. It is remediation with a method.

The Most Important Question

Greenfield architecture is the prestige discipline in cloud work. New accounts, clean structures, modern architecture patterns. No wonder it gets the attention.

But the greatest security gain per euro invested for most DACH organisations today does not come from greenfield innovation. It comes from brownfield remediation. From the account that has been running for three years, whose roles have never been reviewed, that has no tags, and whose assignment to a business unit is unclear.

When organisations start treating this question as a strategic priority rather than a technical homework assignment that keeps getting pushed back, the risk level changes faster than any new security tool can achieve.


Have you had your own experiences with brownfield assessments or remediation projects? Happy to exchange notes in the comments.