Modernization without stopping growth: how to move from a monolith to a scalable architecture?

Agnieszka Ułaniak
Marketing Manager, Altimi
January 10, 2026
2 min read

Many companies reach a point where the monolith “still works”, but every next feature costs more and more: releases are stressful, changes “conflict” with each other, and scaling means adding resources to the whole system instead of to what is actually the bottleneck. The good news is that it is possible to modernize without shutting the business down – provided that instead of the “starting from scratch” approach you choose an incremental approach based on proven patterns.

Below you will find a guide that will take you through this process: from the decision “microservices, are you sure?” through migration patterns (Strangler Fig, Branch by Abstraction, Expand & Contract), all the way to working with data, observability and zero‑downtime deployments.

What is really blocking growth in a monolith?

A monolith is not “bad” by definition. The problem starts when the cost of change grows faster than the value of the change. The most common symptoms are:

  • The delivery pace drops (long lead times, difficult regression tests, growing “time to production”).
  • High deployment risk: one change can break half of the system.
  • Scaling “everything at once”: to support one module, you have to scale the whole monolith.
  • Strong data coupling: a shared database and shared transactions make it hard to split responsibilities.
  • Lack of transparency: nobody is able to answer “what will happen if we change X?”.

In practice, modernization has two parallel goals:

  • keep delivering the roadmap,
  • reduce the cost of change over time.

“Scalable architecture” does not always mean microservices

Before you decide on a microservices architecture, pin down what actually needs to scale – the team, deployments, traffic, reliability, or maybe all of that at once.
Martin Fowler and James Lewis describe microservices as a set of characteristics (including organizing around business capabilities, automation, designing for failure, decentralized data). It is not a “technology”, but a way of building and running a system.

In practice, a staged strategy often wins:

  • Modular monolith (a good first step): you clean up module boundaries and dependencies, but keep a single deployment.
  • SOA / domain services: you extract the most important parts of the domain into separate services.
  • Microservices where it makes sense (e.g. modules with a different change cycle, different SLA, different load profile).

Foundation: incremental modernization (Strangler Fig), not a “big‑bang rewrite”

The safest migration pattern for a monolith is Strangler Fig: the new architecture “grows around” the old one, gradually intercepting traffic and functions until the old component can be switched off.
The essence is that:

  • changes can happen without users noticing,
  • you deliver value along the way (and not only “at the end”),
  • you spread the risk across small steps.

Microsoft’s Azure Architecture Center describes a typical implementation using a façade/proxy that routes requests to the old monolith or to new services – depending on what has already been moved.
AWS, on the other hand, emphasizes that the goal of this approach is to minimize risk and disruptions to business operations.

Migration plan “without stopping growth” – 10 steps that work

  1. Define success (metrics + SLO)

Without metrics, modernization can turn into a “project for architects”. Make sure to define:

  • DORA / Four Keys (deployment frequency, lead time for changes, change failure rate, time to restore service).
  • SLO/SLI + error budget: how much error/outage you “can spend” in a period before you slow down deployments and focus on stability.
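The error-budget arithmetic behind an SLO is simple enough to sketch. The 99.9% target below is an example, not a recommendation:

```python
# Minimal error-budget arithmetic: an SLO target implies a concrete
# "budget" of allowed unavailability per period.

def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Minutes of allowed unavailability for a given SLO over a period."""
    total_minutes = period_days * 24 * 60
    return (1.0 - slo) * total_minutes

# A 99.9% SLO over 30 days leaves roughly 43.2 minutes of budget.
print(round(error_budget_minutes(0.999), 1))  # → 43.2
```

When the team has "spent" those minutes on incidents, the agreed response is to slow down feature deployments and invest in stability instead.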

  2. Map the domain and boundaries (bounded contexts)

Splitting the system along technical layers often leads to tight coupling and makes software development harder. A better approach is to split according to the domain and business language, in line with Domain‑Driven Design (DDD) principles. This is a way of creating software that focuses on the real business area or problem that the application is meant to solve, instead of on technology or architecture itself.
Microsoft defines a bounded context as an area (boundary) within which a consistent domain model applies, and recommends grouping functions where this model is shared. Martin Fowler emphasizes that DDD helps to deal with complex models by splitting them into bounded contexts and defining clear relations between them.
The effect of this approach? You get well‑defined modules or services and at the same time avoid creating “accidental microservices”.

  3. Choose the “first service” strategically (thin slice)

The first extracted service should be a small but complete vertical slice of functionality that meets three conditions:

  • It delivers real business value – it is easy to justify the cost of extracting it because it solves a specific, important problem.
  • It has clearly defined data – you know what data it needs and where the responsibility boundary lies with respect to the rest of the system.
  • It experiences high friction in the monolith – it is changed often, breaks often, or slows down the development of other elements.

Such a choice minimizes risk and at the same time immediately reduces the load on the monolith.

  4. Put in place a layer that intercepts traffic (façade / gateway / routing)

Strangler Fig usually starts with a component that:

  • Terminates traffic (e.g. HTTP) – it receives all incoming requests from clients.
  • Routes requests to the monolith or to the new service – at the beginning, most traffic goes to the monolith and selected paths to the new components.
  • Enables gradual switching of endpoints – step by step you move successive parts of functionality from the monolith to new services, changing only routing configuration instead of the whole system at once.

Thanks to this, you avoid a “big switch” on a single day and can evolve the system in small, controlled steps.
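The routing layer can be surprisingly small. A minimal sketch of a Strangler Fig façade – the path prefixes and upstream names below are illustrative assumptions, not a prescribed layout:

```python
# Sketch of a Strangler Fig façade: a routing table decides, per path,
# whether a request still goes to the monolith or to a new service.
# Prefixes and upstream names are hypothetical.

MIGRATED_PREFIXES = {
    "/api/payments": "payments-service",   # already extracted
    "/api/invoices": "invoicing-service",  # already extracted
}

def route(path: str) -> str:
    """Return the upstream that should handle this request path."""
    for prefix, upstream in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return upstream
    return "monolith"  # default: everything not yet migrated

print(route("/api/payments/123"))  # → payments-service
print(route("/api/orders/7"))      # → monolith
```

Migrating another endpoint then means adding one entry to the routing table, not touching the monolith or the new service.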

  5. Ensure safe deployments: feature flags + gradual rollout

Two techniques are life‑savers during a system migration:

  • Feature Toggles (Feature Flags) – they let you change system behavior without redeploying: you can hide unfinished features, enable new code only for part of the traffic (e.g. a percentage of users) and test changes “live”, but you need to consciously manage growing configuration complexity.
  • Branch by Abstraction – it enables large changes in several steps: you introduce a common abstraction (interface/layer) under which you switch the old and new implementation, often combining it with flags to control when each version is used.

In practice, both approaches let you modernize the system “from the inside” without blocking the main code branch (trunk) and without stopping the delivery of ongoing features.
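The combination can be sketched in a few lines. The class and flag names below are illustrative, assuming a pricing module is being extracted:

```python
# Branch by Abstraction combined with a feature flag: both implementations
# live behind one interface, and a flag decides which one handles a call.

from abc import ABC, abstractmethod

class PriceCalculator(ABC):
    @abstractmethod
    def total(self, net: float) -> float: ...

class LegacyCalculator(PriceCalculator):
    def total(self, net: float) -> float:
        return net * 1.23  # old code path inside the monolith

class NewCalculator(PriceCalculator):
    def total(self, net: float) -> float:
        return round(net * 1.23, 2)  # new, extracted implementation

def make_calculator(flags: dict) -> PriceCalculator:
    # The flag can be flipped per user or per percentage of traffic,
    # without a redeploy.
    return NewCalculator() if flags.get("new_pricing") else LegacyCalculator()

calc = make_calculator({"new_pricing": True})
print(calc.total(100.0))  # → 123.0
```

Once all traffic runs through the new implementation, the abstraction, the flag and the legacy class can all be deleted in one final cleanup step.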

  6. Observability before you distribute the system (logs, metrics, tracing)

Before you start splitting the monolith into multiple services, make sure you have solid observability, because without telemetry, debugging in a distributed system becomes “hunting in the dark”.
OpenTelemetry is today the de facto standard for instrumenting and exporting logs, metrics and traces in a vendor‑neutral way.
A minimal set before a larger decomposition is:

  • Shared correlation IDs – the same identifier flows through all services, which allows you to reconstruct the full request path.
  • Distributed tracing for critical paths (e.g. checkout, payment, login) – you see exactly where delays and errors appear.
  • SLO dashboards and related alerts based on the error budget – you define the expected quality level, measure adherence to it and get a signal when the system is “burning through” the error budget.
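To make correlation IDs concrete, here is a standard-library-only sketch of how one identifier follows a request through every function; in production you would let OpenTelemetry propagate trace/span context instead, and the function names here are purely illustrative:

```python
# Correlation-ID propagation sketch: the ID is assigned once at the edge
# and is visible to every function handling the same request, without
# being passed as an explicit argument.

import contextvars
import uuid

correlation_id = contextvars.ContextVar("correlation_id", default="-")

def handle_request() -> str:
    correlation_id.set(str(uuid.uuid4()))  # assigned once at the edge
    return charge_payment()

def charge_payment() -> str:
    # Every log line / downstream call carries the same identifier,
    # so the full request path can be reconstructed later.
    return f"[{correlation_id.get()}] payment charged"

print(handle_request())
```

Across service boundaries, the same idea means forwarding the identifier in a header (the W3C `traceparent` header is the standard OpenTelemetry uses).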

  7. Data strategy: “first coexistence, then separation”

The hardest part of migration is almost always data.
A safe path:

  • stage 1: the new service uses data via the monolith or through a controlled access layer,
  • stage 2: you introduce gradual changes to schema and clients,
  • stage 3: you separate responsibilities and (eventually) data.

For schema changes and migrations, expand and contract works very well (first “expand”, then “switch”, finally “clean up”).
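As an illustration of the expand phase, here is a sketch of the dual write that lets old and new code versions run side by side during a column rename; the field names are hypothetical:

```python
# Expand & Contract sketch for renaming a column.
# EXPAND: the application writes both the old and the new field, so
# readers on either version keep working.

def save_customer(record: dict, full_name: str) -> dict:
    record["name"] = full_name       # legacy column, still read by old code
    record["full_name"] = full_name  # new column, read by new code
    return record

# MIGRATE: backfill existing rows, then switch all readers to "full_name".
# CONTRACT: once nothing reads "name", drop the old column and remove
# the dual write above.

print(save_customer({}, "Ada Lovelace"))
```

The key property is that every intermediate state is deployable: at no point does a schema change require releasing all clients at the same moment.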

  8. Consistency between services: outbox, CDC and sagas instead of 2PC

In microservices, “one global transaction for everything” stops making sense, so different techniques take its place. Three patterns usually prove practical:

  • Saga – it splits a process into a series of local transactions in different services, with compensating steps instead of a single global rollback.
  • Event Sourcing – the system state is built from a sequence of events, which makes it easier to replay and fix the effects of errors.
  • CQRS + asynchronous messages – it separates the write part from the read part and accepts eventual consistency.

In short: instead of artificially enforcing one “distributed transaction” across many services, you consciously design eventual consistency wherever the business can accept short‑term data divergence.
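The transactional outbox mentioned in this step’s title is the simplest of these mechanisms to demonstrate. A minimal sketch using SQLite (table and event names are illustrative):

```python
# Transactional outbox sketch: the business row and the event row are
# committed in ONE local transaction, so a separate relay process can
# later publish the event without ever losing a message.

import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT,"
    " published INTEGER DEFAULT 0)"
)

def place_order(total: float) -> None:
    with conn:  # single local transaction: both inserts commit, or neither
        cur = conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
        event = {"type": "OrderPlaced", "order_id": cur.lastrowid, "total": total}
        conn.execute("INSERT INTO outbox (payload) VALUES (?)",
                     (json.dumps(event),))

place_order(99.50)
unpublished = conn.execute(
    "SELECT payload FROM outbox WHERE published = 0").fetchall()
print(len(unpublished))  # → 1
```

A relay (or a CDC tool reading the database log) then picks up unpublished rows, sends them to the message broker and marks them as published.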

  9. Zero‑downtime deployments: blue/green + canary + fast rollback

To modernize a system without stopping its growth, you need the ability to deploy frequently and safely. A blue/green approach means maintaining two parallel environment versions (old and new) and switching all traffic in one move, which gives you an immediate rollback if something goes wrong.
A canary approach allows you to roll out a new version gradually – first to a small fraction of traffic, then to more and more, with close monitoring of technical and business metrics.
In Kubernetes‑based environments, the key building block is the controlled rollout in a Deployment: replicas are updated gradually, which enables both blue/green‑like and canary scenarios and reduces the risk of each deployment.
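One detail worth getting right in a canary is making the split deterministic, so a given user always sees the same version during the rollout. A sketch of that assignment logic (the user-ID format is arbitrary):

```python
# Deterministic canary routing: a stable hash of the user ID decides
# whether the request hits the new version, so each user has a
# consistent experience while the rollout percentage grows.

import hashlib

def serve_canary(user_id: str, rollout_percent: int) -> bool:
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] * 256 + digest[1]  # stable bucket in 0..65535
    return bucket % 100 < rollout_percent

# At 0% nobody gets the canary; at 100% everyone does.
print(serve_canary("user-42", 0))    # → False
print(serve_canary("user-42", 100))  # → True
```

In practice this decision usually lives in the gateway or service mesh rather than in application code, but the principle is the same: raise `rollout_percent` step by step while watching the metrics, and drop it to zero for an instant rollback.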

  10. A “Definition of Done” that saves the migration from lasting forever

Treat each migration slice as done only when:

  • Production traffic actually goes through the new path and the old one is no longer needed for day‑to‑day operation.
  • You have meaningful monitoring and SLOs for the new service, so you see its availability, errors and performance.
  • The monolith has been slimmed down – unnecessary code and dependencies have been removed, instead of just “attaching” a new service next to it.
  • The team clearly knows who owns the module/service, how to evolve it and where to report issues.

This sounds trivial, but it is precisely the lack of such “closure” that turns a migration into a never‑ending project and leaves technical debt behind.

The most common traps (and how to avoid them)

“Let’s do microservices because everyone does”

Without automation and proper observability, microservices only multiply problems. Instead of following a trend, go back to concrete characteristics and business requirements.

Splitting by technical layers

This ends with a “database microservice” and a “UI microservice”, i.e. strong dependencies and lack of autonomy. Split the system by domain and bounded contexts, not by technology.

Distributing data without a consistency plan

Before you split the database, prepare outbox/CDC/saga mechanisms and consciously design error scenarios and behavior under data inconsistency.

Lack of a risk‑control mechanism

Introduce DORA‑type metrics and SLOs with an error budget so that delivery speed and system stability are measured in a common language and allow you to manage risk consciously.

Practical “mini‑checklist” to start (to copy into Jira)

Use this as the opening checklist of your modernization:

  • Baseline DORA metrics collected and concrete goals for the next 3 months defined (e.g. deployment frequency, lead time for change).
  • 1–2 SLOs defined for a critical user journey (e.g. registration, purchase, payment) so that you measure what the user really experiences.
  • Bounded contexts and domain priorities mapped – you know which business areas you will migrate first.
  • First thin slice selected and a Strangler Fig plan prepared (routing traffic between monolith and new service).
  • Feature flags in place plus a plan for regularly cleaning up old flags so you don’t accumulate complexity.
  • Observability: distributed tracing for critical paths, dashboards and alerts aligned with SLOs and not only with infrastructure.
  • Data strategy based on expand & contract plus a decision whether you use outbox, CDC or both.
  • Deployment strategy: clearly described canary and/or blue‑green scenarios together with a rollback procedure so that each deployment has a safe way back.

Where does a technology partner fit into all this?

A technology partner is the missing “multiplier” that enables modernization in parallel with product development – without overloading the core team.
In practice, this usually means:

  • An architect / architecture team who helps set domain boundaries, design the target architecture (bounded contexts, data, integrations) and write a realistic migration plan instead of a general “we will move to microservices”.
  • DevOps / Cloud who are responsible for making deployments, infrastructure and observability “boring”: automated CI/CD pipelines, IaC, monitoring, alerts and cloud security.
  • A practice of delivering change in small steps – support with Strangler Fig, feature flags, test automation and rollouts so that changes go out iteratively with controlled risk, not as a “big bang”.

Companies such as Altimi describe this support model precisely in the areas of DevOps/CI/CD, cloud architecture, managed services and system modernization without disrupting ongoing operations and product development.

FAQ

Do we always have to move to microservices?

No. Very often the best first step is a modular monolith + automation + observability. Microservices make sense when they provide a measurable benefit (independent deployments, different SLAs, different load profiles).

How long does it take to migrate a monolith without downtime?

It depends on the size of the domain and on the quality of boundaries in code/data. It is best to think of it as a series of short iterations (thin slices) following the Strangler Fig pattern, instead of one “year‑long project”.

How do we avoid breaking data consistency after the split?

Most often, a combination wins: outbox (reliable event publishing), sagas (multi‑service processes) and/or CDC (synchronization during migrations).

What is absolutely critical before splitting?

Observability. Without telemetry (traces/metrics/logs), a distributed system becomes hard to operate. OpenTelemetry is a good starting point.
