The Case for a Central Business Orchestration Layer—Without Recentralizing Your Architecture

Your business orchestration layer is the difference between chaos and actually extracting value when introducing AI into your core business processes.

Let’s start with a conversation with a customer that’s surprisingly common in my daily work (as chief technologist for Camunda, the enterprise platform for agentic orchestration):

CEO: “We need more control. I want to understand how the business actually runs—end-to-end. Too many customer journeys are broken, and we need to fix that. And on top of everything, we need to figure out how AI fits into our core operations. If we can’t even see how the business operates today, how can we steer it tomorrow? We need less chaos and more adaptability—otherwise we’ll be out of the market sooner than we think.”

Me: “I hear this a lot. What you’re really asking for is a business orchestration platform: one logical place to understand, coordinate, and steer your core end-to-end processes. Today, that shows up in your architecture as a logically central orchestration layer that ties together different systems, humans, automation tools, bots, and now increasingly AI agents. Because it’s the business processes—not the individual services—that need to evolve fastest.”

Chief Architect (leaning back, arms crossed): “Wait a second. Central? Again? That sounds like the old integration-hub era. ESBs, big-box BPM suites… We spent a decade digging ourselves out of those traps. That’s exactly why we went all-in on agile, microservices, and event-driven architectures.”

CEO (to the architect): “I get that. But look at where we are. The chaos seems worse than ever. And nothing changes in reasonable time anymore. Ask five teams how an order flows through our landscape, and you get six different answers.”

Principal Engineer (half joking, half exhausted): “And all of them are partly wrong.”

Me: “Exactly. Microservices brought flexibility but also accidental complexity. We introduced distributed-systems headaches, and those so-called decoupled event-driven flows ended up hiding a giant, implicit coordination layer. That invisible layer defines your end-to-end processes…but nobody can see it, measure it, or adapt it. And without understanding your processes, you can’t change anything—let alone introduce AI in a meaningful way. That’s why you need a business orchestration layer.”

CEO: “So you’re suggesting we centralize again?”

Me: “Logically central—yes. But operationally and organizationally decentralized. The whole point is not to repeat the old patterns. We want clarity and control without slipping back into the same centralizing traps. Let me walk you through what that actually looks like…”

The architecture pendulum and why the sweet spot is in the middle

Over the last decade, we’ve been riding a cycle of centralize → decentralize → recentralize—usually as a reaction to past pain rather than thoughtful design. The uncomfortable truth: the sweet spot isn’t at either extreme. It’s right in the middle, where we intentionally balance federation and coordination.

This post explores that balance through the lens of the business orchestration layer:

  • A logical place where end-to-end processes live
  • Providing transparency, reliability, and adaptability
  • Offering a shared understanding of who does what, when, and why

At the same time, teams still need autonomy, independent deployability, and the ability to deliver without waiting for a central authority.

The key point: you can have both a logically central orchestration layer and a decentralized way of building and operating solutions.

The problem: end-to-end processes span everything

From a developer’s perspective, microservices are great—until you try to understand what the business is doing across all those services. But this is exactly where end-to-end processes live: they span organizational, domain, and system boundaries to fulfill a customer goal.

  • Place an order at a retailer?
    Payments, inventory reservation, warehouse allocation, shipment creation, notifications… Each one handled by a different system.
  • Open a bank account?
    Identity verification, compliance checks, approvals, issuing account numbers, provisioning cards, sending credentials.
  • File an insurance claim?
    Intake, triage, fraud checks, adjuster assignment, document verification, settlement computations, payout.

I’m talking about operational processes here, the ones that run your business every day, span multiple systems, and can’t just be rebuilt from scratch for every run. The components behind each step might be microservices, SaaS products, on-prem systems, RPA bots, or decades-old legacy software. And within that heterogeneous mess, the end-to-end process still has to happen.

Very often, this happens through point-to-point APIs, brittle integrations, old batch jobs, ESB routes, “serverless glue” scripts, or modern distributed choreography. Those together usually behave like a flash mob—impressive from the outside, unpredictable from within.
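To make the contrast concrete, here is a minimal sketch in plain Python (all step and handler names are purely illustrative) of what an explicit orchestration layer changes: the end-to-end order flow lives in one visible, versionable definition, while the "how" of each step stays delegated to whatever component implements it.

```python
# Illustrative sketch: the end-to-end retail order flow as an explicit,
# observable process definition instead of implicit event glue.
# All step and handler names are hypothetical.

# The 'what': the end-to-end flow, visible and versionable in one place.
ORDER_FLOW = [
    "payment",
    "inventory_reservation",
    "warehouse_allocation",
    "shipment_creation",
    "customer_notification",
]

# The 'how': each step is delegated to whatever component implements it
# (a microservice, a SaaS API, an RPA bot, a human task queue...).
HANDLERS = {
    "payment": lambda order: {**order, "paid": True},
    "inventory_reservation": lambda order: {**order, "reserved": True},
    "warehouse_allocation": lambda order: {**order, "warehouse": "WH-1"},
    "shipment_creation": lambda order: {**order, "shipment": "SHP-42"},
    "customer_notification": lambda order: {**order, "notified": True},
}

def run_order_process(order: dict) -> tuple[dict, list[str]]:
    """Execute the flow step by step, recording an audit trail so the
    end-to-end process can be seen, measured, and adapted."""
    trail = []
    for step in ORDER_FLOW:
        order = HANDLERS[step](order)
        trail.append(step)
    return order, trail
```

With the flow explicit, the audit trail falls out as a by-product: you can observe and change the end-to-end process without touching the individual handlers.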

After a while, teams start asking for:

  • A clear picture of long-running business flows
  • A common language to discuss processes with business stakeholders
  • A way to change orchestration logic without touching twenty different components
  • Insight into where AI should sit and how it should make decisions safely

Microservices gave every team autonomy over its domain. But no one got autonomy over the whole flow, the journey the customer actually experiences. So end-to-end, the customer journey still depends on manual handoffs and stitched-together integrations. Customers hit what I call the automation ceiling: AI and automation capabilities are there, but without orchestration, they don’t translate into reliable end-to-end outcomes.

SOA and microservices: dead ends we’ve already lived through

Historically, service-oriented architecture (SOA) tried to solve this by introducing the Enterprise Service Bus (ESB). I was around when ESBs promised transformation, routing, integration, process logic, and governance. The ESB, however, had a very technical focus and created bottlenecks instead of business agility.


A typical ESB vision from the past (taken from https://www.tutorialspoint.com/soa/soa_enterprise_service_bus.htm)

The problem was not the idea of orchestration; the problem was how ESBs implemented it:

  • Centralized ownership → one team became the gatekeeper for every change
  • Centralized runtime → a single box that became the critical path
  • Implicit coupling → the ESB rewired APIs on your behalf
  • Hidden logic → flows buried in scripts and proprietary config
  • No developer lifecycle → no versioning, testing, or CI/CD

When this became slow and brittle, the industry swung the pendulum the other way: “Let’s just decentralize everything. Hello, microservices!”

But we simply traded one form of coupling (centralized) for another (implicit and distributed). Pure choreography at scale also breaks down: it’s hard to reason about, hard to evolve, and nearly impossible to govern.

The challenges around end-to-end processes weren’t solved by magic—we just shifted complexity around. I talked about it at length in talks like “Complex event flows in distributed systems” and dedicated a whole chapter in my book Practical Process Automation.

So how to solve this dilemma?

The solution: a logically central orchestration layer that’s operationally distributed

Here’s the mindset shift: you can run one logical orchestration layer while keeping development and operations decentralized.

To make this more concrete, let’s use the example of a retail bank support process (I pick this because it nicely shows how you can later swap in agentic AI).

The business orchestration platform and its orchestration layer


The business orchestration layer holds the process logic. It coordinates what needs to happen for the end-to-end process, not necessarily how each step is carried out. For that, you loop in existing IT systems, AI agents, and humans.

It’s important to define the right granularity for end-to-end processes, just as you would with any other capability of an organization. In my book Enterprise Process Orchestration, I use a framework based on business capability maps with five layers:

  1. Business areas
  2. Customer journeys
  3. End-to-end processes
  4. Business capabilities
  5. Integration capabilities (often referred to as tools in the Agentic AI space)

The following picture shows a sample map for the bank support process on layers 3 to 5 (the layers internal to an organization).

A sample capability map for the bank support process on layers 3 to 5

In this map, business capabilities might be implemented by any means: custom software, SaaS, manual steps, AI agents, or a mix of them. The end-to-end process is also “just” a special kind of business capability: one that focuses on long-running coordination across boundaries along the customer journey.

But any capability that requires long-running coordination should leverage process orchestration, so that teams avoid the accidental complexity of reimplementing that coordination individually again and again.
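To illustrate the accidental complexity a process engine absorbs, here is a hedged sketch in plain Python (class and field names are hypothetical) of the bare minimum every team would otherwise reinvent: persisting how far a long-running instance has progressed so it can survive restarts and wait days for a human or an external system.

```python
import json

# Hypothetical sketch of what a process engine absorbs for long-running
# coordination: persisting how far an instance has progressed so it can
# survive restarts and wait arbitrarily long between steps.

class ProcessInstance:
    def __init__(self, steps, state=None):
        self.steps = steps
        self.state = state or {"position": 0, "status": "active"}

    def advance(self) -> str:
        """Complete the current step, then return a persistable snapshot
        (in a real engine this write would be transactional)."""
        self.state["position"] += 1
        if self.state["position"] >= len(self.steps):
            self.state["status"] = "completed"
        return json.dumps(self.state)

    @classmethod
    def resume(cls, steps, snapshot: str) -> "ProcessInstance":
        """Rehydrate an instance from its snapshot, e.g. after a crash,
        a redeployment, or a week-long wait for a human approval."""
        return cls(steps, json.loads(snapshot))
```

Multiply this by retries, timeouts, compensation, and escalation, and it becomes clear why this belongs in a shared orchestration capability rather than in every service.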

The question becomes: how do you do this at scale across an organization without recentralizing everything?

Team topologies: the real enabler of decentralization

The trick is not to centralize modeling or ownership. We’re big fans of the Team Topologies book, and you can map its ideas directly to an enterprise-wide orchestration approach:

  • A central platform team provides the orchestration platform as a self-service capability
  • Federated stream-aligned teams build and own their workflows end-to-end
  • A central enablement team helps federated teams adopt good modeling and integration patterns

Each stream-aligned team owns its piece of the process just like it owns its own microservice:

  • BPMN models are versioned like (or together with) code
  • Workflows are deployed in units that make sense, independent of other teams
  • Teams test and monitor their own orchestration logic
  • No central modeling department, no process priesthood
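The "BPMN models are versioned like code" idea can be sketched as follows (plain Python, with a hypothetical ModelRegistry; real platforms handle this for you): each team deploys new model versions independently, and every version is retained so instances started on an older model can finish on it.

```python
# Hypothetical sketch of 'BPMN models are versioned like code': every
# team deploys its own model versions independently, and all versions
# are retained so instances started on an older model can finish on it.

class ModelRegistry:
    def __init__(self):
        self._versions: dict[str, list[str]] = {}

    def deploy(self, process_id: str, model_xml: str) -> int:
        """Deploy a new model version for one team's process;
        returns the new version number."""
        versions = self._versions.setdefault(process_id, [])
        versions.append(model_xml)
        return len(versions)

    def latest(self, process_id: str) -> tuple[int, str]:
        """New instances start on the latest deployed version."""
        versions = self._versions[process_id]
        return len(versions), versions[-1]
```

A deploy call like this sits naturally at the end of a team's CI/CD pipeline, right next to the service deployment it belongs to.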

Every business capability that requires dedicated engineering will be implemented by a stream-aligned team. Some capabilities may indeed be delivered by standard off-the-shelf software or handled manually (think emails or spreadsheets), but a core set of capabilities in most organizations requires custom processes. That’s especially true for end-to-end processes.

This setup preserves autonomy while finally giving the organization a clear, observable view of the overall flow.

Microservice compatibility and domain-driven design

This approach is completely compatible with microservices thinking, because a microservice is just one way of implementing a business capability. This fits nicely with domain-driven design (DDD), which emphasizes clear component boundaries and bounded contexts.

The business orchestration layer “just” means that every microservice can leverage BPMN and process orchestration, but it doesn’t mean every service must embed a workflow engine. Teams can decide where it makes sense to introduce BPMN within their service boundary.

The orchestration layer is a technical capability the team has at its disposal (like a relational database or a message broker), not a mandate.


And with the end-to-end process being a business capability in its own right, you can also implement it as its own microservice. This clarifies ownership for end-to-end processes, which is a huge achievement. For years I’ve discussed with organizations in many industries how the lack of clear ownership for end-to-end processes causes friction, delays, and finger-pointing.

AI requires agentic orchestration, not less orchestration

When you add agentic AI, many of those capabilities may be implemented (or at least supported) by AI agents. In Camunda, we blend deterministic and dynamic behavior in BPMN models. We call this agentic orchestration: the orchestrated mix of structured workflows and AI-driven autonomy across agents, people, and systems.

This lets you choose the best approach for each task:

  • A structured, deterministic process
  • A human-driven decision
  • A supervised AI agent
  • A fully autonomous AI agent
Example process: loan support, with deterministic steps and an AI agent at its center

The important bit: you can evolve over time. You might start with AI providing recommendations while humans stay fully in the loop, then gradually move to more automation as trust grows.

And you don’t just orchestrate agents from the outside as black boxes in a process. For agents you build on the platform, Camunda can also orchestrate them from the inside – injecting enforceable steps between an agent’s decision and its action. If an agent decides to send an email or approve a payout, the process can force a policy check or a human approval before anything actually happens. That’s what makes it safe to give agents real operational authority.
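A hedged sketch of such an enforceable checkpoint (plain Python; the action names, threshold, and function are illustrative assumptions, not a Camunda API): the policy check sits between the agent's decision and its execution, so a risky action cannot cause a side effect without approval.

```python
# Hedged sketch of an enforceable checkpoint between an agent's decision
# and its action. Action names, the threshold, and the function itself
# are illustrative assumptions, not a Camunda API.

RISKY_ACTIONS = {"approve_payout", "send_email"}

def execute_agent_action(action: str, amount: float,
                         human_approval: bool = False) -> str:
    """Run a policy check before any side effect happens; risky,
    high-value actions are forced through a human task first."""
    if action in RISKY_ACTIONS and amount > 1000 and not human_approval:
        return "escalated_to_human"   # the process injects a human task
    return f"executed:{action}"       # straight-through processing
```

The key design point is that the check is enforced by the process, not politely requested of the agent: the agent's decision and the side effect are separate steps, with the gate in between.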

The great thing is that you don’t need one old-school process for deterministic steps and a second AI process for everything else. Agentic orchestration allows you to do this in one model: straight-through processing where the rules are clear, agent reasoning where judgment is needed, with the ability to dial autonomy up or down per step.

In a future where many autonomous AI agents carry out your core business operations, AI doesn’t reduce the need for orchestration; it increases it. Those agents still need to be coordinated, monitored, and governed to respond appropriately to situations like these:

  • When should they be triggered?
  • How do you handle failures and contradictions?
  • When does a human need to step in?
  • How do you swap one AI provider for another?

All of that is process logic and belongs in your orchestration layer. And we’ve been doing orchestration for business processes for years. Agentic orchestration is simply the next wave: the same enterprise-grade orchestration, now extended to AI agents.
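Several of those questions can be sketched as ordinary process logic (plain Python, with hypothetical provider names): failure handling and provider fallback become explicit steps, and "when does a human step in" becomes the defined end of the fallback chain.

```python
# Sketch of agent coordination as explicit process logic. Provider names
# and behavior are hypothetical; the point is that failure handling,
# provider fallback, and human escalation live in the orchestration layer.

def flaky_primary(prompt: str) -> str:
    raise RuntimeError("provider down")   # simulate an outage

def stable_fallback(prompt: str) -> str:
    return f"answer to {prompt}"

def call_agent_with_fallback(providers, prompt: str):
    """Try providers in order (swapping one for another is a config
    change, not a code change); if all fail, escalate to a human."""
    for name, call in providers:
        try:
            return name, call(prompt)
        except RuntimeError:
            continue                      # explicit failure handling
    return "human", "escalated"
```

Because the provider list is just data handed to the process, replacing one AI provider with another never touches the end-to-end flow itself.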

Platform requirements: what an orchestration engine must provide

To make this viable, the orchestration engine that powers this layer of your future business orchestration platform must support an enterprise-wide approach. It has to be built for scale and provide a handful of critical capabilities.

First, end-to-end processes need a shared language that both business and engineering can understand. BPMN fits this role very well. It can express all the important workflow patterns: steps and branching, decision points, compensations, timeouts and SLAs, and work distribution between humans, systems, and AI. And it does so explicitly, not hidden in code or scattered across events.

Second, it needs to operate at scale. That’s not only about throughput and latency, but also about how teams can consume the platform, like:

  • Self-service provisioning of dedicated clusters for high-demand use cases
  • Tenants on shared hardware for teams that don’t need their own cluster
  • A “catch-all” environment for a long tail of simple processes

Third, it needs a flexible integration approach with features like:

  • Out-of-the-box connectivity for common systems
  • Open APIs and SDKs for deep technical integrations (supporting MCP in the future)
  • The ability for your platform team to build and publish company-specific connectors and accelerators

Fourth, it should support the platform play in your organization by:

  • Fitting into your internal developer platform
  • Integrating with CI/CD, observability, and security
  • Providing a solid foundation for both process orchestration and agentic orchestration, blending deterministic and dynamic workflows

Fifth, it must provide a real operational control room. There should be full visibility into every running case, every agent decision, every exception, with the ability to intervene, reroute, and continuously optimize. Observability and optimization shouldn’t be bolt-ons; they need to be built into the orchestration platform from day one.

Already today, Camunda delivers the core orchestration layer of your future business orchestration platform. It’s not just theory; it’s proven in production at large scale (see for example the whitepaper “How to Create a Central Process Automation Platform”).

This isn’t SOA 2.0—it’s distributed orchestration

A central business orchestration layer doesn’t mean going back to ESB times. The difference is fundamental:

  • ESB centralized integration → orchestration centralizes only the methodology; solution development, operations, and integration remain decentralized.
  • ESB tried to hide complexity → orchestration makes complexity observable and manageable.
  • ESB broke ownership → orchestration strengthens domain and process ownership.
  • ESB enforced one roadmap → orchestration enables team-level autonomy.

From my experience, once architects see orchestration as infrastructure, not as a big central brain, the anxiety goes away.

And it gives you something you never really had before: a clear, adaptable, and observable view of how your business actually runs. In the age of AI, that will be the difference between companies that move forward and those that get left behind.

Final thoughts and where to go next

The business orchestration layer will be a key component in the enterprise architecture of the future. For many organizations, it can make the difference between chaos and actually extracting value from introducing AI into their core business processes.

Getting there is, of course, a journey. But every journey starts with the first step. After outlining such a bold vision, the next move is simple: add process orchestration to one of your real business processes. Pick something relevant—something that solves an actual pain—but keep it small enough to deliver quick wins. Use that first project as a stepping stone toward the orchestrated enterprise you want to become over the next few years.

If you’d like to dive deeper into this journey, I can recommend the book Enterprise Process Orchestration.
