Every few months, a new headline declares that “BPMN is dead” and that agentic AI will replace traditional process modeling entirely. The argument sounds compelling: if autonomous agents can negotiate and coordinate work in milliseconds, why would we still need process diagrams and orchestration engines?
But that framing misses something essential: AI can decide what to do next, but orchestration is what makes it executable—safely, repeatedly, and at scale. Agents are great at ambiguity. Enterprises still have to run work with process state, controls, audit trails, retries, SLAs, and clear ownership when things go wrong. That’s where BPMN still matters.
If AI is the brain, orchestration is the nervous system—required to make the muscles move in a coordinated way.
And the hard problems of enterprise automation haven’t magically changed with AI. Before you can automate work, you need to understand how it works and where it should go: what your business strategy demands, which improvements actually matter, and how processes need to evolve to support future goals. Then comes execution, with its own demands:
- A reliable runtime for complex workflows
- Long-running journeys (weeks or months)
- Operational control when things go wrong
- Business-level observability for AI-driven processes
- Governed evolution of production systems
These are exactly the problems a BPMN-based orchestration engine solves.
But of course, orchestration has to evolve. In the age of AI, we’re moving beyond deterministic routing between systems toward agentic orchestration, a runtime that combines dynamic reasoning (agents deciding in ambiguous situations) with deterministic control (state, retries, SLAs, governance) so organizations can safely give AI real operational responsibility.
In practice, agentic orchestration is not just “call an LLM from one step in a process.” It’s the ability to combine two kinds of automation in one coherent system:
- Deterministic orchestration: explicit steps, state, timers, retries, error handling, and compliance-ready execution semantics
- Dynamic reasoning: agents that interpret context, handle ambiguity, and decide what to do next
Modern enterprise automation needs both.
You want AI where the work is fuzzy and variable, and you want deterministic control where work is very structured (and can be automated very efficiently) or must be reliable, auditable, and operable.
That’s the opportunity. Not BPMN or agents, but BPMN and agents. Deterministic and dynamic. Guardrails and autonomy.
Camunda’s 2026 State of Agentic Orchestration and Automation report found that 71% of organizations use AI agents, yet only 11% of agentic use cases reached production in the last year. The journey from pilot to production is longer and harder than expected—not because agents can’t do the work, but because coordinating them safely at scale requires something most organizations don’t yet have: orchestration. And it’s not just tooling: 85% say they haven’t reached the right level of process maturity to implement agentic orchestration.
Capability without coordination is risk without control. Agentic orchestration closes that gap.
So what does agentic orchestration look like beyond the buzzword? In the rest of this post, I’ll first explain why BPMN is less about drawing diagrams and more about providing an executable runtime contract—the part most critics overlook.
Then we’ll look at a practical example that shows how outside orchestration (BPMN as the backbone of an end-to-end journey) and inside orchestration (BPMN as must-follow operating procedures between the agent’s brain and its tools) work together in a single, coherent model.
Along the way, I’ll address the idea that “AI can just generate the orchestration code,” and why, beyond trivial cases, you’re effectively rebuilding an orchestration engine from scratch.
Let’s go!
The missing piece in agent architectures: orchestration
When people say, “BPMN is outdated,” they often mean BPMN diagrams look old school. But BPMN is more than a picture.
Behind the boxes and arrows lies precisely defined execution semantics. In addition to rendering diagrams, a BPMN engine also executes them. It becomes the runtime that:
- Tracks process state
- Manages transitions and timers
- Enforces transaction boundaries
- Handles retries, errors, and compensation
- Coordinates parallel paths and joins
And because it executes, it knows what happened, when, how long it took, which path was taken, and who was involved. You get full execution history and performance data automatically. While this does provide an audit trail (always a great thing to have on hand), it also provides the foundation for continuous improvement: discovering bottlenecks, measuring the impact of changes, and feeding AI systems that can suggest optimizations based on what actually happens in production.
That is what’s really at stake here, all of it combined with a graphical diagram that can be understood by stakeholders across the organization, from business folks to software engineers to operations people.
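To make “it executes, so it knows what happened” tangible, here is a deliberately tiny sketch. Nothing in it is Camunda’s or any real engine’s API; it only illustrates that once a runtime drives the steps, state, retries, and execution history come for free rather than being bolted on:

```python
import time

class MiniEngine:
    """Toy illustration of an executing process runtime (not a real BPMN engine)."""

    def __init__(self, steps):
        self.steps = steps    # ordered list of (name, callable) pairs
        self.history = []     # execution history, recorded automatically

    def run(self, ctx, max_retries=2):
        for name, step in self.steps:
            for attempt in range(max_retries + 1):
                start = time.monotonic()
                try:
                    ctx = step(ctx)
                    self.history.append({"step": name, "attempt": attempt,
                                         "status": "completed",
                                         "duration": time.monotonic() - start})
                    break
                except Exception as exc:
                    self.history.append({"step": name, "attempt": attempt,
                                         "status": "failed", "error": str(exc),
                                         "duration": time.monotonic() - start})
            else:
                raise RuntimeError(f"step {name!r} exhausted retries")
        return ctx

# Hypothetical two-step process: every transition is now observable data.
engine = MiniEngine([
    ("check_fraud", lambda ctx: {**ctx, "fraud_ok": True}),
    ("prepare_offer", lambda ctx: {**ctx, "offer": "3.9% APR"}),
])
result = engine.run({"customer": "c-42"})
```

A real engine adds durable state, timers, compensation, and parallel joins on top, but the principle is the same: because the runtime executes the model, the audit trail and performance data are a byproduct, not an afterthought.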
Agent frameworks, by contrast, focus on reasoning:
- Deciding what to do next
- Choosing which tools to call
- Interpreting instructions in natural language
Using these capabilities without coordination is risky. Orchestration is what lets you hand AI real authority over your operations. Rather than simply automating individual steps, you’re giving AI control over the end-to-end flow itself: deciding how work moves, when to escalate, when to reroute, when to bring in a human.
All while staying in control, enforcing deterministic guardrails where needed, maintaining a complete audit trail, and building the operational trust that lets you give AI more responsibility over time. That’s not limiting AI. That’s what makes it powerful enough to run the work that matters.
Closing this gap is exactly where a BPMN-based orchestration engine shines. Agentic AI can decide what should happen. Agentic orchestration ensures it happens safely, reliably, and repeatedly, over time, across systems, and under real-world constraints.
BPMN for developers: executable state, not diagramming ceremony
A lot of developer skepticism toward BPMN comes from how it has often been used: diagram-first processes, heavy modeling ceremony, and “boxes and arrows” that feel far removed from production code.
But that’s not how we use BPMN.
It’s time for a more sophisticated view of BPMN. Rather than simply “a diagram,” think of it as an executable orchestration contract, a stateful runtime model that makes coordination explicit and operable. Treated properly, BPMN is code-adjacent. For example, it’s:
- Versioned in Git
- Reviewed like any other change
- Tested through automated unit or integration tests
- Deployed alongside services
- Observable in production with instance state and history
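As one concrete illustration of “tested like code”: because BPMN is plain XML, even a structural unit test needs nothing beyond the standard library. Real engine-level tests (for example with an engine vendor’s testing support) go further and actually execute the model; this sketch, with a hand-written minimal model, only checks structure:

```python
import xml.etree.ElementTree as ET

# Official BPMN 2.0 model namespace
BPMN_NS = {"bpmn": "http://www.omg.org/spec/BPMN/20100524/MODEL"}

# A minimal, illustrative process definition (normally loaded from the repo).
MODEL = """\
<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <bpmn:process id="loan-origination" isExecutable="true">
    <bpmn:serviceTask id="check-fraud" name="Fraud check"/>
    <bpmn:userTask id="approve-loan" name="Human approval"/>
    <bpmn:serviceTask id="disburse" name="Disburse loan"/>
  </bpmn:process>
</bpmn:definitions>
"""

def test_human_approval_is_present():
    root = ET.fromstring(MODEL)
    process = root.find("bpmn:process", BPMN_NS)
    assert process.get("isExecutable") == "true"
    # Guardrail check in CI: the model must contain a human approval task.
    user_tasks = process.findall("bpmn:userTask", BPMN_NS)
    assert any(t.get("id") == "approve-loan" for t in user_tasks)
```

A test like this can run in the same CI pipeline as the services the process calls, which is exactly what “reviewed and deployed like any other change” means in practice.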
The alternative isn’t “no orchestration.” The alternative is hidden coordination scattered across services, message handlers, retries, queues, cron jobs, ad hoc glue code, and packaged enterprise applications (like CRM, ERP, or others)—which is harder to test, harder to operate, and harder to govern.
If you care about reliability and operability, orchestration becomes a runtime primitive. BPMN is the only standard (ISO, by the way) that encodes that primitive, and because it is a standard, AI can work with it natively, too. But more on that later.
The “AI will just generate orchestration code” argument
A related counter-argument is becoming popular in the AI era: “We don’t need orchestration engines. AI can simply generate the code.”
At first glance, this seems plausible. Modern AI tools can generate surprisingly good code. For simple flows, that may work.
But once processes become nontrivial, the complexity explodes. Suddenly the generated code must handle retries, timeouts, long-running state, parallel execution, versioning, monitoring, and auditability. These aren’t edge cases—they’re recurring structures. Workflow patterns research exists precisely because real processes repeatedly require things like parallel splits, deferred choice, compensation, and correlation.
At that point, you’re no longer generating glue code—you’re rebuilding an orchestration engine, piecemeal, across every application. Organizations that go down this route eventually centralize the logic into shared libraries. Once that happens, they’ve essentially reinvented an orchestration engine, just with more complexity and less maturity.
AI can generate code. But generating and operating a reliable process runtime across an enterprise is a very different problem.
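To see how generated glue code drifts toward an engine, consider what a single “call the credit bureau” step accumulates once real-world constraints arrive. Everything below is an illustrative sketch with made-up names, but the shape is what teams end up writing by hand:

```python
import json
import os
import tempfile
import time

def call_with_engine_features(step_name, fn, state_file, max_retries=3, timeout_s=30):
    """What began as a plain `fn()` call now re-implements retries, timeouts,
    backoff, and durable state, i.e. a fragment of an orchestration engine."""
    deadline = time.monotonic() + timeout_s
    for attempt in range(1, max_retries + 1):
        if time.monotonic() > deadline:
            raise TimeoutError(f"{step_name}: time budget exhausted")
        try:
            result = fn()
            # Durable state so a crash doesn't lose progress (a poor man's engine).
            with open(state_file, "w") as f:
                json.dump({"step": step_name, "status": "done", "result": result}, f)
            return result
        except ConnectionError:
            time.sleep(min(0.1 * attempt, 1.0))  # hand-rolled backoff
    raise RuntimeError(f"{step_name}: failed after {max_retries} attempts")

# Usage: a flaky step that succeeds on the second attempt.
attempts = {"n": 0}
def flaky_credit_check():
    attempts["n"] += 1
    if attempts["n"] < 2:
        raise ConnectionError("bureau unavailable")
    return {"score": 720}

state_path = os.path.join(tempfile.mkdtemp(), "state.json")
result = call_with_engine_features("credit_check", flaky_credit_check, state_path)
```

Now multiply this by parallel paths, correlation, compensation, and versioning, across every application, and the “just generate the code” approach has quietly rebuilt a workflow engine without its maturity.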
A real example: loan origination with outside and inside orchestration
To make this concrete, let’s look at loan origination at a bank and the process of issuing new personal loans to customers. What makes this example interesting is that it shows both orchestration patterns working together in a single coherent model.
The end-to-end journey (outside orchestration)
At the top level, BPMN defines the complete customer journey:
- Receive loan request: Omnichannel (portal, email, broker, mobile app)
- Fraud check: Invoke specialized fraud-detection agents via Agent-to-Agent (A2A) protocol. If fraud risk is too high, reject, otherwise continue.
- Loan offer preparation: AI agent handles customer interaction (this is where it gets interesting)
- Underwriting: Structured subprocess with credit check, risk assessment, terms calculation
- Human approval: Legally required decision point
- Send official offer
- Wait for signature: With timer-based reminders every 7 days and expiry when the offer gets too old
- Disburse loan

This is orchestration from the outside: BPMN controls the sequence, enforces the fraud gate, manages long-running state (the signature wait can take weeks), handles timeouts, and ensures human approval happens before disbursement.
The loan offer preparation agent (the big box in step 3) operates within this structure. It can’t skip fraud checks, bypass approval, or trigger payment directly.
Inside the agent: orchestrated tool access (inside orchestration)
Now zoom into the loan offer preparation agent. In BPMN terms, this is an ad hoc subprocess that gives the agent significant autonomy over how it accomplishes its goal. It converses with the customer, understands their needs, proposes loan options, answers questions, and decides when to trigger the formal application.
To do its work, it has access to many tools: loading available loan products, calculating repayments, accessing core banking systems via MCP, or asking a loan specialist.
But here’s where inside orchestration kicks in: each tool call is itself orchestrated by BPMN.
For instance, when the agent decides it needs specialist help, BPMN ensures an actual human loan officer receives a structured task, can provide guidance, and can take full control of the conversation. The agent cannot bypass this procedure. If the specialist escalates, the agent exits reliably and the human takes over.
Similarly, when the agent wants to message the customer, BPMN can route the draft through human review before it’s sent or skip that step based on a confidence score. You control the guardrail, not the agent.
And when the agent decides the customer is ready to apply for a specific loan, it triggers the structured loan application subprocess. That process enforces credit check, risk assessment, terms calculation—all deterministically, all auditable. The agent regains control only when that regulated procedure completes.

Why this matters: one runtime, two patterns, full control
Here’s what’s powerful about this design: it’s the same orchestration engine doing both jobs.
The outer journey is orchestrated. The agent’s tool access is also orchestrated. You don’t need separate infrastructure for process orchestration and agent guardrails—BPMN does both.
You control the autonomy dial per tool.
Right now, every customer message gets reviewed. As trust builds, you might:
- Route only low-confidence messages through review (based on an AI judge scoring the draft).
- Auto-approve routine messages entirely.
- Keep high-risk communications under review.
You adjust this by changing the BPMN model. The agent doesn’t need to know.
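Mechanically, the autonomy dial described above is just a routing decision owned by the process rather than the agent. A hedged sketch (thresholds and risk categories are made up for illustration):

```python
def route_customer_message(draft, confidence, risk_category,
                           review_threshold=0.8,
                           high_risk=frozenset({"legal", "pricing"})):
    """Decide, outside the agent, whether a drafted message needs human review.
    Tightening or relaxing the guardrail means changing this policy,
    not changing the agent."""
    if risk_category in high_risk:
        return "human_review"   # always reviewed, regardless of confidence
    if confidence < review_threshold:
        return "human_review"   # low-confidence drafts get a second pair of eyes
    return "auto_send"          # routine, high-confidence messages go out directly
```

In BPMN terms, this is a gateway on the path between the agent and the “send message” tool; lowering `review_threshold` over time is how trust gets dialed up without the agent ever knowing.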
The agent has real authority within clear boundaries. It decides when to query knowledge vs. ask a specialist, which loan products to recommend, and when the customer is ready to formally apply. But it operates within an enforced structure. Fraud check already happened, human approval will happen, disbursement follows signature.
You’re not hoping the agent behaves. You’ve surrounded it with explicit, enforceable operating procedures.
Organizations moving from standalone agents to orchestrated flows consistently see dramatic improvements, such as faster processing, fewer errors, full auditability, and the operational confidence to actually put AI into production.
One model, shifting along the autonomy spectrum
A key advantage of using BPMN as the orchestration language is that it naturally spans a spectrum of autonomy within a single model with:
- Fully deterministic steps (purely scripted or rule-based)
- Hybrid steps (agent proposes, human or rules confirm)
- Fully agentic steps (agent decides and acts within guardrails)
Crucially, this is not a one-time design decision. It’s a journey.
Today you might experiment with an agent in a suggestion-only role, with a human always in the loop. As the pattern stabilizes and you gain trust, you gradually dial up autonomy: first auto-approve low-risk cases, then expand thresholds, then automate more categories. At some point, you may discover a formerly agentic part has become a stable, repeatable pattern. You then harden it into a deterministic service, which is cheaper to run, easier to reason about, and simpler to test. Because it’s all in one engine, you can measure emerging patterns and make shifts based on real execution data.
The reverse will also happen: a previously deterministic step faces more variance or complexity (e.g., changing regulations, new document types, or simply forgotten edge cases). You can soften it by swapping in an agent task, while the surrounding process stays intact.
BPMN provides the scaffold for this evolution. You don’t have to rebuild everything when you adjust the autonomy level of a single step. You tune the process by:
- Changing which tasks are agentic or deterministic
- Adding or removing human approvals
- Tightening or relaxing guardrails
That’s what deterministic + dynamic in a single model means in practice: the ability to move individual parts of your process up and down the autonomy scale over time, without losing control of the whole.
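One way to read “move individual parts up and down the autonomy scale”: the execution mode of a step is configuration the process owns, so hardening an agentic step into a deterministic one is a one-line change while the surrounding flow stays intact. Everything here is an illustrative sketch, not a real engine feature:

```python
# Per-step autonomy setting, owned by the process model, not the agent.
STEP_MODES = {
    "classify_documents": "agentic",      # still fuzzy: let the agent decide
    "calculate_terms": "deterministic",   # stable pattern: hardened into a rule
}

def run_step(name, payload, agent, rules):
    mode = STEP_MODES[name]
    if mode == "deterministic":
        return rules[name](payload)   # scripted, cheap, easy to test
    if mode == "agentic":
        return agent(name, payload)   # agent decides within guardrails
    raise ValueError(f"unknown mode {mode!r}")

# Dialing autonomy down later is just:
#   STEP_MODES["classify_documents"] = "deterministic"
# (plus supplying the hardened rule), with no change to the rest of the process.
```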
Long-running processes meet fast-changing AI
AI evolves rapidly with:
- New models
- New prompts
- New safety policies
- New regulatory requirements
- New integration patterns
Enterprise processes are long-running by nature—weeks or months from start to finish—and hundreds of thousands of instances may be in flight at any given time. You can’t simply stop everything and restart on an AI change. This creates difficult operational questions:
- Which agent version was used when this process started?
- How do we upgrade the process definition without breaking in-flight instances?
- What do we do with cases sitting at an agent step when the agent’s behavior changes?
- If a regulator asks, can we prove exactly which version of the logic applied to a decision six months ago?
With BPMN, you can prove which model version, which prompt, and which process definition applied to a decision six months ago. Without it, you’re stitching together log fragments across systems and hoping they’re complete.
A mature BPMN orchestration engine handles this:
- Versioned processes: Multiple definitions coexist; new instances use the latest version while old ones complete on their original definition (or migrate explicitly).
- Governed migration: You decide which instances to move to a new version, and how.
- Audit trails: Each instance carries a history of which path it took under which definition.
When you route AI agents through BPMN processes, this versioning and auditability extends to agentic behavior:
- You can roll out a new agent or prompt to a subset of process versions
- You can keep high-risk journeys on a conservative path while experimenting elsewhere
- You always know which combination of process and agent handled a given case
This is where long-running orchestration and fast-changing AI meet and where BPMN turns from simply a diagramming standard into a governance backbone.
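The versioning rules above boil down to a simple invariant: an instance records the definition version it started on and keeps it until it completes or is explicitly migrated. A toy sketch of that invariant (not any engine’s real data model):

```python
definitions = {}   # version -> process definition (here: just a label)
instances = {}     # instance id -> {"version": ..., "state": ...}

def deploy(version, definition):
    definitions[version] = definition

def start_instance(iid):
    latest = max(definitions)   # new instances bind to the latest version
    instances[iid] = {"version": latest, "state": "running"}

def migrate(iid, target_version):
    # Migration is an explicit, governed act, never an implicit side effect.
    assert target_version in definitions
    instances[iid]["version"] = target_version

deploy(1, "loan-origination v1")
start_instance("case-a")
deploy(2, "loan-origination v2")   # v1 instances keep running on v1
start_instance("case-b")
```

Because the agent and prompt versions can be pinned to the process version the same way, “which combination handled this case” stays answerable for every in-flight and completed instance.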
You can’t operate what you can’t see
Giving agents work is easy. Operating them is hard.
Operations teams and business owners need visibility and control at two levels. And increasingly, AI systems need that same access in order to take over operational tasks, learn from execution patterns, suggest optimizations, and handle incidents with full context. Orchestration makes this possible because it produces rich, structured execution data.
For technical operations, you need to answer:
- Which process instances are stuck, and at which step?
- Where are SLAs at risk?
- How do we pause, reroute, or retry work when downstream systems fail or behave unexpectedly?
- How do we contain the blast radius if an agent misbehaves?
For business and regulatory oversight, you need to answer:
- How long do key journeys take end-to-end?
- Where are the bottlenecks and rework loops?
- What share of work is handled by agents vs humans?
- Can we explain, in business language, how a specific outcome was reached?
Without orchestration, you end up:
- Digging through logs and tool-specific dashboards
- Trying to reconstruct cross-system journeys by hand
- Hoping that prompt history and LLM logs will satisfy regulators
A BPMN-based orchestration engine gives you a single operational lens:
- Every piece of work is a process instance with a clear lifecycle.
- Each step, including agent calls, human tasks, and system integrations, is visible in context.
- Failures follow explicit error paths, or simply wait for resolution at the exact step that failed.
- Ops teams can intervene under clear rules. For example, they can pause, cancel, skip, or retry.
- Business teams see work in their own language: how many loan applications are waiting for signature, what’s the average time from request to offer, where do cases get stuck.
This is how you move from “a lot of agents doing things” to agentic operations you can actually run. SREs and Ops see and control what’s happening: which instances are stuck at the fraud check, retry failed credit bureau calls, reassign loans when a specialist is unavailable. Business leaders see and measure outcomes: conversion rates, cycle times, how often agents escalate to humans, where bottlenecks form. Risk and compliance see AI operating within defined guardrails: which agent version prepared each offer, proof that human approval happened before disbursement, complete audit trail per case.
Without orchestration, you’re stitching together log fragments across systems, hoping you can reconstruct what happened. With orchestrated AI, you have a complete operational picture, from business intent to technical execution.
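With every piece of work represented as an instance with explicit state, the operational questions above become straightforward queries. A sketch over an illustrative in-memory instance list (a real engine exposes this through its operations tooling and APIs):

```python
from datetime import datetime, timedelta

# Hypothetical snapshot of in-flight loan instances.
instances = [
    {"id": "loan-1", "step": "fraud_check",
     "entered": datetime(2026, 2, 1), "sla": timedelta(days=2)},
    {"id": "loan-2", "step": "wait_for_signature",
     "entered": datetime(2026, 1, 10), "sla": timedelta(days=30)},
    {"id": "loan-3", "step": "fraud_check",
     "entered": datetime(2026, 1, 20), "sla": timedelta(days=2)},
]

def stuck_at(step, now):
    """Which instances sit at a given step past their SLA?"""
    return [i["id"] for i in instances
            if i["step"] == step and now - i["entered"] > i["sla"]]

now = datetime(2026, 2, 5)
breached = stuck_at("fraud_check", now)
```

The same structured data answers the business-level questions (cycle times, escalation rates, bottlenecks) without any log archaeology.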
AI will help build orchestration, too
Finally, let’s consider that AI also makes orchestration easier to build.
Developers increasingly work in AI-first environments where assistants generate and refactor code. The same will happen for orchestration artifacts. AI can already help:
- Draft valid, executable BPMN models from natural language descriptions. You can feed it existing process documentation, wiki pages, regulatory requirements, or interview transcripts, and get compliant models that run immediately.
- Suggest improvements to existing processes.
- Generate integration code for service tasks.
- Explain complex models to developers and business users.
This isn’t theoretical. Daniel, our CTO, built a full AI agent orchestration in under an hour using Claude and Camunda, from natural language description to running process. The AI generated valid BPMN, wrote the integration code, and deployed a working system.
This creates multiple ways of building solutions, all better than generating a bespoke orchestration engine in every application:
- AI generates BPMN files directly in the repository, reviewed and versioned like code.
- AI interacts with modeling tools through APIs or MCP-style integrations.
- Specialized copilots from orchestration vendors help refine, validate, and govern models.
Platforms are already investing in this direction. Camunda, for example, is continuously advancing its Copilot functionality to assist not only with process modeling but full solution development.
Instead of replacing orchestration, AI can dramatically accelerate it while the runtime remains standardized, operable, and governed.
Recap
If all you did was draw static BPMN diagrams and run them in a closed, deterministic world, then, yes, that would be insufficient for the age of AI. But that’s a false dichotomy.
The real opportunity is not BPMN or agents. It is BPMN and agents. Deterministic and dynamic. Guardrails and autonomy.
In practical terms, that means:
- Using BPMN to orchestrate end-to-end flows across people, systems, and agents
- Letting agents handle complex reasoning and unstructured tasks within those flows
- Injecting mandatory guardrails between the agent’s brain and its tools—orchestration from the outside and the inside
- Moving individual steps along the autonomy spectrum as you learn
- Leveraging the engine’s strengths where AI is weakest: long-running state, operations, monitoring, versioning, and governance
BPMN is not an artifact of a pre-AI world. It’s one of the few technologies that already encodes the hard lessons of running automation at scale: how to represent work, manage it over time, and keep humans in control.
Agentic AI makes orchestration more important, not less.
Try it in practice
All of this is not just an architectural thought experiment. You can build it today. Platforms like Camunda provide an enterprise foundation for agentic orchestration, using BPMN as the common language to orchestrate AI agents, people, and systems across end-to-end business processes. Such a platform lets you blend deterministic and dynamic orchestration in a single model and brings the runtime, operations, monitoring, and governance you need to move AI from pilots to production.
If you’re hitting the automation ceiling—agents deployed, results flatlining—it’s time to orchestrate.