Welcome to the fifth and final blog in our series on designing AI agents in Camunda. In this blog we summarize all the patterns to help you choose which connector and pattern will best suit your requirements for implementing agentic AI in your Camunda process.
Be sure to read the previous four blogs in the series before reading this one.
- Two Connectors, Three Patterns, and One Model for Designing AI Agents in Camunda
- Designing AI Agents in Camunda: The AI Agent Task connector Standing Alone
- Designing AI Agents in Camunda: The AI Agent Task Connector with Loop
- Designing AI Agents in Camunda: The AI Agent Sub-Process Connector
Now that you have an understanding of the two connectors for AI Agents with Camunda and the patterns available, let’s summarize what you learned and see these patterns executing in a process together.
The architect lens—How to choose an agentic AI pattern
When it’s time to determine which agentic AI pattern to implement, there are some guidelines you can apply to help you determine the best approach. Before you drop AI agents into your process, pause and review the requirements for the job to be done.
- Determine your auditing requirements. Decide what you'll need to audit, and confirm your approach will clear a basic compliance check.
- Establish the bandwidth you'll need, accounting for expected throughput, peak spikes, and acceptable latency.
- Think about the type of work you need to orchestrate with AI. Are you dealing with straight-through tasks, judgment calls that need a human in the loop, or assistive steps that enrich existing tasks and free up specialists?
Having clear requirements and operational standards in hand will help you formulate the right approach for integrating AI agents into your processes.
Example agentic AI scenarios
Let’s review a few examples to help clarify which pattern might be best for a certain set of requirements.
- If you just need to summarize information from one or more sources, generate email content, or something similar, use the single AI task pattern. This minimizes complexity while letting you select the model and LLM that best fit the job at hand.
- If governance and audit are of the utmost importance, these requirements will push you toward the task agent with a loop. With this approach each tool call is explicit, which helps with approvals, separation of duties, and per-call cost attribution. Just remember that the task agent loop consumes more tokens, so watch throughput and apply appropriate limits for maximum tool calls, token budgets, and concurrency. You can also use DMN for guardrails that decide which tools are eligible and when to involve a human.
- For flexibility and room to grow, model your AI agent using the AI agent sub-process pattern. It keeps the core process slim while encapsulating the agent, so you still have room to add controls (retries, timeouts, cancellations, metrics) without redrawing the whole diagram. Your diagram stays simple and clean while you monitor the AI agent's activities through process variables. This option also works well when you need timeouts or cancellations scoped inside the agent.
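To make the loop-pattern guardrails concrete, here is a minimal sketch of the kinds of limits described above: a maximum tool-call count, a token budget, and a DMN-style eligibility check that decides which tools may run and when to involve a human. This is illustrative only; the class, tool names, and limit values are assumptions of this sketch, not the Camunda connector's configuration API.

```python
from dataclasses import dataclass

# Hypothetical guardrail tables, playing the role the text assigns to DMN:
# which tools are eligible, and which always require a human in the loop.
ELIGIBLE_TOOLS = {"search_kb", "draft_email"}
REQUIRES_HUMAN = {"issue_refund"}

@dataclass
class LoopBudget:
    """Tracks the limits applied to one run of the agent loop."""
    max_tool_calls: int = 5
    max_tokens: int = 2000
    calls_made: int = 0
    tokens_used: int = 0

    def allow(self, tokens_needed: int) -> bool:
        # Both the call count and the token budget must have headroom.
        return (self.calls_made < self.max_tool_calls
                and self.tokens_used + tokens_needed <= self.max_tokens)

    def record(self, tokens_used: int) -> None:
        self.calls_made += 1
        self.tokens_used += tokens_used

def route_tool_call(tool: str, budget: LoopBudget, est_tokens: int) -> str:
    """Return a routing decision for one iteration of the agent loop."""
    if not budget.allow(est_tokens):
        return "stop: budget exhausted"
    if tool in REQUIRES_HUMAN:
        return "escalate: human approval required"
    if tool not in ELIGIBLE_TOOLS:
        return "reject: tool not eligible"
    budget.record(est_tokens)
    return "execute"
```

In a real model, the budget checks would map to the connector's configured limits and the eligibility rules to a DMN decision table evaluated before each tool call, with the "escalate" branch routing to a user task.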
How do you choose?
When you are struggling to determine which connector and pattern is best for your requirements, start with the need and intent—not the tooling.
For example, if your priorities are explicit control, approvals, and per-call auditing, then go with the task agent with a loop and an ad-hoc sub-process toolbox.
Alternatively, if you just need a simple LLM-style action without a set of tools, you will want to take the fastest path and use the task agent alone.
Or, if you want speed with a lean diagram and an agent that loops internally, go with the sub-process AI agent.
And remember: you can mix patterns and evolve your design over time.
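The heuristics above can be sketched as a tiny decision function. This is purely illustrative; the flag names and return strings are assumptions of this sketch, not a Camunda API.

```python
def choose_pattern(needs_tools: bool, per_call_audit: bool) -> str:
    """Map the requirements discussed above to a starting pattern (a heuristic, not a rule)."""
    if per_call_audit:
        # Explicit control, approvals, and per-call auditing.
        return "task agent with loop"
    if not needs_tools:
        # A simple LLM-style action: the fastest path.
        return "task agent alone"
    # A multi-tool goal where speed and a lean diagram matter.
    return "AI agent sub-process"
```

For example, a multi-tool enrichment step with no per-call approval requirement would land on the sub-process agent, while a refund workflow needing per-call sign-off would land on the loop.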
See it in action: all three patterns
So, let’s take a look at a demonstration of both connectors and all three patterns in action.
Comparison chart
The following table provides a summary of the three patterns that we provided in the previous blog posts.
| | Task Agent Alone | Task Agent + Ad-hoc Subprocess Loop | Process Agent Alone |
|---|---|---|---|
| Connector | AI Agent Task | AI Agent Task (+ toolbox Subprocess) | AI Agent Subprocess |
| Best Use Cases | For simple LLM-style tasks with no tools. | When you need explicit approvals, pre-/post-processing, or per-call audit. | For multi-tool goals when speed, readability, and event handling are the priorities. |
| Autonomy | Low | Modeled via gateways | High (internal loop) |
| Governance | Coarse | Fine-grained per call | Coarse (lean BPMN) |
| Setup Complexity | Minimal | Highest | Low |
| Event Subprocess inside agent | Not applicable | Not supported | Supported |
| Audit visibility | One node | Every call visible | Fewer nodes |
| Typical KPI | Fastest time-to-value | Compliance, approvals, cost control | Throughput, simplicity |
| Risk posture | Low-medium | High control | Medium with guardrails |
| Strengths | • Speed. • Straightforward to model and maintain. • Plugs easily into existing flows. • Keeps the diagram lean. • Fastest time-to-value. • Limited governance risk. | • Control. • Flexible, iterative approach to AI agent interactions. • Supports a feedback loop so the AI agent can make multiple tool calls as needed. • Every tool call is visible for auditing and tracking. • Easy to inject validation and human-in-the-loop steps into the toolbox. • The connector continues to make any required tool calls until it reaches its goal or a configured limit. | • Provides a self-contained environment for AI agent operations. • Allows the AI agent to dynamically orchestrate tool calling and request handling. • Supports autonomous operation. • Simplified setup: the agent handles the tool loop internally. • Event sub-processes are supported inside the agent sub-process. |
| Weaknesses | • Limited autonomy and less granular audit. • Any governance beyond basic input and output must be modeled around the task. | • May require more complex process modeling. • Potential for longer execution times if multiple iterations are needed. • No support for an event sub-process inside the tool loop, so timeouts and loop cancellations must be handled outside of the loop. | • May require more setup and configuration than a single AI Task. • Could be overkill for simpler AI interactions. • Less control over tool calls: the agent decides and loops implicitly. • Less audit visibility, since the model has fewer BPMN nodes. |
Key takeaways and next steps
Adding AI agents to your processes should speed delivery and keep the BPMN clear, while still handling events inside the agent’s boundary. Pick the pattern that matches the job:
- The task agent alone is a practical choice for one-shot intelligence where autonomy is not required; it calls for minimal modeling and is fast to implement.
- The task agent with a loop gives you maximum governance by turning each tool call into a BPMN decision with explicit approvals, retries, and audit by call.
- The sub-process agent provides a lean model maintaining observability via process variables, and it’s ideal when you need timeouts, cancels, or compensation contained within the agent’s scope.
All three patterns keep you in one model, backed by Zeebe for scale and Operate and Optimize for transparency. Use one, two, or all three patterns, once or multiple times, in your process. The options available with Camunda AI Agent connectors can help you meet your KPIs and constraints with guardrails that fit your compliance and scaling targets.
Looking for more information?
If you want to read more about AI agents, we recommend the following blog posts.
- A New Era of Automation: Enterprise-Grade Agents
- Agentic Orchestration: Automation’s Next Big Shift
- Enterprise-Grade Agentic Automation Is Here
- Going from Hype to Impact: Lessons Learned while Making Agentic Orchestration Work
- The Benefits of BPMN AI Agents
Try it out
And if you want to try it for yourself, check out these step-by-step guides.