Build an AI Process Agent with Camunda: Orchestrating People, Tools, and LLMs

Learn how to build an AI process agent with Camunda. See how you can quickly build and deploy a working AI agent within your business process.

Artificial intelligence is at its most powerful when it doesn’t work in isolation. The real magic happens when AI agents are orchestrated alongside humans and digital systems—bridging REST endpoints, user interactions, and large language models (LLMs) into a single, seamless workflow. That’s exactly what Camunda makes possible.

In this blog, we’ll explore how to create an AI process agent using Camunda that brings together the intelligence of an LLM, the structure of a business process, and the flexibility of external tools. By the end, you’ll see how simple it can be to connect a REST endpoint, orchestrate an LLM, and even incorporate human decision-making into the same flow.

To guide you through the journey, we will walk step by step through building, configuring, and running your very first AI process agent, using the most recent functionality provided in Camunda 8.8-alpha8. You’ll learn how to design the workflow, connect the model, set up the tooling, and finally deploy and test the end-to-end process. Our own Niall Deehan has also provided a video to accompany this tutorial and a GitHub repository with the finished assets.

The outcome? An orchestration where humans and AI work together—not just coexisting, but actively complementing one another to solve real user requests.

Let’s dive in.

Getting to know our AI-powered customer support system

This example demonstrates how to integrate Camunda’s AI features into a Camunda process using AWS Bedrock and Claude Sonnet. The process creates a basic technology agent that can answer various queries related to technology topics and provide interactive responses to the requestor via a simple AI-powered customer support system.

Ai--agent-customer-support-camunda

It is important to note that the box in the center, the ad-hoc sub-process, is our AI process agent. It connects to the LLM and provides the tools the agent can use to resolve inquiries.

The process flow

The process is quite simple, but here’s the flow:

  1. Query Asked: This is the start event that initiates the process when the user submits a technical question from an input form.
  2. Tech Helper Agent: Our AI agent (Claude Sonnet via AWS Bedrock) processes the query using the following:
    • Built-in AI Knowledge: The agent has extensive domain knowledge about various technology topics.
    • External Tools: In this case, access to REST APIs for real-time data retrieval to help resolve the query.
    • Interactive Forms: The user task that provides formatted displays of the responses to interact with the requestor.
  3. Query Answered: The process completes after providing a comprehensive response to the requestor.

Understanding the AI agent

Let’s take a look at some of the properties we will be defining for the AI process agent in this exercise.

  • Provider: AWS Bedrock with Claude Sonnet 4 model (us.anthropic.claude-sonnet-4-20250514-v1:0)
  • Memory: In-process storage with 20-message context window
  • Limits: Maximum 10 model calls per process instance
  • Tool Integration: Two integrated tools for enhanced functionality
    • Get List of Tech Stuff: This is a REST API call using an HTTP endpoint that retrieves technology product data from https://api.restful-api.dev/objects.
    • Show Response to Query: This is a user task that formats and displays the AI response in Markdown format to the requestor.

What can our agent do?

Our agent isn’t just smart—it’s practical. It knows how to:

  • Tackle all sorts of technical customer requests without breaking a sweat.
  • Lean on the tools you’ve given it instead of making wild guesses.
  • Reuse the same tool as often as needed, tweaking parameters each time to get the job done.
  • Share its reasoning clearly, neatly wrapped in structured XML <thinking> tags.
  • Remember the conversation so every response is grounded in context.
  • Deliver answers that look clean and approachable, formatted in user-friendly markdown.
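Conceptually, the agent’s behavior boils down to a loop: send the conversation to the LLM, execute any tool it requests, feed the result back, and repeat until it produces a final answer. The sketch below is a simplified, illustrative Python version of that loop; the model stub, tool name, and message format are hypothetical stand-ins for what Camunda and AWS Bedrock handle for you.

```python
# Illustrative sketch of an agent's tool-calling loop (hypothetical stand-ins,
# not Camunda's actual implementation).

def fake_model(messages):
    """Stand-in for the LLM: requests the REST tool once, then answers."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        return {"tool": "get_list_of_tech_stuff", "args": {}}
    return {"answer": f"Found {len(tool_results[0]['content'])} products."}

TOOLS = {
    # Stand-in for the "Get List of Tech Stuff" REST connector.
    "get_list_of_tech_stuff": lambda args: [
        {"name": "Google Pixel 6 Pro"},
        {"name": "Apple iPhone 12 Pro Max"},
    ],
}

def run_agent(request, max_model_calls=10):
    messages = [{"role": "user", "content": request}]
    for _ in range(max_model_calls):  # mirrors the agent's model-call limit
        reply = fake_model(messages)
        if "answer" in reply:
            return reply["answer"]
        # Execute the requested tool and feed the result back as context.
        result = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("model call limit reached")

print(run_agent("What are the latest mobile phones available?"))
```

The `max_model_calls` parameter mirrors the 10-call limit we configure on the agent below: it keeps a confused model from looping forever.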

Building our AI agent in Camunda

Now that you have an understanding of what we are planning for our agent, let’s get down to building it.

Note: As mentioned, this process was built using Camunda 8.8-alpha8. In addition, it assumes that you have an AWS secret key and access key associated with an AWS user with the policies in place to access AWS Bedrock. You may use any available LLM and use the same steps, but results may vary.

Create the process

Let’s create the project and build it out so we can see it in action.

  1. Create a new BPMN diagram and name it “Tech Helper Agent” with an ID of process_techHelperAgent.
    Create-bpmn-diagram

  2. Before you begin modeling, make sure you are using Zeebe 8.8-alpha8 or later by updating the version at the bottom of your Modeler screen.
    Use-zeebe-8-8

  3. Create your start and end events and place them a good distance apart.
    Start-end

  4. Label the start and end events as follows:

| Element type | Name | ID |
| --- | --- | --- |
| Start event | Query Asked | start_queryAsked |
| End event | Query Answered | end_queryAnswered |
  5. Drag an expanded sub-process in between your start and end events.
    Sub-process

    Now your process should look something like this.
    Image25

  6. Now you want to make sure that your sub-process is changed to the element type “AI Agent.”
    Sub-process-ai-agent


    This is the key to configuring this process. By creating this sub-process, we are defining the AI process agent and connecting to the proper Large Language Model (LLM). Inside this sub-process will be all the tools available to our agent for resolving the request.
  7. After changing the element type for the sub-process, remove the start event within the sub-process as this will not be necessary.
  8. Now you need to name your agent. In this case, we have used the name of “Tech Helper Agent” with an ID of agent_techHelper.

Configure your AI process agent

Now you need to configure your agent so it has access to the LLM and the tools, and runs as expected.

  1. Open the properties panel on the right. The first thing we need to do is enter the information for the LLM (Model Provider). In this case, select “AWS Bedrock” from the provider dropdown.
    Llm-choice

    Once you make this selection, several additional fields will be displayed for configuration of this provider.
    Llm-configuration

  2. You will need to fill in the following:
    • Region
    • Access key
    • Secret key

In our case, we have these saved in secrets, but you will need to have the proper AWS account with the correct policy to grant AWS Bedrock access. The properties for the Model Provider now look like this for our process.

Llm-confugration-2
  3. Now you need to add the model to use. We have used us.anthropic.claude-sonnet-4-20250514-v1:0, which you will add to the “Model” property in the Model section.
    Llm-model

  4. We will leave the System Prompt with the default for now and move to the User Prompt. We will be using the input from the user that will be entered on an initiating form using the variable request.
    User-prompt

  5. Under the Response section, check “Include assistant message” and “Include agent context,” as this will provide additional information about what is happening with the agent.
  6. For this example, we are just going to use the default System Prompt; however, you can fine-tune this prompt if you want the agent to do some specific things or you want to give it some more definitive instructions.
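Under the hood, this provider configuration translates into a Bedrock model invocation. If you wanted to exercise the same model outside Camunda, the request would look roughly like the sketch below. This is an illustrative assumption, not Camunda’s internal call: the system prompt text is a placeholder, and the actual boto3 call is shown only in comments so the snippet runs without AWS credentials.

```python
MODEL_ID = "us.anthropic.claude-sonnet-4-20250514-v1:0"

# Shape of a Bedrock Converse API request matching the agent's configuration:
# a system prompt plus the user's `request` variable as the user message.
# The system prompt text here is a placeholder, not Camunda's default prompt.
request_payload = {
    "modelId": MODEL_ID,
    "system": [{"text": "You are a helpful technology support agent."}],
    "messages": [
        {"role": "user",
         "content": [{"text": "What are the latest mobile phones available?"}]},
    ],
}

# With boto3 and valid AWS credentials this could be sent as:
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**request_payload)

print(request_payload["modelId"])
```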

Add tools to your AI process agent

The next step is to add tools for your AI process agent to use to respond with the best information possible. Let’s get started.

  1. Drag a task into your AI agent to be the first tool, a REST API, for the agent.
  2. Change the element type to be the REST API Connector.
    Rest-connector

  3. You will need to name your task. In our example, we used the name of “Get list of Tech Stuff” and an ID of task_listofTechStuff.
  4. It is important to add context for the agent in the element documentation field so that the agent knows the purpose of this tool. For example, you may want to enter something similar to this:
    This tool is used to get the list of all available technology stuff.
  5. You now need to configure the REST API connector with the following:
| Field | Value |
| --- | --- |
| Authentication | None |
| Method | GET |
| URL | https://api.restful-api.dev/objects |
| Output mapping / Result variable | toolCallResult |

The AI agent is always looking for a specific output variable, toolCallResult. So, we need to make sure that we assign the result variable from this particular task to this variable name.

Note: We will be using this https://restful-api.dev API for this exercise.

Api
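For reference, the objects endpoint returns a JSON array of products, each with an id, a name, and an optional data map of attributes. The snippet below works against a hardcoded sample in that shape (the entries are illustrative, not a live response) to show the kind of structure toolCallResult will hold and how the agent might turn it into markdown.

```python
import json

# Illustrative sample in the shape returned by
# https://api.restful-api.dev/objects (entries made up for this sketch).
sample_response = json.loads("""
[
  {"id": "1", "name": "Google Pixel 6 Pro",
   "data": {"color": "Cloudy White", "capacity": "128 GB"}},
  {"id": "2", "name": "Apple iPhone 12 Pro Max", "data": null}
]
""")

# The agent receives an array like this as toolCallResult and can, for
# example, render it as the markdown list shown in the user task form.
markdown_lines = [f"- **{item['name']}**" for item in sample_response]
print("\n".join(markdown_lines))
```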

  6. Now we need to create the user task that will provide the results to the user and allow interaction with the agent. Drag another task into the agent and change it to a user task.
    User-task

  7. You will want to name this task as well. In our example, we used the name of “Show response to Query” with an ID of task_showResponse.
  8. As before, we want to make sure that we provide the agent with some information on when this task should be accessed, so in the element documentation, add something like:
    Use this tool in order to display data to the user who made the request.
  9. Now in this situation, we need the agent to create some data and then display it to the user in a formatted manner. For example, we want the agent to generate a list from the JSON object, toolCallResult, returned from the REST call. So, let’s go to the Inputs section and create an input local variable called resultOfQuery.

  10. Camunda provides a function called fromAi which can be used when we want to ask the agent to create a variable for us. In this case, we want to expand the FEEL editor to enter a FEEL expression for the variable assignment value.
    fromAi(toolCall.resultOfQuery, "This is the result of the query made by the user, format this in markdown")

    Camunda-fromai


    This function tells the agent to create a variable called resultOfQuery inside of toolCall, along with a description of what we want to put in that variable. The default variable type is string, so we did not add it to the call, but you could also specify it explicitly:
    fromAi(toolCall.resultOfQuery, "This is the result of the query made by the user, format this in markdown", "string")
  11. Just as we did for the REST API connector task, we need to create the output variable toolCallResult and then specify its value:
    {
      "responseFromUser": responseFromUser
    }

    Now the inputs and outputs section should look like this:
    Input-output
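The element documentation you wrote for each tool is what the agent reads when deciding which tool to call. The toy sketch below illustrates why clear descriptions matter; the descriptions are the ones from this tutorial, but the keyword matching is a deliberately crude stand-in for the LLM’s actual reasoning.

```python
# Toy illustration of tool selection driven by tool descriptions.
# The descriptions mirror this tutorial's element documentation; the
# word-overlap scoring is a stand-in for the LLM's reasoning.

TOOL_DOCS = {
    "Get list of Tech Stuff":
        "This tool is used to get the list of all available technology stuff.",
    "Show response to Query":
        "Use this tool in order to display data to the user who made the request.",
}

def pick_tool(goal):
    """Pick the tool whose description shares the most words with the goal."""
    goal_words = set(goal.lower().split())
    return max(
        TOOL_DOCS,
        key=lambda name: len(goal_words & set(TOOL_DOCS[name].lower().split())),
    )

print(pick_tool("display the formatted data to the user"))
```

If two tools had near-identical documentation, the agent would have no signal to choose between them, which is why each tool’s description should state its distinct purpose.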

Forms for user tasks

Now we need to make sure that all the user tasks have the required forms to interact with the requestor.

  1. You need to add a form to the start event so that the AI process agent will know what query it has received so that it can respond accordingly. Select the start event and click the chain link icon and select +create new form. This will take you to the form editor.
  2. Create a simple form called “Query Asked” with an ID of form_queryAsked.

  3. Create the following on your form.
| Component Type | Field label | Key/Text |
| --- | --- | --- |
| Text view | | ## Query the Agent |
| Text field | Request | request |

The request is the user prompt that will be provided to the agent.

Query
  4. Click Go to diagram to return to the diagram.
  5. Select the “Show response to Query” task and select the chain link icon to link it to a form. You will need to select “Create new form” to create a new form that will display the details of the response to the request.
  6. Name your form “Show response to Query” with an ID of form_showResponsetoQuery.
  7. Create the following on your form.
| Component Type | Field label | Key/Text |
| --- | --- | --- |
| Text view | | ## Your Request Response |
| Text view | | resultOfQuery |
| Text field | Follow Up Question | responseFromUser |

The resultOfQuery is the response that the agent defined previously, and the responseFromUser is the output defined in the agent in case the user wants to provide a refinement to the original request.

Response
  8. Click Go to diagram to return to the diagram.
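Behind the form editor, each of these forms is stored as a JSON document. Assuming the standard Camunda Forms JSON schema, the first form would look roughly like this sketch (the exact schema fields and versions may differ in your Modeler):

```json
{
  "type": "default",
  "id": "form_queryAsked",
  "components": [
    { "type": "text", "text": "## Query the Agent" },
    { "type": "textfield", "label": "Request", "key": "request" }
  ]
}
```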

You made it! Your very first AI Agent process is built, configured, connected, and ready to roll. Nicely done. Now comes the fun part: seeing it in action, but first let’s deploy it.

Deploying your process

Before you can run your process, it needs to be deployed. You can do this directly from your Web Modeler.

  1. From within Web Modeler, select the down arrow next to the Deploy & run button at the upper right corner of your screen, and select “Deploy” from the dropdown.
    Deploy

  2. Make sure you select to deploy your process to a Camunda 8.8-alpha8 cluster (or later). Notice that the deployment will also deploy the forms that you have linked to the user tasks in your process.
    Deploy-to-cluster

  3. Select Deploy to deploy the process.

Run your process

Now that your process is deployed, let’s see what it can do.

  1. Open Tasklist and select the “Processes” tab to find your newly deployed “Tech Helper Agent.”
    Tasklist-process

  2. Click Start process -> to open the initial form where you can enter your request for the agent.
  3. Enter the following in the request when prompted.
    What are the latest mobile phones available?

    Query-test


    Then click the Start process button.
  4. Open Camunda Operate and review the process. It will likely be at the AI agent process step deciding what tools should be called.
    Operate-view

  5. After a few seconds, it will call the REST API tool to obtain the list of items and then the user task to display what it found.
    Operate-view-2

  6. Now access Tasklist and select “Tasks” from the tabs at the top. When you open the “Show response to Query” step, you should see something like this.
    Response-test

  7. Be sure to assign the task to yourself and enter the following in the field at the bottom of the form.
    Follow-up-test

  8. The agent doesn’t need to query the API again as it has all the information from the previous query. It just needs to compare the models and provide a response. You will likely get something like this.
    Follow-up-response-test

  9. You can continue providing additional feedback at the end of the form, but if you don’t, the process will finish.
    Operate-view-3

And there you have it. You have a working technology support agent.

Some more things to try

You might want to look at some of the information provided in the agent.Context variable when in Operate. This information will show what the agent asked, the response it received, and more details about what the agent is doing.

You also might want to try asking different questions such as:

  • What is the best monitor over 24” on the market?
  • What are the types of flat screen TVs over 32”?
  • Which mobile phone is the best?

Have fun and see what your agent recommends.

Learn more about Camunda and AI agents

This process showcases the power of combining Camunda’s workflow orchestration with AI agents for intelligent, tool-enabled automation. If you want to learn more about how Camunda supports AI agents, please see the following references:

And don’t forget to get started today with Camunda

Start the discussion at forum.camunda.io

