Daily Statistics with Camunda

In this step-by-step tutorial, learn how you can completely automate daily statistics using Camunda.

BPMN Process: Get incident statistics

When using tools like incident.io to manage your incidents (not Camunda incidents in your business processes, but incidents that relate to your general system, offering, and production environment), it is often useful to get an overview of ongoing incidents. Furthermore, it is beneficial to get reports based on certain (custom) filters like affected teams, severity, etc.

Unfortunately, incident.io doesn’t support this type of reporting out of the box. Camunda can help here 🚀

Today, I want to show you how you can get the necessary statistics from incident.io, extract the important results, and post them, for example to Slack, to create a daily incident statistics update with Camunda.

Screenshot: Incident statistics daily update

Get details from incident.io

The incident.io API is rather simple and well documented. All necessary information can be found in their API docs.


As a first step, you have to create an API token. Follow this guide to create one.

After you have created the token, make sure to store it somewhere safe.

Incidents API

We want to get the current statistics for the incidents. For that, we have to query the incidents endpoint: https://api.incident.io/v2/incidents

In our daily update we are only interested in ongoing incidents; for that, we need to filter with status_category[one_of]=live. Details of this filtering mechanism are also described in their documentation.

It becomes interesting if you want to filter for custom_fields (which you have defined earlier on your own). That can be for example the affected team, or other details you add to your incidents. Be aware that you have to find the corresponding IDs for the custom_field and also the potential options.

To find these you can either query the custom_fields endpoint or run a query against the incidents endpoint, as the response contains all necessary information.
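As a minimal sketch of digging the IDs out of such a response, you can pipe it through jq. The JSON shape below is a simplified assumption of what the custom_fields endpoint returns (in reality you would fetch it with `curl -H "Authorization: Bearer $token" https://api.incident.io/v2/custom_fields`); the IDs are made up for illustration:

```shell
# Hypothetical sample response shape; real data comes from the custom_fields endpoint.
sample='{"custom_fields":[{"id":"cf_123","name":"Affected team","options":[{"id":"opt_456","value":"Platform"}]}]}'

# Print each custom field's name and ID, followed by its option IDs.
echo "$sample" | jq -r '.custom_fields[] | "\(.name): \(.id) -> \(.options[]?.id)"'
```

This prints one line per field/option pair, e.g. `Affected team: cf_123 -> opt_456`, which gives you exactly the `<CUSTOM_FIELD_ID>` and `<CUSTOM_FIELD_OPTION_ID>` values used below.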


After you find all the necessary filtering (or if you just want to experiment) you can use the following script to try your query.

#!/bin/bash
# Script to query the incident.io API
set -euo pipefail

if [ -z "${1:-}" ]; then
  echo "Must provide an API token to query the incident.io API"
  exit 1
fi

token=$1

# Custom fields need to be addressed with IDs
incidents=$(curl --verbose --get "https://api.incident.io/v2/incidents" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $token" \
  --data 'custom_field[<CUSTOM_FIELD_ID>][one_of]=<CUSTOM_FIELD_OPTION_ID>&status_category[one_of]=live')

count=$(echo "$incidents" | jq '.incidents | length')

echo "$count incidents"
echo "$incidents" | jq '.incidents[].name'

Example usage

$ ./ $token 
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  459k    0  459k    0     0  1054k      0 --:--:-- --:--:-- --:--:-- 1055k
10 incidents
"Some other incident"
"test incident"

With that script working, we have everything we need and can start automating.

Automating with Camunda

You can either get a trial account here, or host Camunda on your own, for example with the provided Self-managed version.

For simplicity, I will skip the setup details here and assume the use of the Camunda SaaS offering. Furthermore, we will concentrate mostly on the modeling details (using Web Modeler).

Follow these instructions here if you are unsure how to model your first diagram.

Adding API key as Connector secret

When querying the API we need an API key, as described above. To make this available to our process instances and connectors we have to create a secret in our Camunda cluster.

For more on how to create a secret for your Camunda cluster, take a look at this guide.

In the end, it should be similar to the below:

Screenshot: Connector secrets

Query incident statistics via REST Connector

BPMN: Rest Connector

After we have created our Camunda cluster, added our Connector secret, and started modeling we can add a REST Connector. Details about this can be found in the documentation here.

In the properties panel of the REST Connector, we have to specify all important details, similar to what we used in our script above.

Screenshot: REST connector properties panel
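A sketch of the important values, mirroring the script above. The secret name INCIDENT_IO_API_KEY is my own assumption (pick whatever you named your secret); Camunda Connector secrets are referenced with the `{{secrets.NAME}}` syntax:

```
Method:               GET
URL:                  https://api.incident.io/v2/incidents
Authentication type:  Bearer token
Bearer token:         {{secrets.INCIDENT_IO_API_KEY}}
Query parameters:     { "status_category[one_of]": "live" }
```

The query parameters field takes a FEEL context; bracketed filter keys like `status_category[one_of]` simply become quoted string keys.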

One important part we need to add as well is the result expression, which would look like this:

   { incidentIoResponse: response.body }

This maps the response body to our variable incidentIoResponse. With that, we are ready to query the details of our incidents. You can either test this by creating a process instance and verifying the results, or you can continue with the next step.

Extracting incident details

After we get all the incident details from incident.io, we need to extract the important parts. We can do this by defining a script task in our process model and implementing it with a FEEL expression.

BPMN: Extracting statistics

Depending on what you are interested in you want to extract different details. For all the potential properties of incidents, you can take a look at the API documentation here.

In our example, we are interested in:

  • Incident name
  • Incident Severity
  • Incident permalink, which points to the incident.io page
  • The related Slack details (channel name and ID)
  • The incident commander (with name and Slack ID)
  • Incident count

All of this can be extracted with the following FEEL expression:

      {
        incidents:
          for incident in incidentIoResponse.incidents
          return { name: incident.name,
                   severity: incident.severity.name,
                   permalink: incident.permalink,
                   slack_channel_id: incident.slack_channel_id,
                   slack_channel_name: incident.slack_channel_name,
                   ic_name: incident.incident_role_assignments[item.role.shortform = "commander"][1].assignee.name,
                   ic_slack_id: incident.incident_role_assignments[item.role.shortform = "commander"][1].assignee.slack_user_id },
        incidents_count: count(incidentIoResponse.incidents)
      }

With the FEEL for loop, we iterate over the incidents and create a new context (object) for each incident. The result of the script task is stored in the variable incidents_stats.

The FEEL list filter allows us to find the incident commanders and their respective names and Slack IDs.
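As a minimal sketch of this filter-then-index pattern, with a hypothetical assignments list (the names and IDs are made up):

```feel
// filter keeps only commander assignments, [1] takes the first match
[
  {role: {shortform: "lead"},      assignee: {name: "Ada"}},
  {role: {shortform: "commander"}, assignee: {name: "Grace", slack_user_id: "U123"}}
][item.role.shortform = "commander"][1].assignee.name
// evaluates to "Grace"
```

Note that FEEL list indices start at 1, not 0.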

Sending statistics via Slack Connector

BPMN: Slack Connector added

Camunda supports several Connectors out of the box, not only the REST Connector but also a Slack Connector. We use the Slack Connector in our example to send our statistics to the respective channel.

Of course, here too we need an API key to access the Slack API; you can follow this guide to create a Slack OAuth token.

Similar to the REST Connector, we need to add the Slack OAuth Token to our Connector Secrets. Follow this documentation if you need to know how to create/add them to your cluster.

Screenshot: Connector Secrets

After adding the secret we can start with modeling the Slack Connector.

We have to specify some details, like which channel or user should get the update. Furthermore, we have to specify what the message should look like.

Screenshot: Slack Connector props

For the Slack message, we can again use a FEEL expression.

":incident-heart: This is your daily incident update :incident-io:\n\n"
+ "We have currently *" + string(incidents_stats.incidents_count) + " ongoing Incidents*\n\n\n"
+ string join(
  for incident in sort(incidents_stats.incidents, function(x, y) x.severity < y.severity)
  return incident.severity + " - <" + incident.permalink + "|" + incident.name + ">" + " IC: <@" + incident.ic_slack_id + ">"
, "\n")

Formatting the Slack message has to follow some guidelines, which we can find in the Slack documentation here.

As the first part of the message, we want to post a header (with some nice emoji; whether these are available depends on your Slack workspace).

As the message field expects a string, we have to concatenate everything into one string. We iterate over the incidents and join them via the string join function, which allows us to specify a delimiter; in our case a newline.

To sort our incident list by severity we can use the FEEL sort function.
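As a minimal sketch of the two FEEL built-ins used above, with toy values:

```feel
sort([3, 1, 2], function(x, y) x < y)          // [1, 2, 3]
string join(["sev-1 - A", "sev-2 - B"], "\n")  // "sev-1 - A\nsev-2 - B"
```

In our message, the precedent function compares severity strings, so the ordering is lexicographic over the severity names.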

We can now experiment with our model; it should already send a message to the specified channel or DM when we create a process instance for the definition.

Running it every working day

BPMN: Daily incident statistics

As the last step, we want to run the process every working day at the same time. In our example, we want to do it always before lunch (around 12 o’clock CET). To achieve this, we model a timer start event with a cycle expression. Details about this can be found in the documentation here.

The example expression we would use is 0 0 11 * * MON-FRI. We use 11 here because the time is specified in UTC.

Be aware that tools like crontab-guru don’t parse this expression correctly, as they don’t expect the seconds field (at the beginning). There, the expression would be 0 11 * * MON-FRI.
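For reference, the six fields of the cron expression break down as follows (note the leading seconds field):

```
0 0 11 * * MON-FRI
│ │ │  │ │ └─ day of week: Monday through Friday
│ │ │  │ └─── month: every month
│ │ │  └───── day of month: every day
│ │ └──────── hour: 11 (UTC, i.e. 12:00 CET)
│ └────────── minute: 0
└──────────── second: 0
```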

Daily incident statistics with Camunda

After modeling all our logic (wiring everything together) we can deploy this to our previously created Camunda cluster, and as soon as the right time is reached we will see a Slack message like this:

Screenshot: Daily incident statistics

As you can see, there is not much magic behind it; it is quite “easy” to generate a daily report from an external source like incident.io.

You no longer need to scrape the statistics manually and put them into a good format to share with others. We were able to completely automate this.

That is the power of Camunda: “Automate Any Process, Anywhere.”

P.S.: The possibilities for going further are endless, for example, auto-assigning an engineer to an incident based on the current ongoing incidents (related statistics and load).

P.P.S.: If you like this post please let us know on the forum, where you can leave a comment. I’m open to any other feedback as well.

Editor’s note: This post first appeared on Medium. We have republished it here with slight edits for clarity.

Notable Replies
  1. Hi Christopher,

    I find the content of the blog good because it highlights a very easy way to implement monitoring with Camunda. However, there are a few areas where clarity and wording could be improved for better understanding:

    • The initial paragraphs could explicitly state that in your example, Camunda serves as a system for automating incident reporting rather than generating the incidents themselves. This distinction isn’t immediately clear and requires re-reading to grasp fully. (I had to read the blog twice to actually get the point). Clarifying this point early on would prevent misinterpretation.

    • Without the point above, I was assuming that you were going to explain how to automate the reporting of incidents inside Camunda using Camunda, but actually you wanted to make a more general/broad point: how to automate the reporting of any system that feeds data into Camunda.

    Overall it was good, but as said, the first part was not that clear. Enhancing clarity and explicitly stating the focus of the blog at the very beginning will improve its effectiveness in conveying the intended message.

  2. Addendum: I read the original blog on Medium, and over there it makes more sense. On the Camunda blog there is a good deal of context that requires the clarification I talked about. I suppose the reason is that monitoring and reporting of incidents inside Camunda is indeed a hot topic, and this natural expectation played into reading the blog.

  3. Hey @g.manzano

    thanks for the feedback! I agree that the term “incident” is quite overloaded in the context of Camunda, so this makes total sense. We will improve the opening, thank you :bowing_man:


  4. We updated the intro a bit. I hope this helps.

    Thanks again for your feedback :bowing_man:


  5. I just read it. Love it now. You guys rock !

