Modeling with Situation Patterns

When modeling, you will sometimes realise that some situations share common characteristics. To save work for yourself and to spread such knowledge within your organisation, collect and document such patterns as soon as you understand their nature and have found a satisfying way to model them. For a start, we have collected some typical patterns which we see quite often in our modeling practice. You do not need to "reinvent the wheel" over and over again!

Escalating a Situation Step by Step

You need something and hope that it happens. That hope may of course materialize, but it does not have to! After some time, you will typically become impatient and try to do something to make it happen. But if it then still does not happen, there comes a point at which you have to accept failure.

We sometimes also call that very common pattern a Multi Step Escalation.

A Real Life Example

Being open minded towards brand new online shops can sometimes lead to disappointment.
A month ago, I ordered a pair of shoes from that new online shop! After two weeks of waiting: nothing. So I contacted them to find out what was going on. The clerk promised me that the shoes would leave the warehouse that very day! But again, nothing, so after another week I just canceled the order. Since then I have not heard a word.

Modeling a generic process dealing with all the possible outcomes of such situations can be achieved by means of several different modeling techniques.

1. Using Several Event Based Gateways

1 After ordering the goods, the process passively waits for the "success case" by means of an event based gateway: the goods should be delivered. However, in case this does not happen within a reasonable time, we take a first step of escalation: we remind the dealer.
2 We still stay optimistic. Therefore the process again passively waits for the "success case" by means of another event based gateway: the goods should still be delivered. However, in case this again does not happen within a reasonable time, we take a second step of escalation: we cancel the deal.

This solution explicitly shows how the two steps of this escalation are performed. Timers are modeled separately, followed by their corresponding escalation activities.

The usage of separate event based gateways leads to duplication (e.g. of the receiving message events) and makes the model larger, even more so in case multiple steps of escalation need to be modeled.

Furthermore, note that while we are reminding the dealer, we are, strictly speaking, not in a position to receive the goods! The process can handle a message event only if it is in a "ready-to-receive" state at exactly the moment the message occurs. Therefore the message might get lost before we reach the second event based gateway. While you might choose to ignore strict execution semantics when modeling for communication purposes, you will need to get them right for executable models.

As a consequence, you might want to use that pattern only when modeling simple two-phase escalations for communication purposes.
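In BPMN 2.0 XML, one such event based gateway step could be sketched as follows - a fragment assuming the usual bpmn namespace prefix; all IDs, the message reference and the P14D duration are our own illustrative choices, not taken from the text:

```xml
<!-- One escalation step: wait for the delivery message OR a timeout -->
<bpmn:eventBasedGateway id="GatewayGoodsDelivered" />
<bpmn:intermediateCatchEvent id="EventGoodsDelivered" name="Goods delivered">
  <bpmn:messageEventDefinition messageRef="MessageGoodsDelivered" />
</bpmn:intermediateCatchEvent>
<bpmn:intermediateCatchEvent id="EventTwoWeeksPassed" name="14 days passed">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>P14D</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:intermediateCatchEvent>
<bpmn:sequenceFlow id="FlowToDelivered" sourceRef="GatewayGoodsDelivered" targetRef="EventGoodsDelivered" />
<bpmn:sequenceFlow id="FlowToTimeout" sourceRef="GatewayGoodsDelivered" targetRef="EventTwoWeeksPassed" />
```

For the two-phase escalation described above, this whole fragment is duplicated - once before the reminder and once before the cancellation - which is exactly the duplication criticized in the text.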

2. Using Gateways Forming a Loop

1 After ordering the goods, the process passively waits for the "success case" by means of an event based gateway: the goods should be delivered. However, in case this does not happen within a reasonable time …​
2 We choose, by means of an exclusive gateway, to take a first step of escalation: we remind the dealer. We still stay optimistic. Therefore the process returns to the event based gateway and again passively waits for the "success case": the goods should still be delivered. However, in case this again does not happen within a reasonable time, we choose a second step of escalation: we cancel the deal.

This model is a more compact and more generic solution to the situation. When it comes to multiple steps of escalation, you will need such an approach to avoid huge diagrams.

The solution is less explicit. We cannot label the timer with explicit durations, as a single timer is used for both durations. The solution is also less readable for a less experienced audience. For a fast understanding of the two-step escalation, this method of modeling is less suitable.

Furthermore, note that while we are reminding the dealer, we are, strictly speaking, not in a position to receive the goods! The process can handle a message event only if it is in a "ready-to-receive" state at exactly the moment the message occurs. Therefore the message might get lost before we are back at the event based gateway. While you might choose to ignore strict execution semantics when modeling for communication purposes, you will need to get them right for executable models.

As a consequence, you might want to use that pattern only when modeling escalations with multiple steps for communication purposes.
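To distinguish the first timeout from the second one in such a loop, the exclusive gateway needs some state, e.g. a counter process variable. A sketch of the gateway's outgoing flows - the variable reminderCount and all IDs are illustrative assumptions of ours, not prescribed by the text:

```xml
<!-- First timeout: remind the dealer; any later timeout: take the default flow and cancel -->
<bpmn:exclusiveGateway id="GatewayAlreadyReminded" name="Already reminded?" default="FlowCancelDeal" />
<bpmn:sequenceFlow id="FlowRemindDealer" sourceRef="GatewayAlreadyReminded" targetRef="TaskRemindDealer">
  <bpmn:conditionExpression xsi:type="bpmn:tFormalExpression">${reminderCount == 0}</bpmn:conditionExpression>
</bpmn:sequenceFlow>
<bpmn:sequenceFlow id="FlowCancelDeal" sourceRef="GatewayAlreadyReminded" targetRef="TaskCancelDeal" />
```

The reminder task would increment reminderCount, so the next timeout takes the default flow - this is also why the timer itself cannot carry an explicit duration label per escalation step.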

3. Using Boundary Events

1 After ordering the goods, the process passively waits for the "success case" by means of a receive task: the goods should be delivered. However, in case this does not happen within a reasonable time …​
2 a non-interrupting boundary timer event triggers a first step of escalation: we remind the dealer. We still stay optimistic. Therefore we do not interrupt the receive task, but continue to wait for the "success case": the goods should still be delivered.
3 However, in case this does not happen within a reasonable time either, we trigger a second step of escalation by means of an interrupting boundary timer event: we stop waiting for the delivery and cancel the deal.

This model is even more compact and a very generic solution to the situation. When it comes to multiple steps of escalation, the non-interrupting boundary timer event could even trigger multiple times.

Furthermore, the model complies with BPMN execution semantics: since we never leave the wait state of the receive task, the process is always in a "ready-to-receive" state and an incoming message cannot get lost. While you might choose to ignore such strict execution semantics when modeling for communication purposes, you will need to get them right for executable models.

The solution is less readable and less intuitive for a less experienced audience, because the way the interrupting and non-interrupting timers collaborate requires a profound understanding of boundary events and their consequences for token flow semantics. For communication purposes, this method of modeling is therefore typically less suitable.

As a consequence, you might want to use that pattern when modeling escalations with two or more steps for executable models.
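The receive task with its two boundary timer events could be sketched in BPMN 2.0 XML like this - a fragment with illustrative IDs, names and durations; the repeating time cycle R/P7D is what lets the non-interrupting reminder fire multiple times:

```xml
<!-- The process stays in the wait state of the receive task the whole time -->
<bpmn:receiveTask id="TaskWaitForDelivery" name="Wait for delivery" messageRef="MessageGoodsDelivered" />

<!-- Non-interrupting: remind the dealer every 7 days, the receive task keeps waiting -->
<bpmn:boundaryEvent id="EventRemind" name="7 days passed" attachedToRef="TaskWaitForDelivery" cancelActivity="false">
  <bpmn:timerEventDefinition>
    <bpmn:timeCycle>R/P7D</bpmn:timeCycle>
  </bpmn:timerEventDefinition>
</bpmn:boundaryEvent>

<!-- Interrupting: after 30 days, give up waiting and cancel the deal -->
<bpmn:boundaryEvent id="EventGiveUp" name="30 days passed" attachedToRef="TaskWaitForDelivery" cancelActivity="true">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>P30D</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:boundaryEvent>
```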

Requiring a Second Set of Eyes

For a certain task - typically a critical one in terms of your business - you need the opinion, review or approval of two different people.

We sometimes also call that pattern Four Eyes Principle.

A Real Life Example

The manager of a small sized bank’s lending department has a problem.
Enough is enough! Over the last quarter we lost 100k Euros in unrecoverable medium sized loans. Controlling now tells me that this could probably have been easily avoided by more responsible decisions of our lending department staff! From now on, I want every such decision to be signed off by two people.

Modeling a generic process dealing with that requirement can be achieved easily, but the better solution also depends on whether you prefer overall speed or minimal total effort.

All of the following modeling patterns assume that the two or more tasks needed to ultimately approve the loan must not be completed by one and the same person. When executing such patterns, you must enforce that. Learn more about that technical aspect in Signing Off Work Done by Other People (4 Eyes Principle).

1. Using Separate Tasks

1 A first approver looks at the loan and decides. If s/he decides not to approve, we are done, but in case the loan is approved …​
2 a second approver looks at the loan. If s/he also decides to approve, the loan is ultimately approved.

This solution explicitly shows how the two steps of this approval are performed. Tasks are modeled separately, followed by gateways visualising the decision making process.

Note that the approvers work in a strictly sequential mode - which might be exactly what we need in case we want to minimize effort and e.g. display the reasoning of the first approver to the second one. However, we might also prefer to maximize speed - in that case, see solution 3. Using a Multi Instance Task further below.

The usage of separate tasks leads to duplication and makes the model larger, even more so in case multiple steps of approvals need to be modeled.

As a consequence, you might want to use that pattern when modeling the need for a second set of eyes in sequential order, thereby minimizing the effort needed by the participating approvers.

While it is theoretically possible to model separate, explicit approval tasks in parallel, we do not recommend such patterns due to readability concerns.

When looking to maximize speed, see solution 3. Using a Multi Instance Task further below as a better alternative.

2. Using a Loop

1 A first approver looks at the loan and decides. If s/he decides not to approve, we are done, but …​
2 in case the loan is approved we turn to a second approver to look at the loan. If s/he also decides to approve, the loan is ultimately approved.

This model is a more compact solution to the situation. When it comes to multiple sets of eyes, you will probably prefer such an approach to avoid huge diagrams.

Note that the approvers work in a strictly sequential mode - which might be exactly what we need in case we want to minimize effort and e.g. display the reasoning of the first approver to the second one. However, we might also prefer to maximize speed - in that case, see solution 3. Using a Multi Instance Task further below.

The solution is less explicit. We cannot label the tasks with explicit references to a first and a second step of approval, as a single task is used for both approvals. The solution is also less readable for a less experienced audience. For a fast understanding of the two steps needed for ultimate approval, this method of modeling is less suitable.

As a consequence, you might want to use that pattern when modeling the need for multiple sets of eyes in sequential order, thereby minimizing the effort needed by the participating approvers.

3. Using a Multi Instance Task

1 All the necessary approvers are immediately asked to look at the loan and decide - by means of a multi instance task. Each task is completed with a positive approval. Once all necessary approvers have given their positive approval, the loan is ultimately approved.
2 In case the loan is not approved by one of the approvers, a boundary message event is triggered, interrupting the multi instance task - thereby removing the tasks of all approvers who have not yet decided. The loan is then not approved.

This model is a very compact modeling solution to the situation. It can also easily deal with multiple sets of eyes needed.

Note that the approvers work in parallel - which might be exactly what we need in case we want to maximize speed and want the approvers to do their work independently of and uninfluenced by each other. However, we might also prefer to minimize effort - in that case, see solutions 1. Using Separate Tasks or 2. Using a Loop above.

The solution is much less explicit and less readable for a less experienced audience, because the way the boundary event interacts with a multi instance task requires a profound understanding of BPMN. For communication purposes, this method of modeling is therefore typically less suitable.

As a consequence, you might want to use that pattern when modeling the need for two or more sets of eyes in parallel, thereby maximizing speed for the overall approval process.
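Assuming a process variable approvers holding the list of required approvers (an illustrative name of ours), the parallel multi instance task with its interrupting boundary message event could be sketched in Camunda 7 flavored BPMN 2.0 XML as:

```xml
<!-- One task instance is created per entry in the "approvers" collection -->
<bpmn:userTask id="TaskApproveLoan" name="Approve loan" camunda:assignee="${approver}">
  <bpmn:multiInstanceLoopCharacteristics
      camunda:collection="${approvers}"
      camunda:elementVariable="approver" />
</bpmn:userTask>

<!-- A single rejection interrupts the multi instance task, removing all open task instances -->
<bpmn:boundaryEvent id="EventLoanRejected" name="Loan rejected" attachedToRef="TaskApproveLoan" cancelActivity="true">
  <bpmn:messageEventDefinition messageRef="MessageLoanRejected" />
</bpmn:boundaryEvent>
```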

Measuring Key Performance Indicators (KPIs)

You want to measure specific aspects of your process execution performance along some indicators.

A Real Life Example

A software developer involved in introducing Camunda gets curious about the business.
How many applications do we accept or decline per month…​ and how many do we need to review manually? How many are later accepted and declined? How much time do we spend for those manual work cases? And how long does the customer have to wait for an answer? I mean …​ do we focus on the meaningful cases …​?

When modeling a process, we should actually always convey some information about important key performance indicators implicitly - e.g. by naming start and end events after the process state reached from a business perspective. But on top of that, we might explicitly add additional business milestones or phases.

While the following section concentrates on the aspect of modeling key performance indicators, you might want to learn more about using them for Reporting About Processes from a more technical perspective - when faced with the task of actually retrieving and presenting Camunda’s "historical data" collected along the way of execution.

Showing Milestones

1 First, we assess the application risk based on a set of automatically evaluable rules.
2 We can then determine whether the automated rules already came to a (positive or negative) conclusion or not. In case the rules led to an inconclusive result, a human must assess the application risk.
3 We use explicit intermediate events to make perfectly clear that we are interested in the applications which never see a human …​
4 and be able to compare that to the applications which needed to be assessed manually, because the automatic assessment failed to determine a clear result.
5 We also use end events which are meaningful from a business perspective. We must know whether an application was either accepted …​
6 or rejected.

By means of that process model, we can now let Camunda count the applications which were accepted and declined. We know how many and which instances we needed to review manually and can therefore also narrow down our accepted/declined statistics to those manual cases.

Furthermore, we will be able to measure the handling time needed for the user task, e.g. by measuring the time from claiming the task to completing it. The customer has to wait for the cycle time from start to end event; this statistic could of course be limited e.g. to the manually assessed applications and will then also include any idle periods in the process.

By comparing the economic value of manually assessed applications to the effort (handling time) we invest into them, we will also be able to learn whether we focus our manual work on the meaningful cases, and we can eventually improve the automated assessment rules.
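Such milestones and business-meaningful end events are plain, named events in the BPMN 2.0 XML - Camunda's history then records when each of them was passed. The IDs and names below are illustrative:

```xml
<!-- Milestones: intermediate "none" throw events marking the assessment path taken -->
<bpmn:intermediateThrowEvent id="MilestoneAutoAssessed" name="Application assessed automatically" />
<bpmn:intermediateThrowEvent id="MilestoneManuallyAssessed" name="Application assessed manually" />

<!-- End events named after the business result reached -->
<bpmn:endEvent id="EndApplicationAccepted" name="Application accepted" />
<bpmn:endEvent id="EndApplicationDeclined" name="Application declined" />
```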

Emphasizing Process Phases

As an alternative or supplement to using events, you might also use sub processes to emphasize certain phases in your process.

1 By introducing a separate embedded sub process we emphasize the phase of manual application assessment, which is the critical one from an economic perspective.

Note that this makes even more sense if multiple tasks are contained within one phase.

Evaluating Decisions in Processes

You need to come to a decision relevant for your next process steps. Your actual decision depends on a number of different factors and rules.

We sometimes also call that pattern Business Rules in BPMN.

A Real Life Example

The freshly hired business analyst is always as busy as a bee.
Let’s see…​ Category A customers always get their credit card applications approved, whereas Category D gets rejected by default. For B and C it’s more complicated. Right, in between 2500 and 5000 Euros, we want a B customer, below 2500 a C customer is OK, too. Mmh. Should be no problem with a couple of gateways!

Showing Decision Logic in the Diagram?

When modeling business processes, we focus on the flow of work and use gateways just to show that subsequent tasks or results fundamentally differ from each other. However, in the example above, the business analyst used gateways to model the logic underlying a decision - which is clearly considered an anti-pattern!

It does not make sense to model the rules determining a decision inside the BPMN model. The decision tree would grow exponentially with every additional criterion. Furthermore, we typically want to change such rules much more often than the process itself (in the sense of the tasks that need to be carried out).

Using a Single Task for a Decision

1 Instead of modeling the rules determining a decision inside the BPMN model, we just show a single task representing the decision. Of course, when preparing for executing such a model in Camunda, we can wire such a task with a DMN decision table or some other programmed piece of decision logic.
2 While it would be possible to hide the evaluation of decision logic behind the exclusive gateway, we recommend always showing an explicit task that retrieves the data, which subsequent data based gateways might then use.
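When preparing such a model for execution in Camunda Platform 7, the decision task can be wired to a DMN decision table via the camunda:decisionRef attribute. The decision key and variable name below are illustrative assumptions:

```xml
<!-- The task evaluates the DMN table deployed under the given decision key;
     the single result entry is stored in the process variable "approved" -->
<bpmn:businessRuleTask id="TaskDecideOnApplication" name="Decide on application"
    camunda:decisionRef="creditCardApplicationDecision"
    camunda:resultVariable="approved"
    camunda:mapDecisionResult="singleEntry" />
```

The subsequent exclusive gateway then only routes on the resulting variable, keeping the rules themselves out of the BPMN diagram.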

Distinguishing Undesired Results from Fatal Problems

You model a certain step in a process and wonder about undesired outcomes and other problems preventing you from achieving the result of the step.

A Real Life Example

What today is a "problem" for the business might become part of the happy path in a less successful future!
Before we can issue a credit card, we must ensure that a customer is creditworthy. Unfortunately, it might sometimes turn out that we cannot even get any information about the customer. At the moment, we then typically reject as well. Luckily, we have enough business with safe customers anyway.

Using Gateways to Check for Undesired Results

1 Showing the check for the applicant’s creditworthiness as a gateway also informs about the result of the preceding task: the applicant might be creditworthy - or not. Both outcomes are valid results of the task, even though one of the outcomes here might be undesired from a business perspective.

Using Boundary Error Events to Check for Fatal Problems

1 Not knowing anything about the creditworthiness (because we cannot even retrieve information about the applicant) is not considered a valid result of the step, but a fatal problem preventing us from achieving any valid result. We therefore model it as a boundary error event.
The fact that both problems (an unknown applicant or an applicant who turns out not to be creditworthy) currently lead to the same reaction in the process (we reject the credit card application) does not change the fact that we need to model them differently. The decision in favor of a gateway or an error boundary event solely depends on the exact definition of the result of a process step. See the next section.
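In BPMN 2.0 XML, such a fatal problem is modeled as an error boundary event - which is always interrupting. The IDs and the error code below are illustrative:

```xml
<!-- The task a fatal problem can occur in -->
<bpmn:serviceTask id="TaskCheckCreditworthiness" name="Check creditworthiness" />

<!-- Error boundary event: only triggered when the referenced error is thrown -->
<bpmn:boundaryEvent id="EventApplicantUnknown" name="Applicant unknown" attachedToRef="TaskCheckCreditworthiness">
  <bpmn:errorEventDefinition errorRef="ErrorApplicantUnknown" />
</bpmn:boundaryEvent>

<!-- The error itself is declared at definitions level -->
<bpmn:error id="ErrorApplicantUnknown" errorCode="APPLICANT_UNKNOWN" />
```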

Understanding the Definition of the Result

What we consider a valid result for a process step depends on assumptions and definitions. We might have chosen to model the process above with slightly different execution semantics, while achieving the same business semantics:

1 The only valid result for the step "Ensure credit-worthiness" is now that the customer is in fact creditworthy. Therefore any other outcome must be modeled with an error boundary event.

In order to advance clarity by means of process models, it is absolutely crucial for modelers to have a clear mental definition of the result a specific step produces, and as a consequence to be able to distinguish undesired results from fatal problems preventing us from achieving any result for the step.

While there is per se no "right" way to decide what to consider a valid result for your step, the business reader will typically have a mental preference to see certain business issues either more as undesired outcomes or more as fatal problems. However, for executable pools, your discretion to decide about a step’s result might also be limited, e.g. when using service contracts which are already pre-defined.

Asking Multiple Recipients for a Single Reply

You offer something to or request something from multiple communication partners, but you actually just need the first reply.

We sometimes also call that pattern First Come, First Serve.

A Real Life Example

A well known personal transportation startup works with a system of relatively independent drivers.
Of course, when the customer requests a tour, speed is everything. Therefore we need to limit a tour to those of our drivers which are close by. Of course there might be several drivers within a similar distance. We then just offer the tour to all of them!

Using a Multi Instance Task

1 After having determined all drivers which are currently close enough to serve the customer, we push the information about the tour to all of those drivers.
2 We then wait for the reply of a single driver. Once we have it, the process will not wait any longer but proceed to the end event, informing the customer about the approaching driver.
According to the process model, it is possible that another driver accepts the tour as well. However, as the process in the Tour Offering System is no longer waiting for the message, it will get "lost". The process can handle a message event only if it is in a "ready-to-receive" state at exactly the moment the message occurs. In Camunda Platform this means that the sender of the message gets an error because the message cannot be delivered. As our process proceeds to the end event after the first reply, all subsequent messages will be intentionally "ignored".
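The combination of a multi instance send task and a single receive task could be sketched as follows - the collection, variable and message names are illustrative assumptions:

```xml
<!-- Push the tour offer to every nearby driver in parallel -->
<bpmn:sendTask id="TaskOfferTour" name="Offer tour to driver">
  <bpmn:multiInstanceLoopCharacteristics
      camunda:collection="${nearbyDrivers}"
      camunda:elementVariable="driver" />
</bpmn:sendTask>

<!-- Wait for exactly one acceptance; later replies find no waiting instance -->
<bpmn:receiveTask id="TaskWaitForAcceptance" name="Wait for first acceptance" messageRef="MessageTourAccepted" />
```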

Processing a Batch of Objects

You need to process many objects at once, which were already created before one by one - or which were updated one by one to reach a certain status.

We sometimes also call that pattern simply the 1-to-n problem.

A Real Life Example

A lawyer explains to a new client the way he intends to bill him.
Of course, if you need advice, you can call me whenever you want! We will agree about any work that needs to be done and my assistant will track those services which are subject to a charge. Mostly once a month, you will receive a neatly structured invoice providing you with all the details!

Using Data Stores and Multi Instance Activities

1 The client asks for advice whenever s/he needs it. Note that we create here a process instance per request for advice.
2 The lawyer makes sure to record the billable hours needed for the client.
3 As he does not directly inform anybody by doing this, but rather collects data, we show this with a data store representing the time sheet and a data association pointing in its direction - representing the "write operation".
4 The assistant starts his invoicing process on a monthly basis, in other words we create a process instance per monthly billing cycle.
5 As a first step, the assistant determines all the billable clients. These are, of course, the clients for which time sheet entries exist for the respective month. Note that we have many legal advice instances that have a relationship to one billing instance and that the connection is implicitly shown by the "read operation" on the current status of data in the time sheet.
6 Now that the assistant knows the billable clients, s/he can iterate through them and invoice all of them. We use a sequential multi instance sub process to illustrate that we need to do this for every billable client.
7 On the way, the assistant is also in charge of checking and correcting time sheet entries, illustrated with a parallel multi instance task. Note that these time sheet entries, and hence the task instances, relate here 1:1 to the instances of the lawyer’s "legal consulting" process. In real life, the lawyer might have created several time sheet entries per legal advice process, but this does not change the logic of the assistant’s process.
8 Once the client is invoiced, the assistant starts a "payment processing" instance per invoice, the details of which are not shown in this diagram. We can imagine that the assistant needs to be prepared to follow up with reminders until the client eventually pays the bill.

Concurrent Dependent Instances

You need to process a request, but need to make sure that you don’t process several similar requests at the same time.

A Real Life Example

A bank worries about the increasing costs for creditworthiness background checks.
Such a request costs real money, and we often have packages of related business being processed at the same time. So we should at least make sure that, if one credit check for a customer is already running, no second credit check for the same customer is performed at the same time.

Using Message Events

1 Once an instance passes this event and moves on to the subsequent actual determination of the creditworthiness …​
2 …​ other instances will determine that there already exists an active instance and wait to be informed by this instance.
3 When the active instance has determined the creditworthiness, it will move on to inform the waiting instances …​
4 …​ which will receive a message with a creditworthiness payload and be finished themselves with the needed information.
The model explicitly shows separate steps here (determine and inform waiting instances) which you might want to implement more efficiently within one single step, performing both semantic steps at once by means of a small piece of Java code.
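The waiting branch could be sketched with an intermediate message catch event, while the active instance informs the waiters via a send task. The delegate name is an illustrative assumption of ours - the actual correlation logic would live in that small piece of Java code:

```xml
<!-- Waiting instances block here until the active instance correlates the message -->
<bpmn:intermediateCatchEvent id="EventCreditCheckFinished" name="Credit check finished">
  <bpmn:messageEventDefinition messageRef="MessageCreditCheckFinished" />
</bpmn:intermediateCatchEvent>

<!-- The active instance informs all waiting instances, passing the creditworthiness payload -->
<bpmn:sendTask id="TaskInformWaitingInstances" name="Inform waiting instances"
    camunda:delegateExpression="${informWaitingInstancesDelegate}" />
```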

Using a Timer Event

While using timer events can be a feasible approach in case you want to avoid communication between instances, we do not recommend it. Downsides are e.g. that such solutions cause delays and overhead due to the periodic queries and the loop.

1 Once an instance passes this event and moves on to the subsequent actual determination of the creditworthiness …​
2 …​ all other instances will go into a wait state for some time, but check periodically whether the active instance is finished.
3 When the active instance has determined the creditworthiness and finishes …​
4 …​ all other instances will also finish after some time.
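The periodic check could be sketched as an intermediate timer catch event inside the loop - the PT10M duration and the IDs are illustrative choices:

```xml
<!-- Each waiting instance sleeps for a while, then loops back to check again -->
<bpmn:intermediateCatchEvent id="EventWaitBeforeRecheck" name="10 minutes passed">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>PT10M</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:intermediateCatchEvent>
<bpmn:exclusiveGateway id="GatewayCheckFinished" name="Active instance finished?" />
```

Every waiting instance re-evaluates the gateway each cycle - this polling is exactly the overhead that makes the message-based solution above preferable.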

No guarantee - The statements made in this publication are recommendations based on the practical experience of the authors. They are not part of Camunda’s official product documentation. Camunda cannot accept any responsibility for the accuracy or timeliness of the statements made. If examples of source code are shown, a total absence of errors in the provided source code cannot be guaranteed. Liability for any damage resulting from the application of the recommendations presented here, is excluded.

Copyright © Camunda Services GmbH - All rights reserved. The disclosure of the information presented here is only permitted with written consent of Camunda Services GmbH.