Size your environment for Camunda appropriately, including sufficient hardware and database space.
You do not need big hardware to run Camunda. The hardware requirements are essentially determined by two things:
The container / application server you want to use (see Deciding About Your Stack).
Things you do in Delegation Code, such as Service Tasks. For example, when calling SOAP web services or doing complex calculations in Java, more CPU time is consumed within the delegation code (your code) than in Camunda itself.
The only way to get reliable figures for your project and environment is to do load testing on a close-to-production environment. We recommend doing this if in doubt. Driving the REST API via load generator tools like JMeter is relatively easy.
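If JMeter is not at hand, even a small script can drive the REST API for a first smoke test. The sketch below uses only the Python standard library and assumes a local engine at `http://localhost:8080/engine-rest` with a deployed process whose key is `invoice`; both the URL and the key are placeholders to adjust for your setup.

```python
# Minimal load-generation sketch against the Camunda REST API.
# Assumptions: engine reachable at BASE_URL, process key "invoice" deployed.
import json
import urllib.request

BASE_URL = "http://localhost:8080/engine-rest"  # placeholder, adjust

def start_instance_request(base_url: str, process_key: str) -> urllib.request.Request:
    """Build the POST request that starts one process instance."""
    url = f"{base_url}/process-definition/key/{process_key}/start"
    body = json.dumps({"variables": {}}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

def fire(n: int, process_key: str = "invoice") -> None:
    """Start n process instances sequentially.

    A real load test would fire from many threads or machines; this loop
    only checks that the endpoint answers at all.
    """
    for _ in range(n):
        with urllib.request.urlopen(start_instance_request(BASE_URL, process_key)) as resp:
            assert resp.status == 200
```

This only measures the REST/engine round trip; for meaningful sizing figures, run such load from a separate machine against a close-to-production environment, as recommended above.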
From the Camunda perspective there are a number of aspects to look at:
Average duration between process starts: This determines the overall load on the system. We typically try to calculate how many new process instances are started per second.
If you have a new process instance only every couple of seconds or minutes (or even hours), you don’t have to think about sizing. Use a "small server".
If you have more than a hundred process instances per second, consider using a bigger server. For example, we could start around 100 to 500 process instances per second on a normal developer notebook (Intel i5, 4 cores @ 2.5 GHz, 8 GB RAM, SSD).
Average process instance run time: With the average run time you can calculate how many process instances you typically have in the runtime database (e.g. when starting 1 process instance per hour with a typical duration of 2 weeks, you have 2 weeks * 7 days * 24 hours * 1 process instance/hour = 336 running process instances). This does not create "active" load on the engine, but it influences database behavior (query runtime, index size, index write time).
Wait States: Sometimes you face situations where (most) process instances run through in one go, without stopping at a wait state. In that case the process instance is never written to the runtime database, which obviously decreases load dramatically.
Number of concurrent clients: This determines how many queries are fired against the database in parallel.
Typical Queries: On the database level there is a big difference between loading process instances / tasks only by id or business key (both have an index) and loading them by a combination of different process variables (e.g. to correlate by business data). In high-load scenarios, think about the most common queries you will have.
History: The configured History Level determines how much history data is written and how much database disk space is required.
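The runtime-database figure from the list above follows directly from Little's law: the average number of instances in flight equals the arrival rate multiplied by the average runtime. A quick sanity check of the 336-instance example:

```python
# Little's law: instances in the runtime tables = arrival rate * avg runtime.
def running_instances(starts_per_hour: float, avg_runtime_hours: float) -> float:
    """Average number of process instances present in the runtime database."""
    return starts_per_hour * avg_runtime_hours

# The example from the text: 1 instance/hour, each running for 2 weeks.
two_weeks_in_hours = 2 * 7 * 24  # = 336 hours
print(running_instances(1, two_weeks_in_hours))  # -> 336
```

Plug in your own start rate and runtime to estimate the size of the runtime tables your database will have to serve.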
We normally do not hit scalability limits with Camunda. Due to its small footprint the engine runs with extreme efficiency. All state is persisted in the database, so you can always add new process engine instances to speed up execution. See Why Camunda scales like hell.
The natural limit of this architecture is the database. Horizontal scalability can be reached by additional mechanisms like sharding, which are not part of these best practices, as they are project-specific solutions (see Camunda meets Cassandra @Zalando for a customer example).
We recommend running two machines for high availability. They do not have to form a proper cluster in the application server sense; just set up two identical nodes pointing to the same database.
You can run Camunda on virtualized systems. The license is not bound to CPU cores, making this very easy from a licensing perspective as well.
We do not give concrete configuration recommendations. Instead, we recommend thinking in "server classes":
Small: Whatever you typically run as a small server (e.g. 1-2 CPU, 1-8 GB RAM).
Medium: Whatever you typically run as a medium server (e.g. 2-4 CPU, 4-16 GB RAM).
Large: Whatever you typically run as a large server (e.g. 4-64 CPU, 16-128 GB RAM).
In most projects small servers are sufficient. Think about a medium server:
If you start more than 100 process instances / second
If you have CPU intense delegation code.
If your code/deployment has additional requirements.
As mentioned in Deciding About Your Stack, we recommend Oracle or PostgreSQL; together with DB2, these are the databases where we observed the best performance.
The amount of space required on the database depends on:
History Level: Turning off history saves a huge amount of tablespace, as you only have to keep current runtime data in the database. Normally, however, you keep it at "FULL" in order to leverage the audit logging capabilities of the process engine.
Process Variables: All process variables need to be written to the database (in serialized form, e.g. JSON). With history level "FULL", an entry is inserted into the history tables every time a variable changes, remembering the old value. With big data objects that are stored and changed often, this requires a lot of space.
When calculating database size, you should also clarify if and how often you will be Cleaning Up Historical Data.
The real space occupied within your database depends very much on your database product and configuration. There is no easy formula to calculate this space; instead, this section gives an example.
Assumed statistical distribution:
25% of the instances will be reviewed
10% of the instances will be ended after review
When running the Invoice Example with the statistical distributions mentioned above and the following configuration:
History Level FULL
Starting approx. 40,000 process instances in total
Where approx. 33,000 process instances are already ended (that means: deleted from runtime)
On an Oracle 12c Enterprise Edition (64bit Production) installation on Linux.
You get the following results:
# Instances | Disk Space | Calculated Space / Process Instance | Remarks
Around half of the space is used for indices
Space requirements are influenced massively by History Level
As a rule of thumb, capture the following figures and use the example above to make an informed "guess":
Number of process or case instances / day
Average number of executed activities / process instance or case instance
Sum of size of variables / process instance or case instance
Average number of updates / variable
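The four figures above can be combined into a rough daily growth estimate. The sketch below is a hedged guess, not a formula from Camunda: the bytes-per-row constant is a made-up placeholder that you should calibrate against a real measurement such as the Oracle example in this section.

```python
# Hedged back-of-the-envelope sizing from the four rule-of-thumb figures.
# BYTES_PER_ACTIVITY_ROW is an assumption (history row incl. index share),
# not a Camunda-documented value; calibrate it against a measurement.
BYTES_PER_ACTIVITY_ROW = 2048

def daily_history_bytes(instances_per_day: int,
                        activities_per_instance: float,
                        variable_bytes_per_instance: float,
                        updates_per_variable: float) -> float:
    # One history row per executed activity ...
    activity_data = instances_per_day * activities_per_instance * BYTES_PER_ACTIVITY_ROW
    # ... and, with history level FULL, every variable update writes the
    # serialized value to the history tables again.
    variable_data = instances_per_day * variable_bytes_per_instance * updates_per_variable
    return activity_data + variable_data

# Example: 1,000 instances/day, 10 activities each, 100 KB of variables
# per instance, each variable updated twice on average.
print(daily_history_bytes(1_000, 10, 100_000, 2) / 1e9, "GB/day")
```

Multiply the daily figure by your history retention period (see Cleaning Up Historical Data) to get the steady-state disk requirement.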
This is an example calculation from a real-life scenario.
Estimated PI / Month: 300,000
Concurrent Users: 450
Assumptions for Calculation:
Equally distributed over 20 working days (more realistic than 30 days; to cover the worst case, add a buffer)
Equally distributed over 8 working hours (more realistic than 24 hours; to cover the worst case, add a buffer)
Guesses depending on type of process definition
Typically taking 2 days
Mostly User Tasks
Seldom Web Service calls
15,000 new PI / day
1,875 new PI / hour
31 new PI / minute
~1 new PI every 2 seconds
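The derivation above, spelled out as arithmetic (figures rounded as in the text):

```python
# Example calculation: 300,000 process instances per month, spread over
# 20 working days of 8 working hours each.
pi_per_month = 300_000
working_days_per_month = 20
working_hours_per_day = 8

pi_per_day = pi_per_month // working_days_per_month   # 15,000
pi_per_hour = pi_per_day // working_hours_per_day     # 1,875
pi_per_minute = pi_per_hour / 60                      # ~31
seconds_per_new_pi = 3600 / pi_per_hour               # ~1.9 s

print(pi_per_day, pi_per_hour, round(pi_per_minute), round(seconds_per_new_pi, 1))
```

Swap in your own monthly volume and working-time assumptions to see which server class your load falls into.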
In this case a "small server" is sufficient.
No guarantee - The statements made in this publication are recommendations based on the practical experience of the authors. They are not part of Camunda’s official product documentation. Camunda cannot accept any responsibility for the accuracy or timeliness of the statements made. If examples of source code are shown, a total absence of errors in the provided source code cannot be guaranteed. Liability for any damage resulting from the application of the recommendations presented here is excluded.
Copyright © Camunda Services GmbH - All rights reserved. The disclosure of the information presented here is only permitted with written consent of Camunda Services GmbH.