Camunda’s partner Infosys recently contributed some Connectors for various SQL databases (Oracle, MS-SQL, PostgreSQL, and MySQL) to the Camunda Community. These Connectors allow you to execute common data definition statements (DDL, for example creating a table), perform data manipulation (DML, for example inserting data into a table), and query data from the database.
I was immediately reminded of a recent customer scenario in the financial industry where that would have been quite handy to have. Based on this business example from trading operations, I created the example for today’s blog post and want to walk you through it using an SQL Connector in Camunda Platform 8. Of course, I modified the real-life scenario, so if you work in banking don’t expect it to be 100% accurate.
A new requirement around beneficial owners
The customer operated a trading application in which new stock trades needed to be matched with the stock exchange and passed on to custodian banks for execution. New compliance regulations required the broker to transfer beneficial owner information about sellers (that is, information about the natural persons behind legal entities, possibly across a whole hierarchy of legal entities). Such information needed to be captured, but their existing home-grown trading system did not support this out of the box. While they wanted to integrate such information into that system, they lacked the time to do so before the given deadline. So they went for a quick workaround instead.
While you can decide for yourself if such a workaround is good or bad, it was impressive to see that they could add it within days to their daily operations.
They already had Camunda as the process orchestrator in production and were already executing the customer onboarding process on it. This gave them a great opportunity to simply add an activity that stores beneficial owner information at the right point in time.
As a backend, they created a simple database table in their existing PostgreSQL installation:
CREATE TABLE BeneficialOwner (
  customerId   VARCHAR,
  individual   VARCHAR,
  organization VARCHAR,
  share        VARCHAR
)
The customer onboarding process was extended with an activity leveraging the PostgreSQL Connector:
At the same time, they were also running trade execution processes on Camunda, so they could add an activity there to query this information and transfer it to the custodian as required:
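Under the hood, the two activities boil down to an INSERT during onboarding and a SELECT during trade execution. Here is a minimal sketch of that logic in Python, using an in-memory SQLite database as a stand-in for PostgreSQL; the sample data and the exact statements are my assumptions for illustration, not the customer’s actual queries:

```python
import sqlite3

# In-memory SQLite stands in for the customer's PostgreSQL instance;
# the table matches the CREATE TABLE statement above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE BeneficialOwner ("
    "customerId VARCHAR, individual VARCHAR, "
    "organization VARCHAR, share VARCHAR)"
)

# Customer onboarding: the Connector task stores one row per
# beneficial owner (hypothetical sample data).
owners = [
    ("C-4711", "Alice Example", "Example Holding Ltd", "60"),
    ("C-4711", "Bob Example", "Example Holding Ltd", "40"),
]
conn.executemany("INSERT INTO BeneficialOwner VALUES (?, ?, ?, ?)", owners)

# Trade execution: the second process queries the owners for the
# selling customer so they can be passed on to the custodian.
rows = conn.execute(
    "SELECT individual, organization, share FROM BeneficialOwner "
    "WHERE customerId = ?",
    ("C-4711",),
).fetchall()
print(rows)
```

In the real setup, both statements are configured directly on the respective Connector tasks in the BPMN model rather than coded by hand.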
Again, I agree that this might not be a beautiful architecture, and you might very well prefer a proper microservice, or even extending the trading or CRM system, to fulfill the requirements. Still, being able to adjust processes this easily is a great opportunity to add resilience to your processes, maybe just as a temporary workaround to buy some time. Keep in mind, however, that processes differ very much in their criticality for the business: while such workarounds might scare architects responsible for payments or trading, this architecture might be just right for other processes, like simple approvals.
Deploying the PostgreSQL Connector
Let’s turn our attention to how to run the PostgreSQL Connector. The Connector is not provided out of the box by Camunda, so it is also not ready for use when you spin up a new cluster in Camunda Platform 8 SaaS.
Instead, the Connector itself is provided as an open source component by Camunda’s partner Infosys in their own GitHub repository: https://github.com/Infosys/camunda-connectors/blob/main/connector-postgresql/. As Infosys does not yet provide a binary, you must build the Connector from source yourself:
git clone https://github.com/Infosys/camunda-connectors
cd camunda-connectors/connector-postgresql
mvn clean install
A Connector then needs a runtime to work. This runtime is basically a simple Java application that contains one or more Connectors and can connect to Camunda Platform as illustrated in the following picture:
One great advantage of this architecture is that you can run one or more Connector runtimes, depending on your exact requirements. For example, you might want to co-locate the Connector runtime for PostgreSQL next to your database. This allows you to leverage Camunda Platform 8 SaaS while connecting to PostgreSQL only from the Connector runtime that runs locally in the network of the database. Or you might separate out Connectors that have strict performance requirements.
Technically speaking, you can create a Maven project that pulls in the Connectors you want to run as dependencies, as I did for the PostgreSQL Connector earlier:
<dependencies>
  <dependency>
    <groupId>io.camunda</groupId>
    <artifactId>spring-zeebe-connector-runtime</artifactId>
  </dependency>
  <dependency>
    <groupId>com.infosys.camundaconnectors.db.postgresql</groupId>
    <artifactId>connector-postgresql</artifactId>
  </dependency>
</dependencies>
The full pom.xml is available as an example on GitHub:
To play around, you can add the connection information for a Camunda Platform 8 SaaS cluster to the file src/main/resources/application.properties and then start the Java application. After that, the PostgreSQL Connector is ready to do its work.
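For reference, such a properties file typically contains the Zeebe client credentials from an API client of your cluster. The property names below follow the Spring Zeebe conventions, and all values are placeholders you need to replace with your own:

```properties
# Credentials of an API client created for your Camunda Platform 8 SaaS cluster
zeebe.client.cloud.clusterId=YOUR_CLUSTER_ID
zeebe.client.cloud.clientId=YOUR_CLIENT_ID
zeebe.client.cloud.clientSecret=YOUR_CLIENT_SECRET
zeebe.client.cloud.region=YOUR_CLUSTER_REGION
```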
To allow modelers to make use of the Connector, you will further need the so-called element template (the UI part of the Connector) deployed to your Camunda Modeler. You can find the element template for PostgreSQL, which is a JSON file, here: https://github.com/Infosys/camunda-connectors/blob/main/connector-postgresql/element-templates/postgresql-database-connector.json.
If you use Web Modeler, you can import the file there:
If you use Desktop Modeler, you can add the JSON file to the right place on your local disk (for me on Windows this is %APPDATA%\camunda-modeler\resources\element-templates); see the documentation for details.
Using the PostgreSQL Connector
Now you are ready to use the Connector from the palette in your BPMN model and configure the new PostgreSQL tasks properly (I used Web Modeler for the screenshot):
You can find two executable BPMN models to import and start with on GitHub as well: https://github.com/berndruecker/trade-ops-camunda-8-sql-connector/tree/main/models. One is for onboarding new customers (this saves beneficial owner information in PostgreSQL) and one is for trade execution, querying it from there. I leave it up to the reader to dive into the various settings to make those queries work, but as you can already see in the screenshot above, it is not too much to configure.
Camunda Platform 8 offers built-in support for secrets management, which makes it easy to move the PostgreSQL credentials into an externally provided secret that can differ for every environment.
For my demo I have used a managed instance of PostgreSQL via ElephantSQL.
Want to see it in action?
I recorded a quick walkthrough here:
To connect to PostgreSQL, MySQL, MS-SQL, or Oracle, please check out the Connectors from Infosys. To connect to other endpoints, check out our awesome list of existing Connectors, which also points to other open source initiatives providing Connectors. If there is nothing that suits your needs, you are of course welcome to develop your own Connector and share it with the community via the Camunda Community Hub.
As always, if you have questions or feedback, don’t hesitate to reach out, ideally via our forum.