
How Decision Modeling and AI Help with Risk Assessment

Easily orchestrate different departments and end-user requirements for risk assessment with AI and decision modeling.

The financial and insurance industries depend heavily on their ability to correctly determine the risk involved in any given investment. If the risk is low, investing for a certain return is a great idea; if something is determined to be very risky, it may not be worth investing at all. But this rudimentary explanation skips over an incredibly complex question: “How do you determine the risk of a given investment?”

When risk assessment systems are built with Camunda, they tend to use DMN (Decision Model and Notation). This is an open standard from the folks who brought us BPMN, and it attempts to marry a business-friendly tool for describing rules with the directly executable metamodel that made BPMN 2.0 so popular.

But risk analysis is more than just a set of rules. In this post, I’m going to talk through a pattern I’ve come across based on working on projects like these over the years.

I’ll be using BPMN to orchestrate DMN tables, front-end applications, and AI integration, all of which will demonstrate a solid pattern for how you can deal with risk analysis.

Decision modeling for risk assessment workflow

Using DMN to make decisions

Let me quickly explain the following DMN table. The columns represent input and output, and the rows act as individual rules that are activated or not based on the value of the variables they’re testing.

Table with run risk rules and hit policy

The idea is that given a set of inputs, the rules are evaluated and an output is produced, which should give you an answer to some kind of question. In this case, the question is, “What are the risks of this investment?”

So, if you have an input like:

{
    "incomeLastYear": 11000,
    "incomeThisYear": 29000,
    "purchases": ["Home"],
    "assets": ["Home"]
}

It will trigger the first rule, with the output being a text description and a score—specifically, Purchased a second home on low salary and 40.

The most interesting thing about this table is actually the Hit Policy, which determines how the rules are activated. The most common hit policy is First, which simply means that the rules are evaluated in order and the first rule that matches is returned.

Here, though, I’m using the Collect hit policy, which means that all matching rules are returned. It’s ideal for this particular use case because you want to find all of the reasons why, for the given input, it could be a risky investment. So, if the given input is:

{
    "purchases": ["Horse","Art","Boat","Other"],
    "incomeLastYear" : 2000,
    "incomeThisYear" : 219872346,
    "assets" : ["Car","Home"]
}

The output will be a description and score for each flagged issue:

[
    {
        "description": "Big increase in income",
        "score": 40
    },
    {
        "description": "A large unspecified purchase has been added to details",
        "score": 20
    },
    {
        "description": "Has sent in details on time.",
        "score": 0
    }
]
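To make the Collect behavior concrete, here’s a minimal sketch of the evaluation in plain Python. The rule conditions and scores are assumptions that mirror the examples above, not the actual confidential rule set:

```python
# Sketch of a DMN-style rule table evaluated with a Collect hit policy.
# Each rule is (condition, description, score); conditions are illustrative.

def run_risk_rules(details):
    """Return every matching rule's description and score (Collect policy)."""
    rules = [
        (lambda d: "Home" in d["purchases"] and d["incomeLastYear"] < 20000,
         "Purchased a second home on low salary", 40),
        (lambda d: d["incomeThisYear"] > d["incomeLastYear"] * 10,
         "Big increase in income", 40),
        (lambda d: "Other" in d["purchases"],
         "A large unspecified purchase has been added to details", 20),
    ]
    # Unlike a First hit policy, we do NOT stop at the first match:
    # every rule whose condition holds contributes an output entry.
    return [{"description": desc, "score": score}
            for cond, desc, score in rules if cond(details)]

results = run_risk_rules({
    "purchases": ["Horse", "Art", "Boat", "Other"],
    "incomeLastYear": 2000,
    "incomeThisYear": 219872346,
    "assets": ["Car", "Home"],
})
# Two rules match here, so two description/score pairs come back.
```

With a First policy, only the first matching entry would be returned; Collect is what lets every triggered risk surface at once.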

From an end-user point of view, it would show the highlighted rules as well as the input:

Output shows highlighted rules

What this gives you in the end is a complete list of all potential risks in both natural language and as a score. You can then use this to both route a process and help process participants handle the human tasks. 

Integrating DMN with BPMN

These two standards—DMN and BPMN—are designed to complement each other, and integrating them isn’t hard. So let’s talk about the value of integrating them.

The following image displays a business rules task called Run Risk Rules. It takes the data in the current context of the BPMN process and feeds it into a DMN table.

I’ve already described what happens inside the table, but what about after it’s executed?

Taking data from BPMN and feeding into DMN

As I explained earlier, you have multiple outputs, one of which is a score. Each rule has a specific score aimed at highlighting the riskiness of the rule triggered. When you get the output, simply add all of those scores together to get a single risk score.

riskScore = sum(riskResults.score)

You can then use that value to decide whether to automatically accept or reject the applicant. If the score is borderline, you’ll want to involve a human to investigate, and the investigator can then make the final decision.
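The summing and routing step can be sketched in a few lines. Note that the threshold values below are illustrative assumptions; the post doesn’t specify where the accept/reject cutoffs sit:

```python
# Sketch of the gateway logic after the DMN table runs, assuming the Collect
# output arrives as a list of {"description", "score"} dicts.
# Thresholds (30 and 70) are hypothetical values for illustration.

def route_application(risk_results):
    # Equivalent to the FEEL expression: riskScore = sum(riskResults.score)
    risk_score = sum(r["score"] for r in risk_results)
    if risk_score < 30:
        return "auto-accept"
    if risk_score >= 70:
        return "auto-reject"
    return "human-review"  # borderline: hand off to an investigator

decision = route_application([
    {"description": "Big increase in income", "score": 40},
    {"description": "A large unspecified purchase has been added to details",
     "score": 20},
])
```

In the BPMN model this maps naturally to an exclusive gateway with three outgoing paths, two ending in automated decisions and one leading to a user task.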

This is where you can make really good use of OpenAI.

Using OpenAI to summarize results

In a situation where someone has been tasked with investigating an application, you’ll have some requirements. The first is that you need to give the investigator all the necessary information, but you don’t want to overload them or give them information they shouldn’t have.

I’m thinking specifically of a requirement I came across on a project where the investigator was not supposed to know what the rules of the DMN were. Those rules were highly confidential, and only the team tasked with creating and maintaining the rules was supposed to know about them. The investigation team was much, much bigger, and there was a fear that if someone knew the rules, they would be able to find ways around them.

So there was a bit of a conundrum: how do you give the investigator enough information so that they can do their job without giving away confidential data?

One solution is to use AI to scramble the large output of various descriptions into a single short paragraph. While this might give an indication of the output, it would never reveal the rules themselves.

In this example, sending the following prompt to OpenAI…

"Someone has submitted their financial details for a risk assessment write a summary of these Findings: " + riskDescriptions + " with detailed suggestions of potential actions for further investigations"

…will return an output to the end user that looks like this:

The financial details submitted for the risk assessment show that a large unspecified purchase has been added to the individual’s expenses. The income reported does not seem to support the required upkeep of assets. However, it is noted that the details were submitted on time. 

Based on these findings, further investigation is recommended. This may include requesting more specific details about the large purchase, verifying the accuracy of the reported income, and assessing the individual’s overall financial stability. It is important to closely monitor any discrepancies and take appropriate action to mitigate potential risks.

This gives a simple, easy-to-understand summary of where to start an investigation while also making it quite difficult to reverse-engineer these suggestions in a way that would reveal the rules.
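Wiring this up could look like the sketch below. It assumes the OpenAI Python SDK (v1 client) and a hypothetical model name; the original project doesn’t specify which API or model it used:

```python
# Sketch of the summarization step. The prompt mirrors the one shown above
# (lightly punctuated); the model name is an assumption, not from the post.

def build_prompt(risk_descriptions: str) -> str:
    return ("Someone has submitted their financial details for a risk "
            "assessment. Write a summary of these findings: "
            + risk_descriptions
            + " with detailed suggestions of potential actions for further "
              "investigations")

def summarize_findings(risk_descriptions: str) -> str:
    from openai import OpenAI  # third-party SDK: pip install openai
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute your own
        messages=[{"role": "user",
                   "content": build_prompt(risk_descriptions)}],
    )
    # Only the generated summary reaches the investigator -- never the rules.
    return response.choices[0].message.content
```

Because the model only ever sees the rule *descriptions*, not the rule conditions themselves, the investigator gets actionable guidance without exposure to the confidential table.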

Summary

There’s a lot to like about this example, and if you want to try it for yourself, the code is available with instructions on how to get it running. But it’s worth taking a step back and seeing how well this solution manages to easily orchestrate different departments and end-user requirements.

It manages to add some useful features for external systems while maintaining the context and flow throughout the process. It also gives a clear indication of what happens at runtime and makes redesign and innovation easier.

I hope this example of combining AI with decision modeling manages to inspire you to look at your own use cases and consider a more process-focused approach.
