
Generating a Zeebe-Python Client Stub in Less Than An Hour: A gRPC + Zeebe Tutorial


Please note that this blog post has not been updated since its original publication to reflect changes to Zeebe’s gateway.proto file.

The general approach for creating a stub that we show here still applies to newer versions of Zeebe, but you may need to make a few modifications depending on which Zeebe version you’re using.

The high points:

  • Starting in Zeebe 0.12, Zeebe clients communicate with brokers via a stateless gRPC gateway, with Protocol Buffers used as the interface design language and message interchange format.
  • gRPC makes it easy to generate a “client stub” in any of ten supported programming languages; this means it’s possible to use Zeebe in applications written in not only Java and Go–the languages with officially-supported Zeebe clients–but also Python, Ruby, C#, PHP, and more.
  • We’ll show you step-by-step how we generated and started prototyping with a Python client stub in less than an hour, and we’ll provide everything you need to follow along.

What is gRPC, and why is it a good fit for Zeebe?

Zeebe 0.12, released in October 2018, introduced significant updates to Zeebe. Topics were removed. Topic subscription was replaced by a new exporter system, and we also shipped a ready-to-go Elasticsearch exporter.

Lastly, we reworked the way that Zeebe clients communicate with brokers. Zeebe now uses gRPC for client-server communication, and clients connect to brokers via a stateless gateway. The client protocol is defined using Protocol Buffers v3 (proto3).

gRPC was first developed by Google and is now an open-source project and part of the Cloud Native Computing Foundation. If you’re new to gRPC, the “What is gRPC” page on the project website provides a simple, concise description:

In gRPC a client application can directly call methods on a server application on a different machine as if it was a local object, making it easier for you to create distributed applications and services. As in many RPC systems, gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. On the server side, the server implements this interface and runs a gRPC server to handle client calls. On the client side, the client has a stub (referred to as just a client in some languages) that provides the same methods as the server.

gRPC has many nice features that make it a good fit for Zeebe:

  • gRPC supports bi-directional streaming for opening a persistent connection and sending or receiving a stream of messages between client and server.
  • gRPC uses the widely-supported HTTP/2 protocol by default.
  • gRPC uses Protocol Buffers as an interface definition and data serialization mechanism–specifically, Zeebe uses proto3, which supports easy client generation in ten different programming languages.

That third benefit is particularly interesting for Zeebe users. If Zeebe doesn’t provide an officially-supported client in your target language, you can easily generate a so-called client stub in any of the ten languages supported by gRPC and start using Zeebe in your application.

In the rest of this post, we’ll look more closely at Zeebe’s gRPC Gateway service and walk through a simple tutorial where we generate and use a client stub for Zeebe in Python.

Zeebe’s gRPC Gateway Service

At a high level, there are two things you need to do to create a gRPC service and generate client and server code in your target language:

  1. Define a service interface and the structure of messages using Protocol Buffers as the interface definition language
  2. Use the protocol buffer compiler to generate usable data access classes in your preferred programming language based on the proto definition

To understand exactly what one of these service interfaces looks like, we recommend taking a look at Zeebe’s .proto file, which you can find in GitHub.

In lines 8 through 201, you’ll see that we define the structure of request and response messages that can be sent via the service, and starting on line 203, we define the service itself.

If you’re already familiar with the capabilities of Zeebe’s Java or Go clients, then the service defined in the .proto will look familiar to you. That’s because the .proto file serves as a central place where the client interaction protocol is defined one time for clients in all programming languages (and therefore doesn’t need to be duplicated every time a new client is created).

Another compelling point about gRPC: a service’s server code doesn’t have to be written in the same language as client code. In other words, the Zeebe team wrote the server code for the gateway service just one time and in Java (our preferred language), and Zeebe brokers can receive and respond to requests from clients in any other gRPC-supported language.

Generating a client stub for Zeebe in Python

Alright, alright. Onto the fun part. In this section, we’ll walk through step-by-step how we generated a client stub for Python, and we’ll provide everything you need to follow along.

First, you’ll need to go through the Prerequisites in the Python quickstart on the gRPC site to install gRPC and gRPC tools. You can stop at the “Download the example” section header.

Next, create and change into a new directory.

mkdir zeebe-python
cd zeebe-python

Then add the gateway.proto file to that directory. Here’s the raw file in GitHub.

OK, we’re ready to generate some gRPC code based on the service defined in the gateway.proto file. Inside the directory you just created, run:

python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. ./gateway.proto

In that directory, you should now see two newly-generated files: gateway_pb2.py, which contains the generated request and response classes, and gateway_pb2_grpc.py, which contains our generated client and server classes. We won’t be editing these files, but we will be calling the methods in them as we build a simple client application.

Before you go any further, you’ll need to download Zeebe if you haven’t already. In this walkthrough, we’re using Zeebe 0.13.1, which you can find here: https://github.com/zeebe-io/zeebe/releases/tag/0.13.1.

Next, go ahead and start up a Zeebe broker. (The broker runs in the foreground, so you may want to do this in a separate terminal.)

cd zeebe-broker-0.13.1/
./bin/broker

While we’re here, let’s use the Zeebe command line interface (zbctl) to deploy a workflow model. Here’s the model we’ll be using in this tutorial, which you should save as a bpmn file in the zeebe-broker directory.

To deploy, run:

./bin/zbctl deploy simple-process.bpmn

Note that if you’re on a Mac, you’ll need to run:

./bin/zbctl.darwin deploy simple-process.bpmn

And if you’re on Windows, it’s:

./bin/zbctl.exe deploy simple-process.bpmn

Next, we’re going to build a simple client application that creates workflow instances, works on jobs, and sends messages so that we can complete instances of our model.

In the directory where you generated the gateway_pb2.py and gateway_pb2_grpc.py files, create a new Python file. Let’s call it zeebe_client.py.

If you need a reference along the way, you can see our finished zeebe_client.py file here. Note that we’re going to call all of these different methods from one file to keep things simple, but that probably isn’t how you’d do things in a real-world scenario.

Setup and First Client Call

First, you’ll need to import three modules.

import grpc
import gateway_pb2
import gateway_pb2_grpc

To start calling service methods, we need to create what’s called a stub. In a new function, define a channel (26500 is the default Zeebe port) then instantiate the GatewayStub class of the gateway_pb2_grpc module.

def run():
    with grpc.insecure_channel('localhost:26500') as channel:
        stub = gateway_pb2_grpc.GatewayStub(channel)

We already started up the Zeebe broker, and so the first thing we’ll do is check the broker topology to confirm that it’s running and configured as expected.

def run():
    with grpc.insecure_channel('localhost:26500') as channel:
        stub = gateway_pb2_grpc.GatewayStub(channel)
        topologyResponse = stub.Topology(gateway_pb2.TopologyRequest())
        print(topologyResponse)

Let’s talk through how we came up with that request. We can see in our proto service definition that a TopologyRequest to the Topology RPC should get us a TopologyResponse. This is a pretty simple call that doesn’t require us to pass any arguments.

And what does a TopologyResponse look like? We can see starting on line 27 of the proto file that a TopologyResponse includes brokers, clusterSize, partitionsCount, and replicationFactor.

So let’s run the thing and see what happens! At the end of your zeebe_client.py file, be sure to add:

if __name__ == '__main__':
    run()

Then we can hop over to our terminal and run it.

python zeebe_client.py

Et voilà! We have a response. Note that partition details aren’t printed here because protobuf’s text output doesn’t show default values, and in this case, we have the default 1 partition.

brokers {
  host: ""
  port: 26501
  partitions {
  }
}
clusterSize: 1
partitionsCount: 1
replicationFactor: 1
We can follow the same approach for other RPCs in the Gateway service. For those of you who’d like to see a few more examples, we’ll walk through how to get a list of workflows, create a workflow instance, work on a job, and send a message.

Get a List of Deployed Workflows

Next, we’ll update the function to get a list of deployed workflows.

def run():
    with grpc.insecure_channel('localhost:26500') as channel:
        stub = gateway_pb2_grpc.GatewayStub(channel)
        topologyResponse = stub.Topology(gateway_pb2.TopologyRequest())
        listResponse = stub.ListWorkflows(gateway_pb2.ListWorkflowsRequest())

When you run the client, you should see the response:

workflows {
  bpmnProcessId: "simple-process"
  version: 1
  workflowKey: 1
  resourceName: "simple-process.bpmn"
}

Create a Workflow Instance

OK, so we’ve used our client to retrieve some information about the Zeebe brokers and our workflows. Now it’s time to start doing. So let’s create a workflow instance! We’ll add the following to our function:

createResponse = stub.CreateWorkflowInstance(gateway_pb2.CreateWorkflowInstanceRequest(
    bpmnProcessId = 'simple-process', version = -1, payload = '{"orderId" : "ab1234"}'))

As a reminder, we can find out what arguments we should pass based on the CreateWorkflowInstanceRequest message definition in the proto file:

message CreateWorkflowInstanceRequest {
  int64 workflowKey = 1;
  string bpmnProcessId = 2;
  /* if bpmnProcessId is set version = -1 indicates to use the latest version */
  int32 version = 3;
  /* payload has to be a valid json object as string */
  string payload = 4;
}

OK, so now we’ve created an instance for the simple-process.bpmn workflow we already deployed, and we’re using the most recent version of the workflow model (in our case, the only version).
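The version = -1 convention from the proto comment can be summed up in a tiny sketch (plain Python, no Zeebe required; the function name is ours, purely illustrative):

```python
def resolve_version(deployed_versions, requested):
    """Mirror the proto comment: when bpmnProcessId is set,
    version == -1 means 'use the latest deployed version'."""
    if requested == -1:
        return max(deployed_versions)
    return requested

# With versions 1 and 2 deployed, -1 resolves to the latest:
resolve_version([1, 2], -1)  # returns 2
```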

Our instance has a payload, too, which we’ll use later to correlate a message to the workflow instance.

Activate and Work on Jobs

Our simple workflow model includes a “Collect Money” service task, and next, we need to request and work on a corresponding job so that we can complete this task.

A simple process used for this example

If you open the workflow model in the Zeebe Modeler, click on the “Collect Money” service task, and open the properties panel, you can see that the job type is payment-service (click to enlarge).

Service task job type in the Zeebe Modeler

And so we’ll need to create a simple worker that requests and completes jobs of type payment-service.

First, let’s take a look at the ActivateJobsRequest in our proto file. The ActivateJobsRequest is what we’ll use to communicate to the Zeebe brokers that we have a worker ready to take on a certain number of a certain type of job.

message ActivateJobsRequest {
  string type = 1;
  string worker = 2;
  int64 timeout = 3;
  int32 amount = 4;
}

For the purposes of this tutorial, the most important fields in the request are type (again, this is the job type specified in the service task in the workflow model) and amount (the number of jobs our worker will accept at that time). We’ll also set a timeout (how long the worker has to complete the job) and worker (a way for us to identify the worker that activated a job).

When we look at the Gateway service in the proto file, we see that an ActivateJobsRequest returns a stream rather than a single response:

rpc ActivateJobs (ActivateJobsRequest) returns (stream ActivateJobsResponse) {
}

So we’ll need to handle the stream a bit differently than we’ve been handling single responses when we update our client.

for jobResponse in stub.ActivateJobs(gateway_pb2.ActivateJobsRequest(
        type = 'payment-service', worker = 'zeebe-client-test', timeout = 1000,
        amount = 32)):
    for job in jobResponse.jobs:
        print(job)
        stub.CompleteJob(gateway_pb2.CompleteJobRequest(jobKey = job.key))

Note that in this tutorial, we’re simply activating and completing jobs–there’s no business logic in our client application. In the real world, you’d replace print(job) with your job worker’s business logic.
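As a sketch of what that replacement might look like, here’s a hypothetical payment handler (the function name and the paymentConfirmed field are ours, not part of Zeebe’s API) that parses a job payload and returns an updated payload as a JSON string:

```python
import json

def handle_payment_job(payload):
    """Hypothetical business logic for a 'payment-service' job:
    read the order from the job payload, charge it (elided here),
    and return an updated payload as a JSON string."""
    order = json.loads(payload)
    # ... call out to a real payment provider here ...
    return json.dumps({"orderId": order["orderId"], "paymentConfirmed": True})
```

In the loop above, you could then pass job.payload to a function like this one before completing the job.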

When we run zeebe_client.py, we see information about the job we just completed:

key: 2
type: "payment-service"
jobHeaders {
  workflowInstanceKey: 6
  bpmnProcessId: "simple-process"
  workflowDefinitionVersion: 1
  workflowKey: 1
  elementId: "collect-money"
  elementInstanceKey: 21
}
customHeaders: "{\"method\":\"VISA\"}"
worker: "zeebe-client-test"
retries: 3
deadline: 1543398908569
payload: "{\"orderId\":\"ab1234\"}"

There’s one last thing we have to do to complete the workflow: send a message that can be correlated with this workflow instance.

If you’re new to messages and message correlation in Zeebe, you might want to check out the blog post where we go over the feature in detail or read through the Message Correlation reference page of the documentation.

Once again, we’ll look at our model in the Zeebe Modeler to get more information about the message we need to send. The workflow instance needs a message with the name payment-confirmed and a matching orderId (the correlation key) in order for the message to be correlated (click to enlarge).

Message name in Zeebe Modeler

stub.PublishMessage(gateway_pb2.PublishMessageRequest(
    name = "payment-confirmed", correlationKey = "ab1234", timeToLive = 10000,
    messageId = "messageId", payload = '{"total-charged" : 25.95}'))

When we created our workflow instance, we included a payload orderId : "ab1234", and because orderId is the correlation key we specified in the workflow model, we need to make sure the value in the correlationKey field of our message matches the workflow instance payload so that the message will be correlated.
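The matching rule itself is simple; here’s a plain-Python sketch (the function is ours, purely illustrative, not Zeebe code) of how a message is matched against a workflow instance’s payload:

```python
import json

def correlates(instance_payload, correlation_key_field, message_correlation_key):
    """A message correlates when the instance's payload value for the
    configured correlation key equals the message's correlationKey."""
    payload = json.loads(instance_payload)
    return payload.get(correlation_key_field) == message_correlation_key

correlates('{"orderId" : "ab1234"}', "orderId", "ab1234")  # returns True
```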

The PublishMessageResponse is empty, so we won’t worry about storing it in a variable or printing it.

If we run the updated application, we don’t get any feedback about the message being correlated in the terminal itself, but if, for example, we’re using Zeebe’s Elasticsearch exporter and inspecting the stream of workflow events in Kibana, we can see that our message was correlated to a workflow instance and that the workflow instance was completed (click to enlarge):

Inspecting Zeebe events in Kibana

Wrapping Up

And that concludes our gRPC client stub tutorial for Python! After generating a Python client stub using Zeebe’s protocol buffer and native gRPC tools, we walked through how to:

  • Request a cluster topology
  • Request a list of deployed workflows
  • Create a workflow instance
  • Activate and complete jobs
  • Publish a message

…all in Python and without using zbctl.

We chose Python because, well, it’s one of the co-authors’ preferred programming languages. But remember: you could go through these steps and create your own stub in any of the programming languages listed here.

What we generated today wasn’t a fully-featured client, but for many users, we expect that a stub like this one will be more than sufficient for working with Zeebe in their target language. We hope you give it a try.

Questions? Feedback? We want to hear from you. Visit the Zeebe Community page to see how to contact us.
