
Activiti Maven Archetype

As an Activiti developer, one often has to create sample test cases. Some boilerplate code is necessary even for the most basic test case to run: a sample config file, a sample process model, and the test case itself.

To make this faster, Joram created an “Activiti Unit Test template” which makes life much easier. Personally I have used it a lot. It does most of the job, but you still need some customization to use it fully. For example, if you use multiple unchanged copies, you can import only one of them into Eclipse.

There are scaffolding and generation tools which allow faster, more dynamic and customizable templates. As Maven is already in use, a Maven archetype seems to be the logical tool.

In the most recent version of Activiti (5.17.1), the first Activiti Maven archetype contains the same code written by Joram, in a new shell.

The archetype is available in the standard Activiti Maven repository. This simple command creates a unit test template:

mvn archetype:generate


The first three parameters specify the archetype to be used. The groupId and artifactId parameters determine your customized groupId and artifactId. Packages are created according to the given groupId.
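For illustration, a complete invocation might look like this (the archetype coordinates shown are assumptions; check the Activiti Maven repository for the actual released values):

```shell
# hypothetical coordinates; verify against the Activiti repository
mvn archetype:generate \
  -DarchetypeGroupId=org.activiti \
  -DarchetypeArtifactId=activiti-archetype-unittest \
  -DarchetypeVersion=5.17.1 \
  -DgroupId=com.example.bpm \
  -DartifactId=my-activiti-unit-test
```

Maven then generates the project skeleton with packages derived from the given groupId.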

As simple as that.

This is just the start. Lots of extensions are possible: templates for a web application (one with angular.js and one with JSF), selecting the database type, and so on.


Real Multithreaded parallel execution in activiti

Here is a real scenario:

We are going to implement a web service using Activiti, so it should be synchronous and quick. In this scenario the call comes from a Camel endpoint, but that is not relevant for this discussion and we can safely ignore it without losing generality.

To complete the service, we need to call three other external web services. Each external web service takes a random amount of time to return its result. After all three web services have returned, we need to aggregate their results and return the aggregation as the return value of our web service.

This diagram shows the outline:



This works fine. As each step is synchronous, Activiti executes the next step only when the previous one has returned. The returned values are stored as process variables. At aggregation time we have all the returned values and can aggregate them.

But if the web service calls are independent, we can run them in parallel and save time and resources. And here is where all the problems begin.

BPMN already has a parallel gateway. The first temptation is to use that feature, something like this:



It looks nice on paper, but we know that Activiti uses a single thread to run all three paths. This means that at the end of the day they are sequenced. There is no difference from the previous version, except that we cannot be sure about the order of execution of the web service calls. Not very promising!

What if we make the parallel services asynchronous? That is not easy either. Here is a nice post from Tijs about how it could be done. It might be a little outdated though, thanks to the new Async Job Executor starting from version 5.17.0.

With this approach, the aggregation part is done properly. Activiti makes sure the aggregate service is called only when all three web service calls have finished. Good!

But the problem is that we need to return the aggregated result as the return value of the web service. As the parallel tasks are asynchronous, control returns to the caller immediately, without waiting for the external web service calls to finish, so we don’t have the result yet. We need to wait and synchronize until the result arrives. The usual way to do this in Java is with locks and monitors. That is not directly feasible here, as the very same objects are not passed to the process: they are serialized and deserialized via the database.

To solve this, we can maintain a global static array of lock objects and pass only the index of the proper lock object to Activiti. In the aggregation service, the static object is looked up by the passed index and signalled. Signalling releases the waiting thread, and the result is returned to the web service caller.

As a rough proof of concept, I have created a sample, available in parallelMultiThreaded on GitHub.

In the sample, the MyUnitTest test class simulates the body of the newly created web service call. Service1, Service2 and Service3 simulate external web service calls, simply by sleeping. The executed model is the second one above, with the parallel gateway.

The static lock and condition arrays are defined as:

 public static Condition[] conditions = new Condition[1000];
 public static Lock[] locks = new Lock[1000];


The index is given a random number (0, not very random I confess) and passed to Activiti.
After starting the instance, the main code enters an unbounded wait loop, waiting for the corresponding lock to be signalled:


After all the external web services have executed, Activiti runs the aggregation task, which in turn signals the lock:
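Both halves of this handshake, the wait loop on the web service thread and the signal in the aggregation delegate, can be sketched with plain java.util.concurrent primitives. The class and method names here are illustrative, not the ones in the sample:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the handshake: a global table of locks/conditions indexed by a
// number that is passed to the process instance as a plain process variable.
public class ResultLatch {
    public static final Lock[] locks = new Lock[1000];
    public static final Condition[] conditions = new Condition[1000];
    private static final boolean[] done = new boolean[1000];

    /** Prepare a slot before starting the process instance. */
    public static void register(int index) {
        ReentrantLock lock = new ReentrantLock();
        locks[index] = lock;
        conditions[index] = lock.newCondition();
        done[index] = false;
    }

    /** Wait side: called by the web service thread after starting the instance. */
    public static boolean awaitResult(int index, long timeoutMs) throws InterruptedException {
        locks[index].lock();
        try {
            long nanos = TimeUnit.MILLISECONDS.toNanos(timeoutMs);
            while (!done[index]) {            // guard against spurious wake-ups
                if (nanos <= 0L) {
                    return false;             // timed out
                }
                nanos = conditions[index].awaitNanos(nanos);
            }
            return true;
        } finally {
            locks[index].unlock();
        }
    }

    /** Signal side: called by the aggregation delegate when all calls have returned. */
    public static void signalResult(int index) {
        locks[index].lock();
        try {
            done[index] = true;
            conditions[index].signal();
        } finally {
            locks[index].unlock();
        }
    }
}
```

The delegate only needs the integer index as a process variable; the lock objects themselves never leave the JVM, which is exactly why this approach is limited to a single JVM.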


The log files clearly show that everything goes fine:

02:35:16,817 [pool-1-thread-1] INFO  org.activiti.Service1  - Service 1 started, time = 70
02:35:16,817 [pool-1-thread-2] INFO  org.activiti.Service2  - Service 2 started, time = 70
02:35:16,817 [pool-1-thread-3] INFO  org.activiti.Service3  - Service 3 started, time = 70
02:35:17,318 [pool-1-thread-3] INFO  org.activiti.Service3  - Service 3 finished, time = 571
02:35:17,519 [pool-1-thread-2] INFO  org.activiti.Service2  - Service 2 finished, time = 772
02:35:17,819 [pool-1-thread-1] INFO  org.activiti.Service1  - Service 1 finished, time = 1072
02:35:17,859 [pool-1-thread-1] INFO  org.activiti.AggregateService  - All external ws calls returned. Try to unlock the lock. time = 1112
02:35:17,859 [main] INFO  org.activiti.MyUnitTest  - wait loop exited. Program finished. total taime spent = 1112

It is clear that the tasks run simultaneously and that aggregation and monitoring are done properly. Also, there is no considerable delay for unlocking and signalling.


OK, I am cheating: the sleep times are intentionally set to different values with proper distance between them. Otherwise contention causes the asynchronous job executor to retry, which adds several seconds to the whole process. But that is not the subject of this post.

The sample code does not consider the optimistic locking problem described in the post above, nor does it handle a timeout while waiting for the lock to be released, but it shows the concept well. So with this workaround, it is possible to use the Activiti parallel gateway to execute external web services in a truly multi-threaded way and respond with the result synchronously.

One may suggest skipping the Activiti parallel gateway and doing the thread synchronization in a single Java delegate. That would be much more concise and easier to implement, and much faster, as it does not need to communicate via the database. The downside is that you cannot see the real flow in the diagram.


Obviously this only works when everything runs in a single JVM. It would be possible to implement it in a cluster, but even the current implementation is not very practical due to the long delay caused by the asynchronous job picking mechanism.

PS: Thanks to Joram’s hint, I replaced the low-level wait and notify with the higher-level ReentrantLock.




Dynamic Tasks In Activiti


Here is a hypothetical requirement:
We are talking about a help desk system. When an incident occurs, a human task is created and assigned to the help desk supervisor. The supervisor may fix it himself and complete the task. So far, this is a standard Activiti human task.
Now the supervisor may need the opinion of the network admin, and/or he may need to ask the Oracle DBA to do something. He needs to add an unlimited number of tasks, assigned to various actors.
Now it is more complex, but it could still be implemented with the undocumented SubTask feature of Activiti.
At the next level, in some cases the supervisor may need to add two or more consecutive tasks instead of one single task. Suppose he wants to get the opinion of the Oracle DBA and forward it automatically to the software developer. Each of the following actors may in turn do the same things the supervisor can do: they may add one, two or more task series to the process.
The process shall continue when all the created tasks are completed.

Here is where it gets interesting.

Solution interface:
Here is what I came up with to address the requirements.
Let’s start with the high-level diagram.

After the start of the process, a service task creates the first dynamic task. The Java delegate class simply calls a function from the DynaTaskService service:

public class DynamicTaskDelegate implements JavaDelegate {
   public void execute(DelegateExecution execution) throws Exception {
      MyUnitTest.dynaTaskService.createOneTask((ExecutionEntity) execution);
   }
}
This creates a dynamic task with special behavior. There are two methods available in DynaTaskService: one for creating a single dynamic task, the other for two consecutive dynamic tasks:

public interface DynaTaskService {
    public void createOneTask(ExecutionEntity execution);
    public void createTwoTasks(ExecutionEntity execution);
}

Calling createOneTask creates an instance of the unattached single user task you can see in the diagram. Similarly, createTwoTasks creates an instance of the unattached sequence of two consecutive tasks.
The next actors can also call the same methods to create as many tasks as required.
Creating each task increments a counter process variable.

The magic happens when a task is completed using taskService.completeTask. If the task is a dynamic one and another task follows it, that task is executed as usual. But if the task is dynamic and is the end task, then a second counter, counting the number of completed tasks, is incremented, the line of execution is terminated, and the receive task is signalled. The gateway after the receive task checks whether the number of completed tasks equals the number of created tasks. If there are still open tasks, the loop continues.
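In the process XML, the check after the receive task boils down to two conditional sequence flows on an exclusive gateway. The variable and element names below are illustrative assumptions, not necessarily those used in the sample:

```xml
<exclusiveGateway id="checkAllDone" />
<sequenceFlow id="loopBack" sourceRef="checkAllDone" targetRef="waitForDynaTask">
  <!-- still open dynamic tasks: loop back to the receive task -->
  <conditionExpression xsi:type="tFormalExpression">
    ${completedTaskCount &lt; createdTaskCount}
  </conditionExpression>
</sequenceFlow>
<sequenceFlow id="proceed" sourceRef="checkAllDone" targetRef="theEnd">
  <!-- every created task is completed: continue the process -->
  <conditionExpression xsi:type="tFormalExpression">
    ${completedTaskCount == createdTaskCount}
  </conditionExpression>
</sequenceFlow>
```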

This fully addresses the unusual requested functionality.

But let’s see how it works under the hood:

How it works:
Tasks are created by calling createOneTask or createTwoTasks.
When called, the create-task function performs these steps:

  • increases the counter of created tasks, creating it if it does not already exist
  • creates a separate execution for the new task
  • creates a new task following a specific naming convention, with the proper activity name already available in the diagram
  • relates the created task to the created execution

The user can call the create-task functions arbitrarily. The normal activity behavior for this kind of dynamically created task is simply to remove them, and to remove the execution if nothing comes afterwards. This is not exactly what we want.
To change this behavior, we need to override the default Activiti behavior for completing tasks. Here the MyCustomActivitiBehaviorFactory factory comes into play. It creates our custom behavior, MyUserTaskActivityBehavior, when asked by the engine.
The most important part of the logic is handled in MyUserTaskActivityBehavior. In short, the behavior does the following:

  • Using the name of the task, determine whether it is a dynatask. This is not used right now, but may become handy.
  • If there is an outgoing sequence, just follow it normally.
  • If there is no outgoing sequence, the task is at the end of its execution: increase the completed-tasks counter, end the execution, and signal the receive task on the parent execution.

The source is available on GitHub.

JMX feature added to Activiti

I just contributed the JMX support feature to Activiti. It is merged and will be available in the coming release (5.16.4) of Activiti.

It is especially useful for people in operations teams. It reveals performance information and enables the operator to change some parameters on the fly.

To enable the feature, it is enough to add the activiti-jmx module to the dependencies. The JMX MBean server is automatically found and configured with the default settings.

The default configuration makes the JVM instance visible in JConsole. If it is not visible for any reason, it can be found in JConsole using this URL: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi/activiti. By default, MBeans are exposed under the org.activiti.jmx.Mbeans path.

As of now, the following information and operations are available via JMX:


  • getting process definitions
  • getting deployments


  • undeploying deployments
  • suspending and activating process definitions by id and key

The default configuration can be changed by putting these parameters in the config file or by setting them directly via the API:

  • registryPort (Default is 1099)
  • mbeanDomain (default is “DefaultDomain”)
  • connectorPort
  • serviceUrlPath (Default is /jmxrmi/activiti)
  • createConnector (default is true)
  • disabled (default is false)
  • domain (default is org.activiti.jmx.mbeanObjectDomainName)

New MBeans are easy to add thanks to the JMX annotations created for this feature. You only need to annotate attributes with @ManagedAttribute and methods with @ManagedOperation. The only possible annotation parameter is “description”; the given description is shown as the MBean description.
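Underneath those annotations, what gets registered is a plain javax.management MBean. As a stdlib-only illustration of the kind of bean that ends up being exposed (the names and numbers here are made up, this is not Activiti code):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class JmxSketch {

    // The management interface: getters become attributes, other methods operations.
    public interface DeploymentStatsMBean {
        int getDeploymentCount();
        void resetDeploymentCount();
    }

    public static class DeploymentStats implements DeploymentStatsMBean {
        private volatile int deploymentCount = 3;   // made-up value
        public int getDeploymentCount() { return deploymentCount; }
        public void resetDeploymentCount() { deploymentCount = 0; }
    }

    /** Register the MBean under the same domain activiti-jmx uses by default. */
    public static ObjectName register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("org.activiti.jmx.Mbeans:type=DeploymentStats");
        // StandardMBean wraps the implementation and exposes the given interface
        server.registerMBean(new StandardMBean(new DeploymentStats(), DeploymentStatsMBean.class), name);
        return name;
    }
}
```

JConsole would then show DeploymentCount as a read-only attribute and resetDeploymentCount as an operation under that domain.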

There are test cases which bring up the JMX server and actually query and change information via JMX. They can be used as samples to get a taste of what a JMX client can look like.
DeploymentsJMXClientTest and JobExecutorJMXClientTest are such test cases.

A short history of model designer

It all started with the work of Prof. Dr. Mathias Weske from the University of Potsdam. He and his colleagues started an open source, general-purpose, web-based designer named the Oryx Project.
Oryx uses the Raphaël JS library for graphics and offers a generous GPL license.
In May 2009 the same team established the Signavio company to enhance and sell the solution. This caused the good Oryx project to be stopped.
Activiti, coming from the old jBPM project, used an Eclipse-based designer from its first versions. Activiti switched to Signavio probably in 2010 and more or less forgot the Eclipse plugin.
After some time, Activiti Modeler development moved to the commercial KIS-Project.
Again, after some months, efforts moved to “Alfresco Activiti” in the form of commercial cloud services.
It is officially stated that the Activiti Modelers are not going to be maintained. Only minor donated bug fixes will be merged into the code, and in fact in the last two years there has not been much activity around it.

The Camunda branch uses the Eclipse-based designer to this day, but they have also started to create their own web-based designer, this time from scratch and not depending on Oryx. The project seems to be in its early stages.

It may be worth mentioning what other workflow providers use:
Intalio uses an Eclipse-based designer, very similar to the one used by Activiti.
jBPM uses a branch of the Oryx editor.

PS: There is a good related post here, although it is a little old.

Activiti and Hibernate/tynamo integration

We are using Tapestry as the MVC framework in our company, and we needed to integrate a sample Tapestry-based application with the Activiti security tables.

“Apache Shiro” and “Tynamo” are used. Apache Shiro is a Java security framework from Apache; Tynamo helps in using Shiro with Tapestry. In this post I explain how Activiti security can be used in Tapestry web applications.



The target is a web application whose pages should be visible only after authentication. For authentication, these conditions should be validated:

1. The user should exist in the “act_id_user” table

2. The password should match the password in the same table

3. In the “act_id_membership” table, the user should be associated with the “admin” group

Activiti realm:

In Shiro, different realms are provided: JDBC, LDAP, ActiveDirectory. So one way is to create a new “Activiti realm”, configure Shiro to use it, and configure Tapestry to use Shiro.

Creating the ActivitiRealm is simple. You extend the AuthorizingRealm class, implement the doGetAuthenticationInfo method, and put the authentication logic in it. Something like this:


[code lang="java"]
protected AuthenticationInfo doGetAuthenticationInfo(
        AuthenticationToken token) throws AuthenticationException {
    UsernamePasswordToken usernamePasswordToken = (UsernamePasswordToken) token;
    String username = usernamePasswordToken.getUsername();
    char[] pswrd = usernamePasswordToken.getPassword();
    String password = String.copyValueOf(pswrd);

    // check if the username and password are correct
    IdentityService identityService = processEngine.getIdentityService();
    if (!identityService.checkPassword(username, password)) {
        throw new IncorrectCredentialsException();
    }

    // check if the user is a member of the "admin" group
    GroupQuery query = identityService.createGroupQuery();
    if (query.groupMember(username).groupName(ADMIN_ROLE).count() == 0) {
        throw new IncorrectCredentialsException();
    }
    return buildAuthenticationInfo(username, password);
}

Service Injection:

As you may have seen in the above code, processEngine is simply used. It is injected by the Tapestry injection framework. It could be injected in the Spring configuration file or in a build method in the AppModule. In this implementation, Spring-based configuration is used. This code snippet in the application-context.xml file does the job:

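A minimal sketch of such a wiring, assuming Activiti 5's Spring support; the bean ids and property values here are illustrative, not the post's original configuration:

```xml
<!-- sketch: ids, datasource and values are assumptions -->
<bean id="processEngineConfiguration"
      class="org.activiti.spring.SpringProcessEngineConfiguration">
  <property name="dataSource" ref="dataSource" />
  <property name="transactionManager" ref="transactionManager" />
  <property name="databaseSchemaUpdate" value="true" />
</bean>

<bean id="processEngine" class="org.activiti.spring.ProcessEngineFactoryBean">
  <property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>
```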

Configure Shiro to use the Activiti realm
Now that we have the realm, we have to make Shiro use it.
This method in the AppModule makes Shiro use the realm:

[code lang="java"]
public static void addRealms(Configuration<Realm> configuration, @Autobuild ActivitiRealm activitiRealm) {
    configuration.add(activitiRealm);
}

Login page
Shiro is configured and can be used for checking security. Now we need a login page. It is possible to use the default login page, but here I prefer to have my own.
The page is a normal login page like any other. Only in its validation does it use Shiro for authentication, with code like this:

[code lang="java"]
public Object onActionFromJsecLoginForm() {

    Subject currentUser = securityService.getSubject();

    if (currentUser == null) {
        throw new IllegalStateException("Subject can't be null");
    }

    UsernamePasswordToken token = new UsernamePasswordToken(jsecLogin, jsecPassword);

    try {
        currentUser.login(token);
    } catch (UnknownAccountException e) {
        loginMessage = "Account does not exist";
        return null;
    } catch (IncorrectCredentialsException e) {
        loginMessage = "Wrong password";
        return null;
    } catch (LockedAccountException e) {
        loginMessage = "Account locked";
        return null;
    } catch (AuthenticationException e) {
        loginMessage = "Authentication error";
        return null;
    }

    SavedRequest savedRequest = WebUtils.getAndClearSavedRequest(requestGlobals.getHTTPServletRequest());

    if (savedRequest != null && savedRequest.getMethod().equalsIgnoreCase("GET")) {
        try {
            response.sendRedirect(savedRequest.getRequestUrl());
            return null;
        } catch (IOException e) {
            logger.warn("Can't redirect to saved request.");
            return Index.class;
        }
    } else {
        return Index.class;
    }
}
Configure Tapestry to use Shiro for security
OK, everything is there. Now we have to configure Tapestry to use Shiro for security, and secure the desired pages.
First we have to introduce our login page. Shiro redirects the user to this page whenever he wants to access a secured page and is not logged in. This code in the AppModule does the task:

[code lang="java"]
public static void applicationDefaults(
        MappedConfiguration<String, String> configuration) {
    // Tynamo's tapestry-security (Shiro) module configuration
    configuration.add(SecuritySymbols.LOGIN_URL, "/login");
}

And the last thing is to let Shiro know which pages should be protected:

[code lang="java"]
public static void contributeSecurityConfiguration(Configuration<SecurityFilterChain> configuration,
        SecurityFilterChainFactory factory) {
    // a /authc/** style rule covers /authc, /authc?q=name and /authc#anchor urls as well
    configuration.add(factory.createChain("/login").add(factory.anon()).build());
    configuration.add(factory.createChain("/assets/**").add(factory.authc()).build());
    configuration.add(factory.createChain("/myPage").add(factory.authc()).build());
    configuration.add(factory.createChain("/").add(factory.authc()).build());
}

This configuration makes “/login” accessible without authentication and puts protection on the “/assets/”, “/myPage” and “/” pages.

Done, Done.


Sending more than error code in BPMN boundary error event

The basic usage of a BPMN boundary error event just sends a string called the ErrorCode. For some cases this might not be enough. We just had an interesting discussion in the Activiti forum about what the proper definition should be according to the specification.

The discussion can be followed in issues act-1411 and act-462.


Here is a summary of my latest conclusion:

A clearer and more direct diagram is Figure 10.69 in section 10.4. It clearly shows that “Intermediate Catch Event” inherits from “CatchEvent”, which is associated with “DataOutputAssociation”. On the other hand, “Intermediate Throw Event” inherits from “ThrowEvent”, which is associated with “DataInputAssociation”.

It can also be seen in the XSD: “Table 10.113 – IntermediateCatchEvent XML schema” inherits from “tCatchEvent”, which is explained in “Table 10.104 – Catch Event XML Schema”, and there the association with “DataOutput” and “DataOutputAssociation” is clear. By the same logic, “dataInput” and “dataInputAssociation” are stated for the “Intermediate Throw Event”.

So the catching part should be something like this:

<intermediateCatchEvent id="myIntermediateCatchEvent">
  <XXXEventDefinition />
</intermediateCatchEvent>

and the throwing part can be something like:

<intermediateThrowEvent id="throwSignalEvent" name="On Alert">
  <!-- signal event definition -->
  <XXXEventDefinition />
</intermediateThrowEvent>

of course, with the whistles and bells defined in the Activiti user guide.

Using Activiti with Xform


Thanks to Joram for motivating me to write this post and for explaining what is mentioned here in more detail.

We are using Activiti in our company. Activiti provides an explorer application based on Vaadin. Vaadin is a Java web framework built on top of Google GWT.

My first problem with this approach is that a new form definition protocol was created from scratch. There are already mature form definition standards with lots of open source and commercial implementations available, so why reinvent the wheel? XForms is maybe the best standard available today. It is a little old and forgotten, but it still works fine.

My second problem is the framework used. Vaadin is wonderful, but it is not easy to integrate with other mainstream servlet-based frameworks.

First I tried to make the current Vaadin-based implementation use XForms instead of the home-made protocol. I spent several days on the subject and was persuaded that it is not easy, if not impossible. One of the problems is discussed here in the Vaadin forum.

To select a rendering engine, I first took ronal.van.kuik’s advice in this thread and selected the BetterForms open source XForms implementation. I was not able to configure it properly in a short time, so I tested Orbeon, another open source implementation. This time it went well, and I was able to create a proof of concept easily. Here I explain how it works.

Orbeon Integration

First of all, Orbeon needs to be installed on Tomcat. Orbeon can be configured to process the XForms of another application. First it should be deployed as a separate web application, say /orbeon. It is a standalone application with a nice user interface for management and even designing new forms, but what we need here is the form rendering functionality. For this, a jar file should be copied into the application and configured as a filter. This filter checks the HTTP traffic and renders any XForms content in it.

Details on configuring Orbeon can be found here. Installation is very straightforward. I used the “separate configuration” option, though “integrated application” does not seem to be much different either.

Xforms file assignment

The next step is to define a standard for assigning a user task to a specific XForms file. Our application uses static Tapestry forms, and it is desired that the application stay backward compatible and support these traditional static forms. For this, I chose to add an “xform:” prefix to the start of the form name if it is going to be rendered by the XForms engine; otherwise it is processed as usual. Here is how the definition looks:

<userTask id="utProvideMerchantMainInfo" name="Provide merchant main info" activiti:formKey="xform:provideMerchantMainInfo" />

Rendering Xforms 

Now it is time to do the main part of the job and render the form. The form reading should be dynamic. As the Orbeon filter is configured, it is enough to copy the content of the XForms file to the HTTP output. To do this, I created a servlet. This servlet simply gets the name of the file containing the XForm from a request parameter and copies the content of the file to the HTTP stream.

Here is the code for servlet:

String formName = request.getParameter(PARAM_XFORM_NAME);
String resourceName = "/xforms/" + formName + ".xhtml";
BufferedReader reader = new BufferedReader(
        new InputStreamReader(getServletContext().getResourceAsStream(resourceName)));
Writer writer = response.getWriter();
String line;
while ((line = reader.readLine()) != null) {
    writer.write(line);
}

and here is its mapping in web.xml:
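A typical mapping could look like this (the servlet’s package is an assumption); the URL pattern matches the /xforms/renderer link used later in the post:

```xml
<!-- sketch: the servlet class package is hypothetical -->
<servlet>
  <servlet-name>XformsRendererServlet</servlet-name>
  <servlet-class>com.example.xforms.XformsRendererServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>XformsRendererServlet</servlet-name>
  <url-pattern>/xforms/renderer</url-pattern>
</servlet-mapping>
```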



Now, in the application, it is enough to set the file name parameter and forward the page to the renderer servlet:

if (formPrefix == FormPrefix.XFORMS) {
    formKey = FormPrefix.removePrefix(formKey);
    Link link = new LinkImpl("/xa-mac-admin" + "/xforms/renderer", false,
            false, response, null);
    link.addParameter(XformsRendererServlet.PARAM_XFORM_NAME, formKey);
    link.addParameter("taskId", taskId);
    return link;
}

That was the main part; now the form is rendered by the Orbeon filter. The Orbeon filter also takes care of events and Ajax-like interactions.

Submitting the XForm

After the user sees the rendered XForm, he fills in the fields and submits the form. In the XForms standard, xforms:submission is responsible for this. The XForms engine creates an XML document from the input data and sends the generated XML to the URL mentioned in the xforms:submission tag.

Here is a sample value for this tag in our XForm:

<xforms:submission action="http://localhost:8080/xa-mac-admin/xforms/submit" method="post"
id="submit" includenamespaceprefixes="" />

The XML is sent to the above URL. Again we need a servlet to receive the XML, parse it and complete the task. The source for this is a little long and trivial.
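The core of such a submit servlet can be sketched in a few lines: flatten the submitted instance document into a process variable map, then complete the task. The class name and the flat element-to-variable mapping are illustrative assumptions; the Activiti call itself appears only as a comment:

```java
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Hypothetical sketch of the submit servlet's core logic.
public class XformSubmitParser {

    /** Flatten the top-level elements of the submitted instance into variables. */
    public static Map<String, Object> toVariables(String instanceXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(instanceXml)));
        Map<String, Object> vars = new HashMap<>();
        NodeList children = doc.getDocumentElement().getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node n = children.item(i);
            if (n.getNodeType() == Node.ELEMENT_NODE) {
                vars.put(n.getNodeName(), n.getTextContent());
            }
        }
        return vars;
    }

    // In the servlet body, after reading the request content:
    // taskService.complete(taskId, toVariables(body));
}
```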

Initial values of XForm fields

The above procedure was the first step, and usually the form in this step is empty. But it is probable that in later steps, some of the previously entered data should be shown. Again for this purpose, an XML document should be created and sent to the XForms engine.