Dynamic Tasks In Activiti

Requirements:

Here is a hypothetical requirement:
We are talking about a help desk system. When an incident occurs, a human task is created and assigned to the help desk supervisor. The supervisor may fix it himself and complete the task. So far, this is a standard Activiti human task.
Now the supervisor may need to ask the opinion of the network admin, and/or he may need to ask the Oracle DBA to do something. He needs to add an unlimited number of tasks, assigned to various actors.
Now it is more complex, but it could still be implemented with the undocumented SubTask feature of Activiti.
At the next level, in some cases the supervisor may need to add two or more consecutive tasks instead of one single task. Suppose that he wants to get the opinion of the Oracle DBA and forward it automatically to the software developer. Each of the following actors may in turn do the same thing the supervisor can do: they may add one, two or more step task series to the process.
The process shall continue only when all the created tasks are completed.

Here is where it gets interesting.

Solution interface:
Here is what I came up with to address these requirements.
Let’s start with the high-level diagram.

[Diagram: dynamicTaskDiagram]
After the start of the process, a service task creates the first dynamic task. The Java delegate class simply calls a function from the DynaTaskService service:

import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
import org.activiti.engine.impl.persistence.entity.ExecutionEntity;

public class DynamicTaskDelegate implements JavaDelegate {
   @Override
   public void execute(DelegateExecution execution) throws Exception {
      // hand the actual task creation over to the service
      MyUnitTest.dynaTaskService.createOneTask((ExecutionEntity) execution);
   }
}

This will create a dynamic task with special behavior. The DynaTaskService interface offers two methods: one for creating one single dynamic task, the other for two consecutive dynamic tasks:

public interface DynaTaskService  {
    public void createOneTask(ExecutionEntity execution);
    public void createTwoTasks(ExecutionEntity execution);
}

Calling createOneTask creates an instance of the unattached single user task you can see in the diagram. Similarly, createTwoTasks creates an instance of the unattached sequence of two consecutive tasks.
The next actors can also call the same methods to create as many tasks as required.
Creating each task increments a counter process variable.

The magic happens when a task is completed using taskService.complete. If the task is a dynamic one and another task follows it, the next task is executed as usual. But if the task is dynamic and it is the last one in its sequence, then another counter, counting the number of completed tasks, is increased, the line of execution is terminated, and the receive task is signalled. The gateway after the receive task checks whether the number of completed tasks equals the number of created tasks. If there are still open tasks, the loop continues.
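
From a client’s point of view the process is driven like any other one. Here is a minimal usage fragment, assuming a configured engine with runtimeService and taskService at hand; the process key is an assumption, not the actual name in the source:

// start the process; the first service task immediately creates a dynamic task
ProcessInstance pi = runtimeService.startProcessInstanceByKey("dynamicTaskProcess");

// the supervisor sees the created task and completes it
Task supervisorTask = taskService.createTaskQuery()
    .processInstanceId(pi.getId())
    .singleResult();
taskService.complete(supervisorTask.getId());

// the custom behavior bumps the completed-tasks counter and signals the
// receive task; once completedTasks equals createdTasks the gateway lets
// the process finish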

This fully addresses the unusual requested functionality.

But let’s see how it works under the hood:

How it works:
Tasks are created by calling createOneTask or createTwoTasks.
When called, the create-task function performs these steps (a sketch follows the list):

  • increases the counter for created tasks, creating it if it does not already exist
  • creates a separate execution for the new task
  • creates a new task following a specific naming convention, with the proper activity name already available in the diagram
  • relates the created task to the created execution
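
Here is a rough sketch of that idea, not the actual implementation: the activity id "dynaSingleTask" and the variable name "createdTasks" are assumptions, and the internal PVM calls may differ between Activiti versions.

import org.activiti.engine.impl.persistence.entity.ExecutionEntity;
import org.activiti.engine.impl.pvm.delegate.ActivityExecution;
import org.activiti.engine.impl.pvm.process.ActivityImpl;

public class DynaTaskServiceImpl implements DynaTaskService {

    @Override
    public void createOneTask(ExecutionEntity execution) {
        // 1. increase the created-tasks counter, creating it if it does not exist
        Integer created = (Integer) execution.getVariable("createdTasks");
        execution.setVariable("createdTasks", created == null ? 1 : created + 1);

        // 2. create a separate execution for the new task
        ActivityExecution taskExecution = execution.createExecution();

        // 3. look up the unattached user task that is already modelled in the diagram
        ActivityImpl dynaTask = execution.getProcessDefinition().findActivity("dynaSingleTask");

        // 4. run that activity on the new execution; this creates the task and
        //    relates it to the execution we just created
        taskExecution.executeActivity(dynaTask);
    }

    @Override
    public void createTwoTasks(ExecutionEntity execution) {
        // same idea, pointing at the first activity of the two-task sequence
    }
}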

Users can call the create-task functions arbitrarily. The normal activity behavior for this kind of dynamically created task is simply to remove it, and to remove the execution if nothing comes afterwards. This is not exactly what we want.
To change this behavior, we need to override the default Activiti behavior for completing tasks. Here the MyCustomActivitiBehaviorFactory factory comes into play. It creates our custom behavior, MyUserTaskActivityBehavior, when asked by the engine.
The most important part of the logic is handled in MyUserTaskActivityBehavior. Here, in short, is what the behavior does (a sketch follows the list):

  • using the name of the task, determine whether it is a dyna-task; this is not used right now, but may come in handy
  • if there is an outgoing sequence flow, just follow it normally
  • if there is no outgoing sequence flow, the task is at the end of its execution: increase the completed-tasks counter, end the execution and signal the receive task on the parent execution
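
A rough sketch of that decision logic is below. The variable names, the "dyna" prefix and the way the RuntimeService is obtained are assumptions; in the real code this logic lives inside MyUserTaskActivityBehavior, which extends the default user task behavior and lets it handle the normal cases.

import org.activiti.engine.RuntimeService;
import org.activiti.engine.impl.persistence.entity.ExecutionEntity;
import org.activiti.engine.impl.pvm.delegate.ActivityExecution;

public class DynaTaskCompletionLogic {

    // returns true if this completion ended a dynamic branch
    public boolean completeDynamicTask(ActivityExecution execution, RuntimeService runtimeService) {
        ExecutionEntity entity = (ExecutionEntity) execution;

        // naming convention: dynamic tasks are recognised by their activity id (not used yet)
        boolean isDynaTask = execution.getActivity().getId().startsWith("dyna");

        if (!execution.getActivity().getOutgoingTransitions().isEmpty()) {
            // there is an outgoing sequence flow: let the default behavior follow it
            return false;
        }

        // last task of this dynamic branch: count it as completed ...
        Integer completed = (Integer) entity.getVariable("completedTasks");
        entity.setVariable("completedTasks", completed == null ? 1 : completed + 1);

        // ... terminate this line of execution ...
        String parentExecutionId = entity.getParentId();
        entity.end();

        // ... and wake up the receive task waiting on the parent execution
        runtimeService.signal(parentExecutionId);
        return true;
    }
}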

The source is available on GitHub.

JMX feature added to Activiti

I just contributed the JMX support feature to Activiti. It has been merged and will be available in the coming Activiti release (5.16.4).

It is especially useful for people in operations teams. It reveals performance information and enables the operator to change some parameters on the fly.

To enable the feature, it is enough to add the activiti-jmx module to the dependencies. The JMX MBean server will be found automatically and configured with the default settings.

The default configuration makes the JVM instance visible in JConsole. If it is not visible for any reason, it can be reached from JConsole using this URL: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi/activiti. By default, MBeans are exposed under the org.activiti.jmx.Mbeans domain.
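
For a programmatic client, connecting to that default endpoint is plain JMX. A minimal sketch (the object-name pattern simply uses the default domain mentioned above and is an assumption):

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ActivitiJmxClient {
    public static void main(String[] args) throws Exception {
        // connect to the default Activiti JMX endpoint
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi/activiti");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // list everything registered under the default domain
            Set<ObjectName> names =
                    connection.queryNames(new ObjectName("org.activiti.jmx.Mbeans:*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        } finally {
            connector.close();
        }
    }
}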

As of now, the following information and operations are available via JMX:

Attributes:

  • getting process definitions
  • getting deployments

Operations:

  • undeploy
  • suspend and activate process definitions by id and key

The default configuration can be changed by putting these parameters in the config file or setting them directly via the API:

  • registryPort (Default is 1099)
  • mbeanDomain (default is “DefaultDomain”)
  • connectorPort
  • serviceUrlPath (Default is /jmxrmi/activiti)
  • createConnector (default is true)
  • disables (default is false)
  • domain (default is org.activiti.jmx.mbeanObjectDomainName)

New MBeans are easy to add thanks to the provided JMX annotations. You only need to annotate attributes with @ManagedAttribute and methods with @ManagedOperation. The only available annotation parameter is “description”; the given description will be shown as the MBean description.
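
A hedged sketch of what a custom MBean could look like with these annotations; the annotation package below is an assumption (check the activiti-jmx module for the exact one, and for how the bean gets registered):

import java.util.concurrent.atomic.AtomicLong;

import org.activiti.management.jmx.annotations.ManagedAttribute;   // assumed package
import org.activiti.management.jmx.annotations.ManagedOperation;   // assumed package

public class ProcessedJobsMBean {

    private final AtomicLong processedJobs = new AtomicLong();

    @ManagedAttribute(description = "Number of jobs processed since startup")
    public long getProcessedJobs() {
        return processedJobs.get();
    }

    @ManagedOperation(description = "Reset the processed jobs counter")
    public void reset() {
        processedJobs.set(0);
    }
}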

There are test cases which bring up the JMX server and actually query and change information via JMX. They can be used as samples to get a taste of what a JMX client can look like.
DeploymentsJMXClientTest and JobExecutorJMXClientTest are such test cases.

A short history of model designer

It all started with the work of Prof. Dr. Mathias Weske from the University of Potsdam. He and his colleagues started an open-source, general-purpose, web-based designer named the Oryx Project.
Oryx uses the Raphaël JS library for rendering graphics and offers a generous GPL license.
In May 2009 the same team established the Signavio company to enhance and sell the solution. This caused the good Oryx project to be stopped.
Activiti, coming from the old jBPM project, used an Eclipse-based designer from its first versions. Activiti switched to Signavio, probably in 2010, and more or less forgot the Eclipse plugin.
After some time, Activiti Modeler development moved to the commercial KIS-Project.
Again, after some months, the effort moved to “Alfresco Activiti” in the form of commercial cloud services.
It is officially stated that the Activiti Modelers are not going to be maintained. Only minor donated bug fixes will be merged into the code, and in fact in the last two years there has not been much activity around it.

The Camunda fork is still using the Eclipse-based designer to this day, but they have also started to create their own web-based designer, this time from scratch and without depending on Oryx. The project is named bpmn.io. It seems to be in its early stages.

It may be worth mentioning what other workflow providers use:
Intalio uses an Eclipse-based designer, very similar to the one used by Activiti.
jBPM uses a branch of the Oryx editor.

PS: There is a good related post here, a little bit old though.

Updating git from svn

I just had to import a huge SVN repository into Git.
If you spend enough time on it, importing with git svn clone is not that hard. It keeps the history and works very well.
The problem arose when I tried to remove big jar files from the repository. I planned to move to Maven, and it was not a good idea to keep jar files in the repository.
Simply removing them left their footprints in the history and made it unnecessarily big.
I tried to rewrite the history the way explained here. It took more than one week, so I cancelled it.
This tool helped me:
http://rtyley.github.io/bfg-repo-cleaner/

It is really fast and efficient. It did the job in a matter of hours.

I cleaned the repository up and pushed it. After one week I found out that the team had made some commits on the old SVN.
Using git svn would have meant importing everything from scratch again.
Instead, I decided to write a Python script that creates one patch for each SVN revision, then applies them and commits with the same description as in Subversion. It needed some manual work, but it was much faster than importing the whole SVN repository again.

How to mock web services

In black-box testing, we sometimes need to test a system which consumes and produces web services. Here is my experience of how to test such a system.

You need to call the exposed web service and check how the consumed web services are called. The exposed web service is already provided by the system, and you only need a client to call it. I used CXF: very straightforward and easy.

But the consumed web services are not there yet. They are not in the scope of the tests and mostly not under our control at all.

One approach is to create a fixed implementation of those web services, again using a library like CXF, and, when the callback functions are called, check that the parameters are passed correctly and at the same time return a result suitable for testing the other parts.

This approach works fine only for the simplest cases. If you want to test different scenarios, the fixed mock web service has to react differently, and you have to put that logic inside it. Sometimes you may need more than one implementation and have to re-publish the service with the alternative implementations. Not beautiful, I admit.

A more elegant approach is to use mocking frameworks. Here I have selected Mockito.

Here are the steps:

First you have to create a mock object, a normal one based on the service interface. I do it via an annotation, like this:

@Mock
static ServiceInterface serviceMock;
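
For the @Mock annotation to actually create the mock, Mockito has to be initialized. With JUnit 4 this is typically done in a setup method; a minimal sketch, where the test class name is illustrative and the field is non-static so that initMocks(this) can inject it:

import org.junit.Before;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class WebServiceMockTest {

    @Mock
    ServiceInterface serviceMock;

    @Before
    public void setUp() {
        // fills in all @Mock annotated fields of this test instance
        MockitoAnnotations.initMocks(this);
    }
}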

But this mock has to be published as a web service, and for some strange reason that could not be done directly. I had to create a reflection proxy class to do the job:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ServiceProxy implements InvocationHandler {
    private final Object obj;

    // wraps the given mock in a dynamic proxy implementing the same interfaces
    public static Object newInstance(Object obj) {
        return Proxy.newProxyInstance(
            obj.getClass().getClassLoader(),
            obj.getClass().getInterfaces(),
            new ServiceProxy(obj));
    }

    private ServiceProxy(Object obj) {
        this.obj = obj;
    }

    @Override
    public Object invoke(Object proxy, Method m, Object[] args)
        throws Throwable {
        try {
            // forward every web service call to the underlying Mockito mock
            return m.invoke(obj, args);
        } catch (Exception e) {
            throw new RuntimeException("unexpected invocation exception: " +
                e.getMessage(), e);
        }
    }
}

Then it is easy: you can define your web service endpoint on top of the mock and start it.


service = (ServiceInterface) ServiceProxy.newInstance(serviceMock);
serviceEndpoint = Endpoint.publish("http://localhost:8080/myService", service);
 

Everything is ready now. You can run the system and use the mock just like a normal mock.
As an example of the usage, this code verifies that a method was called on the web service. Obviously the whole Mockito mocking language can be used for more complex tests:

verify(serviceMock).myMethod(anyInt());
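
You can also stub the answers the mocked service should give before driving the system under test. A small illustrative fragment, where myMethod and the canned value are hypothetical:

import static org.mockito.Matchers.anyInt;
import static org.mockito.Mockito.when;

// whenever the system under test calls myMethod on the published endpoint,
// the Mockito mock behind the proxy returns this canned response
when(serviceMock.myMethod(anyInt())).thenReturn("canned response");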

 
Enjoy your testing!

To present it in a more concrete way, I have created a sample usage. It can be found here: https://github.com/smirzai/webServiceMock.git

There is also a Worldline version, which uses a resource locator to find the web service.

Synchronous and asynchronous binary tree comparison

If you have not taken the “Principles of Reactive Programming” course about advanced Scala and the Akka framework, you are probably not using the internet properly; think again. It is a wonderful course from EPFL, one of the hardest on Coursera.

To pass this course you have to complete some programming exercises. One of them is “Actor Binary Tree”, in which you have to implement an asynchronous binary search tree. When you insert a node, the insertion is propagated asynchronously through the tree. Nothing is locked, and this makes it really fast. The sender actor gets acknowledged when the insertion is done.

In theory the idea is really beautiful, but how useful is it in reality? Why do we have to use this relatively complex architecture? The answer is that, because it is asynchronous, nothing is locked, so you should get very high performance.

But how does it compare with a similar pure Java implementation? For consistency we usually lock the tree, and that definitely will not compete. But if we use an immutable version, we do not need to worry about consistency.

To do the comparison I searched and found an immutable binary tree from Kansas State University.

To compare, I insert a large number of nodes into both implementations, wait until all of them are acknowledged and measure the time. In the Java implementation, different numbers of threads are tried.
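
A hypothetical sketch of the Java side of such a benchmark: "ImmutableBst" stands in for the Kansas State immutable tree (its insert() returns a new tree), and the compare-and-set loop is one possible way to share an immutable structure between threads without locking.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class InsertBenchmark {

    // stand-in for the immutable binary search tree used in the comparison
    interface ImmutableBst {
        ImmutableBst insert(int value);
    }

    static long measureMillis(ImmutableBst empty, int nodes, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicReference<ImmutableBst> tree = new AtomicReference<>(empty);
        CountDownLatch done = new CountDownLatch(nodes);

        long start = System.nanoTime();
        for (int i = 0; i < nodes; i++) {
            final int value = i;
            pool.submit(() -> {
                // compare-and-set loop: immutability keeps readers consistent without locks
                ImmutableBst current;
                do {
                    current = tree.get();
                } while (!tree.compareAndSet(current, current.insert(value)));
                done.countDown();   // "acknowledge" the insertion
            });
        }
        done.await();               // wait until every insert is acknowledged
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }
}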

[Chart: BinarySearchComparison]

 

The maximum TPS of the Scala version is 2952, versus 2701739 for the Java version, more than 90 times as much.

So the conclusion is that, at least when the operation itself is fast and not IO-intensive, there is no benefit in using the asynchronous style of programming. The overhead of Akka is considerable if the operations themselves are fast.

 

The source for the Java implementation is available on GitHub.

 

 

Scala, Scala

It has been a while that I have been following courses on Coursera. It is really amazing.

Right now I am following “Principles of Reactive Programming”. I have to say it is just fascinating!

The programming assignments are anything but easy and need a lot of time.

The ideas of Future, Response and Observables are so natural. I have to do a comparison with the same facilities in Java and Python.

I really recommend the courses.

Expression Language Validation

We used JUEL as an expression language (EL) implementation in one of our projects. The problem we had was that we could not afford to wait for the expression to be checked at runtime. For example, if you are accessing a POJO field and that field does not exist, you only find out at runtime.
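
To make the problem concrete, here is a small sketch using plain JUEL; the Order bean and the deliberate typo are illustrative. The bad expression parses without complaint and only blows up when it is evaluated:

import javax.el.ExpressionFactory;
import javax.el.ValueExpression;

import de.odysseus.el.ExpressionFactoryImpl;
import de.odysseus.el.util.SimpleContext;

public class ElValidationDemo {

    public static class Order {
        public String getCustomer() { return "acme"; }
    }

    public static void main(String[] args) {
        ExpressionFactory factory = new ExpressionFactoryImpl();
        SimpleContext context = new SimpleContext();
        context.setVariable("order",
                factory.createValueExpression(new Order(), Order.class));

        // typo: the Order bean has no "custommer" property
        ValueExpression expr = factory.createValueExpression(
                context, "${order.custommer}", String.class);

        // no error so far; a PropertyNotFoundException only appears here, at runtime
        expr.getValue(context);
    }
}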

To fix this, I created an extension and added a validate method. This method traverses the expression tree and checks all possible options to make sure the expression is valid for the given objects.

I offered the new extension to the community, but they thought it better to keep JUEL at the minimum required to match the standard.

I am going to publish it as an extension to JUEL in a separate branch; maybe someone else can also make use of it. It is not planned yet, though.

Python Library for bluetooth

Just for the fun of it, I have started to build a remote-controlled robot, using:

– Mindstorm

– Raspberry Pi

– Camera

– Joystick

– Python

The first step is to read the joystick in Python. I checked pygame.joystick and it works fine.

Sending values to the NXT over Bluetooth is not that simple, and there is not much on the web. I checked jaraco.nxt. The source looks very sophisticated and a large range of commands is supported, but for some reason it did not work for me.

My second attempt was “Roboter Platform”, a simple low-level library which works. It does not cover all commands, but it works fine for what I want. The good part is that it is very easy to see how things work and to extend it.

Now I am able to send information in message format to the NXT. The next step is to control the movement of the robot with the joystick.

 

DBVisualizer custom JDK

DbVisualizer is a handy tool to browse and check database schemas. It is not the fanciest or most powerful tool available, though.

One of my colleagues had a problem after installing a new JDK, and it seems that DbVisualizer had some problems with Java 7. So the question is how to make DbVisualizer use a specific version of the JDK.

I am not sure whether this is documented somewhere or not, but it took him some time to solve the problem.

The solution is to copy the JDK directly into DbVisualizer’s directory. DbVisualizer will use the JDK in its own directory if one is available; if there is no JDK in the directory, it uses the default JDK installed on the machine. In our case, the name of the directory was jre and it was directly inside the DbVisualizer directory. This works fine with Java versions 5 and 6, but maybe not with the latest version 7.