OSGi Blueprint visualization

What is blueprint?

Blueprint is a dependency injection framework for OSGi bundles. A Blueprint file can be written by hand or generated using the Blueprint Maven Plugin; it is just an XML file describing beans, services and references. Each OSGi bundle can have one or more Blueprint files.

Blueprint files describe the architecture of a bundle. Let's visualize and analyze them using a Groovy script and Graphviz; the script is available in my github repository.
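
To give a rough idea of what such a script does, here is a minimal Groovy sketch (my own illustration, not the actual script from the repository): it parses a tiny, made-up Blueprint snippet and prints Graphviz DOT edges for bean-to-bean references.

// A minimal sketch of the idea, not the real visualizer script.
// It parses an inlined Blueprint snippet (bean ids and class names here are made up)
// and prints DOT edges that Graphviz can render.
def blueprintXml = '''
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="my1" class="com.example.My1"/>
    <bean id="beanWithCallbackMethods" class="com.example.Callbacks">
        <property name="dependency" ref="my1"/>
    </bean>
    <service ref="beanWithCallbackMethods" interface="com.example.Api"/>
</blueprint>
'''

// validating = false, namespaceAware = false keeps the sketch simple
def blueprint = new XmlSlurper(false, false).parseText(blueprintXml)

println 'digraph blueprint {'
blueprint.bean.each { bean ->
    bean.property.each { property ->
        String ref = property.@ref.toString()
        if (ref) {
            // an edge from a bean to the bean it references
            println "    \"${bean.@id}\" -> \"$ref\";"
        }
    }
}
println '}'
// services and OSGi references are handled similarly in the real script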

Example generation

Prerequisites: all you need is Groovy and Graphviz installed on your OS.

I work mostly with bundles whose Blueprint is generated, so as an example I will use a Blueprint file generated by the Blueprint Maven Plugin tests. All examples are included in the github repository.

Generation is invoked by running the run.sh script with a destination file prefix (the png extension will be appended to it) and the path to the Blueprint file:

mkdir -p target

./run.sh target/fullBlueprint fullBlueprint.xml

Visualization is available here.

Separating domains

First, if you look at the image, you will see that some beans are grouped. You could easily extract such domains, with the tree roots beanWithConfigurationProperties and beanWithCallbackMethods, into separate Blueprint files (and, in the future, bundles) and generate images from them:

./run.sh target/beanWithCallbackMethods example/firstCut/beanWithCallbackMethods.xml
./run.sh target/beanWithConfigurationProperties example/firstCut/beanWithConfigurationProperties.xml
./run.sh target/otherStuff example/firstCut/otherStuff.xml

Now we have three slightly cleaner images: beanWithConfigurationProperties.png, beanWithCallbackMethods.png and otherStuff.png.

We can also generate an image from more than one Blueprint file:

./run.sh target/joinFirstCut example/firstCut/otherStuff.xml example/firstCut/beanWithConfigurationProperties.xml example/firstCut/beanWithCallbackMethods.xml

And the result is here. The image contains beans grouped by file, but if you do not like that, you can force generation without such separation using the --no-group-by-file option:

./run.sh target/joinFirstCutGrouped example/firstCut/otherStuff.xml example/firstCut/beanWithConfigurationProperties.xml example/firstCut/beanWithCallbackMethods.xml --no-group-by-file

It will generate an image with all beans from all files, without the per-file grouping.

Exclusion

Sometimes it is difficult to spot and extract further domains, and it is easier to experiment on the Blueprint first. For example, the bean my1 is a dependency of too many other beans. You could consider converting the my1 bean into an OSGi service and extracting it to another bundle.

Let's exclude the my1 bean from generation via the -e option and see what happens:

./run.sh target/otherStuffWithoutMy example/firstCut/otherStuff.xml -e my1

The result is available here. Now we see that the tree rooted at the bean myFactoryBeanAsService could be separated, and my1 could be injected into it as an OSGi service from another bundle.

You can exclude more than one bean by adding an -e switch for each of them, e.g. -e my1 -e m2 -e myBean123.

Conclusion

Blueprint is great for dependency injection in OSGi bundles, but it is easy to end up with quite a big context containing many domains. Recognizing and searching for such domains is much easier with the blueprint visualizer script.


Do not use AllArgsConstructor in your public API

Introduction

Do you think about the compatibility of your public API when you modify its classes? It is especially easy to miss that something has changed incompatibly when you are using Lombok. If you use the AllArgsConstructor annotation, it can cause many such problems.

What is the problem?

Let's define a simple class with AllArgsConstructor:
@Data
@AllArgsConstructor
public class Person {
    private final String firstName;
    private final String lastName;
    private Integer age;
}
Now we can use the generated constructor in a Spock test:
def 'use generated allArgsConstructor'() {
    when:
        Person p = new Person('John', 'Smith', 30)
    then:
        with(p) {
            firstName == 'John'
            lastName == 'Smith'
            age == 30
        }
}
And the test is green. Let's add a new optional field, email, to our Person class:
@Data
@AllArgsConstructor
public class Person {
    private final String firstName;
    private final String lastName;
    private Integer age;
    private String email;
}
Adding an optional field is considered a compatible change, but our test fails...
groovy.lang.GroovyRuntimeException: Could not find matching constructor for: com.github.alien11689.allargsconstructor.Person(java.lang.String, java.lang.String, java.lang.Integer)

How to solve this problem?

After adding the field, add the previous constructor

If you still want to use AllArgsConstructor, you have to ensure compatibility by adding the previous version of the constructor yourself:
@Data
@AllArgsConstructor
public class Person {
    private final String firstName;
    private final String lastName;
    private Integer age;
    private String email;

    public Person(String firstName, String lastName, Integer age) {
        this(firstName, lastName, age, null);
    }
}
And now our test passes again.

The lombok.Data annotation is enough

If you use only the Data annotation, then a constructor with only the mandatory (final) fields will be generated, because Data implies RequiredArgsConstructor:
@Data
public class Person {
    private final String firstName;
    private final String lastName;
    private Integer age;
}
class PersonTest extends Specification {
    def 'use generated requiredFieldConstructor'() {
        when:
            Person p = new Person('John', 'Smith')
            p.age = 30
        then:
            with(p) {
                firstName == 'John'
                lastName == 'Smith'
                age == 30
            }
    }
}
After adding the new email field, the test still passes.

Use Builder annotation

The Builder annotation generates a PersonBuilder class for us, which helps us create a new Person:
@Data
@Builder
public class Person {
    private final String firstName;
    private final String lastName;
    private Integer age;
}
class PersonTest extends Specification {
    def 'use builder'() {
        when:
            Person p = Person.builder()
                    .firstName('John')
                    .lastName('Smith')
                    .age(30).build()
        then:
            with(p) {
                firstName == 'John'
                lastName == 'Smith'
                age == 30
            }
    }
}
After adding the email field, the test still passes.

Conclusion

If you use AllArgsConstructor, you have to be sure what you are doing and be aware of the compatibility issues related to it. In my opinion, the best option is not to use this annotation at all and to stay with the Data or Builder annotation instead. Sources are available here.

Primitives and their wrapper types compatibility

Introduction

How often do you think about possible changes to your API? Do you consider that something required could become optional in the future? What about the compatibility of such a change? One of these changes is going from a primitive (e.g. int) to its wrapper type (e.g. Integer). Let's check it out.

API - first iteration

Let's start with a simple DTO class Dep in our public API.

public class Dep {
    private int f1;

    public int getF1(){
        return f1;
    }

    public void setF1(int f1){
        this.f1 = f1;
    }

    // other fields and methods omitted
}

f1 is an obligatory field that will never be null.

Let's use it in Main class:

public class Main {
    public static void main(String... args) {
        Dep dep = new Dep();
        dep.setF1(123);
        System.out.println(dep.getF1());
    }
}

compile it:

$ javac depInt/Dep.java
$ javac -cp depInt main/Main.java

and run:

$ java -cp depInt:main Main
123

It works.

API - obligatory field becomes optional

Now suppose our business requirements have changed: f1 is no longer obligatory and we want the possibility to set it to null.

So we provide the next iteration of the Dep class, where the f1 field has type Integer.

public class Dep {
    private Integer f1;

    public Integer getF1(){
        return f1;
    }

    public void setF1(Integer f1){
        this.f1 = f1;
    }

    // other fields and methods omitted
}

We compile only the new Dep class because we do not want to change the Main class:

$ javac depInteger/Dep.java

and run it with old Main:

$ java -cp depInteger:main Main
Exception in thread "main" java.lang.NoSuchMethodError: Dep.setF1(I)V
    at Main.main(Main.java:4)

Wow! It does not work...

Why does it not work?

We can use the javap tool to investigate the Main class.

$ javap -c main/Main.class
Compiled from "Main.java"
public class Main {
  public Main();
    Code:
       0: aload_0
       1: invokespecial #1                  // Method java/lang/Object."<init>":()V
       4: return

  public static void main(java.lang.String...);
    Code:
       0: new           #2                  // class Dep
       3: dup
       4: invokespecial #3                  // Method Dep."<init>":()V
       7: astore_1
       8: aload_1
       9: bipush        123
      11: invokevirtual #4                  // Method Dep.setF1:(I)V
      14: getstatic     #5                  // Field java/lang/System.out:Ljava/io/PrintStream;
      17: aload_1
      18: invokevirtual #6                  // Method Dep.getF1:()I
      21: invokevirtual #7                  // Method java/io/PrintStream.println:(I)V
      24: return
}

The most important are instructions 11 and 18 of the main method: Main looks up methods which use int (I in the method signature).

Next let's compile the Main class with Dep which has f1 of type Integer:

$ javac -cp depInteger main/Main.java

and use javap on this class:

$ javap -c main/Main.class
Compiled from "Main.java"
public class Main {
  public Main();
    Code:
       0: aload_0
       1: invokespecial #1                  // Method java/lang/Object."<init>":()V
       4: return

  public static void main(java.lang.String...);
    Code:
       0: new           #2                  // class Dep
       3: dup
       4: invokespecial #3                  // Method Dep."<init>":()V
       7: astore_1
       8: aload_1
       9: bipush        123
      11: invokestatic  #4                  // Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;
      14: invokevirtual #5                  // Method Dep.setF1:(Ljava/lang/Integer;)V
      17: getstatic     #6                  // Field java/lang/System.out:Ljava/io/PrintStream;
      20: aload_1
      21: invokevirtual #7                  // Method Dep.getF1:()Ljava/lang/Integer;
      24: invokevirtual #8                  // Method java/io/PrintStream.println:(Ljava/lang/Object;)V
      27: return
}

Now we see the difference. The main method:

  • converts int to Integer in instruction 11,
  • invokes the setF1 method, which takes a parameter of type Integer (Ljava/lang/Integer;), in instruction 14,
  • invokes the getF1 method, which returns Integer, in instruction 21.

These differences do not allow us to use the Main class with the new Dep without recompilation if we change the type of f1.

How about Groovy?

We have a GroovyMain class which does the same as the Main class written in Java.

class GroovyMain {
    static void main(String... args) {
        Dep dep = new Dep(f1: 123)
        println(dep.f1)
    }
}

We compile the GroovyMain class only against the Dep which uses int:

$ groovyc -cp lib/groovy-all-2.4.5.jar:depInt -d main main/GroovyMain.groovy

It runs great as expected with int:

$ java -cp lib/groovy-all-2.4.5.jar:depInt:main GroovyMain
123

but with Integer... It works the same!

$ java -cp lib/groovy-all-2.4.5.jar:depInteger:main GroovyMain
123

Groovy is immune to such a change: without static compilation, it dispatches the setF1 and getF1 calls dynamically at runtime against whatever Dep class is on the classpath, boxing the int value as needed.

With CompileStatic

But what if we compile the Groovy class with the CompileStatic annotation? This annotation instructs the Groovy compiler to compile the class with static type checking, and it should produce bytecode similar to javac output.

The GroovyMainCompileStatic class is the GroovyMain class with just the CompileStatic annotation added:

import groovy.transform.CompileStatic

@CompileStatic
class GroovyMainCompileStatic {
    static void main(String... args) {
        Dep dep = new Dep(f1: 123)
        println(dep.f1)
    }
}

When we compile this with Dep with int field:

$ groovyc -cp lib/groovy-all-2.4.5.jar:depInt -d main main/GroovyMainCompileStatic.groovy

then of course it works:

$ java -cp lib/groovy-all-2.4.5.jar:depInt:main GroovyMainCompileStatic
123

but with the Dep that has the Integer field, it fails just like in Java:

$ java -cp lib/groovy-all-2.4.5.jar:depInteger:main GroovyMainCompileStatic
Exception in thread "main" java.lang.NoSuchMethodError: Dep.setF1(I)V
    at GroovyMainCompileStatic.main(GroovyMainCompileStatic.groovy:6)

Conclusion

A change from a primitive to its wrapper type is not a compatible change. Bytecode which uses the dependent class assumes that there will be a method which consumes or returns, e.g., int, and it cannot deal with the same class providing such a method with Integer in place of int.

Groovy is much more flexible and can handle it, but only if we do not use the CompileStatic annotation.

The source code is available here.

Spring autowire with qualifiers

Introduction

Autowired is a great annotation, which by default injects beans by type into the annotated element (constructor, setter or field). But how do you use it when there is more than one bean of the requested type?

Autowired with one bean

Suppose we work with a small interface:
interface IHeaderPrinter {
    String printHeader(String header)
}
When we have only one bean implementing IHeaderPrinter:
@Component
class HtmlHeaderPrinter implements IHeaderPrinter{
    @Override
    String printHeader(String header) {
        return "<h1>$header</h1>"
    }
}
then everything works great and the test passes.
@Autowired
IHeaderPrinter headerPrinter

@Test
void shouldPrintHtmlHeader() {
    assert headerPrinter.printHeader('myTitle') == '<h1>myTitle</h1>'
}

Two implementations

But what will happen if we add another implementation of IHeaderPrinter, e.g. MarkdownHeaderPrinter?
@Component
class MarkdownHeaderPrinter implements IHeaderPrinter {
    @Override
    String printHeader(String header) {
        return "# $header"
    }
}
Now our test fails with an exception:
Error creating bean with name 'com.blogspot.przybyszd.spring.autowire.SpringAutowireWithQualifiersApplicationTests': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.blogspot.przybyszd.spring.autowire.IHeaderPrinter com.blogspot.przybyszd.spring.autowire.SpringAutowireWithQualifiersApplicationTests.headerPrinter; nested exception is org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type [com.blogspot.przybyszd.spring.autowire.IHeaderPrinter] is defined: expected single matching bean but found 2: markdownHeaderPrinter,htmlHeaderPrinter
We have to decide which implementation we want to use in our test, so ...

Two implementations with Qualifier

Each bean is registered under a name equal to its class name with the first letter in lower case; for example, HtmlHeaderPrinter is named htmlHeaderPrinter. The name is also its qualifier. We have to tell Autowired that it should inject htmlHeaderPrinter:
@Autowired
@Qualifier('htmlHeaderPrinter')
IHeaderPrinter headerPrinter
Now our test passes again.

Two implementations qualified by field name

If the field is named like the implementing class (for example htmlHeaderPrinter), then that implementation will be injected, because the field name acts as a qualifier:
@Autowired
IHeaderPrinter htmlHeaderPrinter
And the test passes:
@Test
void shouldPrintHtmlHeader() {
    assert htmlHeaderPrinter.printHeader('myTitle') == '<h1>myTitle</h1>'
}
Thanks to @marcinjasion.

Two implementations with Primary

We often have one implementation which we almost always want to inject, so do we still have to put a Qualifier with its name wherever we want to use it? No. We can mark one implementation as Primary, and this bean will be wired by default (unless we explicitly give another Qualifier at the injection point):
@Component
@Primary
class HtmlHeaderPrinter implements IHeaderPrinter{
    // ...
}
@Autowired
IHeaderPrinter headerPrinter

Summary

The Autowired annotation allows us to inject dependencies into beans. It works great without additional configuration when each bean can be uniquely found by type. When there is more than one bean that could be injected, we have to use the Qualifier or Primary annotation to help it find the desired implementation. Source code is available here.

Scheduling tasks using Message Queue

Introduction

How do you schedule a task for later execution? You often create a table in a database and configure a job that checks whether the due time of any task has occurred and then executes it.

But there is an easier way if you already have a message broker in your application... You can publish/send your message and tell the broker that it should be delivered with a specified delay.

Scheduling messages using ActiveMQ

ActiveMQ is an open source message broker written in Java. It is an implementation of JMS (Java Message Service).

You can start the broker with scheduling support by adding the schedulerSupport attribute to the broker configuration:

<beans ...>
    ...
    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="localhost"
            dataDirectory="${activemq.data}"
            schedulerSupport="true">
        ...
    </broker>
    ...
</beans>

Now, if you want to delay receiving a message by a few seconds, you can add a property during message creation, e.g.:

message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 8000)

The delay unit is milliseconds.

Of course, the queue must be persistent.
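
Putting it together, a minimal Groovy producer could look like the sketch below (the broker URL, queue name and message text are my assumptions, not code from the repository):

import javax.jms.Session
import org.apache.activemq.ActiveMQConnectionFactory
import org.apache.activemq.ScheduledMessage

// a sketch only: broker URL and queue name are assumptions
def connection = new ActiveMQConnectionFactory('tcp://localhost:61616').createConnection()
connection.start()
def session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
def producer = session.createProducer(session.createQueue('scheduledTasks'))

def message = session.createTextMessage('do the task later')
// deliver the message 8 seconds after sending
message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 8000L)
producer.send(message)

println "Send time: ${new Date()}"
connection.close()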

When you listen for messages on the same queue, you will see that the message is indeed received with an 8 second delay.

...
Send time: Tue Dec 01 18:51:23 CET 2015
...
Message received at Tue Dec 01 18:51:31 CET 2015
...

Scheduling messages using RabbitMQ

Scheduling tasks is not a feature of ActiveMQ only; it is also available with RabbitMQ.

RabbitMQ is a message broker written in Erlang. It uses the AMQP protocol.

First you have to install the rabbitmq_delayed_message_exchange plugin. It can be done via the command:

rabbitmq-plugins enable --offline rabbitmq_delayed_message_exchange

You have to define an exchange in RabbitMQ which will use the features from this plugin. The queue for delayed messages should be declared and bound to this exchange, with the routing key set to the queue name.

channel.exchangeDeclare(exchange, 'x-delayed-message', true, false, ['x-delayed-type': 'direct']);
channel.queueDeclare(queue, true, false, false, null);
channel.queueBind(queue, exchange, queue);

Of course, the queue must be durable.

To test it, just publish a new message with the x-delay header:

channel.basicPublish(exchange,
        queue,
        new AMQP.BasicProperties.Builder().headers('x-delay': 8000).build(),
        "Message: $currentUuid".bytes)

The message will be delayed by 8 seconds:

...
Send time: Tue Dec 01 19:04:18 CET 2015
...
Message received at Tue Dec 01 19:04:26 CET 2015
...

Conclusion

Why create a similar mechanism for handling scheduled tasks on your own, when you can use your message broker and delayed messages to schedule future tasks?

Sources are available here.

Kotlin’s extensions for each class

Extensions in Kotlin are a very powerful mechanism: they allow adding methods to any existing class. Each instance has (as in Java) the equals, toString and hashCode methods, but there is much more in Kotlin.

Example classes


Let's define some simple classes describing a person: a normal class and a data class.

class PersonJaxb {
    var firstName: String? = null
    var lastName: String? = null
    var age: Int? = null
}

data class Person(val firstName: String, val lastName: String, val age: Int)

Normal class extensions


All instances have the methods described below.

apply method


I often work with JAXB classes similar to PersonJaxb, which do not have an all-args constructor, so all fields must be set via setters. Kotlin helps deal with this via the apply method. The target instance is provided as the delegate (receiver) of the lambda, so we can set all field values inside it, and apply returns this. The signature is T.apply(f: T.() -> Unit): T.

@Test
fun applyTest() {
    //when
    val person = PersonJaxb().apply {
        firstName = "John"
        lastName = "Smith"
        age = 20
    }

    //then
    assertEquals(20, person.age)
    assertEquals("John", person.firstName)
    assertEquals("Smith", person.lastName)
}

let method


Another extension is the let method, which is similar to the map operation on collections. It has the signature T.let(f: (T) -> R): R; this is passed as a parameter to the given lambda/function.

@Test
fun letTest() {
    //when
    val fullName = Person("John", "Smith", 20).let {
        "${it.firstName} ${it.lastName}"
    }

    //then
    assertEquals("John Smith", fullName)
}

run method


The run method looks like a merge of the apply and let methods: this is accessed as the delegate (receiver), as in apply, but a value is returned, as in let. It has the signature T.run(f: T.() -> R): R.

@Test
fun runTest() {
    //when
    val fullName = Person("John", "Smith", 20).run {
        "$firstName $lastName"
    }

    //then
    assertEquals("John Smith", fullName)
}

to method


Each instance also has the to infix function defined, which is used to create a Pair. Pairs are helpful for creating map entries. It has the signature A.to(that: B): Pair<A, B>.

@Test
fun toTest() {
    //when
    val pair = Person("John", "Smith", 20) to 5

    //then
    assertEquals(Person("John", "Smith", 20), pair.first)
    assertEquals(5, pair.second)
}

Data class methods


Data class instances also have some other helpful methods (these are not extensions, but are generated for us).

componentX methods


The data class Person has three fields, and a component method is generated for each of them: component1 for firstName, component2 for lastName and component3 for age.

@Test
fun componentsTest() {
    //when
    val p = Person("John", "Smith", 20)

    //then
    assertEquals("John", p.component1())
    assertEquals("Smith", p.component2())
    assertEquals(20, p.component3())
}

Why is this helpful? The componentX methods are used in destructuring declarations (similar to the extractor mechanism of Scala case classes), e.g.:

@Test
fun extractingTest() {
    //when
    val (first, last, age) = Person("John", "Smith", 20)

    //then
    assertEquals(20, age)
    assertEquals("John", first)
    assertEquals("Smith", last)
}

copy method


The copy method allows creating a new instance based on the current one, overriding selected fields.

@Test
fun copyTest() {
    //when
    val person = Person("John", "Smith", 20).copy(lastName = "Kowalski", firstName = "Jan")

    //then
    assertEquals(Person("Jan", "Kowalski", 20), person)
}

Summary


Kotlin's extensions available on every instance are very simple and help solve many problems. Code written with these extensions is much more readable and concise than the equivalent Java.

Sources are available here.