Category Archives: General

Java Empowers Our Digital World – From the Start to the Future

Java is at the heart of our digital world. It’s the platform for launching careers, exploring human-to-digital interfaces, architecting the world’s best applications, and unlocking innovation everywhere, from garages to global organizations.

This is why I LOVE Java World. I cannot stress enough what Java can do for you.

On this website, everybody can learn everything from simple to complex Java scripts. Everybody can contribute; I will act as the postmaster here on the website.

Thank you, and if you have any inquiries or knowledge to contribute, please feel free to comment or email me. I look forward to your experiences in Java World.

JCA connectors – Part 2

Friday 25 October 2013

After philosophizing about application configuration in Part 1 it’s now time to get our hands dirty. We are going to write an outbound resource adapter! The resource adapter is based on my GitHub project outbound-connector, which provides some base classes that reduce the code for new simple resource adapters to almost nothing!

As I still add new features to the outbound-connector project, the code in this blog is based on a branch called branch-1.0.x and not the master branch.

echo-connector – A simple stupid resource adapter

We will now create a resource adapter that echoes a message. To show that the configuration of url, username and password works, even with multiple configurations of the resource adapter, the echo resource adapter will add this information to the result. You might have noticed already that we’re not going to connect to a remote system at all in this demonstration.

outbound-connector – The base implementation

When writing a resource adapter from scratch you can’t avoid some boilerplate code. It’s that code that makes your resource adapter look complex. In a simple resource adapter this boilerplate doesn’t do much at all, so it can be placed in a few base classes that hide it from you. That’s exactly what the outbound-connector project does!

The project comes with two main maven projects and some examples.



A resource adapter should always consist of an API project and an implementation project. The remote-system-connector-api contains two interfaces that the APIs of new resource adapters can extend. The interfaces are:


These interfaces mainly aggregate some standard Java interfaces so that you don’t forget to include them 🙂 The API of our new resource adapter will extend these two interfaces, and we’re done with the API project.
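As a rough sketch of the idea (the actual interfaces in remote-system-connector-api may look different), such an aggregating interface could simply extend AutoCloseable, which is what later makes try-with-resources work on connections:

```java
// Hypothetical sketch of an aggregating API interface; the real
// remote-system-connector-api interfaces may differ.
interface Connection extends AutoCloseable {
    @Override
    void close(); // narrowed signature: no checked exception to handle
}

public class ConnectionSketch {
    public static void main(String[] args) {
        // Because Connection extends AutoCloseable, implementations can be
        // used in try-with-resources and are closed automatically.
        try (Connection c = () -> System.out.println("closed")) {
            System.out.println("working with connection");
        }
    }
}
```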


On the implementation side, the project remote-system-connector provides the base classes listed next. We will extend these for our new resource adapter.


These classes handle the communication and integration with the container. Extending five classes for a new resource adapter looks like a lot of work, but relax: our classes will be almost empty. They simply must exist to fulfill the JCA contracts.

Define the API for your resource adapter

We create a new maven project called echo-connector-api. As the only dependency we add the remote-system-connector-api project.


Interface: EchoConnection

We will provide one echo method in an interface called EchoConnection. The interface extends the Connection interface from the remote-system-connector-api.

package com.ja.rsc.echo.api;

import com.ja.rsc.api.Connection;

public interface EchoConnection extends Connection {
    EchoResponse echo(String text);
}

We use an EchoResponse object as the return value. We use this object to store the current configuration properties of the connection. You wouldn’t do this in real life, of course.

package com.ja.rsc.echo.api;

public class EchoResponse {
    private String text;
    private String url;
    private String username;
    private String password;
    // getters and setters omitted
}

Interface: EchoConnectionFactory

The resource that we are going to inject into a web application or an EJB will be a connection factory. We define it in the interface EchoConnectionFactory, which extends the ConnectionFactory located in remote-system-connector-api. You will notice that we only have to define the generic connection type; no methods have to be defined.

package com.ja.rsc.echo.api;

import com.ja.rsc.api.ConnectionFactory;

public interface EchoConnectionFactory extends ConnectionFactory<EchoConnection> {
}

That’s it. The API for the echo-connector is already defined! If you’re writing a resource adapter for a SOAP service, this API project would be the perfect location to generate the Java code from the WSDL of the web service.

Implement the resource adapter

The resource adapter gets implemented in a separate maven project echo-connector. Dependencies are:


As mentioned above, we need to implement five classes. They all extend base classes found in remote-system-connector.

Class: EchoAdapter

package com.ja.rsc.echo;

import javax.resource.spi.Connector;
import javax.resource.spi.TransactionSupport;

import com.ja.rsc.AbstractAdapter;

@Connector(
    reauthenticationSupport = false,
    transactionSupport = TransactionSupport.TransactionSupportLevel.NoTransaction)
public class EchoAdapter extends AbstractAdapter {
}

The base class handles the life cycle of the adapter. Our class is annotated with @Connector, and the transaction behavior is defined in the annotation. Our simple echo adapter doesn’t support transactions at all. The use of annotations makes a ra.xml descriptor obsolete.

Class: EchoManagedConnectionFactory

The EchoManagedConnectionFactory creates managed connections. Managed connections are used to implement the transaction behavior. We don’t support transactions in this example, which keeps the implementation simple. Even if the following class looks massive, it actually only contains two methods with one statement each.

package com.ja.rsc.echo;

import java.io.Closeable;

import javax.resource.spi.ConnectionDefinition;
import javax.resource.spi.ConnectionManager;
import javax.resource.spi.ConnectionRequestInfo;
import javax.resource.spi.ManagedConnection;
import javax.resource.spi.ManagedConnectionFactory;

import com.ja.rsc.GenericManagedConnectionFactory;
import com.ja.rsc.UrlBasedManagedConnection;
import com.ja.rsc.UrlBasedManagedConnectionFactory;
import com.ja.rsc.UrlConnectionConfiguration;
import com.ja.rsc.echo.api.EchoConnection;
import com.ja.rsc.echo.api.EchoConnectionFactory;

@ConnectionDefinition(
    connectionFactory = EchoConnectionFactory.class,
    connectionFactoryImpl = InMemoryEchoConnectionFactory.class,
    connection = EchoConnection.class,
    connectionImpl = InMemoryEchoConnection.class)
public class EchoManagedConnectionFactory extends
    UrlBasedManagedConnectionFactory<UrlConnectionConfiguration> {

  public EchoManagedConnectionFactory() {
    super(new UrlConnectionConfiguration());
  }

  @Override
  protected Object createConnectionFactory(
      GenericManagedConnectionFactory mcf, ConnectionManager cm) {
    return new InMemoryEchoConnectionFactory(mcf, cm);
  }

  @Override
  protected ManagedConnection createManagedConnection(
      UrlConnectionConfiguration connectionConfig,
      ManagedConnectionFactory mcf,
      ConnectionRequestInfo connectionRequestInfo) {
    return new UrlBasedManagedConnection<UrlConnectionConfiguration, InMemoryEchoConnection>(
        connectionConfig, mcf, connectionRequestInfo) {

      @Override
      protected InMemoryEchoConnection createConnection(
          UrlConnectionConfiguration connectionConfiguration,
          ManagedConnectionFactory mcf,
          ConnectionRequestInfo connectionRequestInfo,
          Closeable managedConnection) {
        return new InMemoryEchoConnection(connectionConfiguration, managedConnection);
      }
    };
  }
}

The @ConnectionDefinition provides information about your resource adapter. It wires the connection interfaces to the connection classes.

Class: InMemoryEchoConnectionFactory

This class is named InMemory because our resource adapter does its work purely in memory and won’t make any remote calls.

package com.ja.rsc.echo;

import javax.resource.spi.ConnectionManager;
import javax.resource.spi.ManagedConnectionFactory;

import com.ja.rsc.AbstractConnectionFactory;
import com.ja.rsc.echo.api.EchoConnection;
import com.ja.rsc.echo.api.EchoConnectionFactory;

public class InMemoryEchoConnectionFactory extends
    AbstractConnectionFactory<EchoConnection> implements EchoConnectionFactory {

  public InMemoryEchoConnectionFactory(ManagedConnectionFactory mcf, ConnectionManager cm) {
    super(mcf, cm);
  }
}

This class allocates connections from the connection pool. Luckily, this has been consolidated in the base class AbstractConnectionFactory that we extend here, so we don’t have to implement any code besides a constructor.

Class: InMemoryEchoConnection

Now things are starting to get interesting. This class is where you place the actual business code of the resource adapter. The base class UrlBasedConnection provides access to our properties url, username and password. Nice, isn’t it? They’re just there waiting for you 🙂 You could use them to open a connection to the remote system. For the sake of demonstration we store the configuration values in the EchoResponse.

package com.ja.rsc.echo;

import java.io.Closeable;

import com.ja.rsc.UrlBasedConnection;
import com.ja.rsc.UrlConnectionConfiguration;
import com.ja.rsc.echo.api.EchoConnection;
import com.ja.rsc.echo.api.EchoResponse;

public class InMemoryEchoConnection extends
    UrlBasedConnection<UrlConnectionConfiguration> implements EchoConnection {

  public InMemoryEchoConnection(
      UrlConnectionConfiguration connectionConfiguration,
      Closeable closeable) {
    super(connectionConfiguration, closeable);
  }

  @Override
  public EchoResponse echo(String text) {
    EchoResponse response = new EchoResponse();
    // setting of text, url, username and password on the response omitted
    return response;
  }
}

With this the implementation of the echo resource adapter is done.

Configure an echo connection

The code of the resource adapter can now be compiled and deployed. The package will be of type RAR and can be deployed like an application in a JEE container. After the deployment, connection pools and connection resources have to be defined inside your JEE container. How to do this depends on the JEE container; I will only describe the configuration steps for Glassfish. In the examples there are scripts that will do it through the command line tools of Glassfish and JBoss/Wildfly.

Configure a Connector Connection Pool

  1. After deploying the RAR file, open the Glassfish admin console, go to Resources -> Connectors -> Connector Connection Pools and click New.
  2. Select a pool name, select the echo-connector resource adapter in the drop-down and click Next.
  3. On this page you can configure the connection pool as you know it from database connections: minimum and maximum connections and so on. At the bottom of the page we find the three properties that we accessed in the EchoConnection before. Add some values.
  4. Click Finish.

Please note that the configuration parameters for the connectivity are no longer part of your application. They’re completely separated! It’s up to the operator or the deployer to configure them. The application developer doesn’t have to know production passwords anymore, which might be a good thing!

Configure a Connector Resource

The resource we configure here is what we are going to inject into a web application or EJB.

  1. Go to Resources -> Connectors -> Connector Resources and click New.
  2. Define a JNDI name and select the connection pool you created before. The JNDI name is going to be used later in @Resource annotations.
  3. Click OK.

The echo resource adapter is now configured and can be used inside web applications and EJBs.

Using the resource adapter

You can now access the echo connection by injecting an EchoConnectionFactory that provides access to the EchoConnection. The name attribute of the @Resource annotation is the JNDI name we specified before. You can inject the EchoConnectionFactory into servlets or EJBs, for example.

@Resource(name = "jca/echo")
private EchoConnectionFactory echo;

Connections always have to be closed after use, so place your code inside a try-with-resources statement or close the connection in a finally block.

try (EchoConnection connection = echo.getConnection()) {
    EchoResponse response = connection.echo(text);
    return response.toString();
} catch (Exception e) {
    throw new WebApplicationException(e, Status.INTERNAL_SERVER_ERROR);
}

The complete example

The working sample code with echo connector and demo REST application can be found on GitHub in the examples directory.

There is an (un)deploy script that works with Glassfish and JBoss/Wildfly. It will deploy the resource adapter and the demo application and configure two connections with different configuration parameters. The different connections can be accessed through the demo application by going to their URLs.

If you want to test it from your console, call the scripts:

$ ./ glassfish
$ ./
$ ./


We are done. What have we achieved?

  • Connectivity configuration is separated from the application
    • Less application configuration, maybe even no application configuration at all
    • Same WAR/EAR/RAR for all (testing) environments
  • Configuration similar to datasources
  • Usage similar to datasources
  • Resource adapter and application can be released independently
  • Faster builds

The application doesn’t know which echo implementation it’s talking to. The implementation can be replaced by the deployer or the operator without touching the application or the application configuration at all! That’s just perfect for testing environments, for example, where certain peripheral systems might not be available on every stage.

I think it’s worth investing some training in working with resource adapters, even though it’s not that obvious at the beginning.

JCA connectors – Part 1

Tuesday 01 October 2013

In most JEE projects I have come across so far, there were issues with the configuration. The handling is somehow cumbersome and error-prone, and feeding changes to all configuration files for all testing environments is not my favorite work.

Two kinds of configuration data

The configuration of an application consists of some business configuration (if any at all) and connectivity configuration for accessing remote systems. Please have a look at your own configuration files; I hope you’ll agree.

In a distributed world you’ll have web applications or EJBs that need to connect to remote systems. You’ll have to configure this access somewhere. In real life, for accessing these properties, people might:

  • Inject a JNDI entry
  • Read a system property
  • Build your own configuration mechanism

Whereas JNDI would be the JEE way of doing this, people often want to have their properties in one single property file. The JNDI approach gets rated as too complicated, especially when it comes to testing. In most cases people start to write their own configuration framework, either simply for loading properties as system properties or for loading and accessing property files. That’s when things get out of control:

  • You can’t put the properties files into your war or jar because urls, users and passwords will be different in production than in test
    • Therefore you place it outside the JEE container somewhere on the file system, which in my opinion is a bad habit
  • You’ll provide some singleton class for accessing the configuration
    • Unit tests will start to fail just because some other test initialized that class with different values
    • Singletons (not the JEE ones) are evil anyway
  • You will test less because configuration is a pain in the ass

Have a close look at your current application configuration

What to do? When I looked closely at the application configuration, I noticed that it is mostly just connectivity configuration. There’s also configuration concerning the business logic, but do we really need that?

Business configuration data

Business configuration configures the business behavior of your code. In my experience these configuration values are set once by the developer and never get changed again: not for test and not even for production. If such a value gets changed, then a developer is doing the change, maybe even only in the trunk of your source code. Therefore this kind of configuration property is actually not needed at all if you use it as a default in your code. This blog is not about this kind of configuration and how to access it (I would prefer JNDI), so I’ll stop here.
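To illustrate the point, here is a minimal sketch of a business setting that lives as a default in the code and can still be overridden through a system property when somebody really needs to change it. The property name and the default value are made up for illustration:

```java
// Hypothetical example: a business setting with a code default.
// "app.maxRetries" is an invented property name.
public class BusinessConfig {

    private static final int DEFAULT_MAX_RETRIES = 3;

    public static int maxRetries() {
        // Integer.getInteger reads the system property if it is set,
        // otherwise it falls back to the default.
        return Integer.getInteger("app.maxRetries", DEFAULT_MAX_RETRIES);
    }

    public static void main(String[] args) {
        System.out.println("maxRetries = " + maxRetries());
    }
}
```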

Connectivity configuration data

The configuration that changes often on different test and prod environments is connectivity configuration. This might be urls of remote systems, users and passwords.

Why are these values in our application configuration? Ever used a database connection? Yes? What do we do with Data Sources?

  • We configure it in the container and inject it into the code
  • The database configuration is outside our application configuration.

If we could do this with the configuration for our other remote systems the common application configuration might become obsolete 🙂

Treat remote systems like Data Sources

Now let’s go JEE. A database is a remote system, a remote resource. Let’s do the same with all remote resources. JDBC is a very old standard; with JEE, the Java EE Connector Architecture (JCA) was introduced for accessing remote resources. Actually, JDBC is (almost) the same as a JCA connector. To make this clear, have a look at JBoss. JBoss uses IronJacamar as its JCA implementation. To define a Data Source you deploy it as an IronJacamar connector; therefore, on JBoss your database connection is realized as a connector.


OK, let’s assume we put our remote connection code into JCA connectors. What are the benefits, except more (maven) projects, more code and more complexity?

  • The business code works on connection objects and does not care about how the connection is established
    • This is the same handling that we already know from JDBC Data Sources
  • The business code can easily be mocked for unit testing (but this might also be true with the current approach)
  • In a JEE container you can mock the remote system by implementing a second connector with the same API
    • This might come in handy if a remote system is not available on all test environments. No code or configuration change is needed in your business code.
  • If multiple modules connect to the same remote system, deploy the connector once and create instances with different properties for each application in the container
  • Code generation (e.g. from WSDL) is done in the connector API project. Your business code project is free of WS frameworks for code generation.
  • Confidential configuration like user names and passwords is no longer in the application configuration or under the control of the developer
    • It’s now under the control of the application deployer or the operator. Of course this could still be the same developer, but in a different role at a different time
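The mocking idea from the list above can be sketched like this. All names are invented for illustration; in a real setup the two implementations would live in two separate connector RARs behind the same API project:

```java
// Invented names for illustration only.
interface GreetingConnection {
    String fetchGreeting();
}

// The "real" connector implementation would call the remote system here.
class RemoteGreetingConnection implements GreetingConnection {
    public String fetchGreeting() {
        return "greeting from remote system"; // placeholder for a remote call
    }
}

// A mock connector for stages where the remote system is unavailable.
class MockGreetingConnection implements GreetingConnection {
    public String fetchGreeting() {
        return "canned greeting";
    }
}

public class MockDemo {
    // Business code depends only on the API interface and cannot tell
    // which implementation was deployed.
    static String businessCode(GreetingConnection c) {
        return c.fetchGreeting().toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(businessCode(new RemoteGreetingConnection()));
        System.out.println(businessCode(new MockGreetingConnection()));
    }
}
```

The business code stays identical; only the deployed connector changes.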

Counter-arguments you’ll face

If you mention connectors or JCA in your office, people will probably call you crazy. You’ll hear counter-arguments like:

  • This is too much overhead
  • This is over-engineered
  • This is too complicated
  • This makes configuration complicated

Yes, there is overhead. You’ll end up with several new (maven) projects. I would suggest two per remote system: one for the API and one for the connector implementation. The benefit is that these projects are quite simple, and their pom.xml files, for example, are easy to understand even if we do some kind of source generation from WSDL. On the release management side, you can provide bug fixes without rebuilding and redeploying your business application; you simply deploy the connector.

Is it over-engineered? I would call it well-engineered. It’s JEE to the core! It splits your code into smaller artefacts and you’ll get a better separation of concerns.

Is it complicated? Well, you should have a look at a JCA example before you start or base your connectors on a base implementation. Adam Bien provides a very simple example of a connector in his connectorz project. It’s not that complicated at all. But yes, when it comes to deployment there’s more to deploy than one single war file. In my opinion you should prefer the additional deployment complexity to a huge and messy project (and pom.xml file) that mix business code with connectivity.

When it comes to configuration, I say that it makes configuration easier. Your application configuration is smaller or even disappears, and depending on your company’s setup you are not responsible for configuring connectivity matters. If you’re still responsible, this might be because you act in the JEE roles of an application component provider and an application deployer at the same time. That’s OK, no worries, but always keep in mind what role you’re currently playing and what responsibilities it includes!

Let’s write a connector

As this post is already quite long I will postpone the code to a later blog post.

In Part 2 I will introduce a base implementation I wrote that is based on the connectorz project. It will provide base classes for creating connectors that only use url, user and password as configuration parameters. With only a little effort you’ll be able to create additional connectors that connect to remote systems.

So long… and please have a close look at your configuration files. 

German Java Magazin

Thursday 04 April 2013

JavaMagazin: Himbeerkuchen mit Kaffee

Just a quick note: the German Java Magazin just published my article about extending JavaFX applications with self-made hardware on a RaspberryPi.

JavaMagazin 05.2013 (Page 59)

You can find the source code on my GitHub Page

Code You Can Maintain or High Productivity

I have been doing a little development lately in addition to my routine tasks. Something struck me: verifying a one-line code fix takes several minutes.

Development swings between maintainable and productive, and you typically hit one of those extremes at a time.

The art of programming moves rapidly. Some of us have taken part in Rapid Application Development (RAD), where making a modification and getting it to production happens from an IDE (or not) and takes seconds. On the other hand, we’ve all seen catastrophic production outages, when some developer pushed something to production that should never have existed.

In other situations we’ve done extremely maintainable development, where nothing is a one-line code change and releasing what would be a one-line code modification to production is an act of sheer will with a lot of moving pieces. The software world likes to do this and handles complexity extremely well, thank you very much.

Take a historical example of RAD. In the Java world, JBuilder used to be able to deploy to WebLogic incrementally. In the PHP world, you could modify a file on the web server or locally, then SCP it into the right directory; either way, you could quickly test that file locally. In the Microsoft world, back in the VB days, you could easily make a modification, then hit Run and test it again. Microsoft still leads in the cloud era with auto swap, but let’s admit it, it ain’t like it used to be.

Take the biggest historical example of maintainable software development. Java EE abstracted you from the hardware and, in exchange, required you to create about 20 nested Zip files (OK, a small exaggeration) and 15 different XML descriptors (not an exaggeration in a sizeable app) to test a one-line code modification in your Model 2 controller. On the one hand: look ma, no more someone-did-something-by-mistake in production! And no more buffer under/overruns. On the other hand: it was a productivity suck.

Fast-forward to today, and fully functional programming for a full-stack application is an intriguing productivity suck of its own. Adding a small feature to a Java servlet or C# app is nothing compared to doing it in an entirely functional programming monster. No more will you pull a header from an injected environment object; oh no, we need to figure out how to do this in an entirely stateless manner that stays “functional” throughout.


RaspberryPi with Faytech Touchscreen

Wednesday 13 February 2013

Just a quick note for anybody who’s struggling with Touchscreens on the RaspberryPi.

I just downloaded the latest Raspbian Image and started it. My Faytech 7” Touchscreen just works out of the box and it even works with JavaFX applications. No more kernel compilation!

One tiny thing you might want to do is calibrate the touch controller. xinput_calibrator will do the job, but you have to compile it yourself.

sudo apt-get install libx11-dev libxext-dev libxi-dev x11proto-input-dev
sudo make install

Touch the points until the application closes, then follow the instructions on the console output and create the configuration for X11. The directory for the configuration file is /usr/share/X11/xorg.conf.d instead of the directory shown by xinput_calibrator.

Reboot and you’re done!

JUnit Rules!

Monday 11 February 2013

Some weeks ago at Devoxx in Belgium I heard about the JUnit feature called Rules. The feature was introduced without causing a stir, but it’s exactly what I was waiting for!

package org.junit.rules;

import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public interface TestRule {
    /**
     * Modifies the method-running {@link Statement} to implement this
     * test-running rule.
     *
     * @param base The {@link Statement} to be modified
     * @param description A {@link Description} of the test implemented in
     *        {@code base}
     * @return a new statement, which may be the same as {@code base},
     *         a wrapper around {@code base}, or a completely new Statement.
     */
    Statement apply(Statement base, Description description);
}

Well, I have to admit that when I looked at this interface for the first time, I had no clue what it could be useful for. Looking at the implementations shipped with JUnit made things clear. It’s just perfect for solving the test-case inheritance mess and the @Before and @After stuff that makes tests less readable. Rules reduce your test code; setup and teardown code can be moved into separate classes.

An example

Integration tests so far:

@Test
public void find() throws Exception {
  // Assemble
  PersonDao dao = new PersonDao();
  Person expectedPerson = new Person("Dummy");
  dao.save(expectedPerson);
  try {
    // Activate
    Person actualPerson = dao.find(expectedPerson.getName());
    // Assert
    assertThat(actualPerson, equalTo(expectedPerson));
  } finally {
    // Cleanup
    dao.delete(expectedPerson);
  }
}

and pimped with @Rules:

@Rule
public PersonRule personCreator = new PersonRule();

@Test
public void find() throws Exception {
  // Assemble
  Person expectedPerson = personCreator.createPerson("Dummy");
  PersonDao dao = new PersonDao();
  // Activate
  Person actualPerson = dao.find(expectedPerson.getName());
  // Assert
  assertThat(actualPerson, equalTo(expectedPerson));
}

The try-finally block has disappeared! And the best part is, it didn’t even move into the PersonRule!

import java.util.ArrayList;
import java.util.List;

import org.junit.rules.ExternalResource;

public class PersonRule extends ExternalResource {

  private List<Person> persons = new ArrayList<>();

  @Override
  protected void after() {
    System.out.println("Cleaning up");
    for (Person p : persons) {
      delete(p);
    }
  }

  private void delete(Person p) {
    System.out.println(String.format("Delete %s from database", p));
    new PersonDao().delete(p);
  }

  public Person createPerson(String name) {
    System.out.println(String.format("Create %s in database", name));
    Person p = new Person(name);
    new PersonDao().save(p);
    persons.add(p);
    return p;
  }
}

As you can see, the code implements an after() method where the cleanup is done. The base class is implemented in such a way that the after() method gets called after each test, exactly like methods annotated with @After are.

There’s more

JUnit comes with some handy base classes providing template methods for different purposes. One of these base classes is ExternalResource which provides the template methods before() and after(). Working with the base classes is much easier than implementing the TestRule on your own. Don’t forget to have a look at the other classes!

  • ExternalResource
    before(), after()
  • TestWatcher
    starting(), succeeded(), finished(), skipped(), failed()
  • Verifier
    verify()
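To get a feeling for what apply() does under the hood, here is a simplified, self-contained sketch of the wrapping idea behind ExternalResource. This is not the real JUnit code; the names merely mirror it, and Description is omitted:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch: before() and after() run around the wrapped statement.
interface SimpleStatement {
    void evaluate() throws Throwable;
}

abstract class SimpleExternalResource {
    SimpleStatement apply(final SimpleStatement base) {
        return new SimpleStatement() {
            public void evaluate() throws Throwable {
                before();
                try {
                    base.evaluate();
                } finally {
                    after(); // runs even if the test fails
                }
            }
        };
    }
    protected void before() throws Throwable {}
    protected void after() {}
}

public class RuleSketch {
    public static void main(String[] args) throws Throwable {
        final List<String> order = new ArrayList<>();
        SimpleExternalResource rule = new SimpleExternalResource() {
            protected void before() { order.add("before"); }
            protected void after() { order.add("after"); }
        };
        rule.apply(new SimpleStatement() {
            public void evaluate() { order.add("test"); }
        }).evaluate();
        System.out.println(order); // [before, test, after]
    }
}
```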

Also have a look at the ExpectedException rule, which enables you to look in detail at exceptions thrown by your test. Use it if @Test(expected = Exception.class) is not enough!

@Rule
public ExpectedException thrown = ExpectedException.none();

@Test
public void throwsNullPointerExceptionWithMessage() {
  thrown.expect(NullPointerException.class);
  thrown.expectMessage("What happened?");
  throw new NullPointerException("What happened?");
}

So long, happy testing!

Dangerous drop-in replacement Jars

Monday 04 February 2013

Have you ever put an updated jar into your classpath, e.g. a new version of a library for your webapp? In this post I’d like to show you something you should be aware of, especially if you see strange behavior after a simple jar update.

The riddle

Create an interface:

public interface MyInterface {
    String GREETING = "Hello World";
}

Now we compile the interface and pack it into a jar.

> javac MyInterface.java
> jar -cf lib.jar MyInterface.class

Next is a simple demo application

public class App {
    public static void main(String[] args) {
        System.out.println("Greeting: " + MyInterface.GREETING);
    }
}

The app gets compiled using the library jar we created before, and we pack it as well.

> javac -cp lib.jar App.java
> jar -cf app.jar App.class

Now that the demo app is ready we start it with the command:

> java -cp app.jar:lib.jar App

What a surprise, the output is as expected:

Greeting: Hello World

Now the fun starts. Edit MyInterface and change the value of GREETING.

public interface MyInterface {
    String GREETING = "Good Bye";
}

Recompile and repack our interface

> javac MyInterface.java
> jar -cf drop-in.jar MyInterface.class

And now, here comes the million dollar question! What will we get if we start our app with the new drop-in-jar instead of the old lib.jar in the classpath?

> java -cp app.jar:drop-in.jar App


What’s your guess? Greeting: Good Bye? Wrong! We still get Greeting: Hello World even though our interface in the classpath says good bye!

What did just happen?

For optimization reasons, the Java compiler writes the constant value directly into the App bytecode. Java doesn’t read it from the interface at runtime! Therefore, the value from the drop-in jar is never read.
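If you want the value to survive a drop-in replacement, make sure it is not a compile-time constant. Here is a sketch of the difference; SAFE_GREETING is an invented name, and the interface is renamed so it doesn't clash with the example above:

```java
// Only "constant variables" (final, of primitive or String type, initialized
// with a constant expression) are inlined into the caller's bytecode.
public interface MyInterface2 {
    // Compile-time constant: copied into every class that reads it.
    String GREETING = "Hello World";

    // NOT a compile-time constant (a method call is involved), so callers
    // load the value from this interface at runtime. A drop-in jar with a
    // new value takes effect without recompiling the callers.
    String SAFE_GREETING = "Hello World".toString();
}
```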


When using drop-in replacement jars, keep in mind that changed constant values of the library will only make it into the runtime if you recompile your code!