Why is learning JavaScript so important now and in the future?

Many computer programmers know numerous languages. The range of programming languages is broad, with some used in particular contexts and others more general-purpose. For instance, Java can implement applications for both the desktop and the Web. Programming languages also take different approaches to processing, so writing applications can involve very different tasks and techniques depending on the language in use. There are a few standard benefits to understanding many programming languages that can boost success in any development career.

The Importance of Advising Clients


When companies seek software development services, the process typically starts with a consultation period. During this time, the customer describes what he needs, while the developer makes sure she collects enough information to give the client sound advice. Among the leading advisory and decision-making procedures in any development project is selecting a technology or set of technologies. Many complex applications, for instance those working on the Web or in conjunction with other technologies such as databases, use several programming languages. For example, a Web application may involve database programming in SQL, client-side scripting in JavaScript, additional languages such as HTML and XML, and server-side scripting in PHP. Developers need to understand the advantages and drawbacks of each language choice to advise customers reliably.


Technology is in a continuous state of change. From Web applications to desktop and mobile environments, the range of languages in use is always evolving. Programmers who continue to make a welcome contribution to the projects they work on are those developers who are willing to learn new skills, platforms, and languages continuously. The more languages a developer learns, the easier it becomes to pick up new ones, so making this a routine part of your working life puts you in a great position for the future.

Implementation Knowledge

When learning programming languages, developers typically discover aspects of how these languages are implemented within computing systems. This means that each time you learn a new language, you learn something more about the efficiency, performance and design aspects of programming in general. Many languages implement their structures in similar ways, so learning about general implementation concepts gives you the knowledge to program with efficiency in mind, whatever language you are using.


Some programming languages are similar, but some take vastly different approaches to application processing. For instance, object-oriented languages, such as Java, divide application tasks between a set of objects with specific responsibilities. Languages are often categorized as high or low level; the higher level a language is, the more it abstracts away from the computing hardware. Procedural languages give the computer a series of explicit instructions to perform, whereas functional languages specify application behavior using mathematical functions. Knowing about the various programming language approaches gives you a wider variety of choices regarding how you approach particular projects yourself.
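The contrast between the procedural and functional approaches can be sketched in Java, which supports both styles. The task below (summing the squares of the even numbers in a list) is an arbitrary example chosen for illustration, not taken from the original text:

```java
import java.util.List;

public class StyleComparison {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);

        // Procedural style: a series of explicit instructions to the machine
        int proceduralSum = 0;
        for (int n : numbers) {
            if (n % 2 == 0) {
                proceduralSum += n * n;
            }
        }

        // Functional style: the same behavior specified as composed functions
        int functionalSum = numbers.stream()
                .filter(n -> n % 2 == 0)
                .map(n -> n * n)
                .reduce(0, Integer::sum);

        System.out.println(proceduralSum);  // 4 + 16 = 20
        System.out.println(functionalSum);  // 20
    }
}
```

Both variants compute the same result; the difference is purely in how the processing is expressed.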

Latest Java Update: Maintainable Code or High Productivity?



I have been doing a little development lately in addition to my regular job. Something that’s struck me: testing a one-line code fix takes several minutes.

Development goes in phases between maintainable and productive, typically sitting at one of those extremes at any given time.

The art of programming moves rapidly. Some people have taken part in Rapid Application Development (RAD), where making a change and getting it to production happens from an IDE (or not) and takes seconds. On the other hand, we’ve all seen catastrophic production outages, when some developer pushes something to production that should not exist.

In other situations we’ve done extremely maintainable development where nothing is a one-line code change, and releasing what would be a one-line code change to production is an act of sheer will with a lot of moving pieces. The software world likes to do this and does complexity extremely well, thank you very much.

Take a historical example of RAD. In the Java world, JBuilder used to be able to deploy to WebLogic incrementally. In the PHP world, you could modify a file on the web server or locally, then SCP it into the right directory. Either way, you could quickly test that file locally. In the Microsoft world, back in the VB days, you could easily make a change, then hit Run and test it again. Microsoft still leads in the cloud era with the auto swap, but let’s admit it, it ain’t like it used to be.

Take the greatest historical example of maintainable software development. Java EE abstracted you from the hardware and, in exchange, required you to create about 20 nested Zip files (OK, a small exaggeration) and 15 different XML descriptors (not an exaggeration in a sizable app) to test a one-line code change in your Model 2 controller. On the one hand: Look ma, no more someone-did-something-by-mistake in production! And no more buffer under/overruns. On the other hand: it was a productivity suck.

Fast-forward to today, and functional programming for a full-stack application is an interesting productivity suck of its own. Adding a small detail to a Java servlet or C# app is nothing compared to doing it in a purely functional monster. No longer will you pull a header from an injected environment object; oh no, we need to figure out how to do this in an entirely stateless manner that stays “functional” throughout.

Additionally, take a look at Docker. I enjoy Docker. Had Sun Microsystems decided to stick packaging in with Solaris Zones/Containers, then maybe Sun would’ve fully recovered from the dot-com bust and acquired Oracle instead of the other way around.

When Docker is a vital part of your build process, you have a lighter-weight version of the Java EE packaging problem. To make a change, I have to build the change, bring down the container, rebuild the container, and bring the container back up. There is no incremental anything.

For now, enjoy waiting minutes to test an incredibly small change even locally while production remains stable. We live in interesting, but maintainable, times.

This story, “Select one: High efficiency or code you can maintain” was originally published by InfoWorld.


Testing WebSockets with CURL

Tuesday 23 June 2015

Just played around with Socket.io to access a backend service over WebSockets. As you might guess, it didn’t work right from the beginning.

So, I wondered if there’s a way to test the backend with the good old curl command. And yes! There is!

$ curl -i -N \
-H "Connection: Upgrade" \
-H "Upgrade: websocket" \
-H "Host: localhost:8080" \
-H "Origin: http://localhost:8080" \
-H "Sec-WebSocket-Version: 13" \
-H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
http://localhost:8080/

After starting, curl will wait and dump all messages that the server sends.
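For reference, the upgrade request that curl sends here can be assembled by hand. The following Java sketch only builds and prints the handshake; the host, origin and the Base64 key (the sample nonce from RFC 6455) are placeholder values for a local test server, and no connection is opened:

```java
public class WsHandshake {
    public static void main(String[] args) {
        // Raw HTTP upgrade request for a WebSocket handshake.
        // The Sec-WebSocket-Key is a Base64-encoded 16-byte nonce.
        String request =
                "GET / HTTP/1.1\r\n"
                + "Host: localhost:8080\r\n"
                + "Connection: Upgrade\r\n"
                + "Upgrade: websocket\r\n"
                + "Origin: http://localhost:8080\r\n"
                + "Sec-WebSocket-Version: 13\r\n"
                + "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n"
                + "\r\n";  // empty line terminates the header block
        System.out.print(request);
    }
}
```

Writing this request to a raw TCP socket should get you the same `101 Switching Protocols` response that curl shows.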

Running bower behind a firewall

Wednesday 17 June 2015

While working with Bower is nice, working with Bower behind a company firewall is not that nice. It seems that by default Bower tries to download the dependencies from GitHub using SSH or the git:// protocol. Unfortunately, those ports are blocked by many firewalls.

Actually this is not a Bower problem but a Git problem. Bower uses Git to fetch the dependencies.

You can solve the problem by telling Git to use https:// instead of git:// URLs.
cd /project/dir
git config url."https://".insteadOf git://
bower install

This command solves the problem only for your current project.

You can even make this change globally, but I’m not sure you really want to do that. For my personal projects I only use git:// for cloning, and I have two-factor authentication in place for https connections. So this might break my setup, but I’ve never tried it so far.

git config --global url."https://".insteadOf git://

JBoss EAP6: Upgrade to RESTEasy3.x

Tuesday 01 July 2014

When working with JBoss EAP 6.x you might reach the point where you would like to use the latest JAX-RS 2.0 features instead of the provided JAX-RS 1.1. In my case we had to upgrade because of a bug in RESTEasy 2 concerning sub-resource locators when using the proxy clients.

According to the RESTEasy page you simply have to unzip a file that contains the new modules for JBoss. This is a wonderful solution when working locally, but it’s a very bad solution for production servers where the modules are installed through RPM packages. In that case you shouldn’t manually unzip modules over the JBoss installation.

After several attempts at packaging our own RESTEasy modules, we found a solution for packing RESTEasy into the WAR file and leaving the provided RESTEasy 2 installation alone. After all, it’s not that complicated. There are even blog posts about that topic, but some are a bit outdated when it comes to the naming of modules and extensions.

So, here is what’s needed to upgrade JBoss EAP 6.2 to RESTEasy 3.


As we provide our own version of RESTEasy 3, we have to pack it into our WAR file. You’ll need the following dependencies:


Telnet without Telnet

Monday 18 November 2013

Recently I had to verify the connectivity of a new server. So, I logged in over SSH and simply typed

telnet my.server.com 5432

to test the firewall rules. But to my surprise, telnet was not installed on that machine.

So, what now?

OK, we could check if tools like curl or wget are installed, but this won’t help in every case. If you simply want to know whether a certain port is open, you can use the command:

exec 3> /dev/tcp/my.server.com/5432;[ $? == "0" ] && echo ok || echo fail

Not as neat as telnet but does the job 🙂
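The same port probe can be sketched in Java when not even bash tricks are an option. This is a minimal sketch; `my.server.com` and the port are placeholders, and the demo in `main` binds a local listener so the check has something to hit:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate against a listener we control: bind an ephemeral local port.
        try (ServerSocket server = new ServerSocket(0)) {
            System.out.println(isOpen("localhost", server.getLocalPort(), 1000) ? "ok" : "fail");
        }
    }
}
```

Against a real server you would call `isOpen("my.server.com", 5432, 1000)` instead.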

JEE: Connecting to the outside world with JCA connectors – Part 2

Friday 25 October 2013

After philosophizing about application configuration in Part 1, it’s now time to get our hands dirty. We are going to write an outbound resource adapter! The resource adapter is based on my GitHub project outbound-connector, which provides some base classes that reduce the code for new simple resource adapters to almost nothing!

As I still add new features to the outbound-connector project, the code in this blog post is based on a branch called branch-1.0.x and not the master branch.

echo-connector – A simple stupid resource adapter

We will now create a resource adapter that echoes a message. To show that the configuration of url, username and password works, even with multiple configurations of the resource adapter, the echo resource adapter will add this information to the result. You might have noticed already that we’re not going to connect to a remote system at all for this demonstration.

outbound-connector – The base implementation

When writing a resource adapter from scratch, you can’t avoid some boilerplate code. It’s that code that makes your resource adapter look complex. In a simple resource adapter this boilerplate doesn’t do much at all, so it can be placed in a few base classes that hide it from you. That’s exactly what the outbound-connector project does!

The project comes with two main maven projects and some examples.



A resource adapter should always consist of an API project and an implementation project. The remote-system-connector-api contains two interfaces that APIs of new resource adapters can extend. The interfaces are:


These interfaces mainly aggregate some Java interfaces so that you don’t forget to include them 🙂 The API of our new resource adapter will extend these two interfaces and we’re done with the API project.


On the implementation side, the project remote-system-connector provides the base classes listed next. We will extend these for our new resource adapter.


These classes handle the communication and integration with the container. Extending five classes for a new resource adapter looks like a lot of work, but relax: our classes will be almost empty. They simply have to exist to fulfill the JCA contracts.

Define the API for your resource adapter

We create a new Maven project called echo-connector-api. As the only dependency we add the remote-system-connector-api project.


Interface: EchoConnection

We will provide one echo method. We put it in an interface called EchoConnection. The interface extends the Connection interface from the remote-system-connector-api.

package com.ja.rsc.echo.api;

import com.ja.rsc.api.Connection;

public interface EchoConnection extends Connection {
  EchoResponse echo(String text);
}

We use an EchoResponse object as return value. We use this object to store the current configuration properties of the connection. You wouldn’t do this in real life, of course.

package com.ja.rsc.echo.api;

public class EchoResponse {
  private String text;
  private String url;
  private String username;
  private String password;
  // getters and setters omitted
}

Interface: EchoConnectionFactory

The resource that we are going to inject into a web application or an EJB will be a connection factory. We define it in the interface EchoConnectionFactory, which extends the ConnectionFactory located in remote-system-connector-api. You will notice that we only have to define the generic connection type; no methods have to be defined.

package com.ja.rsc.echo.api;

import com.ja.rsc.api.ConnectionFactory;

public interface EchoConnectionFactory extends ConnectionFactory<EchoConnection> {
}

That’s it. The API for the echo-connector is already defined! If you’re writing a resource adapter for a SOAP service, this API project would be the perfect location to generate the Java code from the WSDL of the web service.

Implement the resource adapter

The resource adapter is implemented in a separate Maven project, echo-connector. Dependencies are:


As mentioned above, we need to implement five classes. They all extend base classes found in remote-system-connector.

Class: EchoResourceAdapter

package com.ja.rsc.echo;

import javax.resource.spi.Connector;
import javax.resource.spi.TransactionSupport;

import com.ja.rsc.AbstractAdapter;

@Connector(
    reauthenticationSupport = false,
    transactionSupport = TransactionSupport.TransactionSupportLevel.NoTransaction)
public class EchoAdapter extends AbstractAdapter {
}

The base class handles the life cycle of the adapter. Our class is annotated as @Connector. Define the transaction behavior in the annotation; our simple echo adapter doesn’t support transactions at all. The use of annotations makes a ra.xml descriptor obsolete.

Class: EchoManagedConnectionFactory

The EchoManagedConnectionFactory creates managed connections. Managed connections are used to implement the transaction behavior. We don’t support transactions in this example, which keeps the implementation simple. Even if the following class looks massive, it actually contains only two methods with one statement each.

package com.ja.rsc.echo;

import java.io.Closeable;

import javax.resource.spi.ConnectionDefinition;
import javax.resource.spi.ConnectionManager;
import javax.resource.spi.ConnectionRequestInfo;
import javax.resource.spi.ManagedConnection;
import javax.resource.spi.ManagedConnectionFactory;

import com.ja.rsc.GenericManagedConnectionFactory;
import com.ja.rsc.UrlBasedManagedConnection;
import com.ja.rsc.UrlBasedManagedConnectionFactory;
import com.ja.rsc.UrlConnectionConfiguration;
import com.ja.rsc.echo.api.EchoConnection;
import com.ja.rsc.echo.api.EchoConnectionFactory;

@ConnectionDefinition(
    connectionFactory = EchoConnectionFactory.class,
    connectionFactoryImpl = InMemoryEchoConnectionFactory.class,
    connection = EchoConnection.class,
    connectionImpl = InMemoryEchoConnection.class)
public class EchoManagedConnectionFactory extends
    UrlBasedManagedConnectionFactory<UrlConnectionConfiguration> {

  public EchoManagedConnectionFactory() {
    super(new UrlConnectionConfiguration());
  }

  @Override
  protected Object createConnectionFactory(
      GenericManagedConnectionFactory mcf, ConnectionManager cm) {
    return new InMemoryEchoConnectionFactory(mcf, cm);
  }

  @Override
  protected ManagedConnection createManagedConnection(
      UrlConnectionConfiguration connectionConfig,
      ManagedConnectionFactory mcf,
      ConnectionRequestInfo connectionRequestInfo) {
    return new UrlBasedManagedConnection<UrlConnectionConfiguration, InMemoryEchoConnection>(
        connectionConfig, mcf, connectionRequestInfo) {

      @Override
      protected InMemoryEchoConnection createConnection(
          UrlConnectionConfiguration connectionConfiguration,
          ManagedConnectionFactory mcf,
          ConnectionRequestInfo connectionRequestInfo,
          Closeable managedConnection) {
        return new InMemoryEchoConnection(connectionConfiguration, managedConnection);
      }
    };
  }
}

The @ConnectionDefinition annotation provides information about your resource adapter. It wires the connection interfaces to the connection implementation classes.

Class: InMemoryEchoConnectionFactory

This class is named InMemory because our resource adapter does its work purely in memory and won’t make any remote calls.

package com.ja.rsc.echo;

import javax.resource.spi.ConnectionManager;
import javax.resource.spi.ManagedConnectionFactory;

import com.ja.rsc.AbstractConnectionFactory;
import com.ja.rsc.echo.api.EchoConnection;
import com.ja.rsc.echo.api.EchoConnectionFactory;

public class InMemoryEchoConnectionFactory extends
    AbstractConnectionFactory<EchoConnection> implements
    EchoConnectionFactory {

  public InMemoryEchoConnectionFactory(ManagedConnectionFactory mcf, ConnectionManager cm) {
    super(mcf, cm);
  }
}

This class allocates connections from the connection pool. Luckily this has been consolidated in the base class AbstractConnectionFactory that we extend here, so we don’t have to implement any code besides a constructor.

Class: InMemoryEchoConnection

Now things are starting to get interesting. This class is where you place the actual business code of the resource adapter. The base class UrlBasedConnection provides access to our properties url, username and password. Nice, isn’t it? They’re just there waiting for you 🙂 You could use them to open a connection to the remote system. For the sake of demonstration we store the configuration values in the EchoResponse.

package com.ja.rsc.echo;

import java.io.Closeable;

import com.ja.rsc.UrlBasedConnection;
import com.ja.rsc.UrlConnectionConfiguration;
import com.ja.rsc.echo.api.EchoConnection;
import com.ja.rsc.echo.api.EchoResponse;

public class InMemoryEchoConnection extends
    UrlBasedConnection<UrlConnectionConfiguration> implements
    EchoConnection {

  public InMemoryEchoConnection(
      UrlConnectionConfiguration connectionConfiguration,
      Closeable closeable) {
    super(connectionConfiguration, closeable);
  }

  @Override
  public EchoResponse echo(String text) {
    EchoResponse response = new EchoResponse();
    response.setText(text);
    // url, username and password are copied from the connection
    // configuration here; those lines are omitted for brevity
    return response;
  }
}

With this the implementation of the echo resource adapter is done.

Configure an echo connection

The code of the resource adapter can now be compiled and deployed. The package will be of type RAR and can be deployed like an application in a JEE container. After the deployment, connection pools and connection resources have to be defined inside your JEE container. How to do this depends on the JEE container; I will only describe the configuration steps for Glassfish. In the examples there are scripts that will do it through command-line tools for Glassfish and JBoss/WildFly.

Configure a Connector Connection Pool

  1. After deploying the RAR file, open the Glassfish admin console, go to Resources -> Connectors -> Connector Connection Pools and click New.
  2. Select a pool name, select the echo-connector resource adapter in the drop-down and click Next.
  3. On this page you can configure the connection pool as you know it from database connections: minimum and maximum connections and so on. At the bottom of the page we find the three properties that we accessed in the EchoConnection before. Add some values.
  4. Click Finish.

Please note that the configuration parameters for the connectivity are no longer part of your application. They’re completely separated! It’s up to the operator or the deployer to configure them. The application developer doesn’t have to know production passwords anymore, which might be a good thing!

Configure a Connector Resource

The resource we configure here is what we are going to inject into a web application or an EJB.

  1. Go to Resources -> Connectors -> Connector Resources and click New.
  2. Define a JNDI name and select the connection pool you created before. The JNDI name is going to be used later in @Resource annotations.
  3. Click OK.

The echo resource adapter is now configured and can be used inside web applications and EJBs.

Using the resource adapter

You can now access the echo connection by injecting an EchoConnectionFactory that provides access to the EchoConnection. The name attribute of the @Resource annotation is the JNDI name we specified before. You can inject the EchoConnectionFactory into servlets or EJBs, for example.

@Resource(name = "jca/echo")
private EchoConnectionFactory echo;

Connections always have to be closed after use, so place your code inside a try-with-resources block or close the connection in a finally block.

try (EchoConnection connection = echo.getConnection()) {
  EchoResponse response = connection.echo(text);
  return response.toString();
} catch (Exception e) {
  throw new WebApplicationException(e, Status.INTERNAL_SERVER_ERROR);
}

The complete example

The working sample code with echo connector and demo REST application can be found on GitHub in the examples directory.

There are (un)deploy scripts that work with Glassfish and JBoss/WildFly. They will deploy the resource adapter and the demo application and configure two connections with different configuration parameters. The different connections can be accessed through the demo application by going to their URLs.

If you want to test in your console then call the scripts testEcho.sh and testEcho2.sh:

$ ./deploy.sh glassfish
$ ./testEcho.sh
$ ./testEcho2.sh


We are done. What have we achieved?

  • Connectivity configuration is separated from the application
    • Less application configuration, maybe even none at all
    • Same WAR/EAR/RAR for all (testing) environments
  • Configuration similar to data sources
  • Usage similar to data sources
  • Resource adapter and application can be released independently
  • Faster builds

The application doesn’t know which echo implementation it’s talking to. The implementation can be replaced by the deployer or the operator without touching the application or the application configuration at all! That’s just perfect for testing environments, for example, where certain peripheral systems might not be available on every stage.

I think it’s worth investing some time in learning to work with resource adapters, even though it’s not that obvious at the beginning.

JEE: Connecting to the outside world with JCA connectors – Part 1

Tuesday 01 October 2013

In most JEE projects I have come across so far, there were issues with the configuration. The handling is somehow cumbersome and error-prone. Feeding changes to all configuration files for all testing environments is not my favorite work.

Two kinds of configuration data

The configuration of an application consists of some business configuration (if any at all) and connectivity configuration for accessing remote systems. Please have a look at your own configuration files; I hope you’ll agree.

In a distributed world you’ll have web applications or EJBs that need to connect to remote systems. You’ll have to configure this access somewhere. In real life for accessing these properties people might:

  • Inject a JNDI entry
  • Read a system property
  • Build your own configuration mechanism

Whereas JNDI would be the JEE way of doing this, people often want to have their properties in one single property file. The JNDI approach gets rated as too complicated, especially when it comes to testing. In most cases people start to write their own configuration framework, either simply for loading properties as system properties or for loading and accessing them. That’s when things get out of control:

  • You can’t put the properties files into your WAR or JAR because URLs, users and passwords will be different in production than in test
    • Therefore you place them outside the JEE container somewhere on the file system, which in my opinion is a bad habit
  • You’ll provide some singleton class for accessing the configuration
    • Unit tests will start to fail just because some other test initialized that class with different values
    • Singletons (not the JEE ones) are evil anyway
  • You will test less because configuration is a pain in the ass

Have a close look at your current application configuration

What to do? When I looked closely at application configuration, I noticed that it is mostly just connectivity configuration. There’s also configuration concerning the business logic, but do we really need that?

Business configuration data

Business configuration configures the business behavior of your code. In my experience these configuration values are set once by the developer and never get changed again, not for test and not even for production. If they do get changed, then a developer is doing the change, maybe even only in the trunk of your source code. Therefore such a configuration property is actually not needed at all if you use it as a default in your code. This blog is not about this kind of configuration and how to access it (I would prefer JNDI), so I’ll stop here.
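Treating a business setting as a default in code can be as simple as the following sketch. The property name app.maxLoginAttempts is made up for this example; in a container you could overlay the same default with a JNDI lookup instead of a system property:

```java
public class BusinessConfig {
    // The default lives in the code; a system property (or a JNDI entry in a
    // container) can still override it in the rare case that's needed.
    static final int MAX_LOGIN_ATTEMPTS =
            Integer.getInteger("app.maxLoginAttempts", 3);

    public static void main(String[] args) {
        System.out.println(MAX_LOGIN_ATTEMPTS);
    }
}
```

Without `-Dapp.maxLoginAttempts=...` on the command line, the in-code default of 3 is used, so no configuration file is needed at all.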

Connectivity configuration data

The configuration that changes often on different test and prod environments is connectivity configuration. This might be urls of remote systems, users and passwords.

Why are these values in our application configuration? Ever used a database connection? Yes? What do we do with Data Sources?

  • We configure it in the container and inject it into the code
  • The database configuration is outside our application configuration.

If we could do this with the configuration for our other remote systems the common application configuration might become obsolete 🙂

Treat remote systems like Data Sources

Now let’s go JEE. A database is a remote system or a remote resource, so let’s do the same with all remote resources. JDBC is a very old standard. With JEE, the Java EE Connector Architecture (JCA) was introduced for accessing remote resources. Actually, JDBC is (almost) the same as a JCA connector. To make this clear, have a look at JBoss. JBoss uses IronJacamar as its JCA implementation. To define a data source you can deploy it as an IronJacamar connector, so on JBoss your database connection is realized as a connector.


OK, let’s assume we put our remote connection code into JCA connectors. What are the benefits of that, except more (Maven) projects, more code, more complexity?

  • The business code works on connection objects and does not care about how this connection is established
    • This is the same handling that we already know from JDBC Data Sources
  • The business code can easily be mocked for unit testing (but this might also be true with the current approach)
  • In a JEE container you can mock the remote system by implementing a second connector with the same API
    • This might come in handy if a remote system is not available on all test environments. No code or configuration change is needed in your business code.
  • If multiple modules connect to the same remote system, deploy the connector once and create instances with different properties for each application in the container
  • Code generation (e.g. from WSDL) is done in the connector API project. Your business code project is free of WS frameworks for code generation.
  • Confidential configuration like user names and passwords is no longer in the application configuration and under the control of the developer
    • It’s now under the control of the application deployer or the operator. Of course this could still be the same developer, but in a different role at a different time.

Counter-arguments you’ll face

If you mention connectors or JCA in your office, people will probably call you crazy. You’ll hear counter-arguments like:

  • This is too much overhead
  • This is over-engineered
  • This is too complicated
  • This makes configuration complicated

Yes, there is overhead. You’ll end up with several new (Maven) projects; I would suggest two per remote system, one for the API and one for the connector implementation. The benefit is that these projects are quite simple, and the pom.xml files, for example, are easy to understand even if we do some kind of source generation from WSDL. On the release management side you can provide bug fixes without rebuilding and redeploying your business application; you simply deploy the connector.

Is it over-engineered? I would call it well-engineered. It’s JEE to the core! It splits your code into smaller artefacts and you’ll get a better separation of concerns.

Is it complicated? Well, you should have a look at a JCA example before you start, or base your connectors on a base implementation. Adam Bien provides a very simple example of a connector in his connectorz project. It’s not that complicated at all. But yes, when it comes to deployment there’s more to deploy than one single WAR file. In my opinion you should prefer the additional deployment complexity to a huge and messy project (and pom.xml file) that mixes business code with connectivity.

When it comes to configuration, I say it makes configuration easier. Your application configuration gets smaller or even disappears, and depending on your company’s setup you are not responsible for configuring connectivity matters. If you’re still responsible, this might be because you act in the JEE roles of an application component provider and an application deployer at the same time. This is OK, no worries, but always keep in mind what role you’re currently playing and what responsibilities it includes!

Let’s write a connector

As this post is already quite long I will postpone the code to a later blog post.

In Part 2 I will introduce a base implementation I wrote that is based on the connectorz project. It will provide base classes for creating connectors that only use url, user and password as configuration parameters. With only a little effort you’ll be able to create additional connectors that connect to remote systems.

So long… and please have a close look at your configuration files. 

RaspberryPi with Faytech Touchscreen

Wednesday 13 February 2013

Just a quick note for anybody who’s struggling with Touchscreens on the RaspberryPi.

I just downloaded the latest Raspbian Image 2013-02-09-wheezy-raspbian.zip and started it. My Faytech 7” Touchscreen just works out of the box and it even works with JavaFX applications. No more kernel compilation!

One tiny thing you might want to do is calibrate the touch controller. xinput_calibrator will do the job, but you have to compile it yourself.

sudo apt-get install libx11-dev libxext-dev libxi-dev x11proto-input-dev
wget http://github.com/downloads/tias/xinput_calibrator/xinput_calibrator-0.7.5.tar.gz
tar xzf xinput_calibrator-0.7.5.tar.gz
cd xinput_calibrator-0.7.5
./configure
make
sudo make install

Touch the points until the application closes. Follow the instructions on the console output and create the configuration for X11. The directory for the configuration file is /usr/share/X11/xorg.conf.d instead of the directory shown by xinput_calibrator.

Reboot and you’re done!