
Saturday, September 27, 2008

Hibernate Shards, Maven a Simple Example

In my career as a Java Developer, I have encountered cases where data of a particular type is split across multiple database instances having the same structure but segregated by some logical criteria. For example, if we are in a health organization, it is possible that member (you and me, patients) data for one health plan A is in one database while member data for health plan B is in another database.
There could be many reasons for the above. It is possible that there is just too much data for a single database, possible that health plan A just does not want their data mixed with health plan B or maybe the network latency for Health Plan B to store the data in the same database might be high.
Sharding can be thought of as segregating and partitioning data across multiple databases, driven by functional or non-functional forces.
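For the non-functional case (too much data for one database), a common partitioning scheme is modulo-based routing. A minimal sketch, assuming numeric entity ids; the class and method names here are mine, not from any framework:

```java
public class ShardMath {
    // Route an entity to one of shardCount shards by taking its id modulo
    // the shard count. Deterministic: the same id always lands on the same shard.
    public static int shardFor(long id, int shardCount) {
        if (shardCount <= 0) {
            throw new IllegalArgumentException("shardCount must be positive");
        }
        // Assumes non-negative ids, as is typical for generated keys
        return (int) (id % shardCount);
    }
}
```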
When a data consumer is talking to an architecture where the data is sharded, one typically experiences one of the following cases:
  1. The application at any time requires interaction with a single shard only. For example, in a health plan application that supports Health Plan A and Health Plan B, a query to find members will always be restricted to a particular plan. In such a case, simple routing logic can assist in directing the query to the particular database. I had a similar challenge in a previous engagement; my favorite framework Spring and its Routing DataSource helped solve the problem. The blog by Mr. Mark Fisher was the direction that I followed.
  2. The application needs to interact with data that is part of both databases. For example, to obtain data about members stored in Health Plan A or Health Plan B that match particular criteria. This requirement is more complicated, as one needs to obtain result sets from both databases.
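The routing idea in case 1 can be sketched without Spring: a ThreadLocal holds the current plan, and a router picks the matching DataSource key. The class and key names below are mine; Spring's AbstractRoutingDataSource does the equivalent against real DataSources via determineCurrentLookupKey():

```java
import java.util.HashMap;
import java.util.Map;

public class PlanRouter {
    // The "current plan" for the executing request, set by application code
    // (e.g. a servlet filter) before any data access happens.
    private static final ThreadLocal<String> CURRENT_PLAN = new ThreadLocal<String>();

    // Plan name -> DataSource key; a routing DataSource keeps an analogous
    // map of lookup keys to actual DataSources.
    private final Map<String, String> targets = new HashMap<String, String>();

    public PlanRouter() {
        targets.put("PLAN_A", "dataSourceA");
        targets.put("PLAN_B", "dataSourceB");
    }

    public static void setCurrentPlan(String plan) {
        CURRENT_PLAN.set(plan);
    }

    // Every query issued on this thread is routed to the current plan's database
    public String routeToDataSource() {
        return targets.get(CURRENT_PLAN.get());
    }
}
```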
Hibernate Shards is a project that facilitates working with sharded database architectures by providing a single transparent API to the horizontally partitioned data set. Hibernate Shards was created by a group of Google engineers who have since open sourced their effort, with the goal of reaching GA soon with assistance from the open source community. More documentation regarding the architecture, concepts and design can be read on the Hibernate Shards site.
The Requirements: We are off to the World Cup of soccer. Every country playing in the tournament maintains a database of its players. We need to develop an application for FIFA that will allow each country to create and maintain players and also obtain information about players from other countries. A database instance is provided for each country, but the country does not have direct access to insert or query data against it; instead it must use the FIFA application for all operations.
The Example: Denoting a Player in the space of the application, we define a POJO called NationalPlayer. The NationalPlayer has some interesting attributes,
  • First and Last Name of the Player
  • Maximum Career Goals scored by the player
  • His individual ranking in the world as a player
  • The country he plays for
The Country that the player plays for indicates the shard database that his data resides on.
@Entity
@Table (name="NATIONAL_PLAYER")
public class NationalPlayer {
  @Id @GeneratedValue(generator="PlayerIdGenerator")
  // Shard-aware id generator, e.g. Hibernate Shards' ShardedUUIDGenerator
  @GenericGenerator(name="PlayerIdGenerator",
      strategy="org.hibernate.shards.id.ShardedUUIDGenerator")
  private BigInteger id;

  @Column (name="FIRST_NAME")
  private String firstName;

  @Column (name="LAST_NAME")
  private String lastName;

  @Column (name="CAREER_GOALS")
  private int careerGoals;

  @Column (name="WORLD_RANKING")
  private int worldRanking;

  @Column (name="COUNTRY", columnDefinition="integer", nullable=true)
  @Type(type = "com.welflex.hibernate.GenericEnumUserType",
      parameters = {
        @Parameter(name = "enumClass", value = "com.welflex.model.Country"),
        @Parameter(name = "identifierMethod", value = "toInt"),
        @Parameter(name = "valueOfMethod", value = "fromInt")
      })
  private Country country;

  public Country getCountry() { return country; }

  public NationalPlayer withCountry(Country country) { = country;
    return this;
  }
}
We define a simple Java 5 enum type denoting the different countries that are participating in the World Cup. Sadly, we have only 3: India, USA and Italy for the 2020 World Cup.
public enum Country {
  INDIA(0), USA(1), ITALY(2);

  private int code;

  Country(int code) {
    this.code = code;
  }

  public int toInt() {
    return code;
  }

  // Used by the entity mapping's valueOfMethod
  public static Country fromInt(int code) {
    for (Country c : values()) {
      if (c.code == code) return c;
    }
    throw new IllegalArgumentException("Unknown code: " + code);
  }
}
The application uses three sharded databases for the three countries participating in the World Cup, and each database is defined using a hibernate configuration file: hibernate0.cfg.xml for India, hibernate1.cfg.xml for USA and hibernate2.cfg.xml for Italy. To keep the example easy, we have decided that the country codes defined in the enum map to the hibernate configs. For example, India is country 0 as denoted by the enum and it maps to shard database 0. One could maintain a map for the same if required.
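The code-to-config convention described above amounts to a one-line mapping. A quick sketch; the helper class name is mine, not from the project:

```java
public class ShardConfigs {
    // Country code n maps by convention to hibernate{n}.cfg.xml,
    // e.g. INDIA (code 0) -> hibernate0.cfg.xml
    public static String configFor(int countryCode) {
        return "hibernate" + countryCode + ".cfg.xml";
    }
}
```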
For example, the hibernate configuration for the Indian database is as follows:
 <session-factory name="HibernateSessionFactory0">
  <property name="dialect">org.hibernate.dialect.HSQLDialect</property>

  <property name="connection.driver_class">org.hsqldb.jdbcDriver</property>
  <property name="connection.url">jdbc:hsqldb:mem:shard0</property>
  <property name="connection.username">sa</property>

  <property name="connection.password"></property>
  <property name="">update</property>
  <property name="hibernate.connection.shard_id">0</property>

  <property name="hibernate.shard.enable_cross_shard_relationship_checks">true</property>
  <property name="hibernate.show_sql">true</property>
  <property name="hibernate.format_sql">true</property>

  <property name="hibernate.jdbc.batch_size">20</property>
 </session-factory>
In order to route data to the appropriate database for storage, we define a custom shard selection strategy that uses the country code of the NationalPlayer object to route persistence to a particular database as shown below:
public class ShardSelectionStrategy extends RoundRobinShardSelectionStrategy {

  public ShardSelectionStrategy(RoundRobinShardLoadBalancer loadBalancer) {
    super(loadBalancer);
  }

  public ShardId selectShardIdForNewObject(Object obj) {
    if (obj instanceof NationalPlayer) {
      // The country code doubles as the shard id
      return new ShardId(((NationalPlayer) obj).getCountry().toInt());
    }
    return super.selectShardIdForNewObject(obj);
  }
}
To easily work with Hibernate, we have a HibernateUtil class that factories sharded Sessions and also Sessions to individual databases. How shards are selected/delegated to can be customized by providing an implementation of the interface ShardStrategyFactory. In the example, we have only chosen to customize the selection strategy.
To test our example, we have some unit tests. The tests will ensure the following are working:
  • When Players are persisted, they are stored only in the appropriate database instance. In order to ensure the same, a direct connection to the target sharded database is obtained to ensure the existence of the persisted player. In addition, direct connections to the other databases in the shards are obtained to ensure that the player in question has not been inserted there.
  • Queries executed against the shard will accurately obtain data from all the sharded databases. The tests will ensure that all databases in the shard are being accessed.
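Conceptually, a cross-shard query is scatter-gather: run the criteria on each shard, concatenate the results, and sort the merged list in memory if ordering is requested. A plain-Java sketch of the gather step; the class and method names are illustrative, not Hibernate Shards API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ShardGather {
    // Concatenate the per-shard result lists, then sort the merged list.
    // Mirrors what a sharding layer must do for an "order by" across shards,
    // which is why results must be Comparable.
    public static <T extends Comparable<T>> List<T> gatherSorted(List<List<T>> perShardResults) {
        List<T> merged = new ArrayList<T>();
        for (List<T> shardResult : perShardResults) {
            merged.addAll(shardResult);
        }
        Collections.sort(merged);
        return merged;
    }
}
```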
The Players in the unit test are as follows:
NationalPlayer indiaPlayer = new NationalPlayer().withCountry(Country.INDIA)

NationalPlayer usaPlayer = new NationalPlayer().withCountry(Country.USA)

NationalPlayer italyPlayer = new NationalPlayer().withCountry(Country.ITALY)
The Persistence test is the following:
public void testShardingPersistence() {
    // Save all three players through the sharded session (via HibernateUtil)
    Session session = HibernateUtil.getSession();
    Transaction tx = session.beginTransaction();;;;
    tx.commit();
    session.close();

    BigInteger indiaPlayerId = indiaPlayer.getId();
    System.out.println("Indian Player Id:" + indiaPlayerId);
    BigInteger usaPlayerId = usaPlayer.getId();
    System.out.println("Usa Player Id:" + usaPlayerId);
    BigInteger italyPlayerId = italyPlayer.getId();
    System.out.println("Italy Player Id:" + italyPlayerId);

    assertNotNull("Indian Player must have been persisted", getShardPlayer(indiaPlayerId));
    assertNotNull("Usa Player must have been persisted", getShardPlayer(usaPlayerId));
    assertNotNull("Italy Player must have been persisted", getShardPlayer(italyPlayerId));

    // Ensure that the appropriate shards contain the players
    assertExistsOnlyOnShard("Indian Player should have existed only on shard 0", 0, indiaPlayerId);
    assertExistsOnlyOnShard("Usa Player should have existed only on shard 1", 1, usaPlayerId);
    assertExistsOnlyOnShard("Italian Player should have existed only on shard 2", 2, italyPlayerId);
  }
The Simple Criteria based tests are the following:
  public void testSimpleCriteria() throws Exception {
    Session session = HibernateUtil.getSession();
    Transaction tx = session.beginTransaction();

    try {
      Criteria c = session.createCriteria(NationalPlayer.class)
        .add(Restrictions.eq("country", Country.INDIA));

      List<NationalPlayer> players = c.list();
      assertTrue("Should only return the sole Indian Player", players.size() == 1);
      assertContainsPlayers(players, indiaPlayer);

      c = session.createCriteria(NationalPlayer.class)
        .add("careerGoals", 50));
      players = c.list();

      assertEquals("Should return the usa and india players", 2, players.size());
      assertContainsPlayers(players, indiaPlayer, usaPlayer);

      c = session.createCriteria(NationalPlayer.class)
        .add(Restrictions.between("worldRanking", 5, 15));
      players = c.list();

      assertEquals("Should only have the usa player", 1, players.size());
      assertContainsPlayers(players, usaPlayer);

      c = session.createCriteria(NationalPlayer.class)
        .add(Restrictions.eq("lastName", "Acharya"));
      players = c.list();

      assertEquals("All Players should be found as they have same last name", 3, players.size());
      assertContainsPlayers(players, indiaPlayer, usaPlayer, italyPlayer);

      tx.commit();
    } catch (Exception e) {
      tx.rollback();
      throw e;
    } finally {
      if (session != null) {
        session.close();
      }
    }
  }
Running The Example: The example is a Maven 2 project. JDK 1.6.X was used to develop/test. The database of choice for the example is HSQL; why would anyone select Oracle or any other database :-). One can obtain the project from HERE. Simply run "mvn test" to see the tests being run, and/or import the code into Eclipse using Q4E or your favorite Maven Eclipse plugin. As a note, the examples are simply examples.
The Parting: A requirement for using Hibernate Shards is Java 1.5 or higher. I am not quite sure why this requirement exists, as one does not necessarily need to use annotations or Java 5 features for hibernate configuration. A todo for me. Session Factory configuration via JPA is apparently not supported as yet. Different ID generation strategies can be used. One interesting problem in sharding is sorting of the results. Hibernate Shards works around the same by insisting that all objects returned by a Criteria query with an Order By clause implement the Comparable interface. Sorting will only occur after obtaining the result set from each member of the shard within Hibernate Shards. From the documentation, it appears that "Distinct" clauses are not supported as yet. HQL is not well supported as yet. For example, I wanted to execute a "delete from NationalPlayer" across the shards, i.e., clear the table on every database; I could not execute the same, with an unsupported-operation exception being the result. The documentation recommends staying away from HQL if possible. Cross-shard relationships are not yet supported. What this means is that if object A is on shard 1 and object B is on shard 2, one cannot create an association between them. I have not encountered a need for this yet. Links,
Hibernate Shards Documentation
Nice Read on Sharding
Now if only I can get my Jersey example working...

Wednesday, September 24, 2008

JAXWS Example with Maven and Spring

For the past few months, I have been quite far away from the SOAP world, with my primary focus being on REST, looking at JAXRS implementors, Restlet etc. The other day, I attended the Utah Java User's Group (UJUG) meeting. The UJUG is managed superbly by Mr. Chris Maki and I quite enjoyed the presentations, which centered around how different companies implemented web services and their stacks.

One company in particular had me interested in their web service stack. The LDS church uses JAXWS with JDK 1.6. The presenter, although he did not enter into details, did mention how easy it is to create and consume SOAP web services with JAXWS. One of the things he mentioned was that they did not need to step outside the JDK in order to create SOAP based web services. They did not feel the need for external implementations like CXF etc.

In the projects that I have worked with SOAP, the words, simple or easy have never come to mind :-) So what was this gentleman talking about? I had to find out. Would have loved to chat with him more but could not, so here I go finding out the hard way :-).

So what is JAXWS? JAXWS (Java API for XML Web Services) can be thought of as a Java programming API to create web services. JAXWS uses annotations (introduced in Java SE 5) in the creation of end points and service clients, and JAXWS 2.0 is bundled with Java SE 6. JAXWS 2.0 replaced, or rather encompassed, the JAX-RPC API. For more details on the same, look at this developerWorks article. JAXWS uses JAXB 2.0 for data binding.

Simple JAXWS Example:
Like my JAXRS Maven/Spring examples, the JAXWS web service is implemented using a
multi module maven project with a structure as shown below:

|_ client - Contains generated code from WSDL
|_ service - Contains model and business logic
|_ webapp - Contains Web service wrapper, servlet defs etc

For the example, I am using the JAXWS-Maven plugin.
The plugin has two goals that the example uses: a. wsgen, which reads a service end point
class and generates service artifacts, and b. wsimport, used to generate the consumer code.

The webapp maven module contains JAXB annotated DTO's like an OrderDto and LineItemDto. The
Web service is exposed via a simple wrapper class:

 @WebService
 public class OrderWebService {
   private static final Logger log = Logger.getLogger(OrderWebService.class);

   @Autowired private MapperIF beanMapper;
   @Autowired private OrderService serviceDelegate;

   public OrderDto getOrder(Long orderId) throws OrderWebException {
     Order order = null;
     try {
       order = serviceDelegate.getOrder(orderId);
     } catch (OrderNotFoundException nfe) {
       throw new OrderWebException(nfe.getMessage());
     }
     // Map the domain Order to a JAXB annotated DTO
     OrderDto orderDTO = (OrderDto), OrderDto.class);
     return orderDTO;
   }
 }

The @WebService annotation indicates that the above class is a Web Service end point. The service delegate does the heart of the service work, with the OrderWebService class acting only as a front. As seen above, we are using Spring and auto-wiring for the delegate and bean mapper. One other thing to note is that we have a class called OrderWebException; this class is annotated with @WebFault, indicating that this exception is raised when there is a web fault.

I have a web service layer and an actual service layer, as I feel there are some advantages to such an approach. I have discussed the same in my earlier blogs.

During the maven build process, the jaxws maven plugin will come into play and generate
service artifacts. The maven plugin is configured as follows:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxws-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>wsgen</goal>
      </goals>
      <configuration>
        <!-- The name of your generated source package -->
        <packageName>com.weflex.service</packageName>
        <sei></sei>
        <keep>true</keep>
        <genWsdl>true</genWsdl>
      </configuration>
    </execution>
  </executions>
</plugin>

The generator will generate the WSDL and web service artifacts. The schema itself is created
at ${base.dir}/target/jaxws/wsgen/wsdl.

One point that is worth noting is that, in order to launch the web service for testing,
there is no need for a separate Servlet container, web.xml etc. JDK 1.6.X provides for
a simple container that will allow you to test the service. One can bind the service using:

public class OrderWebService {
  public static void main(String args[]) {
    // Publish the endpoint on the JDK's built-in lightweight HTTP server
    Endpoint.publish("http://localhost:8080/ws", new OrderWebService());
  }
}

Voila, then you connect to the service using SOAP UI, or your favorite SOAP tool, and test the service.

Now the client code. Unlike in my REST examples, where one wrote the client code explicitly, in this example the client is entirely generated from the service WSDL. One is not required to use auto generation; it seems easy to me though.

The wsimport goal can create a Web Service client when provided a WSDL. The client project is dependent on the webapp project. Once the webapp project is built, a WSDL is created in the above mentioned target directory. The plugin on the client is configured to point to this WSDL in order to generate client side artifacts, as shown below:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxws-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>wsimport</goal>
      </goals>
      <configuration>
        <wsdlUrls>
          <wsdlUrl>../../jaxws-example/webapp/target/jaxws/wsgen/wsdl/OrderWebServiceService.wsdl</wsdlUrl>
        </wsdlUrls>
      </configuration>
    </execution>
  </executions>
</plugin>

Now that we have the client, the integration test project can run which starts the
web service and issues requests from the client to the service.

The integration test creates a client via the following call:

URL wsdlLocation = new URL(INTEGRATION_TEST_URI + "?wsdl");
ORDER_CLIENT = new OrderWebServiceService(wsdlLocation,
      new QName("", "OrderWebServiceService"));

The default generated client internally points to the WSDL on our file system. The URL above
overrides the same to point to an actual server instance.
Running the Example:
You need JDK 1.6.X and Maven 2.0.8 installed. From the root of the project directory,
issue a "mvn install" and you should see the entire project being built with the
integration tests running. The project itself can be downloaded from HERE.

Some of my friends will echo my thoughts with the pains they have had with web service
creation using certain vendor tools. The promise is always, "Watch while I create a
Web Service in just two minutes!". JAXWS also tends to address the performance of SOAP
web services; take a look at the article on implementing high performance web services with JAXWS.

With JAXWS, I do not have to extend any particular class in order to expose a POJO as
a Web Service. The ability to instantly test my web service without an external
Servlet container starting up, +1. The ease of generating the client code,
another +1. Maven plugin, +1. Spring integration, +1. Easy deployment without a
heavyweight container, +1. The plugin, however, did not seem to provide for easy mapping of
namespaces to custom Java packages with wsimport.

JAXWS also provides for easier than before implementation of WS-* features if you need them.

I am especially interested in the performance characteristics of JAXWS using JAXB 2.0. I have always believed that one of the major performance penalties paid with SOAP web services has been on the marshalling/unmarshalling front. I need to do some performance benchmarks, I guess.

Wednesday, September 17, 2008

One for the worker bee...

Ok! Time to confess. Nope, I am not gay, I am a simple worker bee and proud of the same.

The worker bee in nature or an organization is responsible for taking orders, innovations and direction from their superior bees and simply executing them, no questions asked. I think of the SS as an extreme parallel.

I must admit, I am a worker bee of sorts by birth and by nature. My younger bro inherited the brains of the family. I remember that for his 10th grade project the guy created a graphing library in BASIC. I, on the other hand, had a simple video cassette library where people could check in/check out. Sadly for the world of CS, my bro decided to head toward management and sales. If only he were in CS, we would have seen a Rod Johnson or, maybe ambitiously enough, a Gosling. Sadness!!!! He had the making of an uber bee.

Me, on the other hand, I have been a hard worker all my life. I work like a maniac at times to achieve a goal. I get it wrong, I get it more wrong, but then I get it right... yep, sometimes it's trial and error, no logic, just trial and error. But heh, I don't quit till it's done, the worker bee in me... aah, now I am rhyming...

Some folks want a pure worker bee, a non-questioning, non-confrontational worker bee, incapable of revolting :-)... I must leave the allegiance of the worker bees temporarily as I cannot classify with the herd and instead prefer to make a faction. I am a questioning worker bee (new genus, not genius). If my general or queen bee says, "Just kill them", I would ask "Why? What's the reason? What for? How do we benefit from the same? Is there an alternative and, definitely, show me the money!!!". I am still the bee so don't abandon me just as yet :-). But yes, I am also a watchful bee; I can gauge properly whose "" is on the line. If it ain't mine, I ain't gonna make a loud noise. A bee who knows his boundaries, I figure.

Worker bees are extremely valuable for any endeavour; they ensure the end goal is met. Think teamwork. Einstein or Tesla might have come up with the best of inventions to facilitate time travel, but if Mr. Worker Bee does not get the bolt in correctly, the mechanism can expect quite contrasting results :-).

We worker bees have a very important place in any endeavour. Don't ever think less of us. We might not be major decision makers, but our minor decisions can make major decisions reflect as either good or bad. Not a threat, but a call for respect. Also, every bee other than the top bee is a worker bee. Right?

I sadly am not very gifted, mentally or physically. I have a brain that can execute some tasks; tasks that fall within my comprehension level at the moment. I am a bee that is growing with experience. Does it mean that I cannot rise to an occasion? Does it mean I cannot exceed your expectations at any given time? Give me time, I work hard, I will succeed... I am a worker bee. I have been in the trenches, I have been on the clouds, I am a proven bee.

Reminds me of an ole saying: "Kick an old man on the butt, you know what he is! Don't kick a young man on his butt, you don't know what he'll become!" I feel people have moments of greatness all through their lives, some more than others.

Worker bee signing out... here I come :-)

Sunday, September 14, 2008

Java Instrumentation with JDK 1.6.X, Class Re-definition after loading

It was recently brought to my notice that classes can be redefined after the VM has started and the class has been loaded, thanks to a JDK 1.6.X feature introduction. In other words, if we have a loaded class whose simple method greet() returns "Hello Sanjay", it can be changed to return "Hello world". I knew that you could redefine a class at load time, but that a class could be changed after it has loaded was news to me. We could do the same before with JPDA, where classes were hot swapped, but not with Instrumentation, at least that is my understanding. In a previous blog of mine, I had discussed the redefinition of classes to augment them with profiling information; the same was done at class load time using an agent and a byte code enhancement library, javassist.

Armed with the ammunition from above, I explored the changes. With Java 6, one can load a Java agent after the VM has started. This means we have control over when the agent is loaded. Redefinition of classes is supported by the following addition to the instrumentation package in Java 6, available on an instance of the java.lang.instrument.Instrumentation interface:

Instrumentation.addTransformer(ClassFileTransformer, boolean)

With the above, the research begs the following questions:

How do we enhance the byte code of a class?
A class's byte code can be enhanced using a byte code engineering library. There are many popular libraries, such as javassist, asm and bcel, that can be used to alter a class.
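Whichever library does the enhancement, it plugs into the JDK through java.lang.instrument.ClassFileTransformer, the type accepted by Instrumentation.addTransformer. A minimal transformer that leaves classes untouched (returning null tells the JVM "no change"); the class name is mine:

```java
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

public class TracingTransformer implements ClassFileTransformer {
    public byte[] transform(ClassLoader loader, String className,
            Class<?> classBeingRedefined, ProtectionDomain protectionDomain,
            byte[] classfileBuffer) {
        // A real transformer would hand classfileBuffer to javassist/asm/bcel
        // and return the enhanced bytes. Returning null means "leave the class as-is".
        System.out.println("Asked to transform: " + className);
        return null;
    }
}
```

The transform method is invoked by the JVM for every class load (and retransformation) once the transformer is registered via addTransformer.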

How do we transform the class at run time with the enhanced byte code?
Consider the following example that describes the process,

// Obtain altered class byte code
byte bytes[] = addProfiling(Foo.class);

// Definition indicating class to be transformed and the new definition
ClassDefinition definition = new ClassDefinition(Foo.class, bytes);

// Ask the instrumentation library to redefine the classes
instrumentation.redefineClasses(definition);
The addProfiling(Foo.class) method returns the byte code of Foo.class enhanced with profiling information.
NOTE: One thing we cannot do when redefining a class is change its "SCHEMA". In other words, we cannot add or remove methods, etc. When using JPDA for debugging, you might remember that although you could change code inside an existing method, attempting to change the signature would not work.

How does one obtain a handle to the java.lang.instrument.Instrumentation instance to redefine the classes?

In my previous instrumentation example blog, the agent class was initialized via the following method public static void premain(String agentArgs, Instrumentation inst). With Java 6 in order to start the agent after the VM has started, the Agent class needs to define the method:

public static void agentmain(String agentArgs, Instrumentation inst)
Both the methods shown above provide an instance of the Instrumentation interface and thus a place to obtain a handle to the Instrumentation implementation.

An agent class with the following can facilitate the re-transformation of classes:

public class Agent {
  private static Instrumentation instrumentation;

  public static void agentmain(String agentArgs, Instrumentation inst) {
    Agent.instrumentation = inst;
  }

  public static void redefineClasses(ClassDefinition ...defs) throws Exception {
    instrumentation.redefineClasses(defs);
  }
}

How does one load the agent?

Apart from having a class that defines the agentmain method, the jar that will contain the agent must have certain entries in its MANIFEST file as shown below:

Agent-Class: com.welflex.Agent
Can-Retransform-Classes: true
Can-Redefine-Classes: true

The Can-Redefine-Classes and Can-Retransform-Classes attributes declare the agent's ability to redefine and retransform loaded classes, respectively.

With JDK 5.X the agent had to be specified on the command line via -javaagent:jarpath in order to start the agent. With JDK 6.X the same is not required; we can load an agent at runtime given the location of its jar file. The loading of the agent is facilitated by the VirtualMachine class present in the tools.jar of the JDK. An example of how an agent is loaded is shown below:

// Location of agent jar
String agentPath = "/home/sanjay/agent.jar";

// Attach to the running VM
VirtualMachine vm = VirtualMachine.attach(processIdOfJvm);

// Load the agent and detach
vm.loadAgent(agentPath);
vm.detach();
The VirtualMachine class is part of the package, which is present in the tools.jar file located in $JDK_HOME/lib. Thus, if loading an agent post JVM start is desired, it naturally follows that tools.jar has to be available on the class path.

An Example:
For the sake of discussion, let us consider the following requirements of our application:

  • Select classes to profile. For example, display profiling information for class Foo but not for class Bar

  • Profiling information should be removable. For example, if class Application was profiled in one test, then on a subsequent test, the profiling information for class Application should not exist unless requested by the test.

I have a class called Profiler. The profiler class has the following definition:

public class Profiler {
  // Stores the original byte code of each profiled class
  private static Map<Class<?>, byte[]> byteHolderMap = new HashMap<Class<?>, byte[]>();

  public static void addClassesToProfile(Class ...classes) throws Exception {
    ClassDefinition defs[] = Profiler.buildProfiledClassDefinitions(classes);
    // Load the agent and redefine the classes with the profiled definitions
    Agent.redefineClasses(defs);
  }

  private static ClassDefinition[] buildProfiledClassDefinitions(Class ...classes) {
    // 1. Store original class byte code in byteHolderMap
    // 2. Enhance classes with profiling information and
    //    return ClassDefinitions with the enhanced code
  }

  public static void restoreClasses() throws Exception {
    // For each entry in the byteHolderMap, redefine the class
  }
}

When the method addClassesToProfile(Class ...classes) is invoked, the Profiler creates ClassDefinitions that contain profiling information. After this, the agent (InstrumentationDelegate) is loaded, thereby obtaining a handle to the java.lang.instrument.Instrumentation implementation, and the profiled definitions are applied via redefineClasses(). The method buildProfiledClassDefinitions() populates a Map of [Class, byte[]] that stores the original byte code of each class. Thus, when the restoreClasses() method is invoked, the original classes are re-instated.

Shown below is a unit test that utilizes the profiling agent:
  • Runs a Test using the class without instrumentation
  • Enhances the code with profiling information and runs the tests
  • Restores the class under test to the one before the profiling information was added
  • Re-runs the test with no profiling information.

public void testApplication() throws Exception {
  System.out.println("Test Profiling Application Class.");

  // Run test before profiling, class will not have profiling information
  Application appInit = new Application(new Description().setDescription("Pre-Profiling"));
  System.out.println("Class not augmented yet, so should not see profiling information.");
  assertEquals("Pre-Profiling", appInit.getDescription());

  System.out.println("Adding Profiling information and running test, should see profiling.");

  // Redefine and add profiling information, should see profiling info printed out
  Profiler.addClassesToProfile(Application.class);

  Application app = new Application(new Description().setDescription("Simple App"));;;

  assertEquals("Simple App", app.getDescription());

  // Remove profiling information
  Profiler.restoreClasses();

  // Classes must not display profiling information
  System.out.println("Classes have been restored, should not see profiling information now..");
  Application afterRestore = new Application(new Description().setDescription("After Restore"));
  assertEquals("After Restore", afterRestore.getDescription());
}

Running com.welflex.profiletarget.ApplicationTest
Test Profiling Application Class.
Class not augmented yet, so should not see profiling information.
Adding Profiling information and running test, should see profiling.
-> Enter
<- Exit completed in 2112 nano secs
-> Enter
<- Exit completed in 1000060102 nano secs
-> Enter
<- Exit completed in 980 nano secs
-> Enter
<- Exit completed in 3000029988 nano secs
-> Enter Method:com.welflex.profiletarget.Application.getDescription()
<- Exit Method:com.welflex.profiletarget.Application.getDescription() completed in 29749 nano secs
Classes have been restored, should not see profiling information now..
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.927 sec
Running the Example:
The example can be downloaded from HERE. The example uses the byte code engineering library javassist to augment the classes. The project itself is a Maven 2 project and requires JDK 1.6.X to execute. You might need to tweak the pom.xml file of the profiler-instrumentation project to point to your locally installed tools.jar, i.e., change the line ${java.home}/../lib/tools.jar to point to your local tools.jar via a fully qualified path. I have had some difficulty getting this working correctly and had to use an absolute path to the tools.jar. After this, issue a "mvn install" from the root project directory to see the profiling example in action. Note also that if you import the project into Eclipse via Q4E, then when running the example, ensure that the profiler-integration project is closed; otherwise the profiler-target project will pick up the Eclipse project instead of looking for the deployed profiler-integration jar file.

If you have difficulties running the example, please do not hesitate to contact me.

Closing Thoughts:
One can instrument classes on a running VM quite easily with the above concept.
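The mechanics can be sketched with the JDK's java.lang.instrument API alone. The agent and target class names below are assumptions based on the test output earlier in this post, and the actual byte code rewriting (done with javassist in the downloadable example) is elided to a comment; this is a minimal skeleton, not the project's implementation:

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Hypothetical agent entry point; agentmain is invoked when the agent is
// attached to a running VM.
public class ProfilerAgent {
  public static void agentmain(String args, Instrumentation inst) throws Exception {
    inst.addTransformer(new ProfilingTransformer(), true);
    inst.retransformClasses(Class.forName("com.welflex.profiletarget.Application"));
  }
}

class ProfilingTransformer implements ClassFileTransformer {
  public byte[] transform(ClassLoader loader, String className, Class<?> cls,
      ProtectionDomain pd, byte[] classBytes) {
    if (!"com/welflex/profiletarget/Application".equals(className)) {
      return null; // null tells the VM to leave the class untouched
    }
    // Here javassist would wrap each method with System.nanoTime() bookkeeping
    // and return the rewritten bytes; this sketch passes the bytes through.
    return classBytes;
  }
}
```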
Mr. Jack Shirazi has a nice example of hot patching a Java application.
I have personally not tried attaching to a different VM, but it should be pretty straightforward. The example I have demonstrated is largely based on the JMockit testing framework, which utilizes byte code enhancement. It is quite a powerful testing framework and I need to spend some time playing with it. But right now, I am off to play with my XBox :-)

Sunday, September 7, 2008

Service Locator Pattern, How shall I find you?

I have worked in companies that have used the Service Locator pattern to obtain the dependencies of an object. Service Locator is one way of promoting orthogonality; another, of course, is Dependency Injection. I am not going to delve into the pros and cons of each pattern but will instead focus on the Service Locator and some simple investigations into it.

The primary purpose of a Service Locator, IMO, is to act as a registry of sorts from which managed objects are obtained. The Service Locator can also be augmented with functionality to lazily load the serviced object, and it can serve as a single place from which a new copy of the serviced object can be obtained (think prototype pattern here).
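As a hedged sketch of that "new copy" idea, a locator can hand out a fresh instance per request by registering classes rather than instances. The names (PrototypeLocator, "scratchList") are purely illustrative and not from the downloadable example:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a locator that, prototype style, hands out a
// fresh copy of the serviced object on every request.
public class PrototypeLocator {
  private static final PrototypeLocator instance = new PrototypeLocator();

  private final Map<String, Class<?>> prototypes = new HashMap<String, Class<?>>();

  private PrototypeLocator() {
    prototypes.put("scratchList", java.util.ArrayList.class);
  }

  public static PrototypeLocator getInstance() {
    return instance;
  }

  // Each call reflectively instantiates a brand new object
  public Object getNewInstance(String key) {
    try {
      return prototypes.get(key).newInstance();
    } catch (Exception e) {
      throw new IllegalStateException(e);
    }
  }
}
```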

One simple example of obtaining an object using the Service Locator is shown below:
public class ServiceLocator {
  private static final ServiceLocator instance = new ServiceLocator();

  private OrderDAO dao = new OrderDAOImpl();

  private ServiceLocator() {}

  public static ServiceLocator getInstance() {
    return instance;
  }

  public OrderDAO getOrderDAO() {
    return dao;
  }
}

public class OrderServiceImpl {
  private final OrderDAO orderDAO;

  public OrderServiceImpl() {
    orderDAO = ServiceLocator.getInstance().getOrderDAO();
  }
}
In the above example, the OrderDAO implementation is obtained from the Service Locator via a strongly typed call to getOrderDAO(). This is quite nice. However, what if we had many different objects that we would like to locate? Adding methods like getProductDAO(), getCartDAO(), getAuditService() and so on will tend to bloat the ServiceLocator with a lot of code. One could create separate Service Locator classes, for example ProductDAOLocator, CreditCardDAOLocator, etc. However, we then have a lot of classes to maintain and thus a penalty to pay.

What if we could provide an easy way to obtain the objects via a "key"? This approach passes a "key" to the Service Locator, based on which the locator examines its registry and hands back the corresponding object. Consider an altered version of the locator as shown below:
public class ServiceLocator {
  private static final ServiceLocator instance = new ServiceLocator();

  private Map<String, Object> locatorMap = new HashMap<String, Object>();

  private ServiceLocator() {
    locatorMap.put("orderDAO", new OrderDAOImpl());
    locatorMap.put("productDAO", new ProductDAOImpl());
    locatorMap.put("auditService", new AuditServiceImpl());
  }

  public static ServiceLocator getInstance() {
    return instance;
  }

  public void setServicedObject(String key, Object impl) {
    locatorMap.put(key, impl);
  }

  public Object getServicedObject(String key) {
    return locatorMap.get(key);
  }
}

public class OrderServiceImpl {
  private final OrderDAO orderDAO;

  public OrderServiceImpl() {
    orderDAO = (OrderDAO) ServiceLocator.getInstance()
      .getServicedObject("orderDAO");
  }
}

public class OrderServiceTest {
  public void setUp() {
    ServiceLocator.getInstance().setServicedObject("orderDAO",
      new MockOrderDAOImpl());
  }

  public void testService() {
    // Exercise OrderServiceImpl, which will now be wired with the mock DAO
  }
}
In the above example, the code is more dynamic and it becomes easy to add more locatable services. In addition, the above locator works very well when using a command/factory pattern as shown below:
public class ServiceLocator {
  private static final ServiceLocator instance = new ServiceLocator();

  private Map<String, Object> locatorMap = new HashMap<String, Object>();

  private ServiceLocator() {
    locatorMap.put("saveOrder", new OrderSaveCommand());
    locatorMap.put("saveProduct", new ProductSaveCommand());
  }

  public static ServiceLocator getInstance() {
    return instance;
  }

  public void setServicedObject(String key, Object impl) {
    locatorMap.put(key, impl);
  }

  public Object getServicedObject(String key) {
    return locatorMap.get(key);
  }
}

public class Action {
  public void doAction(Request request) {
    String commandKey = request.getCommandKey();
    CommandIF c = (CommandIF) ServiceLocator.getInstance()
      .getServicedObject(commandKey);
    c.execute(request);
  }
}

The ServiceLocator functions as a factory of sorts here, providing concrete implementations of the Command interface at runtime based on the key.

The above examples of the Service Locator do have a major shortcoming: the code is no longer strongly typed, and compile time type checking is lost. In the first example, which explicitly maintained a getter method for each known service, a call to a getter guaranteed compile time type checking and run time safety. In the versions of the locator shown above, if the registry had an accidental mapping of ["orderDAO", new ProductDAOImpl()], a ClassCastException can occur at run time.
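That failure mode can be demonstrated in a few self-contained lines. Integer and String here are merely stand-ins for ProductDAOImpl and OrderDAO so the snippet runs without the DAO classes:

```java
import java.util.HashMap;
import java.util.Map;

// Integer plays the role of ProductDAOImpl accidentally registered under a
// key whose callers expect a String (standing in for OrderDAO).
public class CastFailureDemo {
  public static void main(String[] args) {
    Map<String, Object> registry = new HashMap<String, Object>();
    registry.put("orderDAO", Integer.valueOf(42)); // accidental wrong mapping

    try {
      String dao = (String) registry.get("orderDAO"); // compiles happily
      System.out.println(dao);
    } catch (ClassCastException expected) {
      // The mistake only surfaces here, at run time
      System.out.println("ClassCastException at run time: " + expected.getMessage());
    }
  }
}
```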

Clearly both styles of the Service Locator pattern shown above have their pros and cons. Can we think of a third way that provides the compile time type checking of the first example along with the ease of use of the second?

Yes, we have hope with Java 5+ and generics. With generics, compile time type safety received a big boost. In exploring the problem further, I must credit the work that influenced the approach below:

With the above examples, we have laid down the pattern for implementing a type safe version of the Service Locator. The key used to look up a class is now the interface itself. That said, the following snippet represents a new, type safe version of the Service Locator:
public class ServiceLocator {
  private static Map<Class<?>, Object> infInstanceMap
    = new HashMap<Class<?>, Object>();

  public static <T, I extends T> void setService(Class<T> inter, I impl) {
    infInstanceMap.put(inter, impl);
  }

  public static <T> T getService(Class<T> ofClass) {
    Object instance = infInstanceMap.get(ofClass);

    return ofClass.cast(instance);
  }
}

public class BootStrap {
  public static void start() {
    ServiceLocator.setService(OrderDAO.class, new OrderDAOImpl());
    // Note the following will not compile
    // ServiceLocator.setService(ProductDAO.class, new OrderDAOImpl());
  }
}

public class OrderService {
  private final OrderDAO orderDAO;

  public OrderService() {
    orderDAO = ServiceLocator.getService(OrderDAO.class);
  }
}

public class OrderServiceTest {
  public void setUp() {
    ServiceLocator.setService(OrderDAO.class, new MockOrderDAOImpl());
  }

  public void testService() {
    // Exercise OrderService, which will now be wired with the mock DAO
  }
}
The above example is type safe while promoting testability. So is there some functionality we have lost with this locator? The answer is yes: the locator does not permit multiple implementations of the same interface to be located. Two values for the same key become a bit of a problem. How can we overcome this?

Well, I do not know if this is a good solution or not. However, what I propose is that if there is more than one implementation of a given interface that needs to be located, then the implementations need to specifically identify themselves via unique identifiers (think Spring annotations and @Qualifier here). So the solution is: if there is exactly one known implementation, use the interface based lookup; if not, and this should be known during development, look up by a unique identifier. Applying these changes we have:
public class ServiceLocator {
  // Code for interface based lookup is not shown here but should be considered included

  private static Map<String, Object> idObjectMap
    = new HashMap<String, Object>();

  public static <T, I extends T> void setIdBasedLookupService(String uid,
      Class<T> infClass, I impl) {
    idObjectMap.put(uid, impl);
  }

  public static <T> T getServiceById(String uid, Class<T> infClass) {
    Object instance = idObjectMap.get(uid);

    return infClass.cast(instance);
  }
}

public class BootStrap {
  public static void start() {
    ServiceLocator.setIdBasedLookupService("saveOrderCommand", CommandIF.class,
      new SaveOrderCommand());

    // Note the following will not compile
    // ServiceLocator.setIdBasedLookupService("saveProductCommand", CommandIF.class,
    //   new OrderDAOImpl());
  }
}

public class Action {
  public void execute(Request request) {
    String commandKey = request.getCommandKey();
    CommandIF command = ServiceLocator
      .getServiceById(commandKey, CommandIF.class);
    command.execute(request);
  }
}

public class RequestTester {
  public void setUp() {
    ServiceLocator.setIdBasedLookupService("saveOrderCommand",
      CommandIF.class, new MockSaveCommand());
  }

  public void testRequest() {
    // Exercise Action; the mock command will now be located
  }
}

Ok so are we done yet? I hear the shouts of "About time!" but beg for forgiveness and continue :-)

There are some things that I do not like about the above approach:
  • Explicit setting of the objects in the locator by the BootStrap. Every locatable interface and implementation needs to be specified.
  • No lazy loading option

I would like to lean on my favorite open source framework, Spring, and its classpath scanning feature for locating interfaces and implementations, along with its use of the @Qualifier annotation for the case where there are multiple implementations.

To facilitate this, I define an annotation called @Locatable, which interfaces that will participate in the service location process need to employ. If an interface has only one concrete implementation, then that implementation will be discovered and bound in the locator. If there are multiple implementations, then those implementations must carry an annotation called @ImplUid that contains one attribute called uid. As examples of the same, see below:
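The declarations of the two annotations are not shown in this post; a minimal version consistent with the description above (runtime retention so the scanner can read them, applied at the type level) might look like the following. The "analog" uid on the example class is illustrative:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks an interface as participating in service location.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Locatable {}

// Distinguishes between multiple implementations of a @Locatable interface.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ImplUid {
  String uid();
}

// Example usage, mirroring the Phone implementations below
@ImplUid(uid = "analog")
class AnalogPhoneExample {}
```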
@Locatable
public interface OrderService {
  public void saveOrder(Order order);
}

public abstract class AbstractOrderServiceImpl implements OrderService {
}

// Single concrete OrderService implementation
public class OrderServiceImpl extends AbstractOrderServiceImpl {
  public void saveOrder(Order order) {
  }
}

@Locatable
public interface Phone {
  public boolean dialNumber(int number);
}

// Analog implementation of Phone
@ImplUid(uid = "analog")
public class AnalogPhone implements Phone {
  public boolean dialNumber(int number) {
    return true;
  }
}

// Digital implementation of Phone
@ImplUid(uid = "digital")
public class DigitalPhone implements Phone {
  public boolean dialNumber(int number) {
    return true;
  }
}
In the above example, note that for the OrderService there is only one concrete implementation, namely OrderServiceImpl. For the Phone interface, however, there are two implementations, and each is tagged with the @ImplUid annotation to set them apart.

In order to perform the tie up between interface and implementation, I have defined a class called LocatorClassFinder. This class scans the classpath, performs the tie up between interfaces and implementations, and then returns two maps. The first maps [interface name to class name] for cases where there is only one implementation of a given interface. The second provides the mapping for interfaces that have multiple implementations, with the implementations correspondingly tagged with @ImplUid. The ServiceLocator has a start() method that asks the LocatorClassFinder to construct the maps. The actual classpath scanning is performed using the Spring Framework's adapters around the ASM library. When the ServiceLocator is asked for an object, it lazily instantiates it based on the information provided by the two maps. So, summarizing via code:
public class ServiceLocator {
  // All look up/setter static methods discussed earlier.

  private static InterfaceImplMaps maps;

  // Called once at startup
  public static void start() {
    // Package to scan
    LocatorClassFinder finder = new LocatorClassFinder("com/welflex");
    maps = finder.locateInterfaceImpls();
  }
}

public class LocatorClassFinder {
  public InterfaceImplMaps locateInterfaceImpls() {
    // Scans the classpath and ties interfaces to their implementations
  }
}
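The lazy instantiation step can be sketched in isolation: given a map of interface to implementation class name (as LocatorClassFinder would produce), the locator instantiates reflectively on first request and caches the result. The names here (LazyLookup, register) are illustrative and not from the downloadable code, and for the reasons noted below the sketch simply synchronizes the whole lookup:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: lazy, reflective instantiation from a map of
// interface -> implementation class name.
public class LazyLookup {
  private static Map<Class<?>, String> infToImplName = new HashMap<Class<?>, String>();
  private static Map<Class<?>, Object> cache = new HashMap<Class<?>, Object>();

  public static <T> void register(Class<T> inf, String implClassName) {
    infToImplName.put(inf, implClassName);
  }

  public static synchronized <T> T getService(Class<T> inf) {
    Object instance = cache.get(inf);
    if (instance == null) {
      try {
        // Instantiated only on first request for this interface
        instance = Class.forName(infToImplName.get(inf)).newInstance();
      } catch (Exception e) {
        throw new IllegalStateException(e);
      }
      cache.put(inf, instance);
    }
    return inf.cast(instance);
  }
}
```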
With the above solution, we have an easy way to implement service location, with support for the command/factory patterns as well. No explicit mapping of interfaces to implementations is required from the developer, as these are discovered at run time.

In Summary:

I have an example implementation of the Service Locator available HERE for download. It is a Maven 2 project, purely for purposes of demonstration and learning. I have used code from the Spring project and am NOT redistributing it anywhere or utilizing it in any project; it is purely an educational exercise.

  • The LocatorClassFinder implemented here uses code from the Spring Framework. Some classes that the Spring Framework has scoped as package protected, I have simply copied and used. There is no compelling reason to use Spring's ASM support; I simply found it "easy". In addition, some other Spring classes are used to achieve the classpath scanning. The Service Locator example does not depend on the Spring container in any way and only uses certain classes provided by Spring; it could easily be stripped of all dependencies on Spring if required.
  • Although the example code hopes to facilitate lazy loading, thread safety concerns have NOT been addressed. It should not be hard to do so.
  • The code does not cover creating objects via the prototype pattern. However, that can easily be done with some tweaking.
  • The LocatorClassFinder class is a very hacky implementation. It was written just to test the theory and is by no means ready to stand the rigors of a code review :-)) I am ashamed of it and blame it on three drinks on a Saturday night ;-)