Search This Blog

Tuesday, November 25, 2008

Interviews - A chance at playing god.

There are only a few occasions when one gets the chance to directly and substantially influence the life of another individual. If one is a judge, the statement is null and void, as it happens all the time. I am talking about the scenario where an interviewer interviews a candidate, and the interviewer's opinion and decision determine whether the candidate is hired or not.

I conduct interviews sometimes. I must however admit that I do not feel this as a moment of power by any means. I tend to put myself in the interviewing candidate's shoes: here is a person who is either unemployed and desperately trying to find a job, or a person who has let go of a day of vacationing in the Bahamas in search of a better position for himself and his family, and I am tasked with providing my extremely valued decision of "Yes or No". As an interviewer, one must do proper justice to the interviewing process, as one owes that to the organization on whose behalf one is conducting the interview, while still respecting the individual being interviewed and providing one's best judgement.


That said, let me narrate an interview that actually occurred:

The candidate is seated right outside the interviewer's office, waiting for the interviewer to come pick him up. A lady storms out, another candidate interviewing for the same position, tears in her eyes. Clearly the interview did not go as expected. But tears in her eyes? That's quite disconcerting, to say the least, for our candidate.


Following suit emerges the interviewer: stern faced, a no-nonsense type of character, who beckons the candidate to enter. The candidate, already a bit disturbed by the plight of the previous candidate, follows the interviewer into his room and takes a seat. The questioning process starts. Some standard questions and introductions are exchanged, no pleasantries, just formalities, before the meat of the interview starts.

Interviewer: I hope you realize your CEO is a moron; he has done absolutely nothing for the company since he started.
Candidate: I must disagree with your assessment. It is my understanding that he has achieved all the goals that he set out to achieve and has in fact increased the revenue of our company by 20% during his tenure.
Interviewer: I still think he is a moron. Anyway, let's go on. So tell me, why should I hire you?
Candidate: I have the relevant experience and skills required for this job, I am personable and have the confidence to execute the duties of this position well, I have good contacts, I...
Interviewer (interrupting): Your previous experience is of no value in this job as it's a whole new environment. Regarding personality and confidence, my 10-year-old son has the same attributes that you mentioned. Maybe I should take him for the position instead?
Candidate: That is your decision to make but I must hold my stance that he lacks the necessary experience that I possess.
Interviewer: I am not convinced. Anyway, let's move on. So, do you think you look good?
Candidate: I am no Tom Cruise but at the same time I consider myself presentable.
Interviewer: I am of the opinion that you look far better than Tom Cruise.
Candidate: I am flattered by your assessment and am glad that someone feels that way, as my wife certainly does not (candidate trying to add some humor here).
Interviewer: If that is the case, I feel that you married the wrong person and you and your wife argue and fight a lot!
Candidate: This is the only topic we really ever fight about (still maintaining composure).
Interviewer: I am not yet convinced and still maintain that you are far better looking than Tom Cruise.
Candidate (Smiling): Thank you once again. If you would be so kind as to mention the same to the wife, it would eliminate our one reason for arguing.
Interviewer (looking around the room): OK, let's move on now. Give me 10 uses of a coat hanger within the next minute and a half.
Candidate (answering rapid fire): It can be used to hang a coat, to draw a triangle, as a weapon, to remove cobwebs...
Interviewer: OK, now tell me three good reasons to hire you as I am still not convinced.
Candidate: I have a lot of experience, I have excellent contacts, and I have the drive and enthusiasm.
Interviewer: This is absolutely no good, everyone says the same thing. You have not even provided one good reason as to why I should hire you.
Candidate (at this point irritated): I have given you the reasons why I believe that I am a fit for this organization. If you remain unconvinced, then you are certainly entitled to your view. I, however, have nothing else to add.
....
Some final statements and the interview ends. The candidate is taken outside to a cab by the interviewer. No smiles, no chilling after the interview, no return to earth, no small talk, just a goodbye.

So what happened? Do you think the candidate got an offer? You bet he did! Did he take it? Read on to find out...

Analyzing the interview: why was it so aggressive? This was clearly not a regular interview by any means. The interview was for a very senior position in sales. The interviewer was apparently simulating a typical sales environment in which he acted like a tough customer. The setup was designed to test how the candidate would perform and sell: how thick-skinned the candidate is, how well he handles pressure, how composed he is, and most importantly, how he can defend his position and make a sale.

So, where does that leave the candidate? Did the candidate take the position? He did not. The candidate, although uncomfortable with the line of questioning, did understand the direction of the leading questions. The problem, however, was the interviewer's continued aggressive behavior after the conclusion of the interview. There was absolutely no way the candidate wanted to work with an individual with a personality like the interviewer's. So the candidate turned down the offer and found a far more lucrative and suitable position in a different organization. The point of note here is that the candidate did not fail; the interviewer failed! He failed his organization as an interviewer, because the deciding factor in whether the candidate accepted the job was the behavior of the interviewer himself. The interviewer lost a really good talent, and that can be equated to a dollar loss for the organization, as they lost a really good salesperson.

I think for a moment about how I would have reacted to that line of questioning. I must admit that I have a very transparent face and a far lower tolerance level. I would have been red within the first few levels of questioning and would have stormed out of the office after hurling some of the choicest words at the interviewer. Would I have gotten the job? Errr, I doubt it :-).
I don't think I have the skin to be in sales. More importantly, I cannot tolerate people who would make others cry during an interview process. I do not believe that any individual, interviewing for a sales position or any other, should have to undergo such a torturous line of questioning. I also believe that, whether as an interviewer or as a candidate, one should steer clear of any topics that are personal in nature, such as marriage, family, etc. Apart from being territory that simply should not be charted, they are fodder for lawsuits as well.

So, as the person conducting the interview, we have the power play; our opinion will count, regardless of whether it is a sole decision or a collaborative one reached after discussing with other "gods". We can choose to intimidate, befriend, or tread a path in between while interviewing the candidate. As an interviewer, one should control the interview session but not come across as rude. It is a balance. Remember that as an interviewer, one represents not just oneself but one's organization as well; what the interviewer portrays will become the candidate's impression of the organization. From the perspective of the candidate, the keyword has to be "impress": impress by personality or skill or both. As an interviewer, one can easily judge the latter, but the former is often a grey area. At best, I would rate gauging a candidate as a dark art. Remember that the candidate is assessing the interviewer and the organization as well... It's a two-way sale!

I must state one thing! I work at Overstock.com, and the interviewing process here could not be further from the one described above. Candidates who come to Overstock for interviews are treated with the utmost respect and hospitality. I joined the organization after all :-))))

Saturday, November 22, 2008

Home town of the Boss, jax-rs, jersey, spring, maven

I have previously tried the jax-rs implementations by Restlet and JBoss RESTEasy. You can find the same at:


One implementation that I had been postponing was Sun's RI, i.e., jersey. Trying to save, hopefully, the best for last ;-). The name of the implementation shares a part of the Boss's town of birth after all! Born in the USA! I was not born in the USA, but I love it as much as my own country, and it is my home away from home! Moving on...

As before, I tweaked the simple Order Web Service example to use jersey. Some of the features of jersey that stood out to me:
  • Support for a Client API to communicate with Rest Resources.
  • Very Easy Spring integration.
  • Sun's RI, i.e., from the source
  • Support for exceptions
  • Very good support for JSON Media Type
  • Maven
  • Good set of examples
  • Automatic WADL generator
  • IOC
  • Embedded deployment using Grizzly
  • Filters on client and server side
  • Utilities for working with URI
One of the things that has impressed me about jersey is its out-of-the-box JSON support. Being able to support the JSON format without having to create a new javax.ws.rs.ext.Provider is rather convenient. The JSON notation is controlled by the JSONJAXBContext.JSON_NOTATION property, and one can quite easily change it to use the Jettison or Badgerfish conventions.

I was easily able to enable the JSON representation for my Product resource by defining the Product data transfer objects with JAXB annotations, adding @Produces("application/json") to the ProductsResource class, and ensuring that I have the jersey-json jar in my build.



ProductDTO.java
@XmlType(name = "product")
@XmlRootElement(name = "product")
public class ProductDTO implements Serializable {
  ....
}

ProductListDTO.java
@XmlRootElement(name = "productList")
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "products", propOrder = {"productDTOs"})
public class ProductListDTO implements Iterable<ProductDTO> {
  ....
}

ProductsResource.java
@GET @Produces("application/json")
public ProductListDTO getProducts() {
  ...
}






The jersey-json dependency is added to the Maven pom as follows:

<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-json</artifactId>
  <version>${jersey-version}</version>
</dependency>




There is an excellent tutorial, Configuring JSON for RESTful Web Services in Jersey 1.0 by Jakub Podlesak, where you can read more about this.

To support Spring integration, the web application's deployment descriptor has been modified to use the Jersey Spring Servlet. All the Spring-managed beans defined by the @Component, @Service and @Resource annotations are automatically wired.



<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>classpath:applicationContext.xml</param-value>
</context-param>

<listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

<servlet>
  <servlet-name>JerseySpring</servlet-name>
  <servlet-class>com.sun.jersey.spi.spring.container.servlet.SpringServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>





On the client side, I was pleasantly surprised by the ease with which I could invoke REST calls. As mentioned in my RESTEasy blog, jax-rs has no requirement for a client-side specification. However, those implementations that do provide one will, IMO, be the ones to gain more adoption. The jersey implementation provides a DSL-like API to talk to the services. Very "VERBY"; if there is no word in the dictionary like "VERBY", I stake my claim on the same :-). I modified the client from the RESTEasy implementation of the Order Service to use the Jersey Client support as follows:



public class OrderClientImpl implements OrderClient {
  private final WebResource resource;

  /**
   * @param uri Server Uri
   */
  public OrderClientImpl(String uri) {
    ClientConfig cc = new DefaultClientConfig();
    // Include the properties provider
    Client delegate = Client.create(cc);
    // Note that the Resource has been created here
    resource = delegate.resource(uri).path("order");
  }

  public OrderDTO createOrder(OrderDTO orderDTO) throws IOException {
    return resource.accept(MediaType.APPLICATION_XML).type(MediaType.APPLICATION_XML)
      .post(OrderDTO.class, orderDTO);
  }

  public void deleteOrder(Long orderId) throws OrderException {
    resource.path(orderId.toString()).type(MediaType.APPLICATION_XML).delete();
  }

  public OrderDTO getOrder(Long orderId) throws OrderNotFoundException, IOException {
    try {
      return resource.path(orderId.toString())
        .type("application/xml").accept("application/xml").get(OrderDTO.class);
    } catch (UniformInterfaceException e) {
      if (e.getResponse().getStatus() == Status.NOT_FOUND.getStatusCode()) {
        throw new OrderNotFoundException(e.getResponse().getEntity(String.class));
      }
      throw new RuntimeException(e);
    }
  }

  public void updateOrder(OrderDTO orderDTO, Long id) {
    resource.path(id.toString()).type("application/xml").put(orderDTO);
  }
}





As you can see from the above, the use of the Jersey client API is rather straightforward and intuitive. One point to note is that Jersey provides an exception framework for easily handling common exception cases like 404 etc. There are classes that enable this support, com.sun.jersey.api.NotFoundException and com.sun.jersey.api.WebApplicationException, that one can use. As I did not want to tie my data transfer object maven project to jersey in particular, I did not use the jersey exceptions but instead stuck with my custom Exception Provider.

Running the Example:
The Example can be downloaded from HERE.

This example has been developed using maven 2.0.9 and jdk1.6.X. Unzip the project using your favorite zip tool, and from the root level execute "mvn install". Doing so will execute a complete build and run some integration tests. One interesting thing to try is to start the jetty container from the webapp project using "mvn jetty:run" and then access the jersey-generated WADL at http://localhost:9090/SpringTest-webapp/application.wadl

Now that you have the WADL, you should be able to use tools like SOAPUI or poster-extension (a Firefox plugin) to test your RESTful services as well.

It would be interesting to see how wadl2java and the Maven plugin provided therein can be used to create a separate client project to talk to the web services.

The jersey web site has pretty good documentation about using jax-rs. It is not thorough yet, but it is getting there. There is a good set of examples that one can download and try as well. It is my understanding that NetBeans has good tooling support for jersey too.

So, now that I have tried Restlet, different jax-rs implementations and jax-ws, what would be my direction if I were to decide what to use for my SOA project? Some food for my next blog :-)

Again, if the example fails to run, ping me...Enjoy!

Monday, October 20, 2008

SOAPUI REST Support

Some time ago, I had blogged lamenting the absence of a nice testing tool for REST. At that time, a gentleman from the SOAP UI team mentioned the upcoming support for REST from Soap UI.

I could not help but try out the Beta release of the same. SOAP UI should probably change their name to WsUI (Web Service UI). These guys seem to be doing a pretty neat job. If their SOAP support is anything to go by, their REST support should turn out to be quite solid.

So I went to the SOAP UI site, downloaded the free version (SOAP UI 2.5 beta1) and installed the same. SOAP UI supports reverse engineering from WADLs, quite like it does with WSDLs. An example of using WADL with SOAP UI is documented on their site.

As my current organization does not as 'yet' use WADL, I wanted to test the ease with which one could test REST calls, such as GET/POST etc., using SOAP UI.

I created a simple web service that serves two resources: "/products", which returns a list of JSON products, and "/order", a resource that allows for POST/GET/PUT and DELETE of Orders.

When the SOAP UI application opens, clicking on "New SOAPUI project" brings up a dialog wherein one can either specify a WADL or choose the option for a REST Service as shown below:







Clicking OK on the above, brings forth a dialog that allows one to create a Service as shown below:





Note the end point supplied points to the Web app. In addition, select the check box to create a Rest Resource before clicking OK to bring forth the Rest Resource Dialog:



After accepting the above dialog, proceed to the Dialog shown below to be able to execute a call to the web service to obtain products.



In the above dialog, although I specified application/json as the media type, the JSON tab does not display the returned JSON content. However, one can view the JSON content in the RAW tab.




The same process above can be followed in order to create an order resource with different types of HTTP operations supported by the resource.

Pretty neat... it makes testing REST web services rather easy. In particular, this looks like a boon for testers as well, as they now have an easy way to test out REST services. One can also set up load tests, intercept requests/responses using the TCP monitor, and/or pull SOAP UI into Eclipse as a plugin.

I have attached herewith a Maven project that represents a simple web service. There are two resources provided, "/products" and "/order". The former returns a JSON representation of products. On the latter resource, issuing a GET via "/order/1" will return an Order. You can create an order easily by using POST. The XML for the same is:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<order>
  <lineItem>
    <itemId>12123</itemId>
    <itemName>XBOX 360</itemName>
    <quantity>2</quantity>
  </lineItem>
</order>


A sample SOAPUI project for the same can be obtained from HERE. Import the same into SOAP UI. Execute "mvn jetty:run" on the Maven project to start the webapp. Subsequently execute each resource call using SOAP UI, Create Load Tests, Create a WADL, Enable TCP monitoring, Enable JAXB support, WADL.....Rock on! :-)))

Saturday, September 27, 2008

Hibernate Shards, Maven a Simple Example

Sharding:
In my career as a Java developer, I have encountered cases where data of a particular type is split across multiple database instances having the same structure but segregated by some logical criteria. For example, in a health organization, it is possible that member (you and me, patients) data for health plan A is in one database while member data for health plan B is in another database.
There could be many reasons for this. It is possible that there is just too much data for a single database, that health plan A just does not want its data mixed with health plan B's, or that the network latency for health plan B to store its data in the same database would be too high.
Sharding can be thought of as segregating and partitioning data across multiple databases, driven by functional or non-functional forces.
When a data consumer is talking to an architecture where the data is sharded, one typically experiences one of the following cases:
  1. The application at any time requires interaction with a single shard only. For example, in a health plan application supporting Health Plan A and Health Plan B, a query to find members will always be restricted to a particular plan. In such a case, simple routing logic can direct the query to the particular database. I did have a similar challenge in a previous engagement; my favorite framework, Spring, and its Routing DataSource helped solve the problem. The blog by Mr. Mark Fisher was the direction that I followed.
  2. The application needs to interact with data that is part of both databases. For example, to obtain data about members stored in Health Plan A or Health Plan B that match a particular criterion. This requirement is more complicated, as one needs to obtain result sets from both databases.
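Case 1 above really amounts to a simple lookup before executing the query. A minimal sketch of that idea follows; the class, plan names and JDBC URLs are purely illustrative and are not Spring's actual Routing DataSource API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of case 1: route each query to the single shard
// that owns the health plan's data. All names here are hypothetical.
class PlanRouter {
  private final Map<String, String> planToJdbcUrl = new HashMap<String, String>();

  PlanRouter() {
    planToJdbcUrl.put("PLAN_A", "jdbc:hsqldb:mem:planA");
    planToJdbcUrl.put("PLAN_B", "jdbc:hsqldb:mem:planB");
  }

  /** Returns the JDBC URL of the shard holding the given plan's members. */
  String urlFor(String plan) {
    String url = planToJdbcUrl.get(plan);
    if (url == null) {
      throw new IllegalArgumentException("Unknown plan: " + plan);
    }
    return url;
  }
}
```

Spring's routing DataSource does essentially this lookup behind the standard DataSource interface, keyed off a per-request context value.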
Hibernate Shards is a project that facilitates working with sharded database architectures by providing a single transparent API to the horizontal data set. Hibernate Shards was created by a group of Google engineers who have since open sourced their efforts, with the goal of reaching GA soon with assistance from the open source community. More documentation regarding the architecture, concepts and design can be read on the Hibernate Shards site.
The Requirements: We are off to the World Cup of soccer. Every country playing in the tournament maintains a database of its players. We need to develop an application for FIFA that will allow each country to create and maintain players, and to obtain information about players from other countries as well. A database instance is provided for each country, but the country does not have direct access to insert or query data against it; instead, it must use the FIFA application for all operations.
The Example: Denoting a player in the space of the application, we define a POJO called NationalPlayer. The NationalPlayer has some interesting attributes:
  • First and Last Name of the Player
  • Maximum Career Goals scored by the player
  • His individual ranking in the world as a player
  • The country he plays for
The Country that the player plays for indicates the shard database that his data resides on.
@Entity
@Table (name="NATIONAL_PLAYER")
public class NationalPlayer {
  @Id @GeneratedValue(generator="PlayerIdGenerator")
  @GenericGenerator(name="PlayerIdGenerator", strategy="org.hibernate.shards.id.ShardedUUIDGenerator")
  @Column(name="PLAYER_ID")
  private BigInteger id;

  @Column (name="FIRST_NAME")
  private String firstName;

  @Column (name="LAST_NAME")
  private String lastName;
  
  @Column (name="CAREER_GOALS")
  private int careerGoals;
  
  @Column (name="WORLD_RANKING")
  private int worldRanking;

  public Country getCountry() { return country; }
  ....  
  public NationalPlayer withCountry(Country country) {
    this.country = country;
    return this;
  }
  ......
  @Column(name = "COUNTRY", columnDefinition = "integer", nullable = true)
  @Type(
      type = "com.welflex.hibernate.GenericEnumUserType",
      parameters = {
          @Parameter(name = "enumClass", value = "com.welflex.model.Country"),
          @Parameter(name = "identifierMethod", value = "toInt"),
          @Parameter(name = "valueOfMethod", value = "fromInt")
      }
  )
  private Country country;
  ...
  ....
}
We define a simple Java 5 enum type denoting the different countries that are participating in the World Cup. Sadly, we have only 3: India, USA and Italy for the 2020 World Cup.
public enum Country {
  INDIA (0), USA(1), ITALY(2);
  
  private int code;
  
  Country(int code) {
    this.code = code;
  }

  public int toInt() {
    return code;
  }
  ...
}
The application uses three sharded databases for the three countries participating in the tournament, and each database is defined using a Hibernate configuration file: hibernate0.cfg.xml for India, hibernate1.cfg.xml for USA and hibernate2.cfg.xml for Italy. To keep the example easy, we have decided that the country codes defined in the enum map directly to the Hibernate configs. For example, India is country 0 as denoted by the enum, and it maps to shard database 0. One could maintain a map for the same if required.
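The code-to-config convention above can be sketched as follows. The configFile() helper is hypothetical, added here only to make the mapping from enum code to shard config explicit; the example project relies on the convention rather than such a method:

```java
// Illustrative: a Country's code doubles as its shard id and, by
// convention, selects hibernate<code>.cfg.xml. configFile() is a
// hypothetical helper, not part of the actual example project.
enum Country {
  INDIA(0), USA(1), ITALY(2);

  private final int code;

  Country(int code) {
    this.code = code;
  }

  int toInt() {
    return code;
  }

  /** Name of the Hibernate config file backing this country's shard. */
  String configFile() {
    return "hibernate" + code + ".cfg.xml";
  }
}
```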
For example, the hibernate configuration for the Indian database is as follows:
<hibernate-configuration>
 <session-factory name="HibernateSessionFactory0">
  <property name="dialect">org.hibernate.dialect.HSQLDialect</property>

  <property name="connection.driver_class">org.hsqldb.jdbcDriver</property>
  <property name="connection.url">jdbc:hsqldb:mem:shard0</property>
  <property name="connection.username">sa</property>

  <property name="connection.password"></property>
  <property name="hibernate.hbm2ddl.auto">update</property>
  <property name="hibernate.connection.shard_id">0</property>

  <property name="hibernate.shard.enable_cross_shard_relationship_checks">true</property>
  <property name="hibernate.show_sql">true</property>
  <property name="hibernate.format_sql">true</property>

  <property name="hibernate.jdbc.batch_size">20</property>
 </session-factory>
</hibernate-configuration>
In order to route data to the appropriate database for storage, we define a custom shard selection strategy that uses the country code of the NationalPlayer object to route persistence to a particular database as shown below:
public class ShardSelectionStrategy extends RoundRobinShardSelectionStrategy {
  public ShardSelectionStrategy(RoundRobinShardLoadBalancer loadBalancer) {
    super(loadBalancer);
  }

  @Override
  public ShardId selectShardIdForNewObject(Object obj) {
    if (obj instanceof NationalPlayer) {
      ShardId id = new ShardId(((NationalPlayer) obj).getCountry().toInt());
   
      return id;
    }
    return super.selectShardIdForNewObject(obj);
  }
}
To work easily with Hibernate, we have a HibernateUtil class that factories shard sessions and also sessions to the individual databases. How shards are selected/delegated to can be customized by providing an implementation of the interface ShardStrategyFactory. In this example, we have only chosen to customize the selection strategy.
To test our example, we have some unit tests. The tests will ensure the following are working:
  • When Players are persisted, they are stored only in the appropriate database instance. In order to ensure the same, a direct connection to the target sharded database is obtained to ensure the existence of the persisted player. In addition, direct connections to the other databases in the shards are obtained to ensure that the player in question has not been inserted there.
  • Queries executed against the shard will obtain data from all the sharded databases accurately. The tests will ensure that all databases in the shard are being accessed.
The Players in the unit test are as follows:
NationalPlayer indiaPlayer = new NationalPlayer().withCountry(Country.INDIA)
   .withCareerGoals(100).withFirstName("Sanjay").withLastName("Acharya").withWorldRanking(1);

NationalPlayer usaPlayer = new NationalPlayer().withCountry(Country.USA)
   .withCareerGoals(80).withFirstName("Blake").withLastName("Acharya").withWorldRanking(10);

NationalPlayer italyPlayer = new NationalPlayer().withCountry(Country.ITALY)
      .withCareerGoals(20).withFirstName("Valentino").withLastName("Acharya").withWorldRanking(32);
The Persistence test is the following:
@Test
public void testShardingPersistence() {
    BigInteger indiaPlayerId = null;
    BigInteger usaPlayerId = null;
    BigInteger italyPlayerId = null;
    
    // Save all three players
    savePlayer(indiaPlayer);
    savePlayer(usaPlayer);
    savePlayer(italyPlayer);
    
    indiaPlayerId = indiaPlayer.getId();
    System.out.println("Indian Player Id:" + indiaPlayerId);
    
    usaPlayerId = usaPlayer.getId();
    System.out.println("Usa Player Id:" + usaPlayerId);
    
    italyPlayerId = italyPlayer.getId();
    System.out.println("Italy Player Id:" + italyPlayerId);
    
    assertNotNull("Indian Player must have been persisted", getShardPlayer(indiaPlayerId));
    assertNotNull("Usa Player must have been persisted", getShardPlayer(usaPlayerId));
    assertNotNull("Italy Player must have been persisted", getShardPlayer(italyPlayerId));

    // Ensure that the appropriate shards contain the players    
    assertExistsOnlyOnShard("Indian Player should have existed on only shard 0", 0, indiaPlayerId);    
    assertExistsOnlyOnShard("Usa Player should have existed only on shard 1", 1, usaPlayerId);    
    assertExistsOnlyOnShard("Italian Player should have existed only on shard 2", 2, italyPlayerId);    
}
The Simple Criteria based tests are the following:
 @Test
  public void testSimpleCriteria() throws Exception {
    Session session = HibernateUtil.getSession();
    Transaction tx = session.beginTransaction();

    try {
      Criteria c = session.createCriteria(NationalPlayer.class)
        .add(Restrictions.eq("country", Country.INDIA));

      List<NationalPlayer> players = c.list();
      assertTrue("Should only return the sole Indian Player", players.size() == 1);
      assertContainsPlayers(players, indiaPlayer);
      
      c = session.createCriteria(NationalPlayer.class).add(Restrictions.gt("careerGoals", 50));
      players = c.list();

      assertEquals("Should return the usa and india players", 2, players.size());
      assertContainsPlayers(players, indiaPlayer, usaPlayer);

      c = session.createCriteria(NationalPlayer.class)
        .add(Restrictions.between("worldRanking", 5, 15));

      players = c.list();
      assertEquals("Should only have the usa player", 1, players.size());

      assertContainsPlayers(players, usaPlayer);
      
      c = session.createCriteria(NationalPlayer.class)
        .add(Restrictions.eq("lastName", "Acharya"));
  
      players = c.list();

      assertEquals("All Players should be found as they have same last name", 3, players.size());
      assertContainsPlayers(players, indiaPlayer, usaPlayer, italyPlayer);

      tx.commit();
    } catch (Exception e) {
      tx.rollback();
      throw e;
    } finally {
      if (session != null) {
        session.close();
      }
    }
}
Running The Example: The example is a Maven 2 project. JDK 1.6.X was used to develop/test. The database of choice for the example is HSQL; why would anyone select Oracle or any other database :-). One can obtain the project from HERE. Simply run "mvn test" to see the tests being run, and/or import the code into Eclipse using Q4E or your favorite Maven Eclipse plugin. As a note, the examples are simply examples.
The Parting: A requirement for using Hibernate Shards is Java 1.5 or higher. I am not quite sure why this requirement exists, as one does not necessarily need to use annotations or Java 5 features for Hibernate configuration. A todo for me. Some other observations:
  • Session Factory configuration via JPA is apparently not supported as yet.
  • Different ID generation strategies can be used.
  • One interesting problem in sharding is sorting of results. Hibernate Shards works around this by insisting that all objects returned by a Criteria query with an Order By clause implement the Comparable interface. Sorting only occurs after the result set has been obtained from each member of the shard.
  • From the documentation, it appears that "Distinct" clauses are not supported as yet.
  • HQL is not well supported as yet. For example, I wanted to execute a "delete from NationalPlayer" across the shard, i.e., clear the table on every database. I could not execute the same, with an unsupported exception being the result. The documentation recommends staying away from HQL if possible.
  • Cross-shard relationships are not yet supported. What this means is that if object A is on shard 1 and object B is on shard 2, one cannot create an association between them. I have not encountered a need for this yet.
Links:
Hibernate Shards Documentation
Nice Read on Sharding
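To make the cross-shard sorting behavior mentioned above concrete: each shard returns its own result set, and the merged list can only be ordered in memory afterwards, which is why Comparable is required. A self-contained sketch (PlayerRow is an illustrative stand-in for NationalPlayer, not code from the example project):

```java
// Sketch: entities implement Comparable, and ordering happens client-side
// only after every shard has returned its rows. Names are hypothetical.
class PlayerRow implements Comparable<PlayerRow> {
  final String name;
  final int worldRanking;

  PlayerRow(String name, int worldRanking) {
    this.name = name;
    this.worldRanking = worldRanking;
  }

  public int compareTo(PlayerRow other) {
    return Integer.compare(this.worldRanking, other.worldRanking);
  }

  /** Merge per-shard result sets, then sort the combined list in memory. */
  static PlayerRow[] mergeAndSort(PlayerRow[]... perShard) {
    int n = 0;
    for (PlayerRow[] shard : perShard) {
      n += shard.length;
    }
    PlayerRow[] merged = new PlayerRow[n];
    int i = 0;
    for (PlayerRow[] shard : perShard) {
      for (PlayerRow p : shard) {
        merged[i++] = p;
      }
    }
    java.util.Arrays.sort(merged); // uses compareTo, after the fetch
    return merged;
  }
}
```

The cost of this approach is that the full result set from every shard must be pulled into memory before any ordering (or paging) can happen, which is worth keeping in mind for large result sets.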
Now only if I can get my Jersey example working...

Wednesday, September 24, 2008

JAXWS Example with Maven and Spring

Introduction:
For the past few months, I have been quite far away from the SOAP world, with my primary focus being on REST, looking at JAX-RS implementations, Restlet, etc. The other day, I attended the Utah Java User's Group (UJUG) meeting. The UJUG is managed superbly by Mr. Chris Maki, and I quite enjoyed the presentations, which centered on how different companies implement web services and the stacks they use.

One company in particular had me interested in their web service stack. The LDS church uses JAXWS with JDK 1.6. The presenter, although he did not go into details, did mention how easy it is to create and consume SOAP web services with JAXWS. One of the things he mentioned was that they did not need to step outside the JDK in order to create SOAP-based web services; they did not feel the need for external implementations like CXF.

In the projects where I have worked with SOAP, the words "simple" or "easy" have never come to mind :-). So what was this gentleman talking about? I had to find out. I would have loved to chat with him more but could not, so here I go, finding out the hard way :-).

JAXWS:
So what is JAXWS? JAXWS (Java API for XML Web Services) can be thought of as a Java programming API to create web services. JAXWS 2.0 was introduced with Java EE 5, is bundled with Java SE 6, and uses annotations in the creation of endpoints and service clients. JAXWS 2.0 replaced, or rather encompassed, the JAX-RPC API. For more details, look at this developerWorks article. JAXWS uses JAXB 2.0 for data binding.

Simple JAXWS Example:
Like my JAXRS Maven/Spring examples, the JAXWS web service is implemented as a multi-module Maven project with the structure shown below:

jaxws-example
|_ client - Contains generated code from WSDL
|_ service - Contains model and business logic
|_ webapp - Contains Web service wrapper, servlet defs etc

For the example, I am using the JAXWS Maven plugin. The plugin has two goals that the example uses: wsgen, which reads a service endpoint class and generates service artifacts, and wsimport, which generates the consumer code.

The webapp Maven module contains JAXB-annotated DTOs such as OrderDto and LineItemDto. The web service is exposed via a simple wrapper class:

 @WebService
 public class OrderWebService {
   private static final Logger log = Logger.getLogger(OrderWebService.class);
   @Autowired private MapperIF beanMapper;
   @Autowired private OrderService serviceDelegate;
   ...

   public OrderDto getOrder(Long orderId) throws OrderWebException {
     Order order = null;
   
     try {
       order = serviceDelegate.getOrder(orderId);
     }
     catch (OrderNotFoundException nfe) {
       throw new OrderWebException(nfe.getMessage());
     }
    
     OrderDto orderDTO = (OrderDto) beanMapper.map(order, OrderDto.class);
     return orderDTO;
   }
   ...
}

The @WebService annotation indicates that the above class is a web service endpoint. The service delegate does the heart of the service work, with the OrderWebService class acting only as a front. As seen above, we are using Spring and auto-wiring for the delegate and the bean mapper. One other thing to note is the OrderWebException class: it is annotated with @WebFault, indicating that this exception is raised when there is a web fault.
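As a sketch of what such a fault class might look like, assuming the JAX-WS API is on the classpath (the annotation attribute and constructor here are assumptions, not the project's actual code):

```java
import javax.xml.ws.WebFault;

// Hypothetical sketch of the fault class described above. The "name"
// attribute controls the fault element name in the generated WSDL.
@WebFault(name = "OrderWebFault")
public class OrderWebException extends Exception {
  public OrderWebException(String message) {
    super(message);
  }
}
```

With this in place, the generated WSDL declares the fault, and generated clients receive a typed exception rather than a raw SOAP fault.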

I keep a separate web service layer in front of the actual service layer, as I feel there are some advantages to such an approach; I have discussed the same in my earlier blogs.

During the maven build process, the jaxws maven plugin will come into play and generate
service artifacts. The maven plugin is configured as follows:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxws-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>wsgen</goal>
      </goals>
      <configuration>
        <!-- The name of your generated source package -->
        <packageName>com.welflex.service</packageName>
        <sei>com.welflex.soap.ws.OrderWebService</sei>
        <keep>true</keep>
        <genWsdl>true</genWsdl>
      </configuration>
    </execution>
  </executions>
</plugin>


The generator will generate the WSDL and web service artifacts. The WSDL and schema themselves are created at ${base.dir}/target/jaxws/wsgen/wsdl.

One point worth noting is that, in order to launch the web service for testing, there is no need for a separate servlet container, web.xml, etc. JDK 1.6.X provides a simple embedded container that allows you to test the service. One can publish the service using:

@WebService
public class OrderWebService {
   .....
  public static void main(String args[]) {
     javax.xml.ws.Endpoint.publish("http://localhost:8080/ws", new OrderWebService());
  }
}


Voila! Then you connect to the service using SoapUI or your favorite SOAP tool and test it.

Now for the client code. Unlike my REST examples, where the client code was written explicitly, in this example the client is generated entirely from the service WSDL. One is not obliged to use auto-generation, but it seems easy to me.

The wsimport goal can create a web service client when provided with a WSDL. The client project depends on the webapp project; once the webapp project is built, a WSDL is created in the above-mentioned target directory. The plugin on the client is configured to point to this WSDL in order to generate client-side artifacts, as shown below:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxws-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>wsimport</goal>
      </goals>
      <configuration>
        <wsdlUrls>
          <wsdlUrl>../../jaxws-example/webapp/target/jaxws/wsgen/wsdl/OrderWebServiceService.wsdl</wsdlUrl>
        </wsdlUrls>
      </configuration>
    </execution>
  </executions>
</plugin>

Now that we have the client, the integration test project can run which starts the
web service and issues requests from the client to the service.

The integration test creates a client via the following call:

URL wsdlLocation = new URL(INTEGRATION_TEST_URI + "?wsdl");
ORDER_CLIENT = new OrderWebServiceService(wsdlLocation,
      new QName("http://ws.soap.welflex.com/", "OrderWebServiceService"))
         .getOrderWebServicePort();

The generated client's internal URL points to our file system by default. The call above overrides it to point to an actual server instance.
Running the Example:
You need JDK 1.6.X and Maven 2.0.8 installed. From the root of the project directory, issue a "mvn install" and you should see the entire project being built, with the integration tests running. The project itself can be downloaded from HERE.

Thoughts:
Some of my friends will echo my thoughts on the pains they have had with web service creation using certain vendor tools. The promise is always, "Watch while I create a Web Service in just two minutes!". JAXWS also addresses the performance of SOAP web services; take a look at the article on implementing high-performance web services with JAXWS.

With JAXWS, I do not have to extend any particular class in order to expose a POJO as a web service. The ability to instantly test my web service without starting an external servlet container: +1. The ease of generating the client code: another +1. Maven plugin: +1. Spring integration: +1. Easy deployment without a heavyweight container: +1. The plugin, however, did not seem to provide an easy way to map namespaces to custom Java packages with wsimport.

JAXWS also provides for easier than before implementation of WS-* features if you need them.

I am especially interested in the performance characteristics of JAXWS with JAXB 2.0. I have always believed that one of the major performance penalties paid with SOAP web services is on the marshalling/unmarshalling front. I need to do some performance benchmarks, I guess.
Enjoy!

Wednesday, September 17, 2008

One for the worker bee...

Ok! Time to confess. Nope, I am not gay; I am a simple worker bee and proud of the same.

The worker bee, in nature or in an organization, is responsible for taking orders, innovations, and direction from their superior bees and simply executing them, no questions asked. I think of the SS as an extreme parallel.

I must admit, I am a worker bee of sorts by birth and by nature. My younger bro inherited the brains of the family. I remember that for his 10th grade project the guy created a graphing library in BASIC. I, on the other hand, had a simple video cassette library where people could check in/check out. Sadly for the world of CS, my bro decided to head toward management and sales. If only he were in CS, we would have seen a Rod Johnson or, more ambitiously, a Gosling. Sadness!!!! He had the makings of an uber bee.

Me, on the other hand, I have been a hard worker all my life. I work like a maniac at times to achieve a goal. I get it wrong, I get it more wrong, but then I get it right... yep, sometimes it's trial and error, no logic, just trial and error. But heh, I don't quit till it's done, the worker bee in me... aah, now I am rhyming...

Some folks want a pure worker bee, a non-questioning, non-confrontational worker bee, incapable of revolting :-)... I must leave the allegiance of the worker bees temporarily, as I cannot classify with the herd and instead prefer to form a faction. I am a questioning worker bee (new genus, not genius). If my general or queen bee says, "Just kill them", I would ask, "Why? What's the reason? What for? How do we benefit from the same? Is there an alternative? And definitely, show me the money!!!". I am still a bee, so don't abandon me just yet :-). But yes, I am also a watchful bee; I can gauge properly whose "A...ss" is on the line. If it ain't mine, I ain't gonna make a loud noise. A bee who knows his boundaries, I figure.

Worker bees are extremely valuable for any endeavour; they ensure the end goal is met. Think teamwork. Einstein or Tesla might have come up with the best of inventions to facilitate time travel, but if Mr. Worker Bee does not get the bolt in correctly, the mechanism can expect quite contrasting results :-).

We worker bees have a very important place in any endeavour. Don't ever think less of us. We might not be major decision makers, but our minor decisions can make major decisions reflect as either good or bad. Not a threat, but a call for respect. Also, every bee other than the top bee is a worker bee. Right?

I sadly am not very gifted, mentally or physically. I have a brain that can execute some tasks; tasks that fall within my comprehension level at the moment. I am a bee that is growing with experience. That does not mean I cannot rise to an occasion, or that I cannot exceed your expectations at any given time. Give me time; I work hard; I will succeed... I am a worker bee. I have been in the trenches, I have been on the clouds, I am a proven bee.

Reminds me of an ole saying: "Kick an old man on the butt, you know what he is! Don't kick a young man on his butt, you don't know what he'll become!" I feel people have moments of greatness all through their lives, some more than others.

Worker bee signing off...watch out here I come :-)

Sunday, September 14, 2008

Java Instrumentation with JDK 1.6.X, Class Re-definition after loading

Revelation:
It was recently brought to my notice that classes can be redefined after the VM has started and the class has been loaded, thanks to a feature introduced in JDK 1.6.X. In other words, if we have a loaded class whose simple method greet() returns "Hello Sanjay", it can be changed to return "Hello world". I knew that you could redefine a class at load time, but that a class could be changed after it has been loaded was news to me. We could do the same before with JPDA, where classes were hot-swapped, but not with Instrumentation, at least to my understanding. In a previous blog of mine, I discussed the redefinition of classes to augment them with profiling information; that was done at class load time using an agent and a byte code enhancement library, javassist.

Research:
Armed with the ammunition from above, I explored the changes. With Java 6, one can load a Java agent after the VM has started, which means we have control over when the agent is loaded. Redefinition of classes is supported by the following additions to the instrumentation package in Java 6, available on an instance of the java.lang.instrument.Instrumentation interface:

Instrumentation.retransformClasses(Class...)
Instrumentation.addTransformer(ClassFileTransformer, boolean)
Instrumentation.isModifiableClass(Class)
Instrumentation.isRetransformClassesSupported()

With the above, the research begs the following questions:

How do we enhance byte code of a Class?
A class byte code can be enhanced using a byte code engineering library. There are many popular libraries such as javassist, asm, bcel that can be used to alter a class.
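As a sketch of the enhancement step using javassist (this assumes javassist on the classpath; the helper class name and the particular log statements are my own assumptions, not the downloadable project's code):

```java
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

public class BytecodeEnhancer {
  // Returns the bytecode of the named class with simple enter/exit
  // log statements wrapped around every declared method.
  public static byte[] addProfiling(String className) throws Exception {
    ClassPool pool = ClassPool.getDefault();
    CtClass cc = pool.get(className);
    for (CtMethod m : cc.getDeclaredMethods()) {
      m.insertBefore("System.out.println(\"-> Enter " + className + "\");");
      m.insertAfter("System.out.println(\"<- Exit " + className + "\");");
    }
    byte[] bytes = cc.toBytecode();
    cc.detach(); // release the CtClass back to the pool
    return bytes;
  }
}
```

Note that insertBefore/insertAfter only modify existing method bodies, which fits nicely with the redefinition restriction discussed below: the class "schema" is untouched.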

How do we transform the class at run time with the enhanced byte code?
Consider the following example that describes the process,



// Obtain altered class byte code
byte bytes[] = addProfiling(Foo.class);

// Definition indicating the class to be transformed and its new byte code.
ClassDefinition definition = new ClassDefinition(Foo.class, bytes);

// Ask the instrumentation instance to redefine the class.
instrumentationInstance.redefineClasses(definition);



The addProfiling(Foo.class) method returns the enhanced bytecode of Foo.class.
NOTE: One thing we cannot do when redefining a class is change its "SCHEMA". In other words, we cannot add or remove methods, etc. When using JPDA for debugging, you might remember that although you could change code inside an existing method, attempting to change a signature would not work.

How does one obtain a handle to the java.lang.instrument.Instrumentation instance to redefine the classes?

In my previous instrumentation example blog, the agent class was initialized via the following method public static void premain(String agentArgs, Instrumentation inst). With Java 6 in order to start the agent after the VM has started, the Agent class needs to define the method:

public static void agentmain(String agentArgs, Instrumentation inst)
Both the methods shown above provide an instance of the Instrumentation interface and thus a place to obtain a handle to the Instrumentation implementation.

An agent class with the following can facilitate the re-transformation of classes:



public class Agent {
  private static Instrumentation instrumentation;

  public static void agentmain(String agentArgs, Instrumentation inst) {
    Agent.instrumentation = inst;
  }

  public static void redefineClasses(ClassDefinition... defs) throws Exception {
    Agent.instrumentation.redefineClasses(defs);
  }
}




How does one load the agent?

Apart from having a class that defines the agentmain method, the jar that will contain the agent must have certain entries in its MANIFEST file as shown below:


Agent-Class: com.welflex.Agent
Can-Retransform-Classes: true
Can-Redefine-Classes: true


The Can-Redefine-Classes and Can-Retransform-Classes attributes declare the agent's ability to redefine and retransform classes, respectively.

With JDK 5.X, the agent had to be specified on the command line via -javaagent:jarpath in order to start it. With JDK 6.X this is not required; we can load an agent at runtime given the location of its jar file. The loading of the agent is facilitated by the VirtualMachine class present in the tools.jar of the JDK. An example of how an agent is loaded is shown below:



// Location of agent jar
String agentPath = "/home/sanjay/agent.jar";

// Attach to VM
VirtualMachine vm = VirtualMachine.attach(processIdOfJvm);

// Load Agent
vm.loadAgent(agentPath);
vm.detach();




The VirtualMachine class is part of the package com.sun.tools.attach which is present in the tools.jar file located in $JDK_HOME/lib. Thus if loading of an agent post JVM start is desired, it naturally follows that the tools.jar has to be available in the class path.
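To self-attach, one first needs the current JVM's process id. A minimal sketch (the class name is hypothetical; the pid parsing relies on the HotSpot convention of naming the runtime MXBean "pid@hostname", and the attach calls are commented out since they require tools.jar on the classpath):

```java
import java.lang.management.ManagementFactory;

public class AgentLoader {
  // Returns the current JVM's process id, parsed from the runtime
  // MXBean name, which HotSpot reports as "pid@hostname".
  public static String currentPid() {
    String jvmName = ManagementFactory.getRuntimeMXBean().getName();
    return jvmName.split("@")[0];
  }

  public static void main(String[] args) throws Exception {
    System.out.println("Current JVM pid: " + currentPid());
    // With tools.jar available, one could then self-attach:
    // VirtualMachine vm = VirtualMachine.attach(currentPid());
    // vm.loadAgent("/path/to/agent.jar");
    // vm.detach();
  }
}
```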

An Example:
For the sake of discussion, let us consider the following requirements of our application:

  • Select classes to profile. For example, display profiling information for class Foo but not for class Bar

  • Profiling information should be removable. For example, if class Application was profiled in one test, then on a subsequent test, the profiling information for class Application should not exist unless requested by the test.



I have a class called Profiler. The profiler class has the following definition:



public class Profiler {
  public static void addClassesToProfile(Class... classes) {
    ClassDefinition defs[] = Profiler.buildProfiledClassDefinitions(classes);
    Profiler.loadAgent();
    InstrumentationDelegate.redefineClasses(defs);
  }

  private static Map<Class<?>, byte[]> byteHolderMap = new HashMap<Class<?>, byte[]>();

  private static ClassDefinition[] buildProfiledClassDefinitions(Class... classes) {
    // 1. Store original class byte code in byteHolderMap
    // 2. Enhance classes with profiling information and
    //    return ClassDefinitions with the enhanced code.
    ....
  }

  public static void restoreClasses() {
    // For each entry in the byteHolderMap, redefine the class.
    ...
  }
  ...
}




When the method addClassesToProfile(Class... classes) is invoked, the Profiler creates ClassDefinitions that contain profiling information. After this, the agent (InstrumentationDelegate) is loaded, thereby obtaining a handle to the java.lang.instrument.Instrumentation implementation, and the call to redefineClasses() installs the enhanced class definitions. The method buildProfiledClassDefinitions() also populates a Map of [Class, byte[]] that stores the original byte code of each class; thus, when the restoreClasses() method is invoked, the original classes are reinstated.

Shown below is a unit test that utilizes the profiling agent:
  • Runs a Test using the class without instrumentation
  • Enhances the code with profiling information and runs the tests
  • Restores the class under test to the one before the profiling information was added
  • Re-runs the test with no profiling information.




public void testApplication() throws Exception {
System.out.println("Test Profiling Application Class.");

// Run test before Profiling, class will not have profiling information
Application appInit = new Application(new Description().setDescription("Pre-Profiling"));
System.out.println("Class not augmented yet, so should not see profiling information.");
assertEquals("Pre-Profiling", appInit.getDescription());

System.out.println("Adding Profiling information and running test, should see profiling.");

// Redefine and add Profiling information, should see profiling info
// printed out
Profiler.addClassesToProfile(Application.class);

Application app = new Application(new Description().setDescription("Simple App"));
app.rest();
app.rest(3000);

assertEquals("Simple App", app.getDescription());

// Remove profiling information
Profiler.restoreClasses();

// Classes must not display profiling information.
System.out.println("Classes have been restored, should not see profiling information now..");
Application afterRestore = new Application(new Description().setDescription("After Restore"));
assertEquals("After Restore", afterRestore.getDescription());
}




Running com.welflex.profiletarget.ApplicationTest
Test Profiling Application Class.
Class not augmented yet, so should not see profiling information.
Adding Profiling information and running test, should see profiling.
-> Enter Method:com.welflex.profiletarget.Application.rest()
<- Exit Method:com.welflex.profiletarget.Application.rest() completed in 2112 nano secs
-> Enter Method:com.welflex.profiletarget.Application.rest()
<- Exit Method:com.welflex.profiletarget.Application.rest() completed in 1000060102 nano secs
-> Enter Method:com.welflex.profiletarget.Application.rest(long)
<- Exit Method:com.welflex.profiletarget.Application.rest(long) completed in 980 nano secs
-> Enter Method:com.welflex.profiletarget.Application.rest(long)
<- Exit Method:com.welflex.profiletarget.Application.rest(long) completed in 3000029988 nano secs
-> Enter Method:com.welflex.profiletarget.Application.getDescription()
<- Exit Method:com.welflex.profiletarget.Application.getDescription() completed in 29749 nano secs
Classes have been restored, should not see profiling information now..
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.927 sec
Running the Example:
The example can be downloaded from HERE. The example uses the byte code engineering library javassist to augment the classes. The project itself is a Maven 2 project and requires JDK 1.6.X to execute. You might need to tweak the pom.xml file of the profiler-instrumentation project to point to your locally installed tools.jar, i.e., change the line ${java.home}/../lib/tools.jar to point to your local tools.jar via a fully qualified path. I have had some difficulty getting this working correctly and had to use an absolute path to the tools.jar. After this, issue a "mvn install" from the root project directory to see the profiling example in action. Note also that if you import the project into Eclipse via Q4E, then when running the example, ensure that the profiler-integration project is closed; otherwise the profiler-target project will pick up the Eclipse project instead of the deployed profiler-integration jar file.

If you have difficulties running the example, please do not hesitate to contact me.

Closing Thoughts:
One can instrument classes on a running VM quite easily with the above concept.
Mr. Jack Shirazi has a nice example of hotpatching a Java application.
I have personally not tried attaching to a different VM, but it should be pretty straightforward. The example I have demonstrated is largely based on the JMockit testing framework, which utilizes byte code enhancement. It is quite a powerful testing framework and I need to spend some time playing with it. But right now, I am off to play with my Xbox :-)

Sunday, September 7, 2008

Service Locator Pattern, How shall I find you?

I have worked in companies that have used the Service Locator pattern to obtain dependencies of an object. Service Locator is one way of promoting orthogonality, another is of course Dependency Injection. I am not going to delve into detail regarding pros/cons of each pattern but instead will focus on the Service Locator and some simple investigations into the same.

The primary purpose of a Service Locator, IMO, is to act as a registry of sorts for obtaining managed objects. The Service Locator can also be augmented with functionality to lazily load the serviced object, and it can serve as a single place where a new copy of the serviced object can be obtained (think the Prototype pattern here).
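The lazy-loading variant mentioned above can be sketched as follows (a minimal, self-contained sketch; all class names here are hypothetical):

```java
public class LazyLocator {
  interface OrderDAO {}
  static class OrderDAOImpl implements OrderDAO {}

  // Created on first lookup, then cached for subsequent callers.
  private static OrderDAO dao;

  public static synchronized OrderDAO getOrderDAO() {
    if (dao == null) {
      dao = new OrderDAOImpl();
    }
    return dao;
  }

  public static void main(String[] args) {
    // Every lookup after the first returns the cached instance.
    System.out.println(getOrderDAO() == getOrderDAO()); // prints "true"
  }
}
```

The synchronized keyword keeps the lazy initialization safe if multiple threads hit the locator at once; the cost is a lock on every lookup, which a more elaborate locator might avoid.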

One simple example of obtaining an object using the Service Locator is shown below:
public class ServiceLocator {
  private static final ServiceLocator instance = new ServiceLocator();
  private ServiceLocator() {}
  public static ServiceLocator getInstance() {
    return instance;
  }
  private OrderDAO dao = new OrderDAOImpl();

  public OrderDAO getOrderDAO() {
    return dao;
  }
} 

public class OrderServiceImpl {
  private final OrderDAO orderDAO;
  public OrderServiceImpl() {
    orderDAO = ServiceLocator.getInstance().getOrderDAO();
  }
}

In the above example, the OrderDAO implementation is obtained from the Service Locator via a strongly typed call to getOrderDAO(). This is quite nice. However, what if we had many different objects to locate? Adding methods like getProductDAO(), getCartDAO(), getAuditService(), etc. will tend to bloat the ServiceLocator with a lot of code. One could create separate Service Locator classes, for example, ProductDAOLocator, CreditCardDAOLocator, etc.; however, we would then have a lot of classes to maintain and thus a penalty to pay.

What if we could provide an easy way to obtain the objects via a "key"? This method involves passing a "key" to the Service Locator, based on which the locator examines its registry and returns the corresponding object. Consider an altered version of the locator, as shown below:
public class ServiceLocator {
  private static final ServiceLocator instance = new ServiceLocator();
  private Map locatorMap = new HashMap();

  private ServiceLocator() {
    locatorMap.put("orderDAO", new OrderDAOImpl());
    locatorMap.put("productDAO", new ProductDAOImpl());
    locatorMap.put("auditService", new AuditServiceImpl());
  }

  public static ServiceLocator getInstance() {
    return instance;
  }

  public void setServicedObject(String key, Object impl) {
    locatorMap.put(key, impl);
  }

  public Object getServicedObject(String key) {
    return locatorMap.get(key);
  }
} 

public class OrderServiceImpl {
  private final OrderDAO orderDAO;
  public OrderServiceImpl() {
   orderDAO = (OrderDAO) ServiceLocator.getInstance()
    .getServicedObject("orderDAO");
  }
}

public class OrderServiceTest {
  public void setUp() {
    ServiceLocator.getInstance().setServicedObject("orderDAO", 
      new MockOrderDAOImpl());
  }
  public void testService() {
   ....
  }
}
In the above example, the code is more dynamic and it becomes easy to add more locatable services. In addition, the above locator works very well when using a command/factory pattern as shown below:
public class ServiceLocator {
  private static final ServiceLocator instance = new ServiceLocator();
  private Map locatorMap = new HashMap();

  private ServiceLocator() {
    locatorMap.put("saveOrder", new OrderSaveCommand());
    locatorMap.put("saveProduct", new ProductSaveCommand());
     .....
  }

  public static ServiceLocator getInstance() {
    return instance;
  }

  public void setServicedObject(String key, Object impl) {
    locatorMap.put(key, impl);
  }

  public Object getServicedObject(String key) {
    return locatorMap.get(key);
  }
} 

public class Action {
  public void doAction(Request request) {
    String commandKey = request.getCommandKey();
    CommmandIF c = (CommandIF) ServiceLocator.getInstance()
      .getServicedObject(commandKey);
    c.executeCommand(request);
  } 
}

The ServiceLocator functions as a factory of sorts here, providing concrete implementations of the Command interface at runtime based on the key.

The above examples of the Service Locator do have a major shortcoming: the code is no longer strongly typed, and compile-time type checking is lost. In the example that explicitly maintained getter methods for each known service, a call to a getter guaranteed compile-time type checking and runtime safety. In the versions of the locator shown above, if the registry had an accidental mapping of ["orderDAO", new ProductDAOImpl()], a ClassCastException would occur at runtime.
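The runtime failure described above can be reproduced with a minimal, self-contained sketch (all names hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class MisMappingDemo {
  interface OrderDAO {}
  interface ProductDAO {}
  static class ProductDAOImpl implements ProductDAO {}

  // Returns true when the accidental mapping blows up only at runtime.
  public static boolean misMappedLookupFails() {
    Map<String, Object> registry = new HashMap<String, Object>();
    // Compiles fine: the registry holds plain Objects, so nothing stops
    // a ProductDAO from being registered under the "orderDAO" key.
    registry.put("orderDAO", new ProductDAOImpl());
    try {
      OrderDAO dao = (OrderDAO) registry.get("orderDAO");
      return dao == null; // unreachable for this non-null, mis-typed entry
    } catch (ClassCastException expected) {
      return true;
    }
  }

  public static void main(String[] args) {
    System.out.println(misMappedLookupFails()); // prints "true"
  }
}
```

The compiler accepts the mis-mapping without complaint; only the cast at lookup time reveals the mistake, which is exactly the weakness the generics-based locator addresses.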


Clearly, both styles of the Service Locator pattern shown above have their pros and cons. Can we think of a third way that provides the compile-time type checking of the first example with the ease of use of the second?

Yes, we have hope with Java 5.X+ and generics. With generics, introduced in Java 5.X, compile-time type safety received a big boost. Before exploring the problem further, I must credit the work that influenced this:


With the above examples, we have laid down the pattern for implementing a type-safe version of the Service Locator. The key used to look up a class is now its interface. With that said, the following snippet presents a new, type-safe version of the Service Locator:
public class ServiceLocator {
  private static Map<Class<?>, Object> infInstanceMap
    = new HashMap<Class<?>, Object>();

  public static <T, I extends T> void setService(Class<T> inter, I impl) {
    infInstanceMap.put(inter, impl);
  }

  public static <T> T getService(Class<T> ofClass) {
    Object instance = infInstanceMap.get(ofClass);

    return ofClass.cast(instance);
  }
}

public class BootStrap {
  public static void start() {
    ServiceLocator.setService(OrderDAO.class, new OrderDAOImpl());
    // Note the following will not compile
    // ServiceLocator.setService(ProductDAO.class, new OrderDAOImpl());
  }
}

public class OrderService {
  private final OrderDAO orderDAO;

  public OrderService() {
    orderDAO = ServiceLocator.getService(OrderDAO.class);
  }
}

public class OrderServiceTest {
  public void setUp() {
    ServiceLocator.setService(OrderDAO.class, new MockOrderDAOImpl());
  }

  public void testService() {
    ...
  }
}
The above example is type safe while promoting testability. So, is there some functionality we have lost with this locator? The answer is 'yes': the locator does not permit the location of multiple implementations of the same interface. Two values for the same key become a bit of a problem. How can we overcome this?

Well, I do not know if this is a good solution or not. However, what I propose is that if there is more than one implementation of a given interface that needs to be located, then the implementations need to specifically identify themselves via unique identifiers (think Spring annotations and @Qualifier here). So the solution is: if there is exactly one known implementation, use the interface-based lookup; if not, and this should be known during development, look up by a unique identifier. Applying these changes, we have:
public class ServiceLocator {
// Code for interface based lookup is not shown here but should be considered included
.......
.......

  private static Map<String, Object> idObjectMap
   = new HashMap<String, Object>();

  public static <T, I extends T> void setIdBasedLookupService(String uid,
    Class<T> infClass, I impl) {
   idObjectMap.put(uid, impl);
  }

  public static <T> T getServiceById(String uid, Class<T> infClass) {
   Object instance = idObjectMap.get(uid);

   return infClass.cast(instance);
  }
}

public class BootStrap {
  public static void start() {
    ServiceLocator.setIdBasedLookupService("saveOrderCommand", CommandIF.class,
     new SaveOrderCommand());

    // Note the following will not compile
    // ServiceLocator.setService("saveProductCommand", CommandIF.class,
    // new OrderDAOImpl());
  }
}

public class Action {
  public void execute(Request request) {
    String commandKey = request.getCommandKey();
    CommandIF command = ServiceLocator
     .getServiceById(commandKey, CommandIF.class);
    command.execute();
  }
}

public class RequestTester {
  public void setUp() {
   ServiceLocator.setIdBasedLookupService("saveOrderCommand",
     CommandIF.class,new MockSaveCommand());
  }

  public void testRequest() {
    ...
  }
}

Ok so are we done yet? I hear the shouts of "About time!" but beg for forgiveness and continue :-)

There are some things that I do not like about the above approach:
  • Explicit setting of the objects in the locator by the BootStrap. Every locatable interface and implementation needs to be specified.
  • No lazy loading option

I would like to lean on my favorite open source framework, Spring, and its classpath-scanning feature for locating interfaces and implementations, along with its @Qualifier annotation for denoting the case where there are multiple implementations.

To facilitate this, I define an annotation called @Locatable, which interfaces participating in the service location process will need to employ. If an interface has only one concrete implementation, then it will be discovered and bound in the locator. If there are multiple implementations, then those implementations must carry an annotation called @ImplUid, which contains one attribute called uid. As examples of the same, see below:
@Locatable
public interface OrderService {
  public void saveOrder(Order order);
}

public abstract class AbstractOrderServiceImpl implements OrderService {
....
}
// Single Concrete OrderService implementation
public class OrderServiceImpl extends AbstractOrderServiceImpl {
  public void saveOrder(Order order) {
  ....
  }
}

@Locatable
public interface Phone {
  public boolean dialNumber(int number);
}

// Analog implementation of Phone
@ImplUid(uid="analogPhone")
public class AnalogPhone implements Phone {
  public boolean dialNumber(int number) {
   ....
  }
}

// Digital implementation of Phone.
@ImplUid(uid="digitalPhone")
public class DigitalPhone implements Phone {
  public boolean dialNumber(int number) {
   ...
  }
}
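The declarations of @Locatable and @ImplUid themselves are not shown above; they might look roughly like the following (a sketch — the runtime retention and type target are my assumptions, though the uid attribute is as described in the post):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks an interface as participating in service location.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Locatable {
}

// Distinguishes between multiple implementations of a @Locatable
// interface; uid is the key used for id-based lookup.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ImplUid {
  String uid();
}
```

Both annotations must be retained at runtime, since the locator inspects them while scanning the classpath.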
In the above example, note that the OrderService has only one concrete implementation, namely OrderServiceImpl. The Phone interface, however, has two implementations, each tagged with the @ImplUid annotation to set them apart.

In order to perform the tie-up between interface and implementation, I have defined a class called LocatorClassFinder. This class scans the classpath, ties interfaces to their implementations, and returns two maps. The first maps [interface name to class name] for cases where there is only one implementation of a given interface. The second provides mappings for interfaces that have multiple implementations, each correspondingly tagged with @ImplUid. The ServiceLocator has a start() method that asks the LocatorClassFinder to construct the maps. The actual classpath scanning is performed using the Spring Framework's adapters around the ASM library. When the ServiceLocator is asked for an object, it lazily instantiates it based on the information provided by the two maps. Summarizing via code:
public class ServiceLocator {
  ....
  // All look up/setter static methods discussed earlier.

  private static InterfaceImplMaps maps;

  // Called once at startup
  public static void start() {
    // Package to scan
    LocatorClassFinder finder = new LocatorClassFinder("com/welflex");
    maps = finder.locateInterfaceImpls();
  }
}    

public class LocatorClassFinder {
   .....
   public InterfaceImplMaps locateInterfaceImpls() {
    ....
   }
}
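To make the lazy instantiation concrete, here is a sketch of how the lookup side could consult the two maps and create instances on first request. The class and method names here are my invention (the real code is in the download), and for simplicity I use a ConcurrentHashMap, which gives basic thread safety the post's version deliberately leaves out:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch of the lazy, map-driven lookup described above.
public class LazyLocatorSketch {
  // Interface name -> sole implementation class name.
  private static final Map<String, String> singleImpls = new ConcurrentHashMap<>();
  // Interface name -> (uid -> implementation class name).
  private static final Map<String, Map<String, String>> multiImpls = new ConcurrentHashMap<>();
  // Lazily created instances, keyed by implementation class name.
  private static final Map<String, Object> instances = new ConcurrentHashMap<>();

  // Called by the classpath scanner once the maps are built.
  static void registerSingle(String ifaceName, String implName) {
    singleImpls.put(ifaceName, implName);
  }

  static void registerMulti(String ifaceName, String uid, String implName) {
    multiImpls.computeIfAbsent(ifaceName, k -> new ConcurrentHashMap<>()).put(uid, implName);
  }

  public static <T> T getService(Class<T> iface) {
    return iface.cast(instantiate(singleImpls.get(iface.getName())));
  }

  public static <T> T getServiceById(String uid, Class<T> iface) {
    return iface.cast(instantiate(multiImpls.get(iface.getName()).get(uid)));
  }

  // Instantiate on first request only; subsequent requests hit the cache.
  private static Object instantiate(String className) {
    return instances.computeIfAbsent(className, name -> {
      try {
        return Class.forName(name).getDeclaredConstructor().newInstance();
      } catch (Exception e) {
        throw new IllegalStateException("Could not instantiate " + name, e);
      }
    });
  }
}
```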
With the above solution, we have an easy way to implement Service Location, with support for the command/factory patterns as well. The developer does not need to map interfaces to implementations explicitly, as these are discovered at runtime.

In Summary:

I have an example implementation of the Service Locator available HERE for download. It is a Maven 2 project. The example is purely for purposes of demonstration and learning. I have used code from the Spring project and am NOT redistributing it anywhere or utilizing it in any project; it is purely an educational exercise.

  • The LocatorClassFinder implementation uses code from the Spring Framework. Some classes that Spring has scoped as package-private, I have simply copied and used. There is no compelling reason to use Spring's ASM support; I just found it "easy". In addition, some other Spring classes are used to achieve the classpath scanning. The Service Locator example does not depend on the Spring container in any way and only uses certain classes provided by Spring; it could easily be stripped of all dependencies on Spring if required.
  • Although the example code aims to facilitate lazy loading, thread-safety issues around that lazy loading have NOT been addressed. It should not be hard to do so.
  • The code does not cover creating objects via the Prototype pattern. However, that can easily be done with some tweaking.
  • The LocatorClassFinder class is a very hacky implementation. It was written just to test the theory and is by no means ready to stand the rigors of a code review :-)) I am ashamed of it and blame it on three drinks on a Saturday night ;-)
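For the Prototype-pattern point above, the tweak would be to skip the instance cache and reflectively create a fresh object on every lookup. A hypothetical sketch (the class and method names are my invention, not part of the downloadable code):

```java
// Hypothetical sketch of the Prototype-pattern tweak mentioned above:
// no instance cache, so every lookup yields a fresh object.
public class PrototypeLocatorSketch {
  public static <T> T newInstanceOf(Class<T> iface, String implClassName) {
    try {
      return iface.cast(Class.forName(implClassName).getDeclaredConstructor().newInstance());
    } catch (Exception e) {
      throw new IllegalStateException("Could not instantiate " + implClassName, e);
    }
  }
}
```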