Monday, March 30, 2009

Java Closure(s) but no cigar?

It has now been a year since I started working with Java 5+ features. During development, I have welcomed auto-boxing, the concise for loop, varargs, enums and generics. The last feature in particular has been much appreciated, as type-safe programming makes life for a developer so much easier. That said, I will also state that there have been many times when I have been troubled by wildcards and erasure. However, I find the takeaways I get from generics far outweigh the few times I need to stress my brain with wildcards and erasure.

Like generics, one equally important feature that the Java language has been wanting is closures. The effort to bring closures to Java resulted in multiple proposals. Notably:
  1. BGGA (full closures)
  2. CICE (Concise Instance Creation Expressions, i.e., simplified inner classes)
  3. FCM (First-Class Methods)
As a note, a poll taken to determine which proposal Java developers would welcome showed BGGA taking the lead. Interestingly enough, a large percentage of Java developers felt that they do not want closures in Java at all. My intuition tells me that a large percentage of the latter voters are saying "I do not want BGGA closures in Java" rather than "I do not want closures at all".

A famous debate has ensued, via presentations and blogs, between Neal Gafter, who has come forth as the major spokesperson for the BGGA proposal, and Josh Bloch, a supporter of the CICE proposal. It can easily be googled for.

A bit about my stance: I definitely see the power of closures. However, as the saying from the Spider-Man movie goes, "With great power comes great responsibility." I want the power that closures offer in Java, and I want an implementation that I can understand and be productive with. I want intuitive compilation errors that will immediately alert me to the problem without straining the few @deprecated grey cells that are still struggling to function in my brain.

I want conciseness, but not at the cost of readability and maintainability. In short, I want power with clarity and brevity. One cannot always get everything one wants, sad but true; however, my stance is that if I can benefit considerably most of the time, I do not mind the pain a little of the time. Think of it like a marriage ;-)))..luckily, the wife does not read my technical blogs lol! A better parallel: (Gain Factor >>>> Pain Factor == Adoption)

One can read elsewhere about what a closure is and what a lambda expression is, and I will not spend time on that here. The primary reason being, I cannot claim to fully understand them myself :-(

What I want to do in this blog is to understand the BGGA closures and try for myself how hard or easy it is to work with them. I do not use an IDE, but revert to emacs and the command line for this exercise.

I downloaded the BGGA proposal and ran some of the examples from the Java Closures Tutorial so very well explained by Zdeněk Troníček. Then I tried to apply the same to cases I have encountered before, or am working with now, to see for myself how easy or difficult the proposal is and what problems I faced, if any.

Function Types:
I think function types would be a great addition to the language. I understand Mr. Bloch's concern that they might tend to lead to an exotic style of programming; however, I feel exotic styles of programming can be developed with the simplest of constructs should developers choose to. It is possible for a psychotic developer such as myself to take the simplest of problems and provide the most convoluted but workable solution for it. If I had been introduced to the book Java Puzzlers prior to learning Java, I might have fled the scene. Sure, there will be puzzling cases, but that's what code reviews, documentation, etc. are for, after all. I am not sure I buy the argument that "BGGA closures lend themselves to an exotic programming style". One can easily be exotic with the simplest of language constructs IMHO.

Let's look at a couple of examples. I have an eachEntry() method that runs through a java.lang.Iterable and executes the block provided. I also have a simple filter() that can be used to filter an Iterable and which delegates to eachEntry().



public static <T> Collection<T> filter(Iterable<T> input,
                                       {T => boolean} filterBlock) {
  Collection<T> filteredItems = new ArrayList<T>();

  eachEntry(input, {T item =>
    if (filterBlock.invoke(item)) filteredItems.add(item);});

  return filteredItems;
}

public static for <T, throws E extends Exception>
    void eachEntry(Iterable<T> items, {T => void throws E} block) throws E {

  for (T item : items) {
    block.invoke(item);
  }
}



An example usage of the above where Employee objects are filtered:



// Static import the utility functions, filter and eachEntry
import static com.welflex.util.Utils.*;

List<Employee> emps = ...// Get employees

// A closure that uses the function type of eachEntry to dump out
// information about each employee
eachEntry(emps, {Employee e =>
  System.out.println("Id:" + e.getId()
    + ", Manager:" + e.isManager()
    + ", Salary:" + e.getSalary());});

// Filter out only employees who are managers
Collection<Employee> managers = filter(emps, {Employee e => e.isManager()});

// Filter out only managers who earn more than 60000 using a filter of filters
Collection<Employee> mgrsGt = filter(filter(emps, {Employee e => e.isManager()}),
    {Employee e => e.getSalary() > 60000});

// The above one re-written
mgrsGt = filter(emps, {Employee e => e.isManager() && e.getSalary() > 60000});



I find the above pretty concise and powerful. I also find it rather easy to read. So no complaints from me there. For those reading, hey, I never claimed I was invincible as far as programming goes; close to invincible, yes ;-)
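
For a rough comparison, here is how the manager filter might look today without closures, written against a hypothetical Predicate interface (the interface and names are illustrative, not part of the code above) with an anonymous inner class:

import java.util.ArrayList;
import java.util.Collection;

// A hypothetical single-method interface standing in for {T => boolean}
interface Predicate<T> {
  boolean matches(T item);
}

public static <T> Collection<T> filter(Iterable<T> input, Predicate<T> predicate) {
  Collection<T> filteredItems = new ArrayList<T>();
  for (T item : input) {
    if (predicate.matches(item)) filteredItems.add(item);
  }
  return filteredItems;
}

// Usage: the anonymous class adds a fair amount of ceremony
Collection<Employee> managers = filter(emps, new Predicate<Employee>() {
  public boolean matches(Employee e) {
    return e.isManager();
  }
});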

Now for the negatives, some things I could not understand:

a. The "for" construct did not work for me:
I might not have done it correctly; the eachEntry() declared with the "for" qualifier compiled just fine, but compiling the following loop-style usage should have worked, and I am not sure what I am missing:



for eachEntry(Employee e : emps) {
  System.out.println("Id:" + e.getId());
}



Instead, it led to a compilation error:
"Filter.java:20: method forEach in class Filter cannot be applied to given types
required: java.lang.Iterable
,javax.lang.function.OO,java.lang.Class
found: java.util.Collection,{T => void},java.lang.Class
forEach(orig, {T item => if (block.invoke(item)) matched.add(item);},"

The above was not very instructive for me in addressing the issue :-(

b. A static inner class used in the example caused a compiler error:

When my Employee class was a static inner class, I ended up with the following exception when I tried to compile the examples class, leading me to move the Employee class into its own compilation unit:

An exception has occurred in the compiler (1.6.0_11). Please file a bug at the Java Developer Connection (http://java.sun.com/webapps/bugreport) after checking the Bug Parade for duplicates. Include your program and the following diagnostic in your report. Thank you. java.lang.NullPointerException at com.sun.tools.javac.comp.Resolve.isAccessible(Resolve.java:140)

I would also like to provide another example where I am using an ARM (automatic resource management) style block to obtain a JMS object from JNDI and subsequently close the context. Note that I am not using the actual javax.naming.Context classes but have instead modeled my own for the sake of demonstration. Without closures, using anonymous classes, I have the following:



private <T> T executeContextTask(ContextTask<T> task) throws LookupException {
  Context ctx = null;
  try {
    ctx = new ContextImpl();
    return task.executeWithContext(ctx);
  } finally {
    if (ctx != null) {
      try {
        ctx.close();
      } catch (Exception e) {
        // log this
      }
    }
  }
}

private static interface ContextTask<T> {
  public T executeWithContext(Context ctx) throws LookupException;
}

public ConnectionFactory getConnectionFactory(final String name)
    throws LookupException {

  return executeContextTask(new ContextTask<ConnectionFactory>() {
    public ConnectionFactory executeWithContext(Context ctx)
        throws LookupException {

      ConnectionFactory c = (ConnectionFactory) ctx.lookup(name);

      // Do something with Connection Factory before returning
      return c;
    }
  });
}
....
public Destination getDestination(final String name)
    throws LookupException {

  ... // Similar to above..except returns a Destination..//
}




The same example, when done using BGGA closures, simplifies to:



private static <T, throws E extends LookupException>
    T withContext({Context => T throws E} contextTaskBlock) throws E {

  Context ctx = null;
  try {
    ctx = new ContextImpl();
    return contextTaskBlock.invoke(ctx);
  } finally {
    if (ctx != null) {
      try {
        ctx.close();
      } catch (Exception e) {
        // Log error
      }
    }
  }
}

// Note that name is no longer final, there is no longer a ContextTask
// interface, and the anonymous class has been replaced by the closure.

public ConnectionFactory getConnectionFactory(String name)
    throws LookupException {

  return withContext({Context ctx =>
    ConnectionFactory c = (ConnectionFactory) ctx.lookup(name); c});
}

.. Similar code for Destination...



Closure Conversion:
I quite like closure conversion. For example, I have an interface called Read that reads in an Employee record. Assigning a closure to the interface type, we have the code below.

I like the conciseness this offers over writing an anonymous inner class + method definition + body (although the conversion is translated to the same).



interface Read {
  public Employee read(String criteria);
}

Read readRecord = {String criteria =>
  System.out.println("Criteria:" + criteria);
  Random r = new Random();
  new Employee(r.nextInt(5),
               r.nextBoolean(),
               r.nextInt(10000))};

public void closureConversion() {
  Employee e = readRecord.read("select * from Emp where id=\"20\"");
  System.out.println("Employee Read:" + e);
}
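
For comparison, roughly what the closure conversion boils down to when written out as a plain anonymous inner class (this is my own sketch of the equivalent, not output from the prototype):

Read readRecord = new Read() {
  public Employee read(String criteria) {
    System.out.println("Criteria:" + criteria);
    Random r = new Random();
    return new Employee(r.nextInt(5),
                        r.nextBoolean(),
                        r.nextInt(10000));
  }
};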




Arrays:
Arrays boggled my mind with generics, and they continue to boggle my mind with BGGA closures.



// Will not compile
{ => int} closures[] = new { => int}[10];

// Will not compile
{=>int} closures[] = {{=>5}, {=>6}};

// Compiles
List<{=>int}> list = new ArrayList<{=>int}>();

// But there is no way to get an array back out of the list either,
// since the array cannot be created:
{=>int} closures[] = list.toArray(new {=>int}[0]);




Non local returns:
When returning from a closure, should the "return" result in returning from the enclosing method itself? That is not very intuitive to me. I am of the opinion that returns from a closure should only be local and not result in exiting the method. That is room for accidental programming with disastrous results.

Consider the following code; what do you think will be the result of the invocation?



static boolean testBooleanLocal(boolean input) {

  {boolean => boolean} closure = {boolean args =>
    args
  };

  boolean result = !closure.invoke(input);

  System.out.println("Test Boolean Local Result: " + result);
  return result;
}

static boolean testBooleanNonLocal(boolean input) {

  {boolean ==> boolean} closure = {boolean arg ==>
    return arg;
  };

  boolean result = !closure.invoke(input);

  System.out.println("Test Boolean NonLocal Result:" + result);
  return result;
}

public static void main(String args[]) {
  System.out.println("Local:" + testBooleanLocal(true));
  System.out.println("Non Local:" + testBooleanNonLocal(true));
}



The answer is:



Test Boolean Local Result: false
Local:false
Non Local:true



What just happened to the second println()? All because I had a ==> versus a => along with a return statement?

The Syntax:
Without the assistance of an IDE, I had a few growing pains where I accidentally typed "= >" instead of "=>", i.e., with a space in between. Surely any IDE ready to support BGGA will catch that. What about "==>" versus "=>"? Will an IDE be able to warn about the difference? I sure hope so :-)

Running the Examples:
As always, I am sharing my playground. For those interested, I tried the above as follows:
Download the ...., install it, and then set your PATH to include an existing JDK 1.6. For example,
export PATH=/.../jdk1.6.12/bin:$PATH. Then compile the provided code using the "javac" from the BGGA proposal, for example, "$BGGA_HOME/bin/javac -classpath .:com/welflex/utils:com/welflex/model:com/welflex/example *.java", and run it using similar syntax.

Parting Thoughts:
It saddens me that Java does not have closures. Visual Basic developers are benefitting and thriving with them, or so I have heard. Closures are a construct that has been around for ages; Java compromised at the time of its inception with anonymous classes rather than deal with the complexity of closures, maybe a good decision for the time. However, the time is now nigh for change. If Java is to survive, closures need to enter the language. One cannot wait between JDK 7 and JDK 8 without knowing whether closures will or will not find a place in Java. Java folk are tending to gravitate toward other languages like Scala, Clojure, etc. IMHO we need to stop this abandonment. My rationale (based purely on sentiment): I love Java, it has been the reason for my bread and butter over the years, I love developing with it and I want to continue to develop with it. More importantly, it represents coffee, and I cannot wake up without my shot of the same. Scala or Clojure doesn't inspire me as much; maybe an alternate VM language called Nicotine could cut it and spark my interest ;-)

BGGA is not perfect. Sadly, nothing ever is. In my own humble opinion, the warring factions of Gafter/Bloch need to join forces to improve the compiler warnings, the syntax and any other ambiguities in the features BGGA presents, and look toward introducing it as part of JDK 8.x. Come on guys, you are super smart chaps (PhDs don't come easy) who have collectively led to the creation/enhancement of this great language; make my life easy and do not let me leave my favorite language for some pretenders to the throne :-(. If not, I propose that Gosling take the place of a dictator and get this in..I will welcome the same! Maybe IBM buying Sun will change things... Got to cut 'em beers when I post; for all you know I might have made a case for "No closures ever in Java via my misadventures :-)", lol!


Sample Code that I have used can be downloaded from here.

Monday, March 2, 2009

An Example of Caching with REST using Jersey JAX-RS

One of the constraints/benefits of a RESTful architecture is the use of caches where possible. A REST architecture gains from the use of caches by reducing network bandwidth and unnecessary I/O. In short, caching of information has a direct impact on the scalability of a RESTful architecture.

Server Side Cache:
When a request is made to retrieve a data set, and that data set does not change over a fixed duration (as determined by non-functional requirements), then caching the data set on the server side has the benefit of not having to suffer, for example, database I/O for every client request. Network bandwidth between client and server is, of course, still consumed with a server-only cache.

Client Side Cache:
HTTP has a very cool construct whereby a server can tell a client, via HTTP header attributes, whether or not to cache the payload the server provides. In addition, clients can also utilize conditional GETs to obtain the payload only if the data has changed on the server. With a conditional GET one could also return only the changes that have occurred, and a client could easily compute the delta and update its locally cached copy of the data.

I wonder how many organizations utilize these features provided by HTTP. The server side cache can easily be accommodated by using Squid or some other tool.

The client side, now that's worth a bit of discussion. As part of an HTTP response, a server can let the client know whether or not to cache the provided representation via the "Expires" HTTP header attribute. As an example, when a product client or web browser requests a list of products from the server, and the server knows that the product data will not be refreshed for some time, it can inform the client to cache the payload representing the products until expiration. What I like about this control is that the server, not the client, dictates the duration for which the data is valid and can be cached. Clients in a client-server environment deciding to cache data based on their own non-functional requirements is a bad way to cache IMO. The duration logic should IMO emanate from the service or server.

Using the "Expires" header the server can tell the client to cache the data till a particular date or provide a time duration to cache. The former can be a problem if the client and server clocks are not in sync. For example, the server tells the client to cache a piece of data till Dec 20th, 2012. However, the Client clock is 10 mins behind the server. So although the data on the client has not expired, the server data has. For this reason, setting a duration for expiration via a time duration such as 10 mins will allow both Client/Server to be in sync regarding expiration of the cache.

What about the case when caching is recommended on the client but there is a certain amount of volatility to the data? For example, let's say we have an OrderClient that GETs information about a particular order from the server. The order could subsequently be updated by a user, for example by adding a new line item. In such a case one could use the conditional GET features of HTTP to obtain new payload only if the data cached by the client is stale. The server determines whether the data has changed since the last time the client requested the payload, and either provides the entire data or responds with an HTTP status of 304, indicating that the payload is unmodified. The client, on receiving a 304, can respond to its consumer with the data it previously cached. This reduces the amount of data transferred between client and server and thus alleviates network bandwidth utilization. Conditional HTTP GETs can be performed using either ETags or the Last-Modified header attribute.
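
As a sketch of the Last-Modified flavour (the worked example below uses ETags instead), JAX-RS can evaluate the If-Modified-Since precondition directly; getLastUpdated() here is a hypothetical accessor on the order entity:

import java.util.Date;
import javax.ws.rs.GET;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Request;
import javax.ws.rs.core.Response;

@GET
@Produces("application/xml")
public Response getOrder(@Context Request request) {
  Order order = orderService.getOrder(orderId);
  Date lastModified = order.getLastUpdated(); // hypothetical accessor

  // A non-null builder means the client's copy is still current => 304
  Response.ResponseBuilder notModified = request.evaluatePreconditions(lastModified);
  if (notModified != null) {
    return notModified.build();
  }

  OrderDto orderDto = (OrderDto) beanMapper.map(order, OrderDto.class);
  return Response.ok(orderDto).lastModified(lastModified).build();
}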

As an example of the above, let us look at a Jersey JAX-RS example. In the example, we have two clients: a ProductClient that obtains information about products, and an OrderClient used to manage the life cycle of an order. The ProductClient will cache the products until the time has come to re-fetch them due to expiration, while the OrderClient will cache the payload obtained and issue a conditional GET to fetch the payload only if the data has changed on the server since its last request.

The ProductsResource shown below, for the sake of demonstration, sets the products to expire 3 seconds after its invocation:
@GET 
@Produces("application/json") 
public Response getProducts() {
   ...
   ProductListDto productListDto = new ProductListDto(productDtos);
   Response.ResponseBuilder response = Response.ok(productListDto).type(MediaType.APPLICATION_JSON);

   // Expires 3 seconds from now..this would ideally be based
   // on some pre-determined non-functional requirement.
   Date expirationDate = new Date(System.currentTimeMillis() + 3000);
   response.expires(expirationDate);

   return response.build();
}

The OrderResource, on the other hand, uses an ETag to determine whether the order has been modified since the client's last GET request, and returns either a status of 304 or the entire order body, as shown below:

@GET
@Produces("application/xml")
public Response getOrder(@Context HttpHeaders hh, @Context Request request) throws OrderNotFoundException {
  Order order = orderService.getOrder(orderId);

  LOG.debug("Checking if there an Etag and whether there is a change in the order...");

  EntityTag etag = computeEtagForOrder(order);
  Response.ResponseBuilder responseBuilder = request.evaluatePreconditions(etag);

  if (responseBuilder != null) {
    // Etag match
    LOG.debug("Order has not changed..returning unmodified response code");
    return responseBuilder.build();
  }

  LOG.debug("Returning full Order to the Client");
  OrderDto orderDto = (OrderDto) beanMapper.map(order, OrderDto.class);

  responseBuilder = Response.ok(orderDto).tag(etag);

  return responseBuilder.build();
}
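
The computeEtagForOrder() helper is not shown in the post; a minimal sketch, assuming the Order entity carries something that changes on every update (getVersion() is a hypothetical accessor), could be:

import javax.ws.rs.core.EntityTag;

private EntityTag computeEtagForOrder(Order order) {
  // Anything that changes whenever the order changes will do;
  // here, a hypothetical version counter combined with the id.
  return new EntityTag(order.getId() + "-" + order.getVersion());
}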



From the perspective of the ProductClient, it checks whether the cached data has expired before issuing a new request to the server, as shown below:

public ProductListDto getProducts() {
  // Key into the cache
  String path = resource.getURI().getPath();
  CacheEntry entry = CacheManager.get(path);
  ProductListDto productList = null;
  if (entry != null) {
    LOG.debug("Product Entry in cache is not null...checking expiration date..");

    Date cacheTillDate = entry.getCacheTillDate();
    Date now = new Date();

    if (now.before(cacheTillDate)) {
      LOG.debug("Product List is not stale..using cached value");

      productList =  (ProductListDto) entry.getObject();
    } 
    else {
      LOG.debug("Product List is stale..will request server for new Product List..");
    }
   }

   if (productList == null) {
     LOG.debug("Fetching Product List from Service...");
     ClientResponse response = resource.accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);

     if (response.getResponseStatus().equals(Status.OK)) {
       productList = response.getEntity(ProductListDto.class);
       String cacheDate = response.getMetadata().getFirst("Expires");
       
       if (cacheDate != null) {
         Date ccDate;

         try {
           ccDate = DATE_FORMAT.parse(cacheDate);
           entry = new CacheEntry(productList, null, ccDate);
           CacheManager.cache(path, entry);
         }
         catch (ParseException e) {
           LOG.error("Error Parsing returned cache date..no caching will occur", e);
         }
       }
     } 
     else {
       throw new RuntimeException("Error Getting Products....");
     }
  }
  return productList;
}
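
The CacheEntry and CacheManager classes used by the clients are not spelled out in the post; a bare-bones sketch consistent with how they are called above (a hypothetical reconstruction, kept deliberately primitive) might be:

import java.util.Date;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.core.EntityTag;

// Holds a cached representation plus either an ETag or an expiry date.
class CacheEntry {
  private final Object object;
  private final EntityTag etag;
  private final Date cacheTillDate;

  public CacheEntry(Object object, EntityTag etag, Date cacheTillDate) {
    this.object = object;
    this.etag = etag;
    this.cacheTillDate = cacheTillDate;
  }

  public Object getObject() { return object; }
  public EntityTag getEtag() { return etag; }
  public Date getCacheTillDate() { return cacheTillDate; }
}

// A very primitive in-memory cache keyed by request path.
class CacheManager {
  private static final Map<String, CacheEntry> CACHE =
      new ConcurrentHashMap<String, CacheEntry>();

  public static void cache(String key, CacheEntry entry) { CACHE.put(key, entry); }
  public static CacheEntry get(String key) { return CACHE.get(key); }
  public static void remove(String key) { CACHE.remove(key); }
}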

The OrderClient, on the other hand, keeps the ETag and sends it as part of every request to the server, as shown below:

public OrderDto getOrder(Long orderId) throws OrderNotFoundException, IOException {
  try {
    String path = resource.path(orderId.toString()).getURI().getPath();

    CacheEntry entry = CacheManager.get(path);
    Builder wr = resource.path(orderId.toString()).accept(MediaType.APPLICATION_XML);

    if (entry != null && entry.getEtag() != null) {
     // Set the etag
      wr.header("If-None-Match", entry.getEtag().getValue());
    }

    ClientResponse response = wr.get(ClientResponse.class);

    if (response.getResponseStatus().equals(Status.NOT_MODIFIED)) {
      LOG.debug("Order has not been modified..returning Cached Order...");
      return (OrderDto) entry.getObject();
    }
    else if (response.getResponseStatus().equals(Status.OK)) {
      LOG.debug("Obtained full Order from Service...Caching it..");
      OrderDto dto = response.getEntity(OrderDto.class);
      CacheManager.cache(path, new CacheEntry(dto, response.getEntityTag(), null));

      return dto;
    }
   else {
     LOG.debug("Order not found on server...removing from cache");
     CacheManager.remove(path);
     throw new UniformInterfaceException(response);
   }
  }
  catch (UniformInterfaceException e) {
    if (e.getResponse().getStatus() == Status.NOT_FOUND.getStatusCode()) {
      throw new OrderNotFoundException(e.getResponse().getEntity(String.class));
    }
    throw new RuntimeException(e);
  }
}

Seeing the above in action for the products: we obtain the products in the first call, which should result in them being cached; a second request executed immediately after the first should obtain the cached products; and sleeping for some time allows the data to become stale, so a subsequent request should re-fetch it. The logs when the tests are run look like:

1. Request, cache Products:
20:37:33 DEBUG - com.welflex.client.CacheManager.cache(14) | Caching Object with key [/IntegrationTest/products]

2. Request, Product Cache still good:
20:37:33 DEBUG - com.welflex.client.CacheManager.get(19) | Getting Object from Cache for Key:/IntegrationTest/products
20:37:33 DEBUG - com.welflex.client.ProductClientImpl.getProducts(49) | Product Entry in cache is not null...checking cache till date
20:37:33 DEBUG - com.welflex.client.ProductClientImpl.getProducts(54) | Product List is not stale..using cached value

3. Request, Products have expired:
20:37:43 DEBUG - com.welflex.client.CacheManager.get(19) | Getting Object from Cache for Key:/IntegrationTest/products
20:37:43 DEBUG - com.welflex.client.CacheManager.get(21) | Object in Cache for Key [/IntegrationTest/products] is :com.welflex.client.CacheEntry@1bf3d87
20:37:43 DEBUG - com.welflex.client.ProductClientImpl.getProducts(49) | Product Entry in cache is not null...checking cache till date
20:37:43 DEBUG - com.welflex.client.ProductClientImpl.getProducts(57) | Product List is stale..will request server for new Product List..
20:37:43 DEBUG - com.welflex.client.ProductClientImpl.getProducts(62) | Fetching Product List from Service...


From the OrderClient perspective, the first request to obtain the order results in the order being cached along with its ETag. When a subsequent request is sent, the server responds only with a status of 304, i.e., not modified, and the OrderClient responds with the cached copy. After this second request, the order is updated and the ETag is no longer valid, therefore a subsequent GET of the order results in the full order being fetched and re-cached, as shown below:

1. First time the Order is retrieved, Order is cached:
Retrieving the order...
22:33:13 DEBUG - com.welflex.client.CacheManager.get(19) | Getting Object from Cache for Key:/IntegrationTest/orders/3443274629940897628
22:33:13 DEBUG - com.welflex.client.OrderClientImpl.getOrder(68) | Obtained full Order from Service...Caching it..

2. Second Request, Order not changed on Server, 304 returned to Client:
22:33:13 DEBUG - com.welflex.order.rest.resource.OrderResource.getOrder(68) | Order Resource 22:33:13 DEBUG - com.welflex.order.rest.resource.OrderResource.getOrder(79) | Order has not changed..returning unmodified response code
22:33:13 DEBUG - com.welflex.client.OrderClientImpl.getOrder(64) | Order has not been modified..returning Cached Order...

3. Issue a PUT to update the Order, thus changing it:
Updating the order...
22:33:13 DEBUG - com.welflex.order.rest.resource.OrderResource.updateOrder(106) | Enter Update Order, Id=3443274629940897628

4. Retrieve the Order the etag should no longer be valid:
Retrieving the order..should not obtained cached copy...
22:33:13 DEBUG - com.welflex.order.rest.resource.OrderResource.getOrder(73) | Checking if there an Etag and whether there is a change in the order...
22:33:13 DEBUG - com.welflex.order.rest.resource.OrderResource.getOrder(83) | Returning full Order to the Client
22:33:13 DEBUG - com.welflex.client.OrderClientImpl.getOrder(68) | Obtained full Order from Service...Caching it..


Attached HERE is a sample Maven Jersey JAX-RS project that will allow you to witness the above. The caching implemented is primitive at best and the attached code is only an EXAMPLE. One could easily delegate the caching to a caching framework such as ehcache, oscache, jcs, etc. One could even potentially get more exotic and think of aspects that intercept calls to GET and transparently provide the caching.

To execute the example, from the command line at the root of the project, execute "mvn install". Note that one needs JDK 5.x+ in order to execute the code.

Caching is a very critical feature of REST. Not using it is like saying one is doing RES without the T. As always, if a reader of this blog has any comments, I'd appreciate them. If I am wrong, I would like to hear about it, as that will help me improve..or else forever hold ur breath :-) If you cannot run the example, ping me...

Sub-Resources with Jersey, Spring and JAX-RS

A simple example of how to use sub-resources with Jersey and Spring.

For the sake of discussion, consider a resource /orders. One would typically POST to this resource to create an order, update the order at the resource /orders/{id}, GET the resource from the URI /orders/{id}, and also delete the order at the resource /orders/{id}.

One can therefore state that /orders/{id} is a Sub-Resource of /orders.

In JAX-RS, sub-resources are obtained from the parent resource. A parent resource does not define an HTTP method for the sub-resource, but instead simply has a @Path annotation indicating a sub-resource.

For example,


@Path("/orders")
@Component

@Scope("singleton")
public class OrdersResource {
@Context

private ResourceContext resourceContext;

@Path("/{id}")
public OrderResource subResource(@PathParam("id") String id) {

OrderResource resource = resourceContext.getResource(OrderResource.class);

resource.setOrderId(Long.parseLong(id));

return resource;

}
...
}



In the above example, we have an OrdersResource that services the URI pattern /orders. Calls to /orders/{id} are delegated to a sub-resource of OrdersResource called OrderResource, as shown below:


@PerRequest
@Component
@Scope("prototype")
public class OrderResource {

  private Long orderId;

  // Jersey injected
  @Context
  private ResourceContext resourceContext;

  // Spring injected
  @Autowired
  private OrderService orderService;

  @Autowired
  private MapperIF beanMapper;

  ....

  public void setOrderId(Long orderId) {
    this.orderId = orderId;
  }

  // @Context are Jersey injected
  @GET
  @Produces("application/xml")
  public Response getOrder(@Context HttpHeaders hh,
                           @Context Request request) { ... }

  @Consumes("application/xml")
  @PUT
  public void updateOrder(@PathParam("id") String id, OrderDto orderDto) {
    ....
  }

  @DELETE
  public void deleteOrder(@PathParam("id") String id) {
    ....
  }

  @Path("/lineItems")
  public LineItemListDto getLineItems() throws OrderNotFoundException {
    .....
  }
}



Both of the above-mentioned resources are beautifully managed by the Spring and Jersey interoperability. Jersey resources are injected by Jersey and Spring resources are injected by Spring auto-wiring. So cool!!!!

As shown above, the OrderResource follows a prototype pattern and is obtained via a call to resourceContext.getResource(OrderResource.class).

I quite like the fact that the OrderResource is declared as a sub-resource of the OrdersResource, and that the creation of the sub-resource via the prototype pattern fits so beautifully within JAX-RS. It becomes a trivial exercise to trace the flow of a URI call from the parent resource to its sub-resources.