Sleepless in Salt Lake City, by Sanjay Acharya

API First Services with Open API 3.0 and Spring Boot (2018-11-27)<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
API First Approach</h2>
<br />
<div>
The term API First design seems to be quite prevalent with Microservices, especially those on the cloud. <a href="https://swagger.io/resources/articles/adopting-an-api-first-approach/">Swagger.io</a> defines API first as: "<i>An API-first approach means that for any given development project, your APIs are treated as “<b>first-class citizens.</b>” That everything about a project revolves around the idea that the end product will be consumed by mobile devices, and that APIs will be consumed by client applications.</i>". An API first approach involves giving more thought ahead of time to the design of the API. It involves planning across stakeholders and gathering feedback on the API before the API is translated to code.<br />
<br />
They also list some characteristics of APIs that are developed using an API First approach:<br />
- They are <b>consistent</b>. This makes sense: in an ecosystem of APIs, one would like the APIs to have a level of consistency<br />
- They can be developed using an API definition language<br />
- They are re-usable<br />
<br />
An API first approach aims to <b>focus first on the API</b> itself rather than on building separate versions of the API for mobile, web, etc.<br />
<br />
Kevin Hoffmann, in his blog post <a href="https://www.oreilly.com/ideas/an-api-first-approach-for-cloud-native-app-development">An API-first approach for cloud-native app development</a>, describes a cloud Microservice ecosystem where multiple services are in play and where, without discipline, integration failures become a nightmarish scenario. He goes on to state: "<i>To avoid these integration failures, and to formally recognize your API as a first-class artifact of the development process, API first gives teams the ability to work against each other’s public contracts without interfering with internal development processes</i>." Kevin also stresses that designing your API first enables one to have <b>a discussion with stakeholders around some tangible material </b>(the API), leading to the creation of user stories, mocked APIs and documentation that socialize the scope of the service.<br />
<br />
Swagger defines the following as benefits of an API First Approach to creating services:<br />
<br />
<ul style="text-align: left;">
<li><b>Parallel </b>development among teams, as they work against the interface or contract, which lends itself to mocking and stubbing.</li>
<li>Reduced <b>cost </b>of developing applications, as API code can be re-used across multiple applications. Focusing on API design up front allows the problem to be thought through well enough before any code is invested in.</li>
<li><b>Speed to Market</b> due to the automation support around generating code from the API specification</li>
<li>Ensures a good developer experience as the API is consistent, well documented and well designed.</li>
<li><b>Reduced risk of failure</b> for the business, as the API first approach ensures that APIs are reliable, consistent and easy for developers to use</li>
</ul>
So is this just old wine in a new bottle? Take SOAP based services as an example. They satisfied the parallel development point above because one could create the <a href="https://en.wikipedia.org/wiki/Interface_description_language">IDL</a> and then generate service and client stubs. XML schemas used in the IDLs were re-usable. The service developed would be re-usable across multiple applications. The generation of code from the API meant speed to market. A good developer experience was ensured as the API was well documented with first class RPC type operations. APIs were consistent, and developers simply used the generated artifacts and focused on building the logic backing them.<br />
<br />
I think the beauty of API first development is not so much the technology as the methodology and principles advocated by the approach: the API is worked on ahead of developing the code and is bought into by the stakeholders before effort is invested. The API in such an ecosystem is considered a first class citizen of the business. The simplicity of the IDL, along with the tooling made available, makes the job of writing the IDL pretty easy. Coupled with the fact that the IDL itself is very readable, this helps people understand what the capabilities of the API will be. Code generation tools also allow the generation of service/client stubs in a multitude of languages, which makes polyglot programming easy.<br />
</div>
<h2 style="text-align: left;">
Open API and Swagger</h2>
<br />
<div>
The <a href="https://github.com/OAI/OpenAPI-Specification">Open API Specification</a> is a community driven specification that defines a programming language agnostic IDL for REST APIs. The IDL is defined in a way that is intuitive for humans to read and author. Open API documents are written in <a href="http://yaml.org/">YAML</a> or JSON. The Open API specification evolved from the Swagger 2.0 specification with input from the community.<br />
<br />
The specification allows authoring of an API including its endpoints, authentication mechanisms, operation parameters, request/response types, licensing and other information.<br />
<br />
Swagger is a <a href="https://swagger.io/tools/open-source/open-source-integrations/">set of tools</a> built around the specification, such as the Editor, Code Generator etc.<br />
<br />
<h2 style="text-align: left;">
Open API Example</h2>
<br />
The full source code for this example can be found at: <a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/openapi-example">open-api-example</a>
<br />
For the Open API example, I chose to use my favorite simplistic <b>notes service</b>. I then placed the following requirements on it:<br />
<br />
<ul style="text-align: left;">
<li>Generate the <b>API definition first </b>and have a simple way to make it available for the service and its consumers to use</li>
<li>Have the Open API definition support <b>basic authentication</b></li>
<li>Use the API definition to generate the <b>service stubs</b></li>
<li>Use the API definitions to generate <b>client stubs</b></li>
<li>Use the client stubs to invoke the service in a simple integration test</li>
</ul>
<br />
For the first requirement, the <a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/openapi-example/notes-api/src/main/resources/api.yaml">notes api.yaml </a>file that describes the API is bundled in a notes-api maven jar. This jar contains only the API file; it does not bundle in any classes. The YAML file is set up to use <b>Basic Authentication</b> for all its resources. Part of the API specification is shown below:<br />
<br />
<pre class="shell" name="code">openapi: "3.0.0"
info:
  version: '1.0'
  title: Notes Service
  description: Demo project of open api 3 using API first development
  termsOfService: http://localhost:8080/terms-of-service
  contact:
    name: Sanjay Acharya
    email: foo@bar.com
security:
  - BasicAuth: []
paths:
  /notes:
    get:
      summary: Get all notes
      operationId: getNotes
      tags:
        - notes
      parameters:
        - name: page
          in: query
          description: Page Number
          required: false
          schema:
            type: integer
            format: int32
        - name: page-size
          in: query
          description: Number of items to return at one time
          required: false
          schema:
            type: integer
            format: int32
      responses:
        '200':
          description: A paged array of notes
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/NotesPage"
        '404':
          $ref: "#/components/responses/NotFound"
        default:
          $ref: "#/components/responses/UnexpectedError"
....
....
</pre>
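The elided portion of the specification includes the components section, where the BasicAuth scheme and the schemas referenced by the $refs above are declared. The real definitions live in the linked api.yaml; the sketch below only illustrates the shape of that section, and the field names are assumptions:

```yaml
components:
  # The security scheme referenced by the top-level "security" block
  securitySchemes:
    BasicAuth:
      type: http
      scheme: basic
  schemas:
    Note:
      type: object
      properties:
        id:
          type: integer
          format: int32
        contents:
          type: string
    Error:
      type: object
      properties:
        message:
          type: string
  # Reusable responses referenced via $ref from the operations
  responses:
    NotFound:
      description: The specified resource was not found
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/Error"
```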
<br />
To generate the server stubs, I used the <a href="https://github.com/swagger-api/swagger-codegen/tree/master/modules/swagger-codegen-maven-plugin">swagger-codegen-maven-plugin</a> configured as shown:<br />
<br />
<pre class="xml" name="code"><plugin>
  <groupId>io.swagger.codegen.v3</groupId>
  <artifactId>swagger-codegen-maven-plugin</artifactId>
  <version>3.0.2</version>
  <dependencies>
    <!-- Dependency puts notes-api (and hence api.yaml) on the classpath during plugin execution -->
    <dependency>
      <groupId>com.welflex.example</groupId>
      <artifactId>notes-api</artifactId>
      <version>1.0-SNAPSHOT</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>server</id>
      <goals>
        <goal>generate</goal>
      </goals>
      <configuration>
        <inputSpec>/api.yaml</inputSpec>
        <language>spring</language>
        <!-- Generate Spring Boot based stubs -->
        <library>spring-boot</library>
        <output>${project.build.directory}/generated-sources</output>
        <invokerPackage>com.welflex.notes</invokerPackage>
        <apiPackage>com.welflex.notes.api.generated</apiPackage>
        ....
      </configuration>
    </execution>
  </executions>
</plugin>
</pre>
<br />
The plugin uses the <i>notes-api</i> JAR dependency to build the stubs. Think of the <i>notes-api</i> JAR as an artifact that the service creators have authored and published to some maven repository for clients to use. Typically these artifacts would be authored and published in some API registry, but for the scope of this example the simpler jar method works. Of importance in the generated code are the NotesApi interface, which defines the different operations and documents them with content from the swagger file, and the NotesApiDelegate, which defines the API stubs as shown below:<br />
<br />
<pre class="java" name="code">@Api(value = "notes", description = "the notes API")
public interface NotesApi {
  NotesApiDelegate getDelegate();

  @ApiOperation(value = "Creates a new note", nickname = "createNote", notes = "", response = Note.class, authorizations = {
      @Authorization(value = "BasicAuth")
  }, tags = { "notes", })
  @ApiResponses(value = {
      @ApiResponse(code = 201, message = "A simple string response", response = Note.class),
      @ApiResponse(code = 200, message = "Unexpected Error", response = Error.class) })
  @RequestMapping(value = "/notes",
      produces = { "application/json" },
      consumes = { "application/json" },
      method = RequestMethod.POST)
  default ResponseEntity<Note> createNote(@ApiParam(value = "The note contents", required = true) @Valid @RequestBody Note body) {
    return getDelegate().createNote(body);
  }

  @ApiOperation(value = "Gets a Note", nickname = "getNote", notes = "", response = Note.class, authorizations = {
      @Authorization(value = "BasicAuth")
  }, tags = { "notes", })
  @ApiResponses(value = {
      @ApiResponse(code = 200, message = "The note requested", response = Note.class),
      @ApiResponse(code = 404, message = "The specified resource was not found", response = Error.class),
      @ApiResponse(code = 200, message = "Unexpected Error", response = Error.class) })
  @RequestMapping(value = "/notes/{id}",
      produces = { "application/json" },
      method = RequestMethod.GET)
  default ResponseEntity<Note> getNote(@ApiParam(value = "The id of the note to retrieve", required = true) @PathVariable("id") Integer id) {
    return getDelegate().getNote(id);
  }
  ....
}

public interface NotesApiDelegate {
  ....
  /**
   * @see NotesApi#createNote
   */
  default ResponseEntity<Note> createNote(Note body) {
    if (getObjectMapper().isPresent() && getAcceptHeader().isPresent()) {
    } else {
      log.warn("ObjectMapper or HttpServletRequest not configured in default NotesApi interface so no example is generated");
    }
    return new ResponseEntity<>(HttpStatus.NOT_IMPLEMENTED);
  }

  /**
   * @see NotesApi#getNote
   */
  default ResponseEntity<Note> getNote(Integer id) {
    if (getObjectMapper().isPresent() && getAcceptHeader().isPresent()) {
    } else {
      log.warn("ObjectMapper or HttpServletRequest not configured in default NotesApi interface so no example is generated");
    }
    return new ResponseEntity<>(HttpStatus.NOT_IMPLEMENTED);
  }
  ..
}
</pre>
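Stripped of the Spring and Swagger annotations, the API/delegate split in the generated code is just the plain-Java pattern sketched below (names here are illustrative, not the generated ones):

```java
// The API interface carries the routing metadata (omitted here) and
// forwards every operation to a delegate via a default method.
interface NotesApiSketch {
    NotesApiDelegateSketch getDelegate();

    default String getNote(int id) {
        return getDelegate().getNote(id);
    }
}

// The delegate is the only thing a developer needs to implement; the
// generated default bodies answer "not implemented" until overridden.
interface NotesApiDelegateSketch {
    default String getNote(int id) {
        return "501 Not Implemented";
    }
}

// The one class a developer would author.
class InMemoryNotesDelegate implements NotesApiDelegateSketch {
    @Override
    public String getNote(int id) {
        return "note-" + id;
    }
}
```

The payoff of this split is that regenerating the API interface from an updated specification leaves the hand-written delegate implementation untouched.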
<br />
The generated stub has a number of default method signatures that need to be implemented. The generated code does not have any security level annotations or stub code for security; it leaves that to the implementor. Shown below is the <a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/openapi-example/notes-service/src/main/java/com/welflex/notes/rest/NotesApiDelegateImpl.java">NotesApiDelegateImpl</a> that a developer would author, which implements the <i>NotesApiDelegate</i> and uses a Spring Data JPA <a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/openapi-example/notes-service/src/main/java/com/welflex/notes/repository/NotesRepository.java">NotesRepository</a> for the CRUD operations:
<br />
<pre class="java" name="code">public class NotesApiDelegateImpl implements NotesApiDelegate {
  // Yes, yes, we could delegate to a service and then to a repository; it is an example :-)
  private final NotesRepository notesRepository;

  @Autowired
  public NotesApiDelegateImpl(NotesRepository notesRepository) {
    this.notesRepository = notesRepository;
  }

  /**
   * @see NotesApi#getNote
   */
  public ResponseEntity<Note> getNote(Integer id) {
    Optional<NoteModel> noteModel = notesRepository.findById(id);
    if (noteModel.isPresent()) {
      return ResponseEntity.ok(toNote(noteModel.get()));
    }
    throw new NoteNotFoundException(id);
  }

  public ResponseEntity<NotesPage> getNotes(Integer page, Integer pageSize) {
    int pageRequest = page == null ? 0 : page;
    int limitRequested = pageSize == null ? 100 : pageSize;
    Page<NoteModel> dataPage = notesRepository.findAll(PageRequest.of(pageRequest, limitRequested));
    PageMetadata pageMetadata = new PageMetadata().pageNumber(pageRequest).pageSize(limitRequested)
        .resultCount(dataPage.getNumberOfElements())
        .totalResults(dataPage.getTotalElements());
    Notes notes = new Notes();
    notes.addAll(dataPage.stream().map(t -> toNote(t)).collect(Collectors.toList()));
    NotesPage notesPage = new NotesPage().items(notes).metadata(pageMetadata);
    return ResponseEntity.<NotesPage>ok(notesPage);
  }
  ...
}
</pre>
<br />
For the Basic Authentication, the example uses <a href="https://spring.io/projects/spring-security">Spring Security</a>, which is configured as shown below with support for a single demo user with <i><b>user name 'user' and password 'password'</b></i>:
<br />
<pre class="java" name="code">@Configuration
@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {
  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.csrf().disable().authorizeRequests().antMatchers("/index.html", "/api.yaml", "/api-docs")
        .permitAll().anyRequest().authenticated().and().httpBasic();
    super.configure(http);
  }

  @Autowired
  public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
    PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
    auth.inMemoryAuthentication().withUser("user").password(encoder.encode("password"))
        .roles("USER");
  }
}
</pre>
<br />
The service is configured to serve the <i>api.yaml</i> file via a REST endpoint as well.
<br />
A maven module called <i>notes-client</i> uses the same plugin to generate the client side artifacts, including the model artifacts. The service maintainers would typically not distribute such a client (see my blog on <a href="https://sleeplessinslc.blogspot.com/2017/08/shared-client-libraries-microservice.html">Shared Client Libraries</a>) but would instead expect the client to be generated in a similar manner, using the <i>notes-api</i> maven dependency directly in the consuming application.<br />
<pre class="xml" name="code"><plugin>
  <groupId>io.swagger.codegen.v3</groupId>
  <artifactId>swagger-codegen-maven-plugin</artifactId>
  <version>3.0.2</version>
  <dependencies>
    <!-- Dependency puts notes-api in the classpath during plugin execution -->
    <dependency>
      <groupId>com.welflex.example</groupId>
      <artifactId>notes-api</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>client</id>
      <goals>
        <goal>generate</goal>
      </goals>
      <configuration>
        <!-- API read from the classpath -->
        <inputSpec>/api.yaml</inputSpec>
        <language>java</language>
        <!-- Generate a RestTemplate based client -->
        <library>resttemplate</library>
        <output>${project.build.directory}/generated-sources</output>
        <ignoreFileOverride>${project.basedir}/src/main/resources/.openapi-codegen-ingore</ignoreFileOverride>
        <apiPackage>com.welflex.notes.api.generated</apiPackage>
      </configuration>
    </execution>
  </executions>
</plugin>
</pre>
<br />
The generated client stubs utilize <i>RestTemplate</i> and provide a neat client abstraction to communicate with the service. The client stubs pleasantly include built-in basic authentication support, supplying credentials as needed to invoke the API. The <a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/openapi-example/notes-integration/src/test/java/com/weflex/example/notes/NotesIntegrationTest.java">sample integration test</a> demonstrates the use of the generated Notes client stubs.<br />
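The generated client is typically configured with a username and password (the exact setter names vary by generator version, so treat that as an assumption). Under the hood, all Basic authentication amounts to is a Base64-encoded Authorization header, as this stdlib-only sketch shows:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class BasicAuthHeaderSketch {
    // Builds the value of the HTTP Authorization header for Basic auth:
    // "Basic " + base64(username + ":" + password)
    static String basicAuth(String username, String password) {
        String token = Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // The demo credentials configured in SecurityConfiguration
        System.out.println(basicAuth("user", "password")); // Basic dXNlcjpwYXNzd29yZA==
    }
}
```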
<br />
When the server module is built, a<b> docker image </b>is created. For this reason, you need to have docker installed on your computer prior to playing with this.<br />
<br />
The integration test module launches the service container using the <a href="https://www.testcontainers.org/">TestContainers</a> framework directly from a JUnit test and subsequently uses the notes-client to invoke the API methods.<br />
<br />
The full source code for this project can be found at <a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/openapi-example">open-api-example</a>.<br />
<br />
<h2 style="text-align: left;">
Parting Thoughts</h2>
<br />
I really liked the Open API specification, as defining the API is pretty intuitive. If you use the <a href="https://editor.swagger.io/">Swagger Editor UI</a>, authoring the API definition is pretty straightforward, with the tool assisting in making corrections. Using something like <a href="https://swagger.io/tools/swaggerhub/">Swagger Hub</a> to collaboratively author the API, with the different stakeholders participating in its creation and evolution, is valuable. There are also some really good Eclipse plugins that help with authoring the API if you don't want to use the Swagger Editor.
<br />
<br />
The <a href="https://github.com/swagger-api/swagger-codegen/tree/master/modules/swagger-codegen-maven-plugin">swagger-codegen-maven-plugin</a> was pretty good at generating the <i>Spring Boot</i> service and the client that uses <a href="https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/client/RestTemplate.html">RestTemplate</a>. It did nothing, however, for <b>spring-security</b> and the Basic Authentication pieces; maybe a future version will allow a security-framework tag to generate the security classes. One pain point was that the generated service code did not support alternate execution paths well. For example, there was no easy way in the generated code to respond with a different entity type for an error condition like a 404. This could lead to drift between what is documented and what is actually returned. I used a hack where I defined a Spring Exception handler that interrogates the annotation on the NotesApi and pulls the response message from it, as shown below:<br />
<pre class="java" name="code">@ControllerAdvice
public class ExceptionMapper extends ResponseEntityExceptionHandler {
  @ExceptionHandler(value = {NoteNotFoundException.class})
  public ResponseEntity<?> handleException(Exception e, HandlerMethod handlerMethod) {
    if (e instanceof NoteNotFoundException) {
      return handleNoteNotFoundException(NoteNotFoundException.class.cast(e), handlerMethod);
    } else {
      Error error = new Error().message("Unexpected Error:" + e.getMessage());
      return new ResponseEntity<Error>(error, HttpStatus.INTERNAL_SERVER_ERROR);
    }
  }

  private ResponseEntity<?> handleNoteNotFoundException(NoteNotFoundException e,
      HandlerMethod handlerMethod) {
    ApiResponse apiResponse = getApiResponseForErrorCode(404, handlerMethod);
    // Use the message as defined in the annotation
    Error error = new Error().message(apiResponse != null ? apiResponse.message()
        : "Note not found:" + e.getNoteId());
    return new ResponseEntity<Error>(error, HttpStatus.NOT_FOUND);
  }

  private ApiResponse getApiResponseForErrorCode(int errorCode, HandlerMethod method) {
    ...
    return method.getMethodAnnotation(ApiResponse.class);
  }
}
</pre>
<br />
Another thing I found hard to do on the client side was to get at the <b>Location</b> header returned by a POST request with the location of a created note. The generated client did not provide any way to expose that header to the consumer of the generated code.<br />
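One workaround (not part of the generated client) is to drop down to the underlying RestTemplate, read the raw response headers, and parse the created note's id out of the Location URI yourself. The parsing step is plain Java; the URL shape below is a hypothetical example:

```java
import java.net.URI;

class LocationHeaderSketch {
    // Extracts the trailing path segment of a Location header value,
    // e.g. "http://localhost:8080/notes/42" yields 42.
    static int createdId(String locationHeader) {
        String path = URI.create(locationHeader).getPath();
        return Integer.parseInt(path.substring(path.lastIndexOf('/') + 1));
    }
}
```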
Overall I really liked the Open API specification and the collaborative definition of an API. I also liked the idea that teams spend time thinking of what their API should be before they get down to coding. Service consumers should use the API to generate their own consumer clients.
</div>
</div>
Inventory Microservice example with Kafka Streams and CQRS (2018-02-07)<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
Introduction</h2>
<br />
<div>
I recently had a chance to play with <a href="https://kafka.apache.org/documentation/streams/">Kafka Streams</a> and <a href="https://martinfowler.com/bliki/CQRS.html">CQRS</a> and wanted to share my learnings via an example. For the example, I have selected a domain that represents Sellable Inventory, i.e., a computation of inventory that denotes what you can sell based on what you have on-hand and what has been reserved.</div>
<h2 style="text-align: left;">
Kafka Streams</h2>
<br />
<div>
Many people have used Kafka in their workplace simply by virtue of it being the choice of technology for large throughput messaging. Kafka Streams builds upon Kafka to provide:</div>
<div>
<ul style="text-align: left;">
<li>Fault Tolerant processing of Streams with support for joins, windowing and aggregations</li>
<li>Support for exactly once semantics</li>
<li>Support for a rich <a href="https://docs.confluent.io/current/streams/concepts.html#time">notion of event time</a> for processing: Event Time, Processing Time and Ingestion Time.</li>
</ul>
<div>
For more on Kafka Streams this video is a good introduction:</div>
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/Z3JKCLG3VP4/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/Z3JKCLG3VP4?feature=player_embedded" width="320"></iframe></div>
<div>
<br /></div>
<div>
<br /></div>
<h2 style="text-align: left;">
Architecture and Domain for the Example</h2>
</div>
<div>
Neha Narkhede has a nice post on the Confluent site: <a href="https://www.confluent.io/blog/event-sourcing-cqrs-stream-processing-apache-kafka-whats-connection/">Event Sourcing, CQRS, stream processing and Kafka: What's the connection?</a> In that post, she discusses the concepts of Event Sourcing, CQRS and Stream processing and alludes to an example of a Retail Inventory solution that is implemented using these concepts, but does not provide the source code. The domain for this blog is loosely based on the example she mentions there and on the GCP recipe on <a href="https://cloud.google.com/solutions/building-real-time-inventory-systems-retail">Building Real-Time Inventory Systems for Retail.</a></div>
<br />
<div>
The application has the following architecture:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNUK2SLhY2VrglaVnscUeB0g9ztLmUymyU3yQgWFMpqaAkZO-FPXJCI0_1hQpzgWwS_Jmaf5BEJsdr6jDkPJ-yrXcUl-B5SMQXVq3Nv-5dZjlVNLSiLSl6YCxC-1WieB3WzDgR__zy6S9M/s1600/SellableInventory.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="745" data-original-width="923" height="516" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNUK2SLhY2VrglaVnscUeB0g9ztLmUymyU3yQgWFMpqaAkZO-FPXJCI0_1hQpzgWwS_Jmaf5BEJsdr6jDkPJ-yrXcUl-B5SMQXVq3Nv-5dZjlVNLSiLSl6YCxC-1WieB3WzDgR__zy6S9M/s640/SellableInventory.png" width="640" /></a></div>
</div>
<div>
<br /></div>
<div>
<b>inventory-mutator</b></div>
<div>
This module is responsible for inventory <b>mutation</b> actions. Consider this the <b>command</b> part of the CQRS pattern. There are two operations that it handles:</div>
<div>
<ul style="text-align: left;">
<li><b>Adjustments </b>- Warehouse updates of inventory quantities. These are <b>increments</b> to availability of inventory</li>
<li><b>Reservations </b>- Decrements to inventory as it gets sold on a Retail website</li>
</ul>
<div>
Both endpoints are part of the same application but emit mutations to <b>separate Kafka topics</b> as shown in the figure, <b>inventory_adjustments</b> and <b>inventory_reservations</b>. In the real world one might separate these two operations, adjustments and reservations, into different Microservices in the interest of separation of concerns and scale, but this example keeps it simple.</div>
</div>
<div>
<br />
<b>sellable-inventory-calculator</b></div>
<div>
This is the stream processor that <b>combines</b> the adjustments and reservation streams to provide a stream of sellable inventory.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCzaF86VaOx4lORW_yFtI59OMqe4cOE3N8K-7uX06K1EDiVePzqNdRjMtOnNIm3vXPJf0GFCdIx_GJkSOqlAe8Y_9wSPzxxVhpxGcV8qqVRHyb2CXgH1t1O7-xkmkenWALoZVldG51xW2I/s1600/Stream+Flow.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="880" height="436" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCzaF86VaOx4lORW_yFtI59OMqe4cOE3N8K-7uX06K1EDiVePzqNdRjMtOnNIm3vXPJf0GFCdIx_GJkSOqlAe8Y_9wSPzxxVhpxGcV8qqVRHyb2CXgH1t1O7-xkmkenWALoZVldG51xW2I/s640/Stream+Flow.png" width="640" /></a></div>
<br />
In the example, the <i>sellable_inventory_calculator</i> application is also a Microservice that serves up the sellable inventory at a REST endpoint. The Stream processor stores the <b>partitioned</b> sellable inventory data in a local <a href="https://simplydistributed.wordpress.com/2017/03/21/kafka-streams-state-stores/">State store</a>. Every instance of the <i>sellable-inventory-calculator</i> application that embeds the Kafka Streams library hosts a <b>subset</b> of the application state, thus partitioning the data across the different instances. Fault tolerance for the local state is provided by Kafka Streams by transparently <b>logging</b> all updates to the local State store to a separate Kafka topic.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWzP30_woe_zOSHzH4Npjym-nuNNLa4czG_ekGh510Udff1Jen1cbREtMnDHdmvYDCQ_30Uz3zZSkRSG3mW6fZ6ZkOeBeLatfpcFwtOPGLwc3ujaW7boxRuprn1WCq4pP4IXDuNXOVS6UO/s1600/Partitioned+Store.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="410" data-original-width="748" height="350" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWzP30_woe_zOSHzH4Npjym-nuNNLa4czG_ekGh510Udff1Jen1cbREtMnDHdmvYDCQ_30Uz3zZSkRSG3mW6fZ6ZkOeBeLatfpcFwtOPGLwc3ujaW7boxRuprn1WCq4pP4IXDuNXOVS6UO/s640/Partitioned+Store.png" width="640" /></a></div>
<br />
The code that is representative of the above is shown below:
<br />
<pre class="java" name="code">StreamsBuilder kafkaStreamBuilder = new StreamsBuilder();

// KTable of adjustments
KTable<LocationSku, Integer> adjustments = kafkaStreamBuilder
    .stream(Topics.INVENTORY_ADJUSTMENT.name(),
        Consumed.with(Topics.INVENTORY_ADJUSTMENT.keySerde(), Topics.INVENTORY_ADJUSTMENT.valueSerde()))
    .groupByKey(Serialized.<LocationSku, Integer>with(Topics.INVENTORY_ADJUSTMENT.keySerde(),
        Topics.INVENTORY_ADJUSTMENT.valueSerde()))
    .reduce((value1, value2) -> {
      return (value1 + value2);
    });

// KTable of reservations
KTable<LocationSku, Integer> inventoryReservations = kafkaStreamBuilder
    .stream(Topics.INVENTORY_RESERVATION.name(),
        Consumed.with(Topics.INVENTORY_RESERVATION.keySerde(), Topics.INVENTORY_RESERVATION.valueSerde()))
    .groupByKey(Serialized.<LocationSku, Integer>with(Topics.INVENTORY_RESERVATION.keySerde(),
        Topics.INVENTORY_RESERVATION.valueSerde()))
    .reduce((value1, value2) -> {
      return (value1 + value2);
    });

// KTable of sellable inventory: adjustments left-joined with reservations
KTable<LocationSku, Integer> sellableInventory = adjustments.leftJoin(inventoryReservations,
    (adjustment, reservation) -> {
      LOGGER.info("Adjustment:{} Reservation {}", adjustment, reservation);
      return (adjustment - (reservation == null ? 0 : reservation));
    }, Materialized.<LocationSku, Integer>as(Stores.persistentKeyValueStore(SELLABLE_INVENTORY_STORE))
        .withKeySerde(Topics.SELLABLE_INVENTORY.keySerde()).withValueSerde(Topics.SELLABLE_INVENTORY.valueSerde()));

// Send the sellable inventory to the sellable_inventory topic
sellableInventory.toStream().map((key, value) -> {
  LOGGER.info("Streaming Key: {} Value: {}", key, value);
  return new KeyValue<LocationSku, Integer>(key, value);
}).to(Topics.SELLABLE_INVENTORY.name(), Produced.<LocationSku, Integer>with(Topics.SELLABLE_INVENTORY.keySerde(),
    Topics.SELLABLE_INVENTORY.valueSerde()));

sellableInventoryStream = new KafkaStreams(kafkaStreamBuilder.build(), streamsConfig);
sellableInventoryStream.start();
</pre>
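Per [location, SKU] key, the topology above boils down to: sellable = sum of adjustments minus sum of reservations, with a missing reservation treated as zero (the left join). A plain-Java sketch of that arithmetic, independent of Kafka (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class SellableInventorySketch {
    // Mirrors the two reduce() steps: sum quantities per key.
    static Map<String, Integer> reduceByKey(List<Map.Entry<String, Integer>> events) {
        Map<String, Integer> totals = new HashMap<>();
        for (Map.Entry<String, Integer> e : events) {
            totals.merge(e.getKey(), e.getValue(), Integer::sum);
        }
        return totals;
    }

    // Mirrors the leftJoin: every adjusted key appears in the result; a key
    // with no reservations contributes a null on the right, treated as zero.
    static Map<String, Integer> sellable(Map<String, Integer> adjustments,
                                         Map<String, Integer> reservations) {
        Map<String, Integer> result = new HashMap<>();
        adjustments.forEach((key, adjusted) -> {
            Integer reserved = reservations.get(key);
            result.put(key, adjusted - (reserved == null ? 0 : reserved));
        });
        return result;
    }
}
```

The difference in the real topology is that this computation happens incrementally and continuously as events arrive, rather than as a batch over complete lists.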
<div class="separator" style="clear: both; text-align: center;">
</div>
When an instance of the <i>sellable-inventory-calculator</i> starts, it sets the Streams configuration APPLICATION_SERVER to its own <b>host:port</b> as shown below:
<br />
<pre class="java" name="code">@Bean
public StreamsConfig streamConfig() {
  Map<String, Object> props = new HashMap<>();
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sellableInventoryCalculator");
  props.put(StreamsConfig.APPLICATION_SERVER_CONFIG,
      getAppHostPort().getHost() + ":" + serverPort); // This instance's host/port
  ...
  return new StreamsConfig(props);
}
</pre>
When a request arrives at a particular instance of the <i>sellable-inventory-calculator</i>, if that instance has the <i>[location, SKU]</i> combination it serves it up, else it forwards the request to an instance of the service that has the data.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiruP6xDb7PkgXDhZmUNyZXqrJGMmAhH6YFR5YO_5wRYP7uG3KKMxhyphenhyphenYjdgPZIp_Bt5hQJPUWLyfcUexaiOwmiONqSE_hwguqitoYBC0gT_mv4NRNlm3bSGg8BOn5S7RTCnP0eDqXkkjJQK/s1600/SellableInventoryCalculator-service.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="352" data-original-width="721" height="312" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiruP6xDb7PkgXDhZmUNyZXqrJGMmAhH6YFR5YO_5wRYP7uG3KKMxhyphenhyphenYjdgPZIp_Bt5hQJPUWLyfcUexaiOwmiONqSE_hwguqitoYBC0gT_mv4NRNlm3bSGg8BOn5S7RTCnP0eDqXkkjJQK/s640/SellableInventoryCalculator-service.png" width="640" /></a></div>
</div>
This host/port computation is done using the stream's metadata, as shown below:
<br />
<pre class="java" name="code">public HostPort hostPortForKey(LocationSku locationSku) throws NotFoundException {
  StreamsMetadata metadataForKey = sellableInventoryStream.metadataForKey(SELLABLE_INVENTORY_STORE,
      locationSku, Topics.SELLABLE_INVENTORY.keySerde().serializer());
  // Host:port of the instance whose local store holds the sellable inventory for this location-sku
  return new HostPort(metadataForKey.host(), metadataForKey.port());
}
</pre>
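The serve-locally-or-forward decision that follows from this lookup can be sketched in plain Java. The class below is a hypothetical stand-in (none of these names come from the example project): it simply compares the key owner's host/port against the instance's own advertised host/port.

```java
// Hypothetical sketch of the local-vs-forward decision. The real service
// compares the StreamsMetadata host/port against its own
// APPLICATION_SERVER_CONFIG value; all names here are illustrative.
class HostPort {
    final String host;
    final int port;

    HostPort(String host, int port) {
        this.host = host;
        this.port = port;
    }
}

class QueryRouter {
    private final HostPort self;

    QueryRouter(HostPort self) {
        this.self = self;
    }

    /** True when this instance's local state store holds the key. */
    boolean ownsKey(HostPort keyOwner) {
        return self.host.equals(keyOwner.host) && self.port == keyOwner.port;
    }

    /** URL to forward to when another instance owns the key. */
    String forwardUrl(HostPort keyOwner, String location, String sku) {
        return "http://" + keyOwner.host + ":" + keyOwner.port
                + "/inventory/locations/" + location + "/skus/" + sku;
    }
}
```

The REST controller would call `ownsKey` first, read its local store on a match, and otherwise issue an HTTP GET to `forwardUrl`.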
<div>
<br /></div>
<div>
This application serves as the <b>query</b> part of the CQRS architecture if one were to choose the above architectural style. The part about an instance having to do the forwarding to the right server if it does not have the data seems like an extra hop but don't distributed database systems actually do that behind the scenes?<br />
<br />
The <i>sellable-inventory-calculator</i> also fires the Sellable Inventory to a Kafka Topic for downstream consumers to pick up.</div>
<br />
<div>
<b>sellable-inventory-service</b></div>
<br />
<div>
The <i>sellable-inventory-service</i> is the<b> query side</b> of the CQRS architecture of the Retail inventory example. The Microservice (or a separate process) takes the sellable inventory events from the sellable inventory topic and <b>persists</b> them into its Cassandra store. The Microservice has a REST endpoint that serves up the sellable inventory for a <i>[location, SKU]</i> tuple. Wait, didn't the <i>sellable-inventory-calculator</i> already provide a query side, so why this additional service? Its only purpose is to <b>demonstrate</b> two different approaches to the <b>query side</b>: one with Kafka Streams and a local store, and the other with a dedicated Cassandra instance servicing the query.<br />
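The projection logic of this query side can be sketched independently of Kafka and Cassandra. In the hypothetical stand-in below, a HashMap plays the role of the Cassandra table, and all names are illustrative rather than taken from the example project:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the query-side projection: each sellable-inventory event
// upserts the count for its [location, SKU] key. A HashMap stands in
// for the Cassandra table; all names here are illustrative.
class SellableInventoryProjection {
    private final Map<String, Integer> store = new HashMap<>();

    private String key(String locationId, String sku) {
        return locationId + ":" + sku;
    }

    /** Apply one event consumed from the sellable-inventory topic. */
    void onEvent(String locationId, String sku, int sellableCount) {
        store.put(key(locationId, sku), sellableCount); // last write wins
    }

    /** Serve the REST query for a [location, SKU] tuple; null if unknown. */
    Integer sellableInventory(String locationId, String sku) {
        return store.get(key(locationId, sku));
    }
}
```

The real service does the same upsert per event, only against Cassandra, which is why the query side is eventually consistent with the write side.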
<br />
<b>schema-registry</b><br />
<br />
The schema-registry is included in the example for completeness. It stores a versioned history of <a href="https://avro.apache.org/">Avro</a> schemas. The example could easily have been done with JSON rather than Avro. For more on why schema registries are useful, read the Confluent article, <a href="https://www.confluent.io/blog/schema-registry-kafka-stream-processing-yes-virginia-you-really-need-one/">Yes Virginia, You Really Do Need a Schema Registry</a>. Note that the example shares a schema jar containing Avro-generated classes across producers and consumers of topics. This is only for demonstration; in the real world, one would generate the schema classes from the definition in the registry, using the Schema Registry Maven plugin for example.<br />
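As an illustration, an Avro schema for a sellable-inventory event registered in the registry might look like the following (the record name, namespace and field names are hypothetical, not taken from the example project):

```json
{
  "type": "record",
  "name": "SellableInventory",
  "namespace": "com.example.inventory",
  "fields": [
    {"name": "locationId", "type": "string"},
    {"name": "sku", "type": "string"},
    {"name": "count", "type": "int"}
  ]
}
```

The registry versions this definition per subject, so producers and consumers can evolve the record independently as long as the change is schema-compatible.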
<br />
<br />
<h2 style="text-align: left;">
Running the Example</h2>
<br />
You can obtain the full example from here: <a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/inventory-example">https://github.com/sanjayvacharya/sleeplessinslc/tree/master/inventory-example</a><br />
<br />
The different components of this architecture are Dockerized and using <a href="https://docs.docker.com/compose/">docker-compose</a> we can locally run the containers linking them as needed. Before proceeding ensure that you have <a href="https://www.docker.com/">Docker</a> and <a href="https://docs.docker.com/compose/">docker-compose</a> installed. From the root level of the project, issue the following to build all the modules:<br />
<br />
<pre class="shell" name="code">>mvn clean install
</pre>
<br />
The build should result in docker images being created for the different artifacts. The <i>docker-compose</i> file has not been tried on any platform other than Mac OS, so it is quite likely that you will need some tweaking to get this to work elsewhere. On a Mac, issue the following:<br />
<br />
<pre class="shell" name="code">>export DOCKER_KAFKA_HOST=$(ipconfig getifaddr en0)
</pre>
<br />
If you would like to run the individual Microservices in your IDE, run the following command to start docker containers for Kafka, Schema Registry and Cassandra:
<br />
<pre class="shell" name="code">>docker-compose -f docker-compose-dev.yml up
</pre>
<br />
After this you should be able to start the individual Microservices by invoking their <b>Main</b> classes as you would any Spring Boot application. Ensure that you pass the VM argument '<i><b>-Dspring.kafka.bootstrap.servers=${DOCKER_KAFKA_HOST}:9092</b></i>' to each service, replacing DOCKER_KAFKA_HOST with the value computed earlier.
<br />
Alternatively, if you would prefer to have all the services started as docker containers, start docker-compose using the following (make sure you have run <b>mvn install</b> first):
<br />
<pre class="shell" name="code">>docker-compose up
</pre>
<br />
The above will result in the starting of a three node Kafka Cluster, Cassandra, Schema Registry, <i>inventory-mutator</i>, <i>sellable-inventory-service</i> and two instances of <i>sellable-inventory-calculator</i>. Why two instances of the latter? This helps demonstrate the forwarding of a request from an instance of<i> sellable-inventory-calculator</i> whose local stream store does NOT have the requested <i>{LOCATION, SKU}</i> to an instance that does have it. Note that when you run <i>docker-compose</i>, you might see services failing to start and then retrying. This is normal as docker-compose does not have a way to ensure deterministic startup dependencies (at least in my findings).
<br />
To send an inventory adjustment, issue the following; a successful response looks like '<i>Updated Inventory [inventory_adjustment-2@1]</i>':
<br />
<pre class="shell" name="code">>curl -H "Content-Type: application/json" -X POST -d '{"locationId":"STL","sku":"123123", "count": "100"}' http://localhost:9091/inventory/adjustment
</pre>
<br />
To issue a reservation, you can send the following:
<br />
<pre class="shell" name="code">curl -H "Content-Type: application/json" -X POST -d '{"locationId":"STL","sku":"123123", "count": "10"}' http://localhost:9091/inventory/reservation
</pre>
<br />
You can obtain the sellable inventory value by querying either the sellable-inventory-service or one of the two instances of sellable-inventory-calculator. For the call to the sellable-inventory-service use PORT <b>9093</b>, and for the two instances of sellable-inventory-calculator, use PORTs <b>9094</b> and <b>9095</b>. A successful response should show something like '<i>{"locationId":"STL","sku":"123123","count":190}</i>' and should be the same from all three components.
<br />
<pre class="shell" name="code">>curl -s http://localhost:{PORT}/inventory/locations/STL/skus/123123
</pre>
<br />
<div>
There is an<i> integration-test </i>module as well but it is not set up to build as a child of the parent pom. That test needs all services to be up using '<i>docker-compose up</i>' mentioned earlier. The test class in that module will execute adjustments, reservations and queries for sellable inventory.<br />
<br />
<h2 style="text-align: left;">
Conclusion</h2>
<div style="text-align: left;">
<br />
<br />
I have barely scratched the surface of the potential of Kafka Streams with this article. The ability to perform low-latency transformations on a stream of data is powerful. Combine this with local state stores that are fault tolerant by virtue of being backed by Kafka, and you have an architecture that can rock. If you are looking for <a href="https://docs.oracle.com/cd/E17276_01/html/gsg_db_rep/JAVA/rywc.html">Read what you Wrote consistency</a>, then while this solution approaches it, you will not have the comfort of an ACID data store like Oracle. If you embrace eventual consistency, KStreams opens a splendorous world....</div>
</div>
</div>
<div>
</div>
</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com1tag:blogger.com,1999:blog-4667121987470696359.post-32546048992889520572017-09-09T09:42:00.001-06:002017-09-21T19:24:27.191-06:00Service mesh examples of Istio and Linkerd using Spring Boot and Kubernetes<div dir="ltr" style="text-align: left;" trbidi="on">
<h1>
Introduction
</h1>
When working with Microservice Architectures, one has to deal with concerns like Service Registration and <a href="http://microservices.io/patterns/client-side-discovery.html">Discovery</a>, Resilience, Invocation Retries, Dynamic Request Routing and Observability. Netflix pioneered a lot of frameworks like <a href="https://github.com/Netflix/eureka">Eureka</a>, <a href="https://github.com/Netflix/ribbon">Ribbon</a> and <a href="https://github.com/Netflix/Hystrix">Hystrix</a> that enabled Microservices to function at scale addressing the mentioned concerns. Below is an example communication between two services written in <b>Spring Boot </b>that utilizes<i> Netflix Hystrix</i> and other components for resiliency, observability, service discovery, load balancing and other concerns.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPPg-EsIkep_rHZxygjdzRsdAHk0fmAxLfVXFQf5RwQxf-IBUyUr43Y5j8vihQ-uujqHiPNDSatnwYOQ58JWpK-9RYndtB7S5pKyJfZTxcMESnwmCH5Ss7gMhAOBafm6Ke3YpBR1v4s7_M/s1600/service-service-springboot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="474" data-original-width="608" height="496" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPPg-EsIkep_rHZxygjdzRsdAHk0fmAxLfVXFQf5RwQxf-IBUyUr43Y5j8vihQ-uujqHiPNDSatnwYOQ58JWpK-9RYndtB7S5pKyJfZTxcMESnwmCH5Ss7gMhAOBafm6Ke3YpBR1v4s7_M/s640/service-service-springboot.png" width="640" /></a></div>
<br />
<br />
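To make the resiliency concern concrete, the circuit-breaker idea that Hystrix implements can be sketched in a few lines of plain Java. This is the pattern only, not the Hystrix API, and all names are hypothetical; real breakers also "half-open" after a cool-down period, which is omitted here for brevity.

```java
import java.util.function.Supplier;

// Minimal sketch of the circuit-breaker pattern (not the Hystrix API):
// after `threshold` consecutive failures the breaker opens and
// short-circuits calls straight to the fallback.
class CircuitBreaker<T> {
    private final int threshold;
    private int consecutiveFailures = 0;

    CircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    T call(Supplier<T> primary, Supplier<T> fallback) {
        if (consecutiveFailures >= threshold) {
            return fallback.get(); // open: primary is not even attempted
        }
        try {
            T result = primary.get();
            consecutiveFailures = 0; // a success closes the breaker
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback.get();
        }
    }
}
```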
While the above architecture <b>does solve</b> some problems around resiliency and observability it still is not ideal:<br />
<br />
<b>a.</b> The solution does not apply for a <b>Polyglot </b>service environment as the libraries mentioned are catering to a Java stack. For example, one would need to find/build libraries for <i>Node.js</i> to participate in service discovery and support observability.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnWWKZcuzNJQv20ONcxbiar5qyJL6BA8vCtQ8dng-afRuYmk6Iaem1pdiMLBO49Q9bYuPaAZ5am9mMid9dFNI1FbPA1Q-iG3pfHlKK1DutYsSKWTMoWWei1Wq3hegB98dMtC0O0kkNbPy2/s1600/node-to-service.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="473" data-original-width="601" height="502" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnWWKZcuzNJQv20ONcxbiar5qyJL6BA8vCtQ8dng-afRuYmk6Iaem1pdiMLBO49Q9bYuPaAZ5am9mMid9dFNI1FbPA1Q-iG3pfHlKK1DutYsSKWTMoWWei1Wq3hegB98dMtC0O0kkNbPy2/s640/node-to-service.png" width="640" /></a></div>
<br />
<b>b. </b>The service is burdened with '<b>communication specific concerns</b>' that it really should not have to deal with like service registration/discovery, client side load balancing, network retries and resiliency.<br />
<br />
<b>c.</b> It introduces <b>tight coupling</b> with the said technologies around resiliency, load balancing, discovery, etc., making it very difficult to change in the future.<br />
<br />
<b>d.</b> Due to the many dependencies in the libraries like Ribbon, Eureka and Hystrix, there is an invasion of your application stack. If interested, I discussed this in my BLOG around: <a href="https://sleeplessinslc.blogspot.com/2017/08/shared-client-libraries-microservice.html">Shared Service Clients</a><br />
<h1>
Sidecar Proxy</h1>
Instead of the above, what if you could do the following, where a <b>Sidecar</b> is introduced that takes on all the responsibilities around Resiliency, Registration, Discovery, Load Balancing, reporting of client metrics, Telemetry, etc.?<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw2-vkqzQhUNvr3RvFszAjGwsVyq-AQxnCCpVjQQkFug4ntF5O93AoeMaqQDKnfnzt73ap3tNHlgeJqihQ2rebJq0S8eoOWWqMll970PTEhyR5sDRzp1DGRhkBVZlgNcni23kMJwimuIiN/s1600/service-car-car-service.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="487" data-original-width="940" height="330" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw2-vkqzQhUNvr3RvFszAjGwsVyq-AQxnCCpVjQQkFug4ntF5O93AoeMaqQDKnfnzt73ap3tNHlgeJqihQ2rebJq0S8eoOWWqMll970PTEhyR5sDRzp1DGRhkBVZlgNcni23kMJwimuIiN/s640/service-car-car-service.png" width="640" /></a></div>
<br />
The benefit of the above architecture is that it facilitates a <b>Polyglot</b> Service environment with low friction, as the majority of the concerns around networking between services are <b>abstracted away to a sidecar</b>, thus allowing the service to <b>focus on business functionality.</b> <a href="https://lyft.github.io/envoy/">Lyft Envoy</a> is a great example of a Sidecar Proxy (or <a href="https://www.techopedia.com/definition/20338/layer-7">Layer 7</a> Proxy) that provides resiliency and observability to a Microservice Architecture.<br />
<h1>
Service Mesh</h1>
"<span style="background-color: white; color: #434d4f; font-family: "whitney ssm a" , "whitney ssm b" , "helvetica" , "arial" , sans-serif; font-size: 16px;"><i>A service mesh is a <b>dedicated infrastructure layer</b> for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application. In practice, the service mesh is typically implemented as an array of lightweight network proxies that are deployed alongside application code, without the application needing to be aware. (But there are variations to this idea, as we’ll see.)</i>" - <a href="https://buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/">Buoyant.io</a></span><br />
<br />
In a Microservice Architecture deployed on a <b>Cloud Native </b>model, one deals with hundreds of services, each running multiple instances that are created and destroyed by the orchestrator. Having the common cross-cutting functionalities like Circuit Breaking, Dynamic Routing, Service Discovery, Distributed Tracing and Telemetry abstracted into a fabric with which services communicate appears to be the way forward.<br />
<br />
Istio and Linkerd are two Service meshes that I have played with and will share a bit about them.<br />
<br />
<h3 style="text-align: left;">
Linkerd</h3>
<br />
<div>
Linkerd is an open source service mesh by Buoyant developed primarily using <a href="https://twitter.github.io/finagle/">Finagle</a> and <a href="https://netty.io/">netty</a>. It can run on Kubernetes, DC/OS and also a simple set of machines.<br />
<br />
Linkerd service mesh, offers a <a href="https://linkerd.io/features/">number of features</a> like:<br />
<ul style="text-align: left;">
<li>Load Balancing</li>
<li>Circuit Breaking</li>
<li>Retries and Deadlines</li>
<li>Request Routing</li>
</ul>
<div>
It instruments top-line service metrics like Request Volume, Success Rate and Latency Distribution. With its Dynamic Request Routing, it enables Staging Services, Canaries and Blue-Green Deploys with minimal configuration, using a powerful routing language called <i><a href="https://linkerd.io/advanced/dtabs/">DTABs</a></i>.</div>
</div>
<div>
There are a few ways that<a href="https://linkerd.io/advanced/deployment/"> Linkerd can be deployed</a> in Kubernetes. This blog will focus on how Linkerd is deployed as a Kubernetes Daemon Set, running a pod on each node of the cluster. </div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBdDL0TCDCfC-rB0fJ1VHxwIKYTbbFeLCSUjtF6iGISpJdeyzwPDzJijtftmOpNEGH5JVsfP2XE89J7cuXqMBrkyL5gs5fBxR5B24qGk6j-S4Xn1fZDqsBazKbdwCRNhqmV93NSH_zHhBa/s1600/linkerd-deployment.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="273" data-original-width="774" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBdDL0TCDCfC-rB0fJ1VHxwIKYTbbFeLCSUjtF6iGISpJdeyzwPDzJijtftmOpNEGH5JVsfP2XE89J7cuXqMBrkyL5gs5fBxR5B24qGk6j-S4Xn1fZDqsBazKbdwCRNhqmV93NSH_zHhBa/s640/linkerd-deployment.png" width="640" /></a></div>
<div>
<br />
<br />
<h3 style="text-align: left;">
Istio</h3>
<br />
Istio (Greek for Sail) is an open platform sponsored by <b>IBM, Google and Lyft</b> that provides a uniform way to connect, secure, manage and monitor Microservices. It supports Traffic Shaping between micro services while providing rich telemetry.<br />
<br />
Of note:<br />
<ul style="text-align: left;">
<li>Fine grained control of traffic behavior with routing rules, retries, failover and fault injection</li>
<li>Access Control, Rate Limits and Quota provisioning</li>
<li>Metrics and Telemetry</li>
</ul>
At this point, Istio supports only Kubernetes, but the goal is to support additional platforms in the future. An Istio service mesh can be considered as logically consisting of:
<br />
<ul style="text-align: left;">
<li>A <b>Data Plane</b> of Envoy Sidecars that mediate all traffic between services</li>
<li>A <b>Control Plane</b> whose purpose is to manage and configure proxies to route and enforce traffic policies.</li>
</ul>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://tctechcrunch2011.files.wordpress.com/2017/05/2017-05-24_0908.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="497" data-original-width="800" height="397" src="https://tctechcrunch2011.files.wordpress.com/2017/05/2017-05-24_0908.png" width="640" /></a></div>
</div>
<h1>
Product-Gateway Example</h1>
The Product Gateway that I have used in many previous posts is used here as well to demonstrate the Service Mesh. The one major difference is that the services use <b><a href="https://grpc.io/">GRPC</a></b> instead of REST. From a protocol perspective, an HTTP 1.x REST call is made to<span style="color: blue;"> /product/{id}</span> of the Product gateway service. The <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhimMzj4oM-MZk9lULmCws7eZn0CtQNE88dkk0U5oLn0eI4Qq1jpxLHSM33wNuJLqFdb6nRDqCsDFuJc_RPkTzUzDVVmJCHKp6PD0j4Z6yp5HIfmTnm3-UmhHGTGgE89CbrvV9naf3IT1Fl/s1600/productgatewayclasses.png">Product Gateway service</a> then fans out to <i>base product</i>, <i>inventory</i>, <i>price</i> and <i>reviews</i> using <b>GRPC</b> to obtain the different data points that represent the end Product. <i>Proto</i> schema elements from the individual services are used to compose the resulting Product <i>proto</i>. The same gateway example is used for both the Linkerd and Istio demonstrations. The project provided <b>does not explore all the features of the service mesh</b> but instead gives you enough of an <b>example to try Istio and Linkerd with GRPC services using Spring Boot.</b><br />
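The fan-out itself can be sketched with CompletableFuture: the four backend lookups run concurrently and their results are composed into one response. Plain Suppliers stand in for the generated GRPC stubs here, and all names are illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Sketch of the gateway fan-out: four concurrent backend lookups are
// joined into one product representation. Suppliers stand in for the
// generated GRPC stubs; names are illustrative.
class ProductGatewaySketch {
    static String fetchProduct(Supplier<String> base, Supplier<String> inventory,
                               Supplier<String> price, Supplier<String> reviews) {
        CompletableFuture<String> baseF = CompletableFuture.supplyAsync(base);
        CompletableFuture<String> invF = CompletableFuture.supplyAsync(inventory);
        CompletableFuture<String> priceF = CompletableFuture.supplyAsync(price);
        CompletableFuture<String> reviewsF = CompletableFuture.supplyAsync(reviews);

        // Wait for all four calls, then compose the response.
        return CompletableFuture.allOf(baseF, invF, priceF, reviewsF)
                .thenApply(v -> baseF.join() + "|" + invF.join() + "|"
                        + priceF.join() + "|" + reviewsF.join())
                .join();
    }
}
```

In the real gateway the composition step merges proto messages rather than strings, but the concurrency shape is the same.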
<br />
The code for this example is available at: <a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/product-gateway-service-mesh">https://github.com/sanjayvacharya/sleeplessinslc/tree/master/product-gateway-service-mesh</a>
<br />
<br />
You can import the project into your favorite IDE and run each of the Spring Boot services. Invoking <a href="http://localhost:9999/product/931030.html">http://localhost:9999/product/931030.html</a> will call the Product gateway, which will then invoke the other services using GRPC to finally return a minimalistic product page.<br />
<br />
For running the project on Kubernetes, I had installed <a href="https://github.com/kubernetes/minikube">Minikube</a> on my Mac. Ensure that you can dedicate sufficient memory to your Minikube instance. I did not set up a local Docker registry but chose to use my <i>local docker images</i>. <a href="https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-in-kubernetes">Stack Overflow has a good posting on how to use local docker images</a>. In the Kubernetes deployment descriptors, you will notice that for the product gateway images, the settings for <b>imagePullPolicy</b> are set to <b>Never</b>. Before you proceed, to be able to use the Docker Daemon, ensure that you have executed:<br />
<br />
<pre class="shell" name="code">>eval $(minikube docker-env)
</pre>
<br />
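For reference, the deployment descriptors pin the images roughly like the fragment below (the container name and image tag are illustrative, not copied from the project). With <b>imagePullPolicy: Never</b>, Kubernetes never contacts a registry and only uses images already present in Minikube's Docker daemon:

```yaml
# Illustrative Deployment fragment: Never forces Kubernetes to use the
# locally built image instead of pulling from a registry.
spec:
  containers:
    - name: product-gateway        # hypothetical container name
      image: product-gateway:latest
      imagePullPolicy: Never
```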
Install the Docker images for the different services of the project by executing the below from the root folder of the project:
<br />
<pre class="shell" name="code">>mvn install
</pre>
<br />
<h3 style="text-align: left;">
Linkerd</h3>
<br />
<div>
In the sub-folder titled <i>linkerd</i> there are a few yaml files that are available. The<a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/product-gateway-service-mesh/linkerd/linkerd-config.yaml"> linkerd-config.yaml </a>will set up linkerd and define the routing rules:</div>
<pre class="shell" name="code">>kubectl apply -f ./linkerd-config.yaml
</pre>
<br />
Once the above is completed you can access the Linkerd Admin application by doing the following:
<br />
<pre class="shell" name="code">>ADMIN_PORT=$(kubectl get svc l5d -o jsonpath='{.spec.ports[?(@.name=="admin")].nodePort}')
>open http://$(minikube ip):$ADMIN_PORT
</pre>
<br />
The next step is to install the different product gateway services. The Kubernetes definitions for these services are present in the <i><a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/product-gateway-service-mesh/linkerd/product-linkerd-grpc.yaml">product-linkerd-grpc.yaml</a></i> file.
<br />
<pre class="shell" name="code">>kubectl apply -f ./product-linkerd-grpc.yaml
</pre>
<br />
Wait for a bit for Kubernetes to spin up the different pods and services. You should be able to execute '<i>kubectl get svc</i>' and see something like the below showing the different services up:
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfhsUCpQL9Xo8qB1TZOmTM8E-JBb7wtwZ9-MIZypokyheJaHCvM1rwQPY6a4_5nI2XZRLpqxJAzROP_b4xZPooQvvR6gbSKYHJtsjEZ4cri91uzGPBvROhl9waOwHxvPFXEQpQ4DRQfEzi/s1600/linkerd-svcs.png" imageanchor="1"><img border="0" data-original-height="302" data-original-width="1554" height="124" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfhsUCpQL9Xo8qB1TZOmTM8E-JBb7wtwZ9-MIZypokyheJaHCvM1rwQPY6a4_5nI2XZRLpqxJAzROP_b4xZPooQvvR6gbSKYHJtsjEZ4cri91uzGPBvROhl9waOwHxvPFXEQpQ4DRQfEzi/s640/linkerd-svcs.png" width="640" /></a>
<br />
We can now execute a call on the <i>product-gateway</i> and see it invoke the other services via the service mesh. Execute the following:<br />
<br />
<pre class="shell" name="code">>HOST_IP=$(kubectl get po -l app=l5d -o jsonpath="{.items[0].status.hostIP}")
>INGRESS_LB=$HOST_IP:$(kubectl get svc l5d -o 'jsonpath={.spec.ports[2].nodePort}')
>http_proxy=$INGRESS_LB curl -s http://product-gateway/product/9310301
</pre>
<br />
The above also demonstrates <a href="https://linkerd.io/features/http-proxy/">linkerd as an HTTP proxy</a>. With <i>http_proxy</i> set, curl sends the request to linkerd, which then looks up product-gateway via service discovery and routes the request to an instance.
<br />
The above call should result in a JSON representation of the Product as shown below:
<br />
<pre class="shell" name="code">{"productId":9310301,"description":"Brio Milano Men's Blue and Grey Plaid Button-down Fashion Shirt","imageUrl":"http://ak1.ostkcdn.com/images/products/9310301/Brio-Milano-Mens-Blue-GrayWhite-Black-Plaid-Button-Down-Fashion-Shirt-6ace5a36-0663-4ec6-9f7d-b6cb4e0065ba_600.jpg","options":[{"productId":1,"optionDescription":"Large","inventory":20,"price":39.99},{"productId":2,"optionDescription":"Medium","inventory":32,"price":32.99},{"productId":3,"optionDescription":"Small","inventory":41,"price":39.99},{"productId":4,"optionDescription":"XLarge","inventory":0,"price":39.99}],"reviews":["Very Beautiful","My husband loved it","This color is horrible for a work place"],"price":38.99,"inventory":93}
</pre>
You can subsequently install <a href="https://github.com/linkerd/linkerd-viz">linkerd-viz</a>, a monitoring application based on <a href="https://prometheus.io/">Prometheus</a> and <a href="https://grafana.com/">Grafana</a> that provides metrics from Linkerd. There are three main categories of metrics visible on the dashboard: Top Line (cluster-wide success rate and request volume), Service Metrics (a section for each deployed service covering success rate, request volume and latency) and Per-instance Metrics (success rate, request volume and latency for every node in the cluster).
<br />
<pre class="shell" name="code">>kubectl apply -f ./linkerd-viz.yaml
</pre>
Wait for a bit for the pods to come up and then you can view the Dashboard by executing:
<br />
<pre class="shell" name="code">>open http://$HOST_IP:$(kubectl get svc linkerd-viz -o jsonpath='{.spec.ports[0].nodePort}')
</pre>
<br />
The dashboard will show you metrics around Top line service metrics and also metrics around individual monitored services as shown in the screen shots below:
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiROG2ToZlqH_mDal2aJKd1Ld5XQxocCpwFkuB9gFt3mCEjUdMYeYxgE5eK_XsR5D31TOCGmc8eFreuGYESMvgRZWsL3sxMcZJu1CfyaN8nf6N_VCTiKCZaHzG9QVdauAI0i03QIvnMGjNw/s1600/topline-linkerd.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="667" data-original-width="1600" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiROG2ToZlqH_mDal2aJKd1Ld5XQxocCpwFkuB9gFt3mCEjUdMYeYxgE5eK_XsR5D31TOCGmc8eFreuGYESMvgRZWsL3sxMcZJu1CfyaN8nf6N_VCTiKCZaHzG9QVdauAI0i03QIvnMGjNw/s640/topline-linkerd.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTcI_7dLIB65y1pmpHLL5WhwpT7PhdAtxTIUoqiK3iG1W7HSqFwW4wk6HQDGqjEB3CFeiSiQ_Tu0Jb-TAt6GRGt1hXQyHHMatGmJFWn6yusCkSkSxOxym8V28RCnQCiwldrIHktEPF3Hz3/s1600/linkerd-metric.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="678" data-original-width="1600" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTcI_7dLIB65y1pmpHLL5WhwpT7PhdAtxTIUoqiK3iG1W7HSqFwW4wk6HQDGqjEB3CFeiSiQ_Tu0Jb-TAt6GRGt1hXQyHHMatGmJFWn6yusCkSkSxOxym8V28RCnQCiwldrIHktEPF3Hz3/s640/linkerd-metric.png" width="640" /></a></div>
<br />
You could also go ahead and install <a href="https://github.com/linkerd/linkerd-zipkin">linkerd-zipkin</a> to capture tracing data.<br />
<br />
<h3 style="text-align: left;">
Istio</h3>
<br />
The <a href="https://istio.io/docs/tasks/installing-istio.html">Istio page on installation</a> is pretty thorough. It presents a few installation options; select the ones that make sense for your deployment. For my demonstration, I selected the following:
<br />
<pre class="shell" name="code">>kubectl apply -f $ISTIO_HOME/install/kubernetes/istio-rbac-beta.yaml
>kubectl apply -f $ISTIO_HOME/install/kubernetes/istio.yaml
</pre>
<br />
You can install metrics support by installing Prometheus, Grafana and Service Graph.<br />
<pre class="shell" name="code">>kubectl apply -f $ISTIO_HOME/install/kubernetes/addons/prometheus.yaml
>kubectl apply -f $ISTIO_HOME/install/kubernetes/addons/grafana.yaml
>kubectl apply -f $ISTIO_HOME/install/kubernetes/addons/servicegraph.yaml
</pre>
<br />
Install the product-gateway artifacts using the below command which uses <a href="https://istio.io/docs/reference/commands/istioctl.html#istioctl-kube-inject">istioctl kube-inject</a> to automatically inject Envoy Containers in the different pods:
<br />
<pre class="shell" name="code">>kubectl apply -f <(istioctl kube-inject -f product-grpc-istio.yaml)
</pre>
<br />
<pre class="shell" name="code">>export GATEWAY_URL=$(kubectl get po -l istio=ingress -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o 'jsonpath={.spec.ports[0].nodePort}')
</pre>
<br />
If your configuration is deployed correctly, it should resemble something like the below:
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhniKIReFg2hmdmHtn7jDpIZ4lRJ6vKfCAKu5ZT7De9JXHS89v0-XY2ngarq6KR-Q1CRuBUrt4_qAraU5qvkCHUfK7ln2rrqM5V4YDMe4JFzr64sTP-xJA7ghs5wTSq90X3rXvyw91QPnfN/s1600/Screen+Shot+2017-09-08+at+8.19.56+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="540" data-original-width="1276" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhniKIReFg2hmdmHtn7jDpIZ4lRJ6vKfCAKu5ZT7De9JXHS89v0-XY2ngarq6KR-Q1CRuBUrt4_qAraU5qvkCHUfK7ln2rrqM5V4YDMe4JFzr64sTP-xJA7ghs5wTSq90X3rXvyw91QPnfN/s640/Screen+Shot+2017-09-08+at+8.19.56+AM.png" width="640" /></a></div>
<br />
You can now view a HTML version of the product by going to <span style="color: blue;">http://${GATEWAY_URL}/product/931030.html</span> to see a <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLvVYaHIs8v83omXBdSCGRJclznEHki3yRtP9_r0OAR5cpHsMyXccoGO0Tzl82KZUQwxTmtd_QXMF3V2QQ_eliZG-qIG9z9Bzlq5r0eZTLFMMETVsePuKK0QRybcGoiQlAFb7QFBnNbpMQ/s1600/product-page.png">primitive product page</a> or access a JSON representation by going to <span style="color: blue;">http://${GATEWAY_URL}/product/931030</span><br />
<br />
Ensure you have set the ServiceGraph and Grafana port-forwarding set as described in the <a href="https://istio.io/docs/tasks/installing-istio.html">Installation instructions</a> and also shown below:<br />
<br />
<pre class="shell" name="code">>kubectl port-forward $(kubectl get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
>kubectl port-forward $(kubectl get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088 &
</pre>
<br />
To see the service graph and Istio Viz dashboard, you might want to send some traffic to the service.<br />
<br />
<pre class="shell" name="code">>for i in {1..1000}; do echo -n .; curl -s http://${GATEWAY_URL}/product/9310301 > /dev/null; done
</pre>
<br />
You should be able to access the Istio-Viz dashboard for Topline and detailed service metrics like shown below at <a href="http://localhost:3000/dashboard/db/istio-dashboard">http://localhost:3000/dashboard/db/istio-dashboard</a>:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj15xTtyIRoMHXdHTc2VF73fXVW2r024UDmdnsGSdJrAzIw8INjg3dNA9ZbvQD6Rr1SQR0SDQbCF-5KjjECO1Ypnsfjjf5zkcjTn0mNgXj9d_aDfuVLQDCc7SO6YI0evXHw3s1unNiD-FDy/s1600/isto-topline.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="821" data-original-width="1600" height="328" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj15xTtyIRoMHXdHTc2VF73fXVW2r024UDmdnsGSdJrAzIw8INjg3dNA9ZbvQD6Rr1SQR0SDQbCF-5KjjECO1Ypnsfjjf5zkcjTn0mNgXj9d_aDfuVLQDCc7SO6YI0evXHw3s1unNiD-FDy/s640/isto-topline.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoSrD4NsRGsHNIVPOQWutY_RWqb9nyi5yjEk4Wf3TazCgr_5YNwkUGPIbF7YL4zABtXxeDYYCnpcfIplQ6FoPL9HLYhV3BtAX5b2xLLqlZuADnYhBOjhIgG7mRknP8PginpG0OQIGFevGP/s1600/istio-individual-services.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="695" data-original-width="1600" height="276" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoSrD4NsRGsHNIVPOQWutY_RWqb9nyi5yjEk4Wf3TazCgr_5YNwkUGPIbF7YL4zABtXxeDYYCnpcfIplQ6FoPL9HLYhV3BtAX5b2xLLqlZuADnYhBOjhIgG7mRknP8PginpG0OQIGFevGP/s640/istio-individual-services.png" width="640" /></a></div>
<br />
Similarly you can access the Service graph via at the address: <a href="http://localhost:8088/dotviz">http://localhost:8088/dotviz</a> to see a graph similar to the one shown below depicting service interaction:
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT2-0sEQDK91cjQdz0EvqKCxCqF6bq960-_eg77rMv4pGJNUq2uJl2zxaC9aWb-TAgrgGX6SKCvMN4OwK3JNGtxqS2818e9StcvW_pYBvLO1VGUUxXuy3A4fLhJvZiajxy_DYDu9nmVuf2/s1600/graphviz.png" imageanchor="1"><img border="0" data-original-height="517" data-original-width="1600" height="206" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT2-0sEQDK91cjQdz0EvqKCxCqF6bq960-_eg77rMv4pGJNUq2uJl2zxaC9aWb-TAgrgGX6SKCvMN4OwK3JNGtxqS2818e9StcvW_pYBvLO1VGUUxXuy3A4fLhJvZiajxy_DYDu9nmVuf2/s640/graphviz.png" width="640" /></a>
<br />
<h1>
Conclusion & Resources</h1>
<br />
A service mesh architecture appears to be the way forward for Cloud Native deployments. The benefits provided by the mesh do not need reiteration. Linkerd is the more mature of the two, but Istio, although newer, has the strong backing of IBM, Google and Lyft to take it forward. How do they compare with each other? The <a href="https://abhishek-tiwari.com/a-sidecar-for-your-service-mesh/">blog post by Abhishek Tiwari comparing Linkerd and Istio features</a> is a great read on the topic of service meshes and how they compare.<a href="https://www.youtube.com/watch?v=38DilGa3_Gs"> Alex Leong has a nice YouTube presentation on Linkerd </a>that is a good watch. Kelsey Hightower has a nice example-driven <a href="https://www.youtube.com/watch?v=s4qasWn_mFc">presentation on Istio</a>. </div>
Sanjay Acharya | 2017-08-16 | SpringBoot Microservice Example using Docker, Kubernetes and fabric8<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-size: x-large;">Introduction</span><br />
<br />
<a href="https://kubernetes.io/">Kubernetes</a> is a container orchestration and scaling system from Google. It provides the ability to deploy, scale and manage container-based applications, along with mechanisms such as service discovery, resource sizing and self-healing. <a href="https://fabric8.io/">fabric8</a> is a platform for creating, deploying and scaling <a href="https://martinfowler.com/articles/microservices.html">Microservices</a> based on Kubernetes and Docker. As with most frameworks nowadays, it takes an '<b>opinionated</b>' view of how it does these things. The platform also has support for Java-based (<a href="https://projects.spring.io/spring-boot/">Spring Boot</a>) microservices. <a href="https://www.openshift.com/">OpenShift</a>, the <b>hosted</b> container platform from RedHat, is based on fabric8. fabric8 promotes the concept of independent '<b>service teams</b>' with their own <a href="https://en.wikipedia.org/wiki/CI/CD">CI/CD</a> pipelines if required, by providing microservice supporting tools like logging, monitoring and alerting as readily integratable services.
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxlavzGHH7ABBFtKvDSPnLeZMAtfLsocfzQYHtCfIyQXMyap0c6DlEZZAeFAIwQLxxyQ4vKEq1lqVncwEQu14hMtfXW1djMafqzuEbrqvH46YRvrzSaQfUhIbfnSVRyvjhWtKoF9NyE6rU/s1600/fabric8pipeline.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="774" data-original-width="1060" height="466" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxlavzGHH7ABBFtKvDSPnLeZMAtfLsocfzQYHtCfIyQXMyap0c6DlEZZAeFAIwQLxxyQ4vKEq1lqVncwEQu14hMtfXW1djMafqzuEbrqvH46YRvrzSaQfUhIbfnSVRyvjhWtKoF9NyE6rU/s640/fabric8pipeline.png" width="640" /></a></div>
From <a href="https://gogs.io/">Gogs</a> (git) and <a href="https://jenkins.io/">Jenkins</a> (build/CI/CD pipeline) to <a href="https://www.elastic.co/products">ELK</a> (logging) and <a href="https://grafana.com/">Grafana</a> (monitoring), you get them all as add-ons. There are many other add-ons available apart from the ones mentioned.<br />
<br />
There is a very nice <a href="https://www.youtube.com/watch?v=CYrCW1SjpOI">presentation on YouTube around creating a service using fabric8</a>. This post provides a similar tutorial using the <a href="https://sleeplessinslc.blogspot.com/2015/12/maven-integration-testing-of-spring.html">product-gateway</a> example I had shared in a previous post.<br />
<br />
<span style="font-size: x-large;">SetUp</span>
<br />
<br />
For the sake of this example, I used <a href="https://github.com/kubernetes/minikube">Minikube</a> version v0.20.0, and as my demo was done on a Mac, I used the <a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#xhyve-driver">xhyve</a> driver as I found it the most resource-friendly of the options available. The fabric8 version used was 0.4.133. I first installed and started Minikube and ensured that everything was functional by issuing the following commands:
<br />
<pre class="shell" name="code">>minikube status
minikube: Running
localkube: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.64.19
</pre>
I then proceeded to install fabric8 by <a href="https://fabric8.io/guide/getStarted/gofabric8.html">installing gofabric8</a>. If for whatever reason the latest version of fabric8 does not work for you, you can try to install the version I used by going to the <a href="https://github.com/fabric8io/gofabric8/releases">releases site</a>.
Start fabric8 by issuing the command
<br />
<pre class="shell" name="code">>gofabric8 start
</pre>
The above will result in the downloading of a number of packages and then launch the fabric8 console. Be patient, as this takes some time. You can issue the following command in a different window to see the status of the fabric8 setup:
<br />
<pre class="shell" name="code">>kubectl get pods -w
NAME                                      READY     STATUS    RESTARTS   AGE
configmapcontroller-4273343753-2g03x      1/1       Running   2          6d
exposecontroller-2031404732-lz7xs         1/1       Running   2          6d
fabric8-3873669821-7gftx                  2/2       Running   3          6d
fabric8-docker-registry-125311296-pm0hq   1/1       Running   1          6d
fabric8-forge-1088523184-3f5v4            1/1       Running   1          6d
gogs-1594149129-wgsh3                     1/1       Running   1          6d
jenkins-56914896-x9t58                    1/1       Running   2          6d
nexus-2230784709-ccrdx                    1/1       Running   2          6d
</pre>
Once fabric8 has started successfully, you can validate the installation by issuing:
<br />
<pre class="shell" name="code">>gofabric8 validate
</pre>
If all is good you should see something like the below:
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBuPqt3VZgiBX9JDo0XwGd0AdcW8i1sWRNAUweXTIKyfU4IsfC4wyfnFSACwJ6QtclfWQ6o4AGPMeWNmMbtBgjmW_z9WK16DwqX-Ei2fFXjTO-WYxcVn6J7kI9WMSSbuXTxEJQ9gI0WcWe/s1600/fabric8validate.png" imageanchor="1"><img border="0" data-original-height="770" data-original-width="1472" height="335" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBuPqt3VZgiBX9JDo0XwGd0AdcW8i1sWRNAUweXTIKyfU4IsfC4wyfnFSACwJ6QtclfWQ6o4AGPMeWNmMbtBgjmW_z9WK16DwqX-Ei2fFXjTO-WYxcVn6J7kI9WMSSbuXTxEJQ9gI0WcWe/s640/fabric8validate.png" width="640" /></a>
<br />
Also note that if your fabric8 console did not launch, you can launch it by issuing the following command, which will result in the console being opened in a new window:
<br />
<pre class="shell" name="code">>gofabric8 console
</pre>
<br />
At this point, you should be presented with a view of the default team:
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9KWPkZBOGlO-4p7d70pQVmYrwLMReMdVfgBbK3OkqWAAVnKR4KXzVKcUhg0VvHLBbFZkpVYWv4kbdQVR3kP-wx0HHd6T-UQpbTTgEJmLN-AOZzrYhwumCsZ1q3emZaQqbSJdqGOkieO7q/s1600/defaultScreen.png" imageanchor="1"><img border="0" data-original-height="906" data-original-width="1302" height="445" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9KWPkZBOGlO-4p7d70pQVmYrwLMReMdVfgBbK3OkqWAAVnKR4KXzVKcUhg0VvHLBbFZkpVYWv4kbdQVR3kP-wx0HHd6T-UQpbTTgEJmLN-AOZzrYhwumCsZ1q3emZaQqbSJdqGOkieO7q/s640/defaultScreen.png" width="640" /></a>
<br />
<span style="font-size: x-large;">Creating a fabric8 Microservice</span>
<br />
<br />
There are a few ways you can generate a fabric8 microservice using Spring Boot. The fabric8 console has an option to generate a project that pretty much does what <a href="https://start.spring.io/">Spring Initializr</a> does. Following that wizard will take you through to completion of your service. In my case, I was porting the existing <i>product-gateway</i> example over, so I followed a slightly different route. I did the following for each of the services:
<br />
<br />
<h2>
Setup Pom</h2>
<br />
Run the following command on the base of your project:
<br />
<pre class="shell" name="code">>mvn io.fabric8:fabric8-maven-plugin:3.5.19:setup
</pre>
This will result in your pom being augmented with the <a href="https://github.com/fabric8io/fabric8-maven-plugin">fabric8 plugin</a> (f8-m-p). The plugin itself is primarily focused on building Docker images and creating Kubernetes resource descriptors. If you are simply using the provided <b>product-gateway</b> as an example, you don't need to run the setup step as it has been pre-configured with the necessary plugin.
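For reference, after running the setup goal the pom should contain a plugin entry along these lines (a sketch only; the exact executions and configuration the plugin adds may differ by release):

```xml
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.19</version>
  <executions>
    <execution>
      <goals>
        <!-- generate Kubernetes resource descriptors and build the Docker image -->
        <goal>resource</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```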
<br />
<br />
<h2>
Import Project</h2>
<br />
From the root level of your project, issue the following:
<br />
<pre class="shell" name="code">>mvn fabric8:import
</pre>
You will be asked for the git <i>username and password</i>. You can provide <i>gogsadmin/RedHat$1</i>. Your project should now be imported into fabric8. You can access the Gogs (git) repository from the fabric8 console or by issuing the following on the command line:
<br />
<pre class="shell" name="code">>gofabric8 service gogs
</pre>
You can login using the same credentials mentioned above and see that your project has been imported into git.
<br />
<br />
<h2>
Configuring your Build</h2>
<br />
At this point on the fabric8 console if you click on Team Dashboard, you should see your project listed.
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs0391Np5DINLOkaOc1u1MHfdOabNrg0vJ9WcqFQ-blv6sSqmHjriYtmgqRGMrNd848qTrALuxJxSQUOMe6xJdv2U39hV5dMFEkPuvyTOgbJITToJaSc_gKEHSdobiGwmB147sFuM-tyz0/s1600/ready-for-import.png" imageanchor="1"><img border="0" data-original-height="377" data-original-width="1600" height="151" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs0391Np5DINLOkaOc1u1MHfdOabNrg0vJ9WcqFQ-blv6sSqmHjriYtmgqRGMrNd848qTrALuxJxSQUOMe6xJdv2U39hV5dMFEkPuvyTOgbJITToJaSc_gKEHSdobiGwmB147sFuM-tyz0/s640/ready-for-import.png" width="640" /></a>
<br />
Click on the hyperlink showing the project to open the tab where you are asked to configure secrets.
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzAtVxZ34138u-wlE7vwQ965vNXnJUZ613ccyBgzXLtKsT5aWtpRyBWdHa0TB0B-C2va-kjIAiyHVPAFc92jZqB1bi8x56RGVchw2DHNsC8WDeWszxIHR8EwsrP4cRhCYBbpJDwayFbmN-/s1600/secretsetup.png" imageanchor="1"><img border="0" data-original-height="429" data-original-width="1600" height="172" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzAtVxZ34138u-wlE7vwQ965vNXnJUZ613ccyBgzXLtKsT5aWtpRyBWdHa0TB0B-C2va-kjIAiyHVPAFc92jZqB1bi8x56RGVchw2DHNsC8WDeWszxIHR8EwsrP4cRhCYBbpJDwayFbmN-/s640/secretsetup.png" width="640" /></a>
<br />
For the scope of this example select the <i>default-gogs-git</i> and click <b>Save Selection</b> on the top right. You will then be presented with a page that allows you to select the build pipeline for this project. Wait for a bit for this page to load.
<br/>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUTuWElNwaXgQ-xks9V_NsKwLMCepAgK_YA3uyGuC13MnHW558QQED8PGQXUnXPQ_E24FmUVr_BqtYL1TSi-dItS3jxy_kAWe81iOu7Qbxal7OCcdRtv6_oiknwfWEeqx5jPkQ69AQAvV2/s1600/pipelineoptions.png" imageanchor="1"><img border="0" data-original-height="646" data-original-width="1600" height="258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUTuWElNwaXgQ-xks9V_NsKwLMCepAgK_YA3uyGuC13MnHW558QQED8PGQXUnXPQ_E24FmUVr_BqtYL1TSi-dItS3jxy_kAWe81iOu7Qbxal7OCcdRtv6_oiknwfWEeqx5jPkQ69AQAvV2/s640/pipelineoptions.png" width="640" /></a>
<br />
Select the last option with Canary test and click Next. The build should kick off, and during the build process you should be prompted to approve the next step to go to production, something like the below:
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYqkj5tBY0LpFqnk7rgq7TKB7nqC-028-SRIRtw2xYFmdWufkqY2KdP2G2H9LTd0RGf5u-S-bVYxpIH-TaLzchLa8E1AZEibCXSBxVEy0iGfUJyaUz2TaBLpzQ8AurLjIxZqx8Nzg_Kko1/s1600/approval.png" imageanchor="1"><img border="0" data-original-height="855" data-original-width="1600" height="342" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYqkj5tBY0LpFqnk7rgq7TKB7nqC-028-SRIRtw2xYFmdWufkqY2KdP2G2H9LTd0RGf5u-S-bVYxpIH-TaLzchLa8E1AZEibCXSBxVEy0iGfUJyaUz2TaBLpzQ8AurLjIxZqx8Nzg_Kko1/s640/approval.png" width="640" /></a>
<br />
Once you select '<i>Proceed</i>', your build will move to production. At this point what you have is a project out in production, ready to receive traffic. Once you import all the applications in the product-gateway example, you should see a view like the below showing the deployments across environments:<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLfvVxBh8peCDkgli9JGaCmXGhXGP9JMiN0ir7OaFid8JNXoed0we_DhHc7-qCQ2xC6kQ3vLIGFz8mVSRWT55B7kkxD0r3SgrIj0VZg6b76L1dEfMZGXV_l9MRlZB57uRJttVC4lVyhPf2/s1600/deployed-apps.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="916" data-original-width="1600" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLfvVxBh8peCDkgli9JGaCmXGhXGP9JMiN0ir7OaFid8JNXoed0we_DhHc7-qCQ2xC6kQ3vLIGFz8mVSRWT55B7kkxD0r3SgrIj0VZg6b76L1dEfMZGXV_l9MRlZB57uRJttVC4lVyhPf2/s640/deployed-apps.png" width="640" /></a>
<br />
<br />
Your production environment should look something like the below showing your application and other supporting applications:<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGC4GTe19gBgOBVrxIBlWkKFVRGQjMbG9NxkVb8CN8zCGPKOo4ApUaFGaB2mqP0eWaBcZVfLg3snaUeN1b1vGtmX-QsdDg36n8AIVZQCvI3TDzdVkviUcqEizQAKVJ0JQAqtMlsAeJVY41/s1600/fabric8-prod.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="867" data-original-width="1600" height="346" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGC4GTe19gBgOBVrxIBlWkKFVRGQjMbG9NxkVb8CN8zCGPKOo4ApUaFGaB2mqP0eWaBcZVfLg3snaUeN1b1vGtmX-QsdDg36n8AIVZQCvI3TDzdVkviUcqEizQAKVJ0JQAqtMlsAeJVY41/s640/fabric8-prod.png" width="640" /></a>
<br />
<br />
If your product-gateway application is deployed successfully, you should be able to open it up and access the product resource of the one product that the application supports at:<br />
<a href="http://192.168.64.19:31067/product/9310301">http://192.168.64.19:31067/product/9310301</a>
The above call will result in the invocation of the base product, reviews, inventory and pricing services to provide a combined view of the product.
<br />
So how is the product-gateway discovering the base product and other services to invoke?
<br />
<a href="https://fabric8.io/guide/develop/serviceDiscovery.html">Kubernetes provides for service discovery natively using DNS or via environment variables</a>. For the sake of this example I used environment variables, but there is no reason you could not use DNS instead. The following is a code snippet from the product-gateway showing the different configurations being injected:
<br />
<pre class="java" name="code">@RestController
public class ProductResource {
@Value("${BASEPRODUCT_SERVICE_HOST:localhost}")
private String baseProductServiceHost;
@Value("${BASEPRODUCT_SERVICE_PORT:9090}")
private int baseProductServicePort;
@Value("${INVENTORY_SERVICE_HOST:localhost}")
private String inventoryServiceHost;
@Value("${INVENTORY_SERVICE_PORT:9091}")
private String inventoryServicePort;
@Value("${PRICE_SERVICE_HOST:localhost}")
private String priceServiceHost;
@Value("${PRICE_SERVICE_PORT:9094}")
private String priceServicePort;
@Value("${REVIEWS_SERVICE_HOST:localhost}")
private String reviewsServiceHost;
@Value("${REVIEWS_SERVICE_PORT:9093}")
private String reviewsServicePort;
@RequestMapping(value = "/product/{id}", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseBody
.....
}
</pre>
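Outside the cluster, the defaults in those @Value annotations apply; inside the cluster, Kubernetes injects NAME_SERVICE_HOST and NAME_SERVICE_PORT environment variables for each service. As a minimal sketch of the same resolution (the class and method names below are illustrative, not part of the actual project):

```java
// Minimal sketch (illustrative names, not the actual project code):
// resolve a service URL from Kubernetes-style environment variables,
// falling back to local defaults when run outside the cluster.
public class ServiceUrlResolver {

    static String serviceUrl(String serviceName, String defaultHost, int defaultPort) {
        // Kubernetes exposes each service as <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT
        String host = System.getenv().getOrDefault(serviceName + "_SERVICE_HOST", defaultHost);
        String port = System.getenv().getOrDefault(serviceName + "_SERVICE_PORT",
                String.valueOf(defaultPort));
        return "http://" + host + ":" + port;
    }

    public static void main(String[] args) {
        // Outside the cluster, the default host and port apply
        System.out.println(serviceUrl("BASEPRODUCT", "localhost", 9090));
    }
}
```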
At this point what you have is a working pipeline. What about logging, monitoring and so on? These are available as add-ons to your environment. For logging, I went ahead and installed the Logging template, which puts a fluentd + ELK stack in place for searching through logs. You should then be able to search for events through Kibana.<br />
<br />
<span style="font-size: x-large;">Conclusion</span><br />
<br />
The example should have given you a good start into how dev pipelines can work with Kubernetes and containers, and how fabric8's opinionated approach facilitates the same. The ability to push out immutable architectures using a DevOps pipeline like fabric8 is great for a team developing Microservices with service teams. I ran this on my laptop and must admit it was not always smooth sailing. Looking at the forums around fabric8, the folks seem to be really responsive to the different issues surfaced. The documentation around the product is quite extensive as well. Kudos to the team behind it. I wish I had the time to install the Chaos Monkey and chat support add-ons to see them in action. I am also interested in understanding what OpenShift adds on top of fabric8 for their managed solution.
The code backing this example is <a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/product-gateway-fabric8">available to clone from GitHub</a>.<br />
<br /></div>
Sanjay Acharya | 2017-08-11 | Shared Client Libraries - A Microservices Anti-Pattern, a study into why?<div dir="ltr" style="text-align: left;" trbidi="on">
<h1>
Introduction</h1>
<br />
As I continue my adventures in demystifying Microservices for myself, I keep hearing of this anti-pattern: '<b>Shared Service Client</b>' or '<b>Shared Binary Client</b>' or even '<b>Client Library</b>'. Sam Newman, in his book about Microservices, says: "<i>I've spoken to more than one team who has insisted that creating libraries for your services is an essential part of creating services in the first place</i>".<b> I must admit that I have seen and implemented this pattern right through my career.</b> The driver for generating a client has been <b>DRY</b> (Don't Repeat Yourself). The creation of service clients and making them available for consumption by other applications and services is a prevalent pattern. An example is shown below, where a shareable <b>Notes-client</b> is created by a team as part of the project for the Notes Web Service. They might also provide a DTO (Data Transfer Object) library representing the exchanged payloads.
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmLDg23elMSzx0fJJFR8a_NMBKwybg9Ps3OX6IOOdRh_JauJZYO-mVAZv0wWmzkyHcmWvO6e-v5IOxbb0QLuIadeZfzBqXTzZHp2hhsVJBKBNYaNh0yMeNbOakTWSaCnPR9QJNLWXXW2Tl/s1600/Notes.png" imageanchor="1"><img border="0" data-original-height="272" data-original-width="680" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmLDg23elMSzx0fJJFR8a_NMBKwybg9Ps3OX6IOOdRh_JauJZYO-mVAZv0wWmzkyHcmWvO6e-v5IOxbb0QLuIadeZfzBqXTzZHp2hhsVJBKBNYaNh0yMeNbOakTWSaCnPR9QJNLWXXW2Tl/s1600/Notes.png" /></a>
<br />
This blog post is my investigation into why this concept of a service client or a client library causes heartburn to Microservice advocates.
<br />
<h1>
Service Clients over time</h1>
<br />
Feel free to skip this section totally, it's primarily me reminiscing.<br />
<br />
In my university days, when we used to work on network assignments, the interaction between a server and client would primarily involve creating a <a href="https://en.wikipedia.org/wiki/Network_socket">Socket</a>, connecting to the server and sending/receiving data. There really was no explicit contract that someone wishing to communicate with the server could use to generate a client. The remote invocation however was portable across language/platform. Any client that can open a socket could invoke the server method.
<br />
<br />
I subsequently worked with <a href="https://www.javatpoint.com/RMI">Java and RMI</a>. One would take a stub, then generate client and server code from it. Callers higher in the stack would invoke 'methods' using the client, which would then run across the wire and be invoked on the server. From a client's perspective, it was as if the call was happening in the <b>local JVM.</b> Magical! A contract of sorts was established. Heterogeneous clients did not really have a place in this universe unless you got dirty with <a href="https://en.wikipedia.org/wiki/Java_Native_Interface">JNI</a>. Another model existed at roughly the same time with <a href="https://en.wikipedia.org/wiki/Common_Object_Request_Broker_Architecture">CORBA and ORBs</a>. The <a href="https://en.wikipedia.org/wiki/Interface_description_language">IDL</a>, or Interface Definition Language, served to provide a contract from which client and server code could be generated for any language and platform with an IDL compiler. Heterogeneous clients could thrive in this ecosystem. Later down the chain came WS*, SOAP, Axis, and <i>WSDL-to-whatever</i> generation tools like RAD (Rational Application Developer) that created a client for a SOAP service. RAD was a 4 CD install that would fail often, but when it worked you could click on a WSDL and see it generate these magical code snippets for the client.
<br />
<br />
The WS* bubble was followed by the emergence of an architecture style known as REST and its poster child, HTTP. Multiple representation types were a major selling point. To define resources and their supported operations, there was WADL. Think of WADL as a poor cousin of WSDL that even the W3C preferred not to standardize. REST, like it or not, is many a time used for making remote procedure calls rather than walking a hypermedia graph via links as prescribed by the Richardson Maturity Model and <a href="https://en.wikipedia.org/wiki/HATEOAS">HATEOAS</a>. Many developers of a REST service create a client library that communicates via HTTP with the REST resources, and due to the simplicity of REST, writing the client code was not arduous and did not require generation tools. REST itself was a contract of sorts. WADL, if you want to stretch it, and HATEOAS, if you want to argue about it, made their way in to provide something like an IDL. Heterogeneous clients thrived with REST. The point to note is that the IDL seems to have taken a back seat here. Enter gRPC, a high-performance open source framework from Google. It has an IDL via <i>.proto</i> files and supports heterogeneous/polyglot clients. <i><b>It's just fascinating how IDLs are here to stay.</b></i><br />
<h1>
Reasons why Service Clients get a bad name</h1>
<br />
To facilitate the discussion of service clients and why they can be problematic, consider the following customer service application written in Java that uses service clients for Notes, Order Management, Product Search and Credit Card. These clients are provided by the individual service maintainers.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhs1-NP4Uj4E9DMmu6BiPxpXSmmGolU9bkKOlrLPbKDfZ1oY9K8YVJmIMnlYHdhMRi5y4e0VJ47VJ6gEyfM13Mw74hLq3rQR-qE8yGb6ekYVLNMnjc4t3uwKgqsXzq2An_3T18t6vsKIBWU/s1600/CustomerServiceApp.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="272" data-original-width="900" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhs1-NP4Uj4E9DMmu6BiPxpXSmmGolU9bkKOlrLPbKDfZ1oY9K8YVJmIMnlYHdhMRi5y4e0VJ47VJ6gEyfM13Mw74hLq3rQR-qE8yGb6ekYVLNMnjc4t3uwKgqsXzq2An_3T18t6vsKIBWU/s640/CustomerServiceApp.png" width="640" /></a></div>
<h3 style="text-align: left;">
The Not-So-Thin Service Client or The Very "Opinionated Client"</h3>
<br />
Service clients many times end up serving a <b>higher purpose</b> than what they were originally designed for. Consider, for example, the following hypothetical Notes Client:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicmDhdY-qWcwOVwzPZRrW3w8j1ew8fANrRJwOE3HCvDcsoW5oWf0ACeDm-cTVnmfa_KxUB21MjXXzDB66wwBBEBVwz97V00P5sUe7M8JE-sdoVgBMeDzb_SQLNpVtJ0lXNFQotSzC_iR0C/s1600/OverloadedNotesClient.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="460" data-original-width="800" height="368" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicmDhdY-qWcwOVwzPZRrW3w8j1ew8fANrRJwOE3HCvDcsoW5oWf0ACeDm-cTVnmfa_KxUB21MjXXzDB66wwBBEBVwz97V00P5sUe7M8JE-sdoVgBMeDzb_SQLNpVtJ0lXNFQotSzC_iR0C/s640/OverloadedNotesClient.png" width="640" /></a></div>
<br />
The Notes Client seems to do <b>a lot of things</b>:<br />
<ul style="text-align: left;">
<li>Determining strategy around whether to call the Notes Service or fall back to the Database</li>
<li>Determining whether to invoke calls in parallel or serial</li>
<li>Providing a Request Cache and caching data</li>
<li>Translating the response from the service into something the client will understand and absorb</li>
<li>Providing functionality to Audit note creation (stretching here:-))</li>
<li>Providing Configuration defaults around Connection pools, Read Timeouts and other client properties</li>
</ul>
<div>
The argument is "<i>Who but the service owner knows best how to interact with the service and the controls to provide?</i>". A few things are happening with this design:</div>
<div>
<ul style="text-align: left;">
<li>It does not account for non-Java clients</li>
<li>Bundles in a bunch of logic into the client around Notes (yes stretching it here with a Notes example :-))</li>
<li>Imposes a Request Cache on the Consumer. Resource imposition.</li>
<li>Assumes that it knows best regarding how you would like to invoke the service from a serial/parallel perspective</li>
<li>Makes resource decisions for you regarding threads etc</li>
<li>Provides defaults that it thinks are best without making you even think of your SLAs</li>
</ul>
</div>
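To make the contrast concrete, the following is a hypothetical, minimal 'thin' alternative that restricts itself to transport, leaving caching, parallelism and fallback decisions to the consumer. All names here are illustrative, and a local stub HTTP server stands in for the Notes service so the sketch is self-contained:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ThinNotesClientDemo {

    // Hypothetical thin client: transport only. The consumer supplies the
    // HttpClient and decides on caching, retries and parallelism itself.
    static String getNoteJson(HttpClient http, String baseUrl, String noteId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(baseUrl + "/notes/" + noteId)).build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        // Stub standing in for the real Notes service endpoint
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/notes/", exchange -> {
            byte[] body = "{\"id\":\"1\",\"text\":\"hello\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();

        String baseUrl = "http://localhost:" + server.getAddress().getPort();
        System.out.println(getNoteJson(HttpClient.newHttpClient(), baseUrl, "1"));
        server.stop(0);
    }
}
```

The point of the sketch is what is absent: no request cache, no fallback strategy, no serial/parallel decision and no internally created thread pools, so none of those policies are imposed on the consumer.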
<h4 style="text-align: left;">
Logic Creep into the Client</h4>
I have seen this happen a few times, where business logic creeps into the binary client. This is just bad design that negates the benefits of a services architecture. Sometimes the logic is around things like fallbacks, counters, request caching etc. While these make sense to abstract away from a DRY perspective, they introduce a coupling or dependency on the client such that, from the consumer's perspective, making changes could require all consumers to update. There are some clients that have 'modes' of operation and logic around what will be done in a given mode. These can be painful if they need to change.<br />
<h4 style="text-align: left;">
Resource Usage</h4>
The library provides a request cache to cache request data. It will use threads, memory, connection pools and many other resources while performing its job. The service consumer might not need a multi-threaded connection pool; it might only need to obtain Notes every once in a while and might not even need to keep the connection alive. The service consumer might not even care about parallelizing calls to obtain data; it might be fine with serial. The consumer is, however, automatically <b>opted in to these invasive resource usages</b>.<br />
<h4 style="text-align: left;">
Black Box</h4>
The service client is not something the service consumer wrote. On one hand, service consumers are grateful that they have a library that takes away the effort of remote communication, but on the other hand, they have absolutely no control over what the library is doing. When something goes wrong, the consumer team needs to start debugging the service client logic and other details, something they never owned or have in-depth knowledge about. Phew!<br />
<h4 style="text-align: left;">
Overloaded Responsibility</h4>
The service owner's purpose is really to produce a service that fulfills the business need. Their job is not to provide clients for the multitude of consumers that might want to talk to them. Writing client code to provide resiliency, monitoring and logging around communicating with the service just does not appear to be the responsibility of the service owner.<br />
<br />
<h3 style="text-align: left;">
Hydra Client and Stack Invasion</h3>
<br />
Consider the following figure for this discussion:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ0-OL9aL1lWtlQXxUKUM97mN0NnneG0jQutO4uwlwg0o0qC79_hBAkZjtH2N7nhG7_cLITZrCAKIbfo9MZJOGc-coQbO9gP0ZeEUHiheG8iALh6x37LUtO1hB6ykfg2_bZ6UQrFaNRN-m/s1600/NotesAndProductSearchHydra.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="1256" height="212" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ0-OL9aL1lWtlQXxUKUM97mN0NnneG0jQutO4uwlwg0o0qC79_hBAkZjtH2N7nhG7_cLITZrCAKIbfo9MZJOGc-coQbO9gP0ZeEUHiheG8iALh6x37LUtO1hB6ykfg2_bZ6UQrFaNRN-m/s640/NotesAndProductSearchHydra.png" width="640" /></a></div>
A service client often brings along with it dependencies on libraries that the service maintainers have chosen for their stack. Consumers of the service client are then forced to manage dependencies that they would probably <b>never have wanted</b> in their stack but must absorb. As an example, consider a team that is invested in Spring and uses Spring MVC and RestTemplate as their stack, but has to suffer dependency-management hell because they need to interface with a team whose service client is written using <a href="https://github.com/jersey/jersey">Jersey</a>, bringing in dependencies that might not be compatible. Multiply this with other service clients and the dependencies they bring in, or go ahead and run with <i>Jersey 1 and Jersey 2 client</i> dependent libraries in your classpath to experience first hand how that would feel :-).<br />
<br />
<h3 style="text-align: left;">
Heterogeneity and Polyglot Clients</h3>
<br />
"<i>First you lose true technology heterogeneity. The library typically has to be in the same language, or at the very least run the same platform</i>" - Sam Newman<br />
<br />
Expecting the team creating a service to maintain service clients for different languages and platforms is a maintenance hell and simply not scalable. If they do not provide a client, then the ability to support a polyglot environment is stifled. When service clients start having business logic in them, it becomes really hard for service consumers not using the service-team-provided client to generate their own, due to the complexity of the logic involved. This can be a serious problem as it curbs innovation and technological advances.<br />
<br />
<h3 style="text-align: left;">
Strong Coupling</h3>
<br />
The service client, when done wrong, can lead to strong coupling between the service and its consumers. This coupling means that making even the smallest of changes to the service can involve significant upgrade effort from the consumers. What this does is stymie innovation on the service and tax the service consumers with upgrade cycles dictated by the service providers.<br />
<br />
<h2 style="text-align: left;">
Benefits of the Service Client</h2>
<br />
The primary benefit that service clients provide is <b>DRY</b> (Don't Repeat Yourself). A team that develops a service might quite well develop a client to <b>test their service</b>. They might also provide integration tests with multiple versions of binary clients to ensure backward compatibility of their APIs. Their responsibility, though, really ends there; they should have <b>no obligation to distribute this client to others</b>. But the question we find a service consumer asking is: <i>if this client is already created by the service owners and is available for free, why should I need to build one?</i><br />
<br />
Remember that the service creators are knowledgeable about the expected SLAs of their service endpoints, and the client they provide has <b>meaningful defaults and optimizations for</b> the same. A service owner might augment the client library with things like Circuit Breakers, Bulkheading, Metrics, Caching and Defaults that will make the client work really well with the service.<br />
<br />
If a company's stack is very <b>opinionated, </b>for example, a<b> non-polyglot</b> Java and Spring Boot stack where every team uses <i>RestTemplate </i>for HTTP invocation, with some discipline around approved versions of third-party jars considering backward compatibility, then this model could very well hum along.<br />
<h2 style="text-align: left;">
Client Generation</h2>
<br />
To facilitate the choice for service consumers to generate their own client, an IDL needs to be made available. Tools like <a href="https://github.com/swagger-api/swagger-codegen">Swagger Code Gen </a>and <a href="https://github.com/grpc/grpc-java/tree/master/compiler">GRPC gen</a> are great examples of how these could be made available. The auto-generated clients are good but, as a word of caution, they aren't always optimized for high load and/or might need customization for sending custom company-specific data anyway, diminishing their value as generated clients. Examples of company-specific data might be things like custom trace identifiers, security info, etc. Generated client code also will not have circuit breaking, bulkheading and monitoring features, effectively requiring each service consumer to write their own with the tools available to them.<br />
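To make the missing-resilience point concrete, here is a minimal sketch of the kind of circuit breaking a consumer ends up layering on top of a generated client. The class and threshold here are hypothetical, plain-JDK illustrations; in practice you would reach for a library such as Hystrix or Resilience4j rather than roll your own.

```java
import java.util.function.Supplier;

/**
 * Minimal circuit-breaker sketch (hypothetical; real clients would use a
 * library such as Hystrix or Resilience4j). After 'threshold' consecutive
 * failures the breaker opens and fails fast with the fallback value
 * instead of invoking the remote call at all.
 */
public class SimpleCircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    public SimpleCircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public synchronized <T> T call(Supplier<T> remoteCall, T fallback) {
        if (consecutiveFailures >= threshold) {
            return fallback; // breaker open: fail fast
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0; // a success closes the breaker again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }

    public synchronized boolean isOpen() {
        return consecutiveFailures >= threshold;
    }

    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(3);
        Supplier<String> failing = () -> { throw new RuntimeException("service down"); };
        for (int i = 0; i < 5; i++) {
            breaker.call(failing, "fallback");
        }
        System.out.println(breaker.isOpen()); // prints: true
    }
}
```

Each consumer writing (or configuring) one of these per generated client is exactly the duplicated effort the post is cautioning about.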
<br />
<h1 style="text-align: left;">
Parting Thoughts and Resources</h1>
<br />
We want to promote a loosely coupled architecture in distributed systems, i.e., between services and their consumers. This enables the service to evolve without impacting consumers. A service itself must <b>NOT </b>be platform/language biased and should support a polyglot consumer ecosystem.<br />
<br />
<div>
If service owners design their services with the mentality that they have no control over the consumer's stack apart from a transport protocol, it promotes better design of the service, where logic does not creep into the client.<br />
<br />
Service clients, if created by service owners, must be '<b>thin</b>' libraries <b>devoid </b>of any business logic, so that they can safely be absorbed by consumers. If the client libraries are doing too much, you ought to re-think your <b>service responsibilities and, if needed, consider introducing an intermediary service</b>. I would also recommend that if a service client is created for consumption by consumers, you limit the third-party dependencies used by the client to <b>prevent the Hydra Client effect.</b><br />
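A 'thin' client can even avoid third-party dependencies entirely. As a rough sketch (hypothetical class and paths, assuming JDK 11+ with its built-in HTTP client), the client does nothing but build requests and move bytes, leaving all interpretation to the consumer:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * A 'thin' service-client sketch: transport only, no business logic, and no
 * third-party dependencies (uses only JDK 11+ java.net.http). Names and
 * paths are hypothetical, not from any real service in this post.
 */
public class ThinProductClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    public ThinProductClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    HttpRequest buildGetProduct(long productId) {
        // The client only builds the request; interpreting the payload is
        // left to the consumer, keeping logic out of the client tier.
        return HttpRequest.newBuilder(URI.create(baseUrl + "/products/" + productId))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public String getProductJson(long productId) throws Exception {
        return http.send(buildGetProduct(productId),
                HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) {
        ThinProductClient client = new ThinProductClient("http://localhost:9094");
        System.out.println(client.buildGetProduct(42L).uri()); // prints: http://localhost:9094/products/42
    }
}
```

A consumer absorbing this library inherits nothing but the JDK, which is the Hydra-avoidance property being argued for above.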
<br />
It is also important to note that if a services team <b>creates a client, they should not force the consumers</b> to use it. If the choice to use a different stack exists, then the consumer is empowered to use the client generated by the service maintainer or develop one independently. To a great degree, service consumer teams maintaining their own clients keeps the service maintainers in check regarding compatibility and prevents logic creeping into the client tier.<br />
<div>
<ul style="text-align: left;">
<li>Ben Christensen has a great presentation on the<a href="https://www.microservices.com/talks/dont-build-a-distributed-monolith/"> Distributed Monolith</a> that is a must-see for anyone interested in this subject. </li>
<li>Read the Microservices book by Sam Newman. Pretty good data!</li>
</ul>
<div>
Keep following these postings as we walk into the<b> next generation of services and tools </b>to enable them!<br />
<br />
If you have thoughts for or against Service Clients, please do share in the comments. </div>
</div>
</div>
</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com0tag:blogger.com,1999:blog-4667121987470696359.post-50252481275785090862015-12-06T21:18:00.002-07:002015-12-06T21:27:37.658-07:00Maven Integration Testing of Spring Boot Cloud Netflix Eureka Services with Docker<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
Introduction</h2>
<br />
<div>
My previous blog was about using <a href="http://sleeplessinslc.blogspot.com/2015/11/spring-cloud-netflix-eureka-exampleits.html">Spring Cloud Netflix</a> and my experiences with it. In this BLOG, I will share how one could perform a maven integration test that involves multiple Spring Boot services that use Netflix Eureka as a service registry. I have utilized a similar example as the one I used in my BLOG about <a href="http://sleeplessinslc.blogspot.com/2015/04/reactive-programming-with-jersey-and.html">Reactive Programming with Jersey and RxJava</a> where a product is assembled from different JAX-RS microservices. For this BLOG though, each of the microservices are written in <a href="http://projects.spring.io/spring-boot/">Spring Boot</a> using <a href="https://github.com/spring-cloud/spring-cloud-netflix">Spring Cloud Netflix Eureka</a> for service discovery. A service which I am calling the <i>Product Gateway</i>, assembles data from the independent product based microservices to hydrate a Product.<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2FTI64cgdCAqneLw9Y2Y7ZPY7xIR2lAow-X-k_nCSf7GIgwpv73QNYwdREmt7D3cFbyq5CdG4XQmQ5ijPIU9dtS35umi5DJddbl2rkt6lwR18p5X4FncG8WGnLw_ObIU4-x6go8TneC78/s1600/ProductGateway.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="396" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2FTI64cgdCAqneLw9Y2Y7ZPY7xIR2lAow-X-k_nCSf7GIgwpv73QNYwdREmt7D3cFbyq5CdG4XQmQ5ijPIU9dtS35umi5DJddbl2rkt6lwR18p5X4FncG8WGnLw_ObIU4-x6go8TneC78/s640/ProductGateway.png" width="640" /></a></div>
<h2 style="text-align: left;">
Product Gateway</h2>
<br />
<div>
A mention of the Product Gateway is warranted for completeness. The following represents the ProductGateway classes where the <i>ObservableProductResource</i> uses a <i>ProductService</i> which in turn utilizes the REST clients of the different services and <a href="https://github.com/ReactiveX/RxJava">RxJava</a> to hydrate a Product.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhimMzj4oM-MZk9lULmCws7eZn0CtQNE88dkk0U5oLn0eI4Qq1jpxLHSM33wNuJLqFdb6nRDqCsDFuJc_RPkTzUzDVVmJCHKp6PD0j4Z6yp5HIfmTnm3-UmhHGTGgE89CbrvV9naf3IT1Fl/s1600/productgatewayclasses.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhimMzj4oM-MZk9lULmCws7eZn0CtQNE88dkk0U5oLn0eI4Qq1jpxLHSM33wNuJLqFdb6nRDqCsDFuJc_RPkTzUzDVVmJCHKp6PD0j4Z6yp5HIfmTnm3-UmhHGTGgE89CbrvV9naf3IT1Fl/s1600/productgatewayclasses.png" /></a></div>
<br />
<br />
<br />
The <i>ObservableProductResource</i> Spring Controller class is shown below which uses a DeferredResult for Asynchronous processing:</div>
<pre class="java" name="code">@RestController
public class ObservableProductResource {
@Inject
private ProductService productService;
@RequestMapping("/products/{productId}")
public DeferredResult<Product> get(@PathVariable Long productId) {
DeferredResult<Product> deferredResult = new DeferredResult<Product>();
Observable<Product> productObservable = productService.getProduct(productId);
productObservable.observeOn(Schedulers.io())
.subscribe(productToSet -> deferredResult.setResult(productToSet), t1 -> deferredResult.setErrorResult(t1));
return deferredResult;
}
}
</pre>
Each Micro Service client is declaratively created using <a href="https://github.com/Netflix/feign">Netflix Feign</a>. As an example of one such client, the <i>BaseProductClient</i>, is shown below:
<br />
<pre class="java" name="code" style="text-align: left;">@FeignClient("baseproduct")
public interface BaseProductClient {
@RequestMapping(method = RequestMethod.GET, value = "/baseProduct/{id}", consumes = "application/json")
BaseProduct getBaseProduct(@PathVariable("id") Long id);
}
</pre>
<h2 style="text-align: left;">
What does the Integration Test do?</h2>
</div>
<div>
The primary purpose is to test the actual end-to-end integration of the Product Gateway service. As a maven integration test, the expectation is that it would entail:</div>
<div>
<ul style="text-align: left;">
<li>Starting an instance of <a href="https://github.com/Netflix/eureka">Eureka</a></li>
<li>Starting Product related services registering them with Eureka</li>
<li>Starting Product Gateway Service and registering it with Eureka</li>
<li>Issuing a call to Product Gateway Service to obtain said Product</li>
<li>Product Gateway Service discovering instances of Product microservices like Inventory, Reviews and Price from Eureka</li>
<li>Product Gateway issuing calls to each of the services using RxJava and hydrating a Product</li>
<li>Asserting the retrieval of the Product and shutting down the different services</li>
</ul>
The test itself would be a maven JUnit integration test.</div>
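The steps above run inside a standard JUnit class, so the only maven wiring needed on the test side is the failsafe plugin, which executes tests in the <i>integration-test</i> phase (after the containers come up) and fails the build in <i>verify</i> (after they are torn down). A typical configuration might look like the following; the version and the includes pattern are illustrative, not taken from the example project:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.18.1</version>
  <configuration>
    <includes>
      <!-- Failsafe's default pattern is **/*IT.java; widen it to match *IntegrationTest -->
      <include>**/*IntegrationTest.java</include>
    </includes>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal> <!-- run tests after pre-integration-test setup -->
        <goal>verify</goal>           <!-- fail the build on test failures, after teardown -->
      </goals>
    </execution>
  </executions>
</plugin>
```

Binding failure reporting to <i>verify</i> rather than <i>integration-test</i> is what allows the <i>post-integration-test</i> phase (where the Docker containers are stopped) to always run.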
<div>
As the services are bundled as JARs with embedded containers, they present a challenge to start up and tear down during an integration test. </div>
<div>
One option is to create equivalent WAR based artifacts for testing purposes only and use the <a href="https://codehaus-cargo.github.io/cargo/Maven2+plugin.html">maven-cargo plugin</a> to deploy each of them under a separate context of the container and test the gateway. That however does mean creating a WAR that might never really be used apart from testing purposes.</div>
<div>
Another option to start the different services is to use the exec maven plugin and/or some flavor (hack) of launching external JVMs.</div>
<div>
Yet another option is to write custom class-loader logic [to prevent stomping of properties and classes of individual microservices] and launch the different services in the same integration-test JVM.</div>
<div>
All these are options, but what appealed to me was to use <a href="https://www.docker.com/">Docker</a> containers to start each of these microservice JVMs and run the integration test. So why Docker? Docker seems a natural fit to compose an application and distribute it across a development environment as a consistent artifact. What I find appealing is that during microservice-based integration testing, one can simply pull in different Docker images of specific versions (services, data stores, etc.) without dealing with environment-based conflicts.<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFKQDLBO3uGdGtxIwUBB89Err0Czfy4go11PwmMJ8HT1OHaHNS-fY2hJjJiZDyjJj6EORD0U4jw6NnviObtPhlqlqll4mCJptDO9BEatnA8X3i1qCFBkK9yGteROQBIq1cftRnUMXZeebz/s1600/dockercontainerintegrationtest.png" imageanchor="1"><img border="0" height="494" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFKQDLBO3uGdGtxIwUBB89Err0Czfy4go11PwmMJ8HT1OHaHNS-fY2hJjJiZDyjJj6EORD0U4jw6NnviObtPhlqlqll4mCJptDO9BEatnA8X3i1qCFBkK9yGteROQBIq1cftRnUMXZeebz/s640/dockercontainerintegrationtest.png" width="640" /></a></div>
<h2 style="text-align: left;">
Creating Docker Images </h2>
<br />
<div>
As part of building each of the web services, it would be ideal to create a Docker image. There are many maven plugins out there to create Docker images [actually too many]. In the example, we have used the one from Spotify. The building of the Docker image using the <a href="https://github.com/spotify/docker-maven-plugin">Spotify</a> plugin for Spring Boot applications is nicely explained in the BLOG from spring.io, <a href="https://spring.io/guides/gs/spring-boot-docker/">Spring Boot with Docker</a>.</div>
<div>
What I would see happening is that, as part of the build process, the Docker image would be published to a Docker repository internal to the organization and then made available to other consumers.</div>
<div>
<br /></div>
<h2 style="text-align: left;">
Integration Test
</h2>
<br />
As part of the <i>pre-integration-test</i> phase of maven, we would like to start up the Docker containers representing the different services. In order for the gateway container to work with the other service containers, we need to be able to <a href="https://docs.docker.com/v1.8/userguide/dockerlinks/">link</a> the Docker containers. I was not able to find a way to do that using the Spotify plugin. What I found myself doing instead was utilizing another <a href="https://github.com/rhuss/docker-maven-plugin">maven plugin for Docker by Roland Huß</a>, which has much better documentation and more features. Shown below is the plugin configuration for the integration test.<br />
<br />
<pre class="xml" name="code"><plugin>
<groupId>org.jolokia</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>0.13.6</version>
<configuration>
<logDate>default</logDate>
<autoPull>true</autoPull>
<images>
<image>
<!-- Eureka Server -->
<alias>eureka</alias>
<name>docker/eureka</name>
<run>
<wait>
<http>
<url>http://localhost:8761</url>
<method>GET</method>
<status>200</status>
</http>
<time>30000</time>
</wait>
<log>
<prefix>EEEE</prefix>
<color>green</color> <!-- Color the output green -->
</log>
<ports>
<port>8761:8761</port> <!-- Local to container port mapping -->
</ports>
<env>
<eureka.instance.hostname>eureka</eureka.instance.hostname> <!-- Override host name property -->
</env>
</run>
</image>
<image>
<alias>baseproduct</alias>
<name>docker/baseproduct</name>
<run>
<wait>
<http>
<url>http://localhost:9090</url>
<method>GET</method>
<status>200</status>
</http>
<time>30000</time>
</wait>
<log>
<prefix>EEEE</prefix>
<color>blue</color>
</log>
<ports>
<port>9090:9090</port>
</ports>
<links>
<link>eureka</link> <!-- Link to Eureka Docker image -->
</links>
<env>
<!-- Notice the system property overriding of the eureka service Url -->
<eureka.client.serviceUrl.defaultZone>http://eureka:8761/eureka/</eureka.client.serviceUrl.defaultZone>
</env>
</run>
</image>
<!--....Other service containers like price, review, inventory-->
<image>
<alias>product-gateway</alias>
<name>docker/product-gateway</name>
<run>
<wait>
<http>
<url>http://localhost:9094</url>
<method>GET</method>
<status>200</status>
</http>
<time>30000</time>
</wait>
<log>
<prefix>EEEE</prefix>
<color>blue</color>
</log>
<ports>
<port>9094:9094</port>
</ports>
<links>
<!-- Links to all other containers -->
<link>eureka</link>
<link>baseproduct</link>
<link>price</link>
<link>inventory</link>
<link>review</link>
</links>
<env>
<eureka.client.serviceUrl.defaultZone>http://eureka:8761/eureka/</eureka.client.serviceUrl.defaultZone>
<!-- Setting this property to prefer ip address, else Integration will fail as it does not know host name of product-gateway container-->
<eureka.instance.prefer-ip-address>true</eureka.instance.prefer-ip-address>
</env>
</run>
</image>
</images>
</configuration>
<executions>
<execution>
<id>start</id>
<phase>pre-integration-test</phase>
<goals>
<goal>start</goal>
</goals>
</execution>
<execution>
<id>stop</id>
<phase>post-integration-test</phase>
<goals>
<goal>stop</goal>
</goals>
</execution>
</executions>
</plugin>
</pre>
One of the nice features is the colored log prefix for each container's messages; this gives one a sense of visual separation among the multitude of containers that are started.
The Integration Test itself is shown below:
<br />
<pre class="java" name="code">@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { ProductGatewayIntegrationTest.IntegrationTestConfig.class })
public class ProductGatewayIntegrationTest {
private static final Logger LOGGER = Logger.getLogger(ProductGatewayIntegrationTest.class);
/**
* A Feign Client to obtain a Product
*/
@FeignClient("product-gateway")
public static interface ProductClient {
@RequestMapping(method = RequestMethod.GET, value = "/products/{productId}", consumes = "application/json")
Product getProduct(@PathVariable("productId") Long productId);
}
@EnableFeignClients
@EnableDiscoveryClient
@EnableAutoConfiguration
@ComponentScan
@Configuration
public static class IntegrationTestConfig {}
// Ribbon Load Balancer Client used for testing to ensure an instance is available before invoking call
@Autowired
LoadBalancerClient loadBalancerClient;
@Inject
private ProductClient productClient;
static final Long PRODUCT_ID = 9310301L;
@Test(timeout = 30000)
public void getProduct() throws InterruptedException {
waitForGatewayDiscovery();
Product product = productClient.getProduct(PRODUCT_ID);
assertNotNull(product);
}
/**
* Waits for the product gateway service to register with Eureka
* and be available on the client.
*/
private void waitForGatewayDiscovery() {
while (!Thread.currentThread().isInterrupted()) {
LOGGER.debug("Checking to see if an instance of product-gateway is available..");
ServiceInstance choose = loadBalancerClient.choose("product-gateway");
if (choose != null) {
LOGGER.debug("An instance of product-gateway was found. Test can proceed.");
break;
}
try {
LOGGER.debug("Sleeping for a second waiting for service discovery to catch up");
Thread.sleep(1000);
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
}
</pre>
The test uses the LoadBalancerClient from Ribbon to ensure an instance of 'product-gateway' can be discovered from Eureka prior to using the product client to invoke the gateway service and obtain back a product.
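This wait-then-act pattern generalizes to any eventually consistent registry. Stripped of the Spring and Ribbon types, the polling logic in waitForGatewayDiscovery() amounts to the following plain-Java sketch (the names are hypothetical; the Supplier stands in for loadBalancerClient.choose("product-gateway")):

```java
import java.util.function.Supplier;

/**
 * Polls a lookup until it yields a non-null result or the deadline passes.
 * A plain-Java rendering of a "wait for service discovery" loop; the
 * Supplier stands in for a registry query such as LoadBalancerClient.choose().
 */
public class DiscoveryWaiter {
    public static <T> T waitFor(Supplier<T> lookup, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            T found = lookup.get();
            if (found != null) {
                return found; // instance discovered, safe to proceed
            }
            Thread.sleep(pollMillis); // registry not caught up yet; back off briefly
        }
        return null; // timed out without discovering an instance
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a registry that only "knows" the service on the third poll.
        int[] attempts = {0};
        String instance = waitFor(
                () -> ++attempts[0] >= 3 ? "product-gateway:9094" : null,
                5000, 10);
        System.out.println(attempts[0] + " polls -> " + instance); // prints: 3 polls -> product-gateway:9094
    }
}
```

Pairing a deadline with the poll (as the JUnit timeout does in the test above) is important, so a service that never registers fails the test instead of hanging the build.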
<br />
<br />
<h2 style="text-align: left;">
Running the Example
</h2>
<br />
<div>
The first thing you need to do is make sure you have <a href="https://docs.docker.com/engine/installation/">Docker installed</a> on your machine. Once you have Docker installed, clone the example from github (<a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/product-gateway-docker">https://github.com/sanjayvacharya/sleeplessinslc/tree/master/product-gateway-docker</a>) and then execute mvn install from the root level of the project. This will result in the creation of the Docker images and the running of the Docker-based integration tests of the product gateway. Cheers!</div>
</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com0tag:blogger.com,1999:blog-4667121987470696359.post-67245868181436324862015-11-06T19:55:00.002-07:002015-11-07T02:39:20.446-07:00Spring Cloud Netflix Eureka Example...It's so Bootiful<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
Introduction</h2>
<br />
<div>
Been looking at Netflix <a href="https://github.com/Netflix/eureka">Eureka </a>for Service Registry/Discovery and thought I'd share some thoughts via an example. My interest was more around <a href="https://github.com/Netflix/eureka/wiki/Eureka-2.0-Architecture-Overview">Eureka 2.0</a>, and I was eagerly awaiting its release in <a href="https://groups.google.com/forum/?fromgroups#!topic/eureka_netflix/_MrKYJjkGnM">early 2015</a> per their swag(?), but that did not materialize, with the team stating <a href="https://groups.google.com/forum/#!topic/eureka_netflix/_MrKYJjkGnM">it's better to adopt Eureka 1.0</a>. Not very thrilled, I must say :-(.<br />
<br />
Looking at Eureka 1.0, there appear to be two routes one could take:<br />
<ol style="text-align: left;">
<li>Use Stock Netflix Eureka</li>
<li>Use <a href="https://github.com/spring-cloud/spring-cloud-netflix">Spring Cloud Netflix</a>, which is Netflix Eureka garnished with other <a href="https://netflix.github.io/">Netflix OSS</a> components, heated to a Spring temperature and presented as a <a href="http://projects.spring.io/spring-boot/">Spring Boot</a> dish.</li>
</ol>
This blog will utilize Spring Cloud Netflix and my favorite<i> Notes Web Service</i> to demonstrate a workable project that one can check out and play with to understand Service Discovery with Eureka. The example will register an instance of the Notes service with the Eureka server (the registry), and a Notes client will discover the service instance and issue a request to it, simulating a full integration. Additional technologies used:<br />
<ul style="text-align: left;">
<li><a href="https://code.google.com/p/rest-assured/">RestAssured</a> - Rest Assured is used for fluent testing of the service API </li>
<li><a href="https://github.com/springfox/springfox">Spring Fox</a> - Used to expose Swagger UI for service documentation and testing</li>
<li><a href="https://github.com/Netflix/ocelli">Netflix Ocelli </a>- Client Side Load Balancing</li>
</ul>
For the client side of the application, I chose to use <a href="https://github.com/Netflix/ocelli">Netflix Ocelli</a> for client-side load balancing along with Spring Framework's <a href="https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/client/RestTemplate.html">RestTemplate</a>. Why Ocelli over <a href="https://github.com/Netflix/ribbon">Ribbon </a>for client-side load balancing? Well, Netflix seems to be looking at Ocelli as the new Ribbon, at least that is what this <a href="https://groups.google.com/forum/#!topic/ribbon-users/-Q75e5dNkp0">email chain</a> seems to indicate.
<br />
<br />
<h2 style="text-align: left;">
Eureka Components</h2>
<br />
<div>
The following are some components one deals with when working with Eureka</div>
<h4 style="text-align: left;">
EurekaClientConfig</h4>
<div>
<a href="http://javadox.com/com.netflix.eureka/eureka-client/1.1.136/com/netflix/discovery/EurekaClientConfig.html">EurekaClientConfig </a>provides the configuration required by eureka clients to register an instance with Eureka. Netflix Eureka provides an implementation, <a href="http://javadox.com/com.netflix.eureka/eureka-client/1.1.136/com/netflix/discovery/DefaultEurekaClientConfig.html">DefaultEurekaClientConfig</a>, that is configured via a property file, <i>eureka.client.props</i>. Spring Cloud Netflix extends <i>EurekaClientConfig </i>and provides an <a href="https://github.com/spring-cloud/spring-cloud-netflix/blob/master/spring-cloud-netflix-core/src/main/java/org/springframework/cloud/netflix/eureka/EurekaClientConfigBean.java">EurekaClientConfigBean </a>which implements <i>EurekaClientConfig </i>and works with Spring properties. Properties prefixed by <i>eureka.client </i>will map into the EurekaClientConfigBean via <a href="https://projectlombok.org/">Project Lombok</a> magic.</div>
<h4 style="text-align: left;">
EurekaServerConfig</h4>
<div>
Follows a similar approach to the <i>ClientConfig</i>, but is used for Eureka <b>server </b>properties</div>
<h4 style="text-align: left;">
EurekaInstanceConfig</h4>
<div>
Provides the configuration required by an instance to register with the Eureka server. If building a service, you might want to tweak some of the knobs here. You can control properties like:</div>
<div>
<ul style="text-align: left;">
<li>Name of Service</li>
<li>Secure/Non-secure ports of operation</li>
<li>Lease renewal and expiration</li>
<li>Home page URL and Status page URL</li>
</ul>
If working with vanilla Netflix Eureka, you could use <i>MyDataCenterInstanceConfig </i>(for a non-AWS data center) or the <i>CloudInstanceConfig </i>(for AWS). Spring Cloud Netflix users get an <a href="https://github.com/spring-cloud/spring-cloud-netflix/blob/master/spring-cloud-netflix-core/src/main/java/org/springframework/cloud/netflix/eureka/EurekaInstanceConfigBean.java">EurekaInstanceConfigBean </a>spruced up with Lombok magic to control these properties. The properties are prefixed by <i>eureka.instance</i>.</div>
<h4 style="text-align: left;">
Discovery Client</h4>
<div>
<a href="https://github.com/Netflix/eureka/blob/master/eureka-client/src/main/java/com/netflix/discovery/DiscoveryClient.java">Discovery Client</a> uses a '<b>jersey 1</b>' client to communicate with Eureka Server to:</div>
<div>
<ul style="text-align: left;">
<li>Register a service instance</li>
<li>Renew the lease of a Service instance</li>
<li>Cancel the lease of a service instance</li>
<li>Query Eureka Server for registered instances</li>
</ul>
DiscoveryClient is configured by the <a href="https://github.com/Netflix/eureka/blob/master/eureka-client/src/main/java/com/netflix/discovery/DiscoveryManager.java">DiscoveryManager</a>, which has knowledge of the <i>EurekaClientConfig</i> and the <i>EurekaInstanceConfig</i><br />
<br />
<h2 style="text-align: left;">
The Notes Example</h2>
</div>
<div>
The Notes example defines a server-side component that will register with Netflix Eureka, a client-side library (the Notes client) that discovers a Notes service instance from Eureka, and an integration test that demonstrates this lifecycle.</div>
<h3 style="text-align: left;">
Notes Service Code</h3>
<br />
The Heart of the Service code is the declaration of the <a href="http://projects.spring.io/spring-boot/">Spring Boot</a> Application as shown below:<br />
<br />
<pre class="java" name="code">@SpringBootApplication // Spring Boot
@EnableEurekaClient // Enable Eureka Client
@RestController
@EnableSwagger2 // Swagger
public class Application {
@RequestMapping("/")
public String home() {
return "This is the Notes App";
}
@Bean
public Docket swaggerSpringMvcPlugin() { // Minimalistic setting for Swagger Support
return new Docket(DocumentationType.SWAGGER_2)
.select()
.paths(PathSelectors.any())
.build().pathMapping("/");
}
public static void main(String args[]) {
SpringApplication.run(Application.class, args);
}
}
@RestController
@Api(basePath = "/notes", value = "Notes", description = "Note Creation and Retrieval", produces = "application/xml")
public class NotesController {
@Inject
private NotesService notesService;
@RequestMapping(value = "/notes", method = RequestMethod.POST)
public Long create(@RequestBody Note note) {
return notesService.createNote(note);
}
@RequestMapping(value = "/notes/{id}", method = RequestMethod.GET, produces = MediaType.APPLICATION_XML_VALUE)
public Note get(@PathVariable Long id) {
return notesService.getNote(id);
}
}
</pre>
In the above Boot application, we enabled Eureka Client via the<i> @EnableEurekaClient</i> annotation and also set up swagger support by creating a Docket and adding the <i>@EnableSwagger2 </i>annotation. The JSON Swagger resource is available at<i> http://localhost:9090/v2/api-docs</i> and the Swagger HTML is available at<i> http://localhost:9090/swagger-ui.html</i>. The configuration for the application is driven by a simple <a href="http://yaml.org/">YAML </a>file as shown below where the port, location of Eureka server instance and a custom instance Id are defined:
<br />
<pre class="xml" name="code">spring:
application:
name: Notes
server.port: 9090
eureka:
client:
serviceUrl:
defaultZone: http://localhost:8761/eureka/v2/
instance:
metadataMap:
instanceId: ${spring.application.name}:${spring.application.instance_id:${random.value}}
</pre>
If one wished to integration-test the service application (exercising DB and other calls) without having to worry about Eureka registration and discovery, you could simply turn off the eureka client and use, say, <a href="https://code.google.com/p/rest-assured/">RestAssured </a>to validate your service as shown below.
<br />
<pre class="java" name="code">@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
@IntegrationTest({"server.port=0", // Pick an ephemeral port
"eureka.client.enabled=false" // Prevent registering/discovering with Eureka
})
public class NotesControllerTest {
@Value("${local.server.port}")
private int port;
@Before
public void setUp() {
RestAssured.port = port;
}
@Test
public void createAndGetNote() {
given().contentType(MediaType.APPLICATION_XML)
.body(new Note("Test"))
.when()
.post("http://localhost:" + port + "/notes")
.then()
.statusCode(200).body(equalTo("1"));
when().get("/notes/{id}", 1).then()
.assertThat()
.statusCode(200)
.body("note.content", equalTo("Test"));
}
}
</pre>
<br /></div>
<h3 style="text-align: left;">
Notes Client Code
</h3>
<br />
<div>
For the <a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/spring-cloud-netflix-notes/Notes-client/src/main/java/com/welflex/notes/client/NotesClientImpl.java">NoteClient</a>, as mentioned, <i>Ocelli </i>is used as a Load Balancer with <i>RestTemplate </i>used to make the REST call.<br />
<br />
<pre class="java" name="code">public class NotesClientImpl implements NotesClient, Closeable {
private final RestTemplate restTemplate;
private final Subscription subscription;
private final AtomicReference<List<InstanceInfo>> result;
private final RoundRobinLoadBalancer<InstanceInfo> instanceInfoLoadBalancer;
public NotesClientImpl(RestTemplate restTemplate, DiscoveryClient discoveryClient,
long refreshInterval, TimeUnit refreshIntervalTimeUnit) {
this.restTemplate = restTemplate;
this.result = new AtomicReference<List<InstanceInfo>>(new ArrayList<>());
this.subscription = new EurekaInterestManager(discoveryClient).newInterest()
.forApplication("Notes") // Notes Service
.withRefreshInterval(refreshInterval, refreshIntervalTimeUnit) // Ocelli Refresh Interval
.asObservable()
.compose(InstanceCollector.<InstanceInfo>create())
.subscribe(RxUtil.set(result));
instanceInfoLoadBalancer = RoundRobinLoadBalancer.<InstanceInfo>create(); // Round Robin
}
private String getServiceInstanceUrl() {
InstanceInfo instanceInfo = instanceInfoLoadBalancer.choose(result.get()); // Choose an instance
if (instanceInfo != null) {
return "http://" + instanceInfo.getIPAddr() + ":" + instanceInfo.getPort();
}
throw new RuntimeException("Service Not available");
}
@Override
public Note getNote(Long noteId) {
return restTemplate.getForEntity(getServiceInstanceUrl() + "/notes/{id}", Note.class, noteId).getBody();
}
...
}
</pre>
Ocelli uses an 'interest' manager to register a subscription for changes to the Notes service. A round-robin load balancer is used in the example. The NotesClient is set up in the <a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/spring-cloud-netflix-notes/Notes-client/src/main/java/com/welflex/notes/client/NotesClientConfig.java">NotesClientConfig</a>.
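Round-robin selection itself is simple: cycle an index over whatever instances the subscription last delivered. Conceptually, choose() does something like the following plain-Java sketch (an illustration only, not Ocelli's actual implementation):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Conceptual round-robin chooser: cycles through the currently known
 * instances. An illustrative sketch, not Ocelli's implementation.
 */
public class RoundRobinChooser<T> {
    private final AtomicInteger next = new AtomicInteger();

    /** Returns the next instance in rotation, or null if none are registered. */
    public T choose(List<T> instances) {
        if (instances.isEmpty()) {
            return null;
        }
        // floorMod keeps the index non-negative even after the counter overflows
        int index = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(index);
    }

    public static void main(String[] args) {
        RoundRobinChooser<String> chooser = new RoundRobinChooser<>();
        List<String> instances = List.of("notes:9090", "notes:9091");
        for (int i = 0; i < 4; i++) {
            System.out.print(chooser.choose(instances) + " ");
        }
        System.out.println(); // prints: notes:9090 notes:9091 notes:9090 notes:9091
    }
}
```

Note that, as in the Ocelli client above, the list is passed in on every call; the balancer owns only the rotation state, while the subscription keeps the instance list fresh.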
<br />
<br />
<h3 style="text-align: left;">
Notes Integration Test</h3>
<br />
<a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/spring-cloud-netflix-notes/Notes-integration-test/src/test/java/com/welflex/notes/NotesIntegrationTest.java">NotesIntegrationTest </a> is where things get interesting. The integration test will:
<br />
<ul style="text-align: left;">
<li>Start a Eureka Server in the pre-integration-test phase of maven using Cargo. Note that we are using the <b>stock </b>Netflix eureka WAR for this. One could also use the Spring Cloud version via an exec maven plugin.</li>
<li>Have Notes Service start on a random port and register with the Eureka Server</li>
<li>Have the Notes Client create and obtain a 'Note' by discovering the Notes Service from Eureka</li>
</ul>
Eureka operations are usually set in the range of 20 to 30 seconds for things like registration, heartbeat, renewal and registry fetching. That works in a production environment, but in an integration test waiting that long is not an option. Thankfully, the Spring Boot test framework allows for easy overriding of these properties.<br />
<pre class="java" name="code" style="text-align: left;">@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = {Application.class, NotesClientConfig.class}) // Note the Import of Server and Client
@WebIntegrationTest(value = {"eureka.instance.leaseRenewalIntervalInSeconds=10", // Lease Interval
    "eureka.client.instanceInfoReplicationIntervalSeconds=1",
    "eureka.client.initialInstanceInfoReplicationIntervalSeconds=1",
    "eureka.client.registryFetchIntervalSeconds=1",
    "eureka.client.serviceUrl.defaultZone=http://localhost:9000/eureka/v2/"}, // Overriding eureka location
    randomPort = true)
public class NotesIntegrationTest {
  @Inject
  private NotesClient notesClient; // Notes Client injected

  @Inject
  private DiscoveryClient discoveryClient;

  @Value("${ocelliRefreshIntervalSeconds}")
  private Long ocelliRefreshIntervalSeconds; // How often Ocelli Client has been set up to refresh

  @Test
  public void createAndGetNote() throws InterruptedException {
    // Wait for Notes App to register
    waitForNotesRegistration(); // Waits for Notes Service to register before proceeding
    Thread.sleep((ocelliRefreshIntervalSeconds * 1000) + 1000L); // Ocelli might not be in sync

    Note note = new Note("Test");
    Long noteId = notesClient.createNote(note); // Create a note
    assertEquals(note, notesClient.getNote(noteId)); // Read the note
  }

  private void waitForNotesRegistration() throws InterruptedException {
    /* Discovery client is used to make sure service is registered before the Notes Client gets a chance to discover and exercise API */
  }
}
</pre>
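The elided <i>waitForNotesRegistration</i> above is essentially a poll loop against the discovery registry. As a rough, JDK-only sketch (the class name and the Supplier stand-in for the real DiscoveryClient lookup are mine, not from the Notes codebase), such a wait might look like:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

class RegistrationWaiter {

  // Polls until the supplier reports the service as registered, or the timeout
  // elapses. The Supplier stands in for a real registry lookup, e.g. checking
  // that the discovery client returns a non-empty instance list for the service.
  static boolean waitForRegistration(Supplier<Boolean> isRegistered,
                                     long timeoutSeconds) throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(timeoutSeconds);
    while (System.nanoTime() < deadline) {
      if (isRegistered.get()) {
        return true; // the service showed up in the registry
      }
      Thread.sleep(500L); // back off between registry polls
    }
    return false; // timed out waiting for registration
  }
}
```

In the actual test the check would be backed by the injected DiscoveryClient; failing the test fast on a false return keeps a missing registration from surfacing later as a confusing client error.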
The important thing to note here is that I am using the 'war' version of the Netflix Eureka server, started before the integration test runs. It is equally possible to launch the Spring Boot 'jar' version of the Eureka server via the exec maven plugin prior to the start of the integration test.
<br />
<br />
<h2 style="text-align: left;">
Parting Thoughts
</h2>
<br />
For the curious, this Notes Spring Cloud Netflix Eureka example can be found at <a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/spring-cloud-netflix-notes">sleeplessinslc github</a>.<br />
<br />
I would also like to share the following as my personal 'opinion' only:<br />
<h4 style="text-align: left;">
Netflix Eureka</h4>
<div>
<b>The good - </b><br />
<b><br /></b></div>
<div>
Awesome project, a Service Registry for the Web. Apps in any language can participate. REST + Jersey, preaching to the choir. More importantly, developed with a profound understanding of failure scenarios around a Microservice '<i>service registry</i>'. <a href="https://www.youtube.com/watch?v=27ynM2tbNXM">Nitesh Kant's presentation is a must see</a>.<br />
<br />
Good starter documentation and responsive folks on the mailing list. </div>
<div>
<br /></div>
<div>
<b>The bad - </b><br />
<b><br /></b></div>
<div>
I understand Eureka was designed by Netflix primarily for AWS usage. Maybe a more modular (plugin) approach would have been nicer, ensuring a clean separation between AWS and non-AWS code, especially considering it is a fine candidate for the private cloud.<br />
<br />
The Discovery client fetches the entire registry rather than information about specific services. The Eureka 2 architecture looks better in that regard, where one only gets data about the services one is interested in.<br />
<br />
Jersey 1 - Let's update to Jersey 2.X, folks! </div>
<div>
<br /></div>
<div>
The Eureka UI looks pretty horrid, and that is coming from a 'server side' only programmer :-). </div>
<div>
<br /></div>
<div>
There is an <a href="https://groups.google.com/forum/?fromgroups#!searchin/eureka_netflix/johnw/eureka_netflix/ULvUqYIa_qA/bePCce0vBwAJ">issue </a>where shutting down a service instance leads to shutting down all other services on that host, which is concerning if hosting multiple services on a node.<br />
<br />
Ocelli vs. Ribbon - can we have a clear direction? Ocelli has not had a release since July 10th, while Ribbon has a more recent release. So is Ocelli really the new Ribbon? </div>
<h4 style="text-align: left;">
Spring Cloud Netflix</h4>
<div>
<b>The good -</b><br />
<b><br /></b></div>
<div>
If using the Spring ecosystem, a no-brainer to adopt. Meshes really well with Spring properties and enables Hystrix and other Netflix OSS.<br />
<br />
Spring Boot is pretty sweet, but you need to work in a 'controlled' environment, the 'Spring Boot Way', to enjoy the full benefit. </div>
<div>
<br /></div>
<div>
The Spring Cloud Netflix re-bundled Eureka server is much more visually pleasing than the core Eureka UI. I believe they also have a workaround for the shutdown bug I mentioned.</div>
<div>
<b><br /></b></div>
<div>
<b>The bad - </b><br />
<b><br /></b></div>
<div>
You sold yourself to the <b>Bootiful devil :-)</b><br />
<ul style="text-align: left;">
<li>Custom extension classes of Netflix interfaces (EurekaClientConfigBean, etc.), rather than bridging</li>
<li>Version of Eureka is older</li>
<li>Documentation does not match up with the implementation. Case in point: the property "<i>spring.cloud.client.hostname</i>" defined in their <a href="http://projects.spring.io/spring-cloud/spring-cloud.html">documentation </a>is not even part of the current release. I spent some time chasing this</li>
<li>Do we really need Lombok?</li>
</ul>
So, what do I want to do next?<br />
<br />
Leaning back on my <a href="http://sleeplessinslc.blogspot.com/2015/04/reactive-programming-with-jersey-and.html">reactive example</a>, let's say I have an aggregator Product Service, which I want to test, that utilizes a Base Product Service, a Price Service, and an Inventory Service, all developed with Spring Cloud Eureka. How would I write a maven integration test for this? Off to Docker land my friends, onward and upward!</div>
<br /></div>
</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com0tag:blogger.com,1999:blog-4667121987470696359.post-1572256426871540252015-09-29T19:06:00.002-06:002015-09-29T19:06:19.717-06:00RxJava Presentation<div dir="ltr" style="text-align: left;" trbidi="on">
I have been an employee of <a href="http://www.overstock.com/">Overstock.com</a> for close to 8 years now. Every year Overstock has a Technology Day where employees present on technology and simply have fun. This year I presented on <a href="https://github.com/ReactiveX/RxJava">RxJava</a> with a <a href="http://digitaljoel.nerd-herders.com/">colleague</a> of mine. We had a lot of fun preparing for the presentation while learning as well. We also won a small reward for being a popular presentation :-)<div>
<br /></div>
<div>
The following are the slides that we are sharing from our presentation. I would recommend you check out the RxAir demo by my colleague; it's pretty sweet.</div>
<div>
<br /></div>
<div>
<a href="http://www.slideshare.net/sanjayvacharya/an-introduction-to-rxjava">http://www.slideshare.net/sanjayvacharya/an-introduction-to-rxjava</a></div>
<div>
<br /></div>
<div>
Enjoy!</div>
</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com0tag:blogger.com,1999:blog-4667121987470696359.post-68383825107917532952015-05-08T06:40:00.001-06:002015-05-08T06:40:55.802-06:00RxNetty and Jersey 2.0<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
Introduction</h2>
<div>
<br /></div>
<div>
I have been interested in <a href="http://reactivex.io/tutorials.html">reactive</a> programming and <a href="http://sleeplessinslc.blogspot.com/2015/04/reactive-programming-with-jersey-and.html">micro-services</a> recently, and a few things have caught my attention:</div>
<div>
<ul style="text-align: left;">
<li><a href="http://netty.io/">Netty</a> - An asynchronous event-driven framework for high performance protocol servers and clients </li>
<li>Netflix <a href="https://github.com/ReactiveX/RxNetty">RxNetty</a> - Reactive Extension Adapter for Netty</li>
<li>Netflix <a href="https://github.com/Netflix/karyon">Karyon</a> - A blueprint for a cloud ready application</li>
</ul>
</div>
<div>
Netty became my primary interest from a performance perspective, and some reading showed that Netflix had benchmarked RxNetty against Tomcat with results in favor of RxNetty. Take a look at the following links -<br />
<br />
<ul style="text-align: left;">
<li><a href="http://www.slideshare.net/RuslanMeshenberg/netflixoss-season-2-episode-2-reactive-asyn">http://www.slideshare.net/RuslanMeshenberg/netflixoss-season-2-episode-2-reactive-async</a></li>
<li><a href="https://github.com/Netflix-Skunkworks/WSPerfLab/pull/34/files?diff=unified">https://github.com/Netflix-Skunkworks/WSPerfLab/pull/34/files?diff=unified</a></li>
</ul>
</div>
<div>
So I wanted to see how I could use RxNetty for HTTP. One problem that surfaced is that when using RxNetty for HTTP, one is limited to a basic HTTP framework, i.e., something not as rich as <a href="https://jersey.java.net/">Jersey</a> JAX-RS. That led me to Karyon, which has 'blocking' Jersey support. However, <b>Karyon uses Jersey 1.X </b>and the Karyon team did not look like they would be moving to Jersey 2.X anytime soon. A post from Netflix on the same: <a href="https://github.com/Netflix/karyon/issues/56">https://github.com/Netflix/karyon/issues/56</a></div>
<div>
<br /></div>
<div>
Searching to see if Jersey 2.X had support for Netty, I found the following two items -</div>
<div>
<ul style="text-align: left;">
<li>An email thread indicating Jersey has no intent of building support for Netty as a container, especially RxNetty - <a href="https://java.net/projects/jersey/lists/users/archive/2015-01/message/48">https://java.net/projects/jersey/lists/users/archive/2015-01/message/48</a></li>
<li>A nice GitHub project with a Netty container for Jersey - <a href="https://github.com/Graylog2/jersey-netty">https://github.com/Graylog2/jersey-netty</a>. I hope this project makes it into Jersey as a container. </li>
</ul>
The above project is a Netty-Jersey bridge but does not use <b>RxNetty</b>. So I set out to make my own RxNetty Jersey Container to see how it would work.</div>
<div>
<br /></div>
<h2 style="text-align: left;">
RxNettyHttpContainer</h2>
<div>
<br /></div>
<div>
The following code demonstrates a RxNettyHttpContainer. It has been stripped of some methods. The container has been modeled after <a href="https://github.com/jersey/jersey/blob/master/containers/grizzly2-http/src/main/java/org/glassfish/jersey/grizzly2/httpserver/GrizzlyHttpContainer.java">GrizzlyHttpContainer</a> and code from Karyon.<br />
<br /></div>
<pre class="java" name="code">public class RxNettyHttpContainer implements Container, RequestHandler<ByteBuf, ByteBuf> {
  private volatile ApplicationHandler appHandler;

  private final Type RequestTYPE = (new TypeLiteral<Ref<HttpServerRequest<ByteBuf>>>() {
  }).getType();

  private final Type ResponseTYPE = (new TypeLiteral<Ref<HttpServerResponse<ByteBuf>>>() {
  }).getType();

  private volatile ContainerLifecycleListener containerListener;

  /**
   * Referencing factory for RxNetty HttpServerRequest.
   */
  private static class RxNettyRequestReferencingFactory extends ReferencingFactory<HttpServerRequest<ByteBuf>> {
  }

  /**
   * Referencing factory for RxNetty HttpServerResponse.
   */
  private static class RxNettyResponseReferencingFactory extends ReferencingFactory<HttpServerResponse<ByteBuf>> {
  }

  /**
   * An internal binder to enable RxNetty HTTP container specific types injection.
   */
  static class RxNettyBinder extends AbstractBinder {
    @Override
    protected void configure() {
      // Bind RxNetty HttpServerRequest and HttpServerResponse so that they are available for injection...
    }
  }

  public RxNettyHttpContainer(Application application) {
    this.appHandler = new ApplicationHandler(application, new RxNettyBinder());
    this.containerListener = ConfigHelper.getContainerLifecycleListener(appHandler);
  }

  ...

  @Override
  public void reload(ResourceConfig configuration) {
    containerListener.onShutdown(this);
    appHandler = new ApplicationHandler(configuration, new RxNettyBinder());
    containerListener = ConfigHelper.getContainerLifecycleListener(appHandler);
    containerListener.onReload(this);
    containerListener.onStartup(this);
  }

  // This is the meat of the code where a request is handled
  public Observable<Void> handle(HttpServerRequest<ByteBuf> request,
      HttpServerResponse<ByteBuf> response) {
    InputStream requestData = new HttpContentInputStream(response.getAllocator(),
        request.getContent());
    URI baseUri = toUri("/");
    URI requestUri = toUri(request.getUri());

    ContainerRequest containerRequest = new ContainerRequest(baseUri, requestUri, request
        .getHttpMethod().name(), getSecurityContext(requestUri, request),
        new MapPropertiesDelegate());
    containerRequest.setEntityStream(requestData);

    for (String headerName : request.getHeaders().names()) {
      containerRequest.headers(headerName, request.getHeaders().getAll(headerName));
    }

    containerRequest.setWriter(new RxNettyContainerResponseWriter(response));
    containerRequest.setRequestScopedInitializer(new RequestScopedInitializer() {
      @Override
      public void initialize(final ServiceLocator locator) {
        locator.<Ref<HttpServerRequest<ByteBuf>>>getService(RequestTYPE).set(request);
        locator.<Ref<HttpServerResponse<ByteBuf>>>getService(ResponseTYPE).set(response);
      }
    });

    return Observable.<Void> create(subscriber -> {
      try {
        appHandler.handle(containerRequest);
        subscriber.onCompleted();
      }
      finally {
        IOUtils.closeQuietly(requestData);
      }
    }).doOnTerminate(() -> response.close(true)).subscribeOn(Schedulers.io());
  }

  private SecurityContext getSecurityContext(URI requestUri, HttpServerRequest<ByteBuf> request) {
    // Not yet handling security
  }

  // A custom ContainerResponseWriter to write to Netty response
  public static class RxNettyContainerResponseWriter implements ContainerResponseWriter {
    private final HttpServerResponse<ByteBuf> serverResponse;
    private final ByteBuf contentBuffer;

    public RxNettyContainerResponseWriter(HttpServerResponse<ByteBuf> serverResponse) {
      this.serverResponse = serverResponse;
      this.contentBuffer = serverResponse.getChannel().alloc().buffer();
    }

    @Override
    public void commit() {
      if (!serverResponse.isCloseIssued()) {
        serverResponse.writeAndFlush(contentBuffer);
      }
    }

    ...

    @Override
    public void failure(Throwable throwable) {
      LOGGER.error("Failure Servicing Request", throwable);
      if (!serverResponse.isCloseIssued()) {
        serverResponse.setStatus(HttpResponseStatus.INTERNAL_SERVER_ERROR);
        serverResponse.writeString("Request Failed...");
      }
    }

    @Override
    public OutputStream writeResponseStatusAndHeaders(long contentLength,
        ContainerResponse containerResponse) throws ContainerException {
      serverResponse.setStatus(HttpResponseStatus.valueOf(containerResponse.getStatus()));

      HttpResponseHeaders responseHeaders = serverResponse.getHeaders();
      if (!containerResponse.getHeaders().containsKey(HttpHeaders.Names.CONTENT_LENGTH)) {
        responseHeaders.setHeader(HttpHeaders.Names.CONTENT_LENGTH, contentLength);
      }
      for (Map.Entry<String, List<String>> header : containerResponse.getStringHeaders().entrySet()) {
        responseHeaders.setHeader(header.getKey(), header.getValue());
      }
      return new ByteBufOutputStream(contentBuffer);
    }
  }
}
</pre>
As mentioned, the above is heavily influenced by the code from Karyon's Jersey 1.X <a href="https://github.com/Netflix/karyon/blob/master/karyon2-jersey-blocking/src/main/java/netflix/karyon/jersey/blocking/JerseyBasedRouter.java">JerseyBasedRouter</a> and <a href="https://github.com/Netflix/karyon/blob/master/karyon2-jersey-blocking/src/main/java/netflix/karyon/jersey/blocking/NettyToJerseyBridge.java">NettyToJerseyBridge</a>. It has a few additional enhancements in being able to inject RxNetty's <a href="https://github.com/allenxwang/RxNetty/blob/master/rx-netty/src/main/java/io/reactivex/netty/protocol/http/server/HttpServerRequest.java">HttpServerRequest</a> and <a href="https://github.com/allenxwang/RxNetty/blob/master/rx-netty/src/main/java/io/reactivex/netty/protocol/http/server/HttpServerResponse.java">HttpServerResponse</a> into a Jersey Resource. One thing to note is that I could not find a way to inject the parameterized equivalents of <i>HttpServerRequest</i> and <i>HttpServerResponse</i> into a Jersey Resource, and unless greatly mistaken it is a <a href="http://www.codeitive.com/vSSNNzqXgk/how-to-inject-an-unbound-generic-type-in-hk2.html">limitation of HK2</a> that prevents the same. An example of the injected resource looks like:<br />
<pre class="java" name="code" style="text-align: left;">public class SomeResource {
  private final HttpServerRequest<ByteBuf> request;
  private final HttpServerResponse<ByteBuf> response;

  @SuppressWarnings("unchecked")
  @Inject
  public SomeResource(@SuppressWarnings("rawtypes") HttpServerRequest request,
      @SuppressWarnings("rawtypes") HttpServerResponse response) {
    this.request = request;
    this.response = response;
  }
  ....
}
</pre>
<h2 style="text-align: left;">
Using the Container
</h2>
<div>
<br /></div>
In line with the other containers provided by Jersey, there is a factory class with methods to start the RxNetty Jersey 2.X container.<br />
<pre class="java" name="code">public class RxNettyHttpServerFactory {
  private static final int DEFAULT_HTTP_PORT = 80;

  public static HttpServer<ByteBuf, ByteBuf> createHttpServer(Application application) {
    return createHttpServer(DEFAULT_HTTP_PORT, application, false);
  }

  public static HttpServer<ByteBuf, ByteBuf> createHttpServer(Application application, boolean start) {
    return createHttpServer(DEFAULT_HTTP_PORT, application, start);
  }

  public static HttpServer<ByteBuf, ByteBuf> createHttpServer(int port, Application application) {
    return createHttpServer(port, application, false);
  }

  public static HttpServer<ByteBuf, ByteBuf> createHttpServer(int port, Application application,
      boolean start) {
    HttpServer<ByteBuf, ByteBuf> httpServer = RxNetty.createHttpServer(port,
        new RxNettyHttpContainer(application));
    if (start) {
      httpServer.start();
    }
    return httpServer;
  }
}
</pre>
As an example:
<br />
<pre class="java" name="code">public class Main {
  public static void main(String[] args) throws IOException {
    ResourceConfig resourceConfig = ResourceConfig.forApplication(new NotesApplication());
    RxNettyHttpServerFactory.createHttpServer(9090, resourceConfig, true);
    System.in.read(); // Wait for user input to shutdown
  }
}
</pre>
With the above, you have an RxNetty-based Jersey 2.0 container that you can have fun with. Wait! What about an example?
<br />
<br />
<h2 style="text-align: left;">
Example Using RxNetty Jersey 2.0 Container</h2>
<div>
<br /></div>
As always, I lean back on my <a href="http://sleeplessinslc.blogspot.com/2014/04/jerseyjaxrs-2x-example-using-spring.html">Notes example using Jersey 2.0</a> where a Note can be CRUDded! In addition, my example would not be complete without a touch of Spring, so we use Spring for DI. One sadness of the Jersey Spring integration via the artifact <a href="http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.glassfish.jersey.ext%22%20AND%20a%3A%22jersey-spring3%22">jersey-spring3</a> is that one needs javax.servlet as a compile-time dependency even if one does not use it. The Notes client is developed using RxJersey, and the example can be <a href="http://home.comcast.net/~acharya.s/java/rxnetty-jersey2.zip">downloaded from HERE</a> or accessed on GitHub at <a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/rxnetty-jersey2">https://github.com/sanjayvacharya/sleeplessinslc/tree/master/rxnetty-jersey2</a><br />
<div>
<br /></div>
<h2 style="text-align: left;">
Performance of the RxNetty Jersey 2 Container</h2>
<div>
<br /></div>
A simple JMeter load test exists in the Notes integration test project that compares the performance of the above RxNetty container and Grizzly.<br />
<div>
Now as a disclaimer, I am not a proclaimed JMeter expert, so I could have got some stuff wrong. That said, the following are some results contrasting the performance of the Grizzly container with the RxNetty Jersey 2 container.<br />
<br />
<h4 style="text-align: left;">
User Threads - 50 - Loop Count 25</h4>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_DahFRZ3rKVLv7PIeVDxbrlh5-hO-AtLz7L1grH5RQiG-Bkf-vgAAHqMYfmFMs4JCUDtg-cp2IZz284LEW1odaAmtC8tizB7b4NpOJWyrjEbwwFcwnulQOVAmt3eggl7WzGvdKGO7Ltz5/s1600/50ThreadsLoopCount25.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="20" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_DahFRZ3rKVLv7PIeVDxbrlh5-hO-AtLz7L1grH5RQiG-Bkf-vgAAHqMYfmFMs4JCUDtg-cp2IZz284LEW1odaAmtC8tizB7b4NpOJWyrjEbwwFcwnulQOVAmt3eggl7WzGvdKGO7Ltz5/s320/50ThreadsLoopCount25.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPf_SR3aDclphHk-RewNIFsFuokaqq5TYF32pEyvZz49MTwyWBxSowFVHpi515jx1ezkNC_g_zuuWiUh67A5EAXtS1McbFOEsGNzB2SXgDHvYy8trflyprWkhu4BfhGnfeqiORvub15nMg/s1600/50ThreadsLoopCount25-graph.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="155" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPf_SR3aDclphHk-RewNIFsFuokaqq5TYF32pEyvZz49MTwyWBxSowFVHpi515jx1ezkNC_g_zuuWiUh67A5EAXtS1McbFOEsGNzB2SXgDHvYy8trflyprWkhu4BfhGnfeqiORvub15nMg/s320/50ThreadsLoopCount25-graph.png" width="320" /></a></div>
<div>
<br /></div>
<h4 style="text-align: left;">
User Threads - 500 - Loop Count 10</h4>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjlw28JTC9uE9zEazb_rxea3GDI6jo3jLG0kFqs3Lk-X36nRXopVkO7kWeFVxin39nLAt3WoYzVBJyMBAfSGo2ZrMjrT-jopuLlOhVSa6GCIUBLFKOVrv9xi-bHC9gVSo1ti88QKDUr0PW/s1600/500ThreadsLoopCount10.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="20" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjlw28JTC9uE9zEazb_rxea3GDI6jo3jLG0kFqs3Lk-X36nRXopVkO7kWeFVxin39nLAt3WoYzVBJyMBAfSGo2ZrMjrT-jopuLlOhVSa6GCIUBLFKOVrv9xi-bHC9gVSo1ti88QKDUr0PW/s320/500ThreadsLoopCount10.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigyhK78moVk2GAeUgSDwFKf1lKck4s4pNyoXHAVX-UjGnkIUcdbJGizk33XBM0sqOsNoN-2QBNa5jzQ48KgXRJI6oyU4R_vAbdTTF7e2Q2FVwEXouxnxnPml2ICxLSYrhYT9M9OihbuZSi/s1600/500Users10repeat-graph.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="155" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigyhK78moVk2GAeUgSDwFKf1lKck4s4pNyoXHAVX-UjGnkIUcdbJGizk33XBM0sqOsNoN-2QBNa5jzQ48KgXRJI6oyU4R_vAbdTTF7e2Q2FVwEXouxnxnPml2ICxLSYrhYT9M9OihbuZSi/s320/500Users10repeat-graph.png" width="320" /></a></div>
<div>
The two test runs create notes and then obtain them back. It appears from both tests that the throughput and responsiveness of the <i>RxNettyContainer</i> on the <i>Notes Service</i> are far better than the Grizzly equivalent. Note that GC metrics have not been looked into. There is one gotcha that I have not yet mentioned: when a <b>large</b> payload is requested, the RxNetty Jersey 2.0 container chokes. As it is based off the Karyon Jersey 1.X implementation, I need to check whether Karyon Jersey has the same problem. I believe it has to do with the buffer copy between the HttpServerResponse and the content buffer (ByteBuf). To witness the problem in action, add the <b>get all notes</b> call to the JMeter script. There are two launchers, for the <a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/rxnetty-jersey2/rxnetty-jersey2-notes-integration-test/src/test/java/com/welflex/notes/GrizzlyServerLauncher.java">Grizzly</a> and <a href="https://github.com/sanjayvacharya/sleeplessinslc/blob/master/rxnetty-jersey2/rxnetty-jersey2-notes-integration-test/src/test/java/com/welflex/notes/RxNettyServerLauncher.java">RxNetty</a> containers respectively. Both need to be started before the JMeter test is launched.</div>
<div>
<br /></div>
<h2 style="text-align: left;">
Conclusion</h2>
<div>
<br /></div>
<div>
RxNetty looks pretty sweet, at least from my simple tests. Using it as a micro-container with Jersey 2 looks interesting. The container I have created does not support @Async on a Jersey Resource, similar to the jdk-http container and the Karyon jersey bridge, and to be honest, I am not even sure it makes sense to do so.<br />
<br />
The RxNetty Jersey 2.X container shown above -<br />
<ul style="text-align: left;">
<li>Is only a proof of concept. It does not have the necessary unit tests or validation. Something that could be easily done though.</li>
<li>Utilizes a class from Karyon - <a href="https://github.com/Netflix/karyon/blob/master/karyon2-governator/src/main/java/netflix/karyon/transport/util/HttpContentInputStream.java">HttpContentInputStream</a> so has a dependency on a Karyon jar</li>
<li>Karyon Jersey 1.X could, it appears, easily be migrated to Jersey 2.X based on the above code</li>
<li>If you wanted to use Jersey 2 with Netty, I would look into the Netty Jersey 2.X Container I mentioned earlier.</li>
</ul>
If anyone has thoughts around the container, load test, or example, ping me.</div>
</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com1tag:blogger.com,1999:blog-4667121987470696359.post-2861631505474845972015-04-17T19:03:00.001-06:002016-08-12T20:12:57.946-06:00Reactive Programming with Jersey and RxJava<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
Introduction</h2>
<div>
<br /></div>
<div>
I have the <a href="http://reactivex.io/">reactive programming</a> fever right now and am having fun using <a href="https://github.com/ReactiveX/RxJava">RxJava </a>(Reactive Extensions for Java). When I found out about <a href="https://jersey.java.net/documentation/latest/rx-client.html">reactive support for Jersey</a>, I figured a BLOG might be warranted.</div>
<div>
<h2 style="text-align: left;">
Problem Domain</h2>
</div>
<div>
It's kinda nice to have a problem domain. In our case, company Acme had a <a href="http://en.wikipedia.org/wiki/Monolithic_application">monolithic</a> retail web application that sold products. There were many modules composing the monolith, such as CheckOut, Account details, etc., all contributing to a single deployable. One module among them was Product, which had core product information, pricing information, and inventory that were obtained from a database.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_CZupY8V3ZJfd2AsSOYqAp6zxQSoawk7AutYgkzpiwENppr8uv_2ZMb_Yz7PfgwYbpWCHeFuTcbbZnbrC8B7rcydMTK-8zMY-KrMlKYv8IWvOQ8tAIxpNyuYHXMc2ZpwFklYzK9Y-Vuf3/s1600/ProductMonilith.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_CZupY8V3ZJfd2AsSOYqAp6zxQSoawk7AutYgkzpiwENppr8uv_2ZMb_Yz7PfgwYbpWCHeFuTcbbZnbrC8B7rcydMTK-8zMY-KrMlKYv8IWvOQ8tAIxpNyuYHXMc2ZpwFklYzK9Y-Vuf3/s1600/ProductMonilith.png" /></a></div>
<br />
<br />
The solution worked really well at first. However, over time, the company felt the following pain points -<br />
<ul style="text-align: left;">
<li>One Deploy thus One Rollback - All or nothing</li>
<li>Long build time and testing</li>
<li>Information such as price, reviews, etc. was needed elsewhere, and as there was some logic associated with it, having it centralized for consumption would be nice.</li>
<li>Coordination nightmare among teams working on different areas to make sure that all came together.</li>
</ul>
The Architects decided to extract the concept of Product into separate services. Over time, the company created separate web services for the base product, inventory, reviews and price. These were aligned with the different teams working on the respective areas and allowed for each area to evolve and scale independently.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-R2NwSnVqE3B-plsfqrjE8__Ttws5OJsUY2D9v2t5fXb4TeEoiCPC_W6TqlC5YDTOeGj32B4X9PKB02b-4VIyxlKYaLq07aifi2F2zbNdmBK29_Rwvl4_J2vV9uiPYluiHlRCvRuO-EOk/s1600/MicroServicePhase-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-R2NwSnVqE3B-plsfqrjE8__Ttws5OJsUY2D9v2t5fXb4TeEoiCPC_W6TqlC5YDTOeGj32B4X9PKB02b-4VIyxlKYaLq07aifi2F2zbNdmBK29_Rwvl4_J2vV9uiPYluiHlRCvRuO-EOk/s1600/MicroServicePhase-1.png" /></a></div>
<br />
<br />
All the web service calls were written using JAX-RS/Jersey. The company had totally jumped on the micro-services bandwagon. A product would be created for the Web Site by calling the different services and aggregating the result as shown below -<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnSniJvKU_tgnoSC6in3_ZwKD74vkuwbV_MKePcFf3U6UxZCDjbdLB9W2kkOYKZ_8DPWB3ERXVq55clXZeGRLKoiljh15P61gGY21Q_0ZItZSD0wBN-wLV_9IXLz__7SQVGTa8pjVvsesh/s1600/ProductSerialCoordination.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnSniJvKU_tgnoSC6in3_ZwKD74vkuwbV_MKePcFf3U6UxZCDjbdLB9W2kkOYKZ_8DPWB3ERXVq55clXZeGRLKoiljh15P61gGY21Q_0ZItZSD0wBN-wLV_9IXLz__7SQVGTa8pjVvsesh/s1600/ProductSerialCoordination.png" /></a></div>
<br />
The following is sample code, using just one Jersey client, that demonstrates the above:</div>
<pre class="java" name="code">  @GET
  public void getProduct(@PathParam("id") Long productId,
      @Suspended final AsyncResponse response) {
    try {
      // Get the Product
      BaseProduct baseProduct = client.target(serviceUrls.getBaseProductUrl()).path("/products")
          .path(String.valueOf(productId)).request().get(BaseProduct.class);

      // Get reviews of the Product
      Reviews reviews = client.target(serviceUrls.getReviewsUrl()).path("/productReviews")
          .path(String.valueOf(productId)).request().get(Reviews.class);

      // Create a Result Product object - Base Product object does not have price and inventory
      Product resultProduct = resultProductFrom(baseProduct, reviews);

      // Obtain the Price for Result Product
      resultProduct.setPrice(client.target(serviceUrls.getPriceUrl()).path("/productPrice")
          .path(productId.toString()).request().get(ProductPrice.class).getPrice());

      // Get Price and Inventory of Product Options and set the same
      for (Long optionId : optionId(resultProduct.getOptions())) {
        Double price = client.target(serviceUrls.getPriceUrl()).path("/productPrice")
            .path(optionId.toString()).request().get(ProductPrice.class).getPrice();

        ProductInventory inventory = client.target(serviceUrls.getInventoryUrl())
            .path("/productInventory").path(optionId.toString()).request()
            .get(ProductInventory.class);

        ProductOption option = resultProduct.getOption(optionId);
        option.setInventory(inventory.getCount());
        option.setPrice(price);
      }
      response.resume(resultProduct);
    }
    catch (Exception e) {
      response.resume(e);
    }
  }
</pre>
<div>
<br />
ACME soon found that calling these services incurred a performance degradation compared to the monolithic product service that obtained all product information with a single database operation. The degradation was primarily attributed to the serial nature in which the requests were invoked and the results subsequently composed, not counting the fact that a database join among disparate tables was far more performant. As an immediate fix, the direction from the Architects was to ensure these services were called in parallel when appropriate -<br />
<ul>
<li>Call the Product Service to obtain core product data serially - You need base product information and its options</li>
<li>Call the Review Service asynchronously</li>
<li>Call the Pricing Service asynchronously to obtain the price of the Product and Options</li>
<li>Call the Inventory Service asynchronously to obtain the inventory of the different Product Options</li>
</ul>
The above worked and things were more performant; however, the code looked a mess due to the different <a href="http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/CountDownLatch.html">CountDownLatches</a>, composition logic and Futures they had in play. The Architects met again, hours were spent locked in a room while pizzas were delivered under the door, until they heard of <a href="http://en.wikipedia.org/wiki/Reactive_programming">Reactive Programming</a> in general and <a href="https://jersey.java.net/documentation/latest/rx-client.html">Reactive Jersey</a> with <a href="https://github.com/ReactiveX/RxJava">RxJava</a> in particular! Bling, bling, bulbs coming to light illuminating their halos and away they go reacting to their new idea. They found that Reactive Jersey promotes a clean API to handle parallel execution and composition while not having to worry about CountDownLatches and the like. The resulting code looked like -<br />
<pre class="java" name="code">
@GET
public void observableProduct(@PathParam("id") final Long productId,
@Suspended final AsyncResponse response) {
// An Observable of a Result Product from the Base Product and Reviews
Observable<Product> product = Observable.zip(baseProduct(productId), reviews(productId),
new Func2<BaseProduct, Reviews, Product>() {
@Override
public Product call(BaseProduct product, Reviews review) {
return resultProductFrom(product, review);
}
});
// All Product and Option Ids
Observable<Long> productIds = productAndOptionIds(product);
// Observable of Options only
Observable<Long> optionIds = productIds.filter(new Func1<Long, Boolean>() {
@Override
public Boolean call(Long prodId) {
return !prodId.equals(productId);
}
});
// Set Inventory Data
product
.zipWith(inventories(productId, optionIds).toList(), new Func2<Product, List<ProductInventory>, Product>() {
@Override
public Product call(Product resultProduct, List<ProductInventory> productInventories) {
for (ProductInventory inventory : productInventories) {
if (!inventory.getProductId().equals(resultProduct.getProductId())) {
resultProduct.getOption(inventory.getProductId())
.setInventory(inventory.getCount());
}
}
return resultProduct;
}
})
// Set Price Data
.zipWith(prices(productIds).toList(), new Func2<Product, List<ProductPrice>, Product>() {
@Override
public Product call(Product resultProduct, List<ProductPrice> prices) {
for (ProductPrice price : prices) {
if (price.getProductId().equals(resultProduct.getProductId())) {
resultProduct.setPrice(price.getPrice());
}
else {
resultProduct.getOption(price.getProductId()).setPrice(price.getPrice());
}
}
return resultProduct;
}
}).observeOn(Schedulers.io()).subscribe(new Action1<Product>() {
@Override
public void call(Product productToSet) {
response.resume(productToSet);
}
}, new Action1<Throwable>() {
@Override
public void call(Throwable t1) {
response.resume(t1);
}
});
}
/**
* @return an Observable of the BaseProduct
*/
private Observable<BaseProduct> baseProduct(Long productId) {
return RxObservable
.from(
client.target(serviceUrls.getBaseProductUrl()).path("/products").path(String.valueOf(productId)))
.request().rx().get(BaseProduct.class);
}
/**
* @return An Observable of the Reviews
*/
private Observable<Reviews> reviews(Long productId) {
return RxObservable
.from(client.target(serviceUrls.getReviewsUrl()).path("/productReviews").path(String.valueOf(productId)))
.request().rx().get(Reviews.class);
}
/**
* @return An Observable having Product and Option Ids
*/
private Observable<Long> productAndOptionIds(Observable<Product> product) {
return product.flatMap(new Func1<Product, Observable<Long>>() {
@Override
public Observable<Long> call(Product resultProduct) {
return Observable.from(Iterables.concat(
Lists.<Long> newArrayList(resultProduct.getProductId()),
Iterables.transform(resultProduct.getOptions(), new Function<ProductOption, Long>() {
@Override
public Long apply(ProductOption option) {
return option.getProductId();
}
})));
}
});
}
/**
* Inventories returns back inventories of the Primary product and options.
* However, for the primary product, no web service call is invoked as the inventory of the main product is the sum of
* the inventories of all options. Instead, a dummy ProductInventory is created to maintain order during the final concatenation.
*
* @param productId Id of the Product
* @param optionIds Observable of OptionIds
* @return An Observable of Product Inventory
*/
private Observable<ProductInventory> inventories(Long productId, Observable<Long> optionIds) {
return Observable.just(new ProductInventory(productId, 0))
.concatWith(optionIds.flatMap(new Func1<Long, Observable<ProductInventory>>() {
@Override
public Observable<ProductInventory> call(Long optionId) {
return RxObservable
.from(client.target(serviceUrls.getInventoryUrl()).path("/productInventory").path("/{productId}"))
.resolveTemplate("productId", optionId).request().rx().get(ProductInventory.class);
}
}));
}
/**
* @return An Observable of ProductPrice for product Ids
*/
private Observable<ProductPrice> prices(Observable<Long> productIds) {
return productIds
.flatMap(new Func1<Long, Observable<ProductPrice>>() {
@Override
public Observable<ProductPrice> call(Long productId) {
return RxObservable
.from(client.target(serviceUrls.getPriceUrl()).path("/productPrice").path("/{productId}"))
.resolveTemplate("productId", productId.toString()).request().rx()
.get(ProductPrice.class);
}
});
}
</pre>
<br />
Phew! A lot of code for something simple huh? The above code is written with JDK 7. With JDK 8 lambdas, it would be far more succinct. I will admit that there is more code, but hopefully not code that is unclear. The important thing is that we are not dealing with the '<i>callback hell</i>' associated with Java Futures or Invocation callbacks. That said, the performance difference between the serial execution and the Reactive Jersey version strongly favors the Reactive Jersey version by a significant margin. By turning the serial calls into parallel execution, I am certain you will achieve numbers closer to the Reactive Jersey version, but at the cost of having to maintain latches etc.<br />
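As an aside on the JDK 8 point: the fan-out/fan-in itself does not even require RxJava. A stdlib-only sketch using <i>CompletableFuture</i> (with stubbed stand-ins for the Product, Price and Inventory calls - these are not the post's actual services) shows the same parallel composition without latches:

```java
import java.util.concurrent.CompletableFuture;

public class ParallelCompose {
    // Hypothetical stand-ins for the remote Product, Price and Inventory calls
    static CompletableFuture<String> baseProduct(long id) {
        return CompletableFuture.supplyAsync(() -> "product-" + id);
    }
    static CompletableFuture<Double> price(long id) {
        return CompletableFuture.supplyAsync(() -> 10.0 * id);
    }
    static CompletableFuture<Integer> inventory(long id) {
        return CompletableFuture.supplyAsync(() -> (int) id * 2);
    }

    /** Fan out to all three calls in parallel, then combine - no latches needed. */
    static String compose(long id) {
        CompletableFuture<Double> p = price(id);        // starts immediately
        CompletableFuture<Integer> inv = inventory(id); // runs concurrently with price
        return baseProduct(id)
                .thenCombine(p, (prod, pr) -> prod + " price=" + pr)
                .thenCombine(inv, (s, count) -> s + " inventory=" + count)
                .join();
    }

    public static void main(String[] args) {
        System.out.println(compose(3L));
    }
}
```

The <i>zip</i>/<i>zipWith</i> composition in the RxJava version plays the same role that <i>thenCombine</i> plays here.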
<br />
<h2 style="text-align: left;">
Running the Example</h2>
<br />
<div>
An example application can be obtained from <a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/reactive-jersey">https://github.com/sanjayvacharya/sleeplessinslc/tree/master/reactive-jersey</a>. The example does not have the Acme web site but has a Product-Gateway that serves as an orchestration layer. There are two resources in the Product-Gateway, one that returns the product using serial orchestration and a second that uses Reactive Jersey clients. An Integration Test project exists where a client is used to invoke both the Serial and Reactive resources and logs an average response time. Check out the project and simply execute a '<b>mvn install</b>' at the root level of the project.</div>
<h2 style="text-align: left;">
Summary</h2>
<ul style="text-align: left;">
<li>Reactive programming appears more verbose. It is possible that I have not optimized it well enough; tips would be appreciated. JDK 8 lambdas would regardless reduce the footprint.</li>
<li>Requires a mind shift regarding how you write code.</li>
<li>Async can be introduced as and when required using the <i>Observable.subscribeOn()</i> and <i>Observable.observeOn()</i> features. Read more about it in a <a href="http://stackoverflow.com/questions/26249030/rxjava-fetching-observables-in-parallel">Stack Overflow post</a>.</li>
<li>It's a style of programming, not a particular technology. RxJava seems to be emerging as the library of choice for Java.</li>
<li>Question whether something benefits from being Reactive before introducing Reactive code.</li>
<li>Jersey Reactive is a Glassfish implementation and <b>not part of a JSR.</b> So....</li>
<li>Monolith to Microservices might only be trading one problem for another, so exercise caution and be judicious in selecting what you choose to make into a microservice.</li>
<li>A batch request that fetches multiple items is not necessarily faster than multiple requests, one for each item.</li>
</ul>
Finally, if there is a way to make my Reactive Jersey code less verbose, let me know...all I ask is don't say collapse the services back into the monolith :-)</div>
</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com6tag:blogger.com,1999:blog-4667121987470696359.post-14826940566662266042014-09-29T20:25:00.002-06:002014-10-06T18:08:51.822-06:00Spring @Value and javax.inject.Provider for Changeable Property Injection<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
Introduction</h2>
<div>
<br /></div>
With Spring and other DI containers one works with 'properties' that are used to configure values. Some values might need to be changed dynamically at runtime while others might be statically configured. A changeable property lends itself to a <i><a href="http://docs.oracle.com/javaee/6/api/javax/inject/Provider.html">javax.inject.Provider</a></i> style where the property is obtained as required and used. Sure, one could inject a <i><a href="http://docs.oracle.com/javase/7/docs/api/java/util/Properties.html">java.util.Properties</a></i> instance and use it to obtain the corresponding property. It just seems more in line with the <a href="http://en.wikipedia.org/wiki/Law_of_Demeter">Law of Demeter</a> to inject the actual property or the next closest thing, i.e., a dedicated <i>Provider</i> of the property.<br />
<br />
Spring has the support for injecting a property via <a href="http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/beans/factory/annotation/Value.html">@Value</a> annotation as :<br />
<pre class="java" name="code">public class Foo {
@Value("${fooVal}")
private Integer integer;
...
}
</pre>
or Constructor Injection<br />
<pre class="java" name="code">public class Foo {
private final Integer integer;
public Foo(@Value("${fooVal}") Integer integer) {
this.integer = integer;
}
...
}
</pre>
or Setter Injection<br />
<pre class="java" name="code">public class Foo {
private Integer integer;
@Inject
public void setInteger(@Value("${fooVal}") Integer integer) {
this.integer = integer;
}
...
}
</pre>
<br />
Or if using JavaConfig:
<br />
<pre class="java" name="code">@Configuration
public class FooConfig {
@Bean
public Foo foo(@Value("${fooVal}") Integer integer) {
return new Foo(integer);
}
}
</pre>
<br />
If <i>fooVal </i>never changes during the execution of the program, great, you inject it and then it's set. What if it were to change, due to it being a configurable value? What would be nice to do is the following:
<br />
<pre class="java" name="code">public class Foo {
private final Provider<Integer> integerProvider;
public Foo(Provider<Integer> integerProvider) {
this.integerProvider = integerProvider;
}
public void someOperation() {
Integer integer = integerProvider.get(); // Current value of fooVal
..
}
}
@Configuration
public class FooConfig {
@Bean
public Foo foo(@Value("${fooVal}") Provider<Integer> integerProvider) {
return new Foo(integerProvider);
}
}
</pre>
Looks pretty reasonable; however, when you attempt the same with Spring Java Config, you end up with:
<br />
<pre name="code">org.springframework.beans.TypeMismatchException: Failed to convert value of type 'java.lang.String' to required type 'java.lang.Integer'; nested exception is java.lang.IllegalArgumentException: MethodParameter argument must have its nestingLevel set to 1
at org.springframework.beans.TypeConverterSupport.doConvert(TypeConverterSupport.java:77)
at org.springframework.beans.TypeConverterSupport.convertIfNecessary(TypeConverterSupport.java:47)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:875)
</pre>
This should just work. The simplest way I could get it to work is to fool Spring into thinking that the nesting level of the <i>Provider&lt;Integer&gt;</i> is actually one level less. The way I did it was to create my own <i>org.springframework.beans.<a href="http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/beans/TypeConverter.html">TypeConverter</a></i> that delegates to the default <i>TypeConverter</i> used by Spring, i.e., the <i><a href="http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/beans/SimpleTypeConverter.html">SimpleTypeConverter</a></i>:
<br />
<pre class="java" name="code">public class CustomTypeConverter implements TypeConverter {
private final SimpleTypeConverter simpleTypeConverter;
public CustomTypeConverter() {
simpleTypeConverter = new SimpleTypeConverter(); // This is the default used by Spring
}
public <T> T convertIfNecessary(Object newValue, Class<T> requiredType,
MethodParameter methodParam) throws IllegalArgumentException {
Type type = methodParam.getGenericParameterType();
MethodParameter parameterTarget = methodParam; // Default: pass the original parameter through
if (type instanceof ParameterizedType) {
ParameterizedType paramType = ParameterizedType.class.cast(type);
Type rawType = paramType.getRawType();
if (rawType.equals(Provider.class)
&& methodParam.hasParameterAnnotation(Value.class)) {
// If the Raw type is javax.inject.Provider, reduce the nesting level
parameterTarget = new MethodParameter(methodParam);
// Send a new Method Parameter down stream, don't want to fiddle with original
parameterTarget.decreaseNestingLevel();
}
}
return simpleTypeConverter.convertIfNecessary(newValue, requiredType, parameterTarget);
}
...// Delegate other methods to simpleTypeConverter
}
</pre>
With the above you could do the following:
<br />
<pre class="java" name="code">public class SpringTest {
@Test
public void test() {
AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext();
context.getBeanFactory().setTypeConverter(new CustomTypeConverter());
context.register(SimpleConfig.class);
context.refresh();
context.getBean(SimpleBean.class).print();
}
public static class SimpleConfig {
@Bean(name = "props") // Change this to a something that detects file system changes and pulls it in
public PropertyPlaceholderConfigurer propertyPlaceHolderConfigurer() {
PropertyPlaceholderConfigurer configurer = new PropertyPlaceholderConfigurer();
Resource location = new ClassPathResource("appProperties.properties");
configurer.setLocation(location);
return configurer;
}
@Bean
public SimpleBean simpleBean(@Value("${fooVal}") Provider<Integer> propValue,
@Value("${barVal}") Provider<Boolean> booleanVal) {
return new SimpleBean(propValue, booleanVal);
}
}
public static class SimpleBean {
private final Provider<Integer> propVal;
private final Provider<Boolean> booleanVal;
public SimpleBean(Provider<Integer> propValue, Provider<Boolean> booleanVal) {
this.propVal = propValue;
this.booleanVal = booleanVal;
}
public void print() {
System.out.println(propVal.get() + "," + booleanVal.get());
}
}
}
</pre>
Now with the above, if your property changes, since you are injecting a <i>javax.inject.Provider</i> into the <i>SimpleBean</i>, it will obtain the current value to use at runtime.
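The contract being relied on here is tiny. A hand-rolled sketch using <i>java.util.function.Supplier</i> in place of <i>javax.inject.Provider</i> (purely illustrative, no Spring involved) shows why <i>get()</i> always observes the latest value:

```java
import java.util.Properties;
import java.util.function.Supplier;

public class ChangeableProperty {
    // A Provider-style accessor: the property is re-resolved on every get()
    static Supplier<Integer> intProvider(Properties props, String key) {
        return () -> Integer.valueOf(props.getProperty(key));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("fooVal", "1");
        Supplier<Integer> fooVal = intProvider(props, "fooVal");
        int before = fooVal.get();         // reads 1
        props.setProperty("fooVal", "42"); // the property changes at runtime
        int after = fooVal.get();          // reads 42 - nothing was cached
        System.out.println(before + " -> " + after);
    }
}
```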
<br />
<br />
All this is great, and I understand that I am hacking Spring to get what I want; is this the best way to handle something like dynamically changeable properties? Thoughts and suggestions welcomed.
<br />
<br />
<b>Update :</b><br />
I filed an enhancement request with the Spring maintainers and they were fast to respond that this is actually a bug; they are back-porting fixes to previous versions as well as ensuring current and future versions have this feature available: <a href="https://jira.spring.io/browse/SPR-12297">https://jira.spring.io/browse/SPR-12297</a></div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com0tag:blogger.com,1999:blog-4667121987470696359.post-24176341549244641542014-09-14T14:48:00.001-06:002015-12-20T08:55:54.747-07:00Dynamic Service Discovery with Apache Curator<div dir="ltr" style="text-align: left;" trbidi="on">
<a href="http://en.wikipedia.org/wiki/Web_Services_Discovery">Service Discovery</a> is a core part of most Service Oriented Architectures in particular and Distributed Network Component Architectures in general.<br />
<br />
Traditional approaches to finding the location of a service have been to use the local file system, a shared file system, a database, or some 3rd party service registry that statically defines service locations. In many cases, one knows exactly the number of services that will be used and their deployment locations, making a statically defined registry an ideal candidate for discovery. In larger scale systems, statically defined locations become harder to manage as -<br />
<ul style="text-align: left;">
<li>The number of services in the ecosystem grows (micro-services)</li>
<li>The number of deployment 'environments' grows - Test Environment 1, Test Environment 2...etc</li>
<li>Elastic scalability becomes desirable - More important with Cloud based environments</li>
<li>Multiple services versions need to be supported simultaneously (A/B test)</li>
</ul>
Enter the concept of Dynamic Service Registration and Discovery. Services, on becoming available, register themselves with some 'registry' as members ready to participate. Service Consumers are notified of the availability of the new service and can begin using it. If the service terminates for whatever reason, the Service Consumer is notified of the same and can stop using it. This is not a novel concept and has been addressed in different ways.<br />
<br />
Any Dynamic Service Registration System presents the following challenges -<br />
<ul style="text-align: left;">
<li><b>Availability </b>- The Service Registry can become the <b>SPOF </b>(Single Point Of Failure) for an entire ecosystem</li>
<li><b>Reliability </b>- Keeping the registry and thus the service consumers in sync with which service instance is currently available. A service might go down and if the registry and service consumer are not notified immediately, service calls might fail.</li>
<li><b>Generality </b>- Registry that supports multiple operating systems and technologies.</li>
</ul>
Again, the above can be solved in different ways. The benefit we have is that there are a number of tools that intrinsically support the requirements of a dynamic registry, such as <a href="http://sleeplessinslc.blogspot.com/2012/01/apache-zookeeper-maven-example.html">ZooKeeper</a>, Netflix Eureka, LinkedIn's <a href="https://github.com/linkedin/rest.li/wiki/Dynamic-Discovery">Dynamic Discovery</a>, Doozer etc. Jason Wilder has an excellent BLOG on open source <a href="http://jasonwilder.com/blog/2014/02/04/service-discovery-in-the-cloud/">Service Discovery</a> with a comparison of popular open source service registries.<br />
<br />
I have been looking into <a href="http://zookeeper.apache.org/">ZooKeeper </a>and thus will share a few of my findings on using it but before I do the same, I would like to take a moment to discuss Load Balancing.<br />
<br />
In standard H/A architectures a Load Balancer serves to balance traffic across the different service instances as shown below -<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnJqUrEQvD98eCpDeXbnOG2s-H78HlUV4C2xFsCyD0LzV5nB9MoJHOuAmhZMuA8d1MRxZ3ZYPUa3KkiBPhVdtKOjNK6WIbuLz4u7PjK_4nnPXeknJb4xCOKgncAJlDCTyH0ADiHEVZWJaM/s1600/simpleloadbalancer.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="230" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnJqUrEQvD98eCpDeXbnOG2s-H78HlUV4C2xFsCyD0LzV5nB9MoJHOuAmhZMuA8d1MRxZ3ZYPUa3KkiBPhVdtKOjNK6WIbuLz4u7PjK_4nnPXeknJb4xCOKgncAJlDCTyH0ADiHEVZWJaM/s1600/simpleloadbalancer.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
The same works well but poses the following challenges -<br />
<ul style="text-align: left;">
<li>A new Load Balancer for every service and every environment</li>
<li>An additional hop between a Service Consumer and Service</li>
<li>Adding/Removing services per need and time to do the same</li>
<li>Intelligent Routing and Supporting Service Degradation</li>
</ul>
A Dynamic Service Registry could help with the mentioned concerns where different instances of <i>Service C</i> register themselves with ZooKeeper and <i>Service A</i> and <i>Service B</i> on obtaining locations of <i>Service C</i> execute calls directly to them -<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPTAZ2eM7GyxWtx8WEGNZssV37ZVMj1snKvp-n30DjdSCYGMsIdzkr_nKi8yzKKO_G-iIoRb6oPOvWrcn3_JVrRsSIMdyb8yu5ZRiWP_AshvtPChorlogKS_ACAhl59pz6Oy2OJearamaU/s1600/curatorloadbalancer.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="257" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPTAZ2eM7GyxWtx8WEGNZssV37ZVMj1snKvp-n30DjdSCYGMsIdzkr_nKi8yzKKO_G-iIoRb6oPOvWrcn3_JVrRsSIMdyb8yu5ZRiWP_AshvtPChorlogKS_ACAhl59pz6Oy2OJearamaU/s1600/curatorloadbalancer.png" width="400" /></a></div>
<br />
<br />
<br />
ZooKeeper's <a href="http://zookeeper.apache.org/doc/r3.2.1/zookeeperProgrammers.html#Ephemeral+Nodes">Ephemeral nodes</a> serve well to maintain registered service instances, as an Ephemeral node only exists as long as the client is alive; if a client dies, the Ephemeral node is deleted. It is possible to use ZooKeeper directly and develop code that will register services, discover registered services etc. However, there is a library called <a href="http://curator.apache.org/">Apache Curator</a> that was initially created by <a href="http://netflix.github.io/#repo">Netflix</a> to make writing ZooKeeper based applications easier and more reliable. It was then incorporated as an Apache project. Curator provides '<b><a href="http://curator.apache.org/curator-recipes/index.html">recipes</a></b>' for common use cases one would use ZooKeeper for, i.e., Leader Election, Queues, Semaphores, Distributed Locks etc. In addition to these 'recipes', it also provides a mechanism for <a href="http://curator.apache.org/curator-x-discovery/index.html">Service Registration and Discovery</a>.<br />
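Conceptually, an ephemeral node ties a registry entry's lifetime to the registering client's session. A toy, in-memory analogue of that contract (plain Java, not ZooKeeper's API) makes the semantics concrete:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class EphemeralRegistry {
    private final Map<String, String> entries = new ConcurrentHashMap<>();

    /** A client session; closing it removes every entry it registered, like ephemeral nodes. */
    class Session implements AutoCloseable {
        private final List<String> owned = new ArrayList<>();

        void register(String path, String address) {
            entries.put(path, address);
            owned.add(path);
        }

        @Override
        public void close() { // the client died or disconnected
            owned.forEach(entries::remove);
        }
    }

    boolean isRegistered(String path) {
        return entries.containsKey(path);
    }

    public static void main(String[] args) {
        EphemeralRegistry registry = new EphemeralRegistry();
        try (Session s = registry.new Session()) {
            s.register("/services/notes/instance-1", "10.0.0.5:8080");
            System.out.println(registry.isRegistered("/services/notes/instance-1")); // true
        }
        // Session closed: the entry is gone, just as ZooKeeper deletes the ephemeral node
        System.out.println(registry.isRegistered("/services/notes/instance-1")); // false
    }
}
```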
<br />
The remainder of this BLOG will look at providing a simple example that demonstrates the use of Apache Curator in the context of a simple JAX-RS web service that gets registered and is subsequently dynamically discovered by clients. For the sake of the example, I will continue to use the <a href="http://sleeplessinslc.blogspot.com/2014/04/jerseyjaxrs-2x-example-using-spring.html"><b>Notes</b> JAX-RS Web Service</a> that I have used in previous posts.<br />
<b><br /></b>
<b>Service Registration -</b><br />
<br />
In the example domain, all services will be registered under the ZooKeeper path "<b><i>/services</i></b>". When the Notes Service starts, it registers itself with ZooKeeper as an <i>ephemeral</i> node at the path "<i><b>/services/notes/XXXXXXXX</b></i>", where XXXXXXXX identifies the ephemeral instance.<br />
<br />
A simple class called <i>ServiceRegistrar </i>is responsible for registering the Notes Service:<br />
<pre class="java" name="code">public class ServiceRegistrarImpl implements ServiceRegistrar {
...
private static final String BASE_PATH = "/services";
private static final String SERVICE_NAME = "notes";
private final CuratorFramework client; // A Curator Client
private ServiceDiscovery<InstanceDetails> serviceDiscovery;
// This instance of the service
private ServiceInstance<InstanceDetails> thisInstance;
private final JsonInstanceSerializer<InstanceDetails> serializer;
public ServiceRegistrarImpl(CuratorFramework client, int servicePort) throws UnknownHostException, Exception {
this.client = client;
serializer = new JsonInstanceSerializer<>(InstanceDetails.class); // Payload Serializer
UriSpec uriSpec = new UriSpec("{scheme}://{address}:{port}"); // Scheme, address and port
thisInstance = ServiceInstance.<InstanceDetails>builder().name(SERVICE_NAME)
.uriSpec(uriSpec)
.address(InetAddress.getLocalHost().getHostAddress()) // Service information
.payload(new InstanceDetails()).port(servicePort) // Port and payload
.build(); // this instance definition
}
@Override
public void registerService() throws Exception {
serviceDiscovery = ServiceDiscoveryBuilder.builder(InstanceDetails.class)
.client(client)
.basePath(BASE_PATH).serializer(serializer).thisInstance(thisInstance)
.build();
serviceDiscovery.start(); // Registers this service
}
@Override
public void close() throws IOException {
try {
serviceDiscovery.close();
}
catch (Exception e) {
LOG.info("Error Closing Discovery", e);
}
}
}
</pre>
A Servlet Listener is defined whose responsibility is to start the registrar -
<br />
<pre class="java" name="code">public class ServiceRegistrarListener implements ServletContextListener {
private static final Logger LOG = Logger.getLogger(ServiceRegistrarListener.class);
@Override
public void contextDestroyed(ServletContextEvent sc) {
}
@Override
public void contextInitialized(ServletContextEvent sc) {
try {
WebApplicationContextUtils.getRequiredWebApplicationContext(sc.getServletContext())
.getBean(ServiceRegistrar.class).registerService();
}
catch (Exception e) {
LOG.error("Error Registering Service", e);
throw new RuntimeException("Exception Registering Service", e);
}
}
}
</pre>
<br />
<b>Service Discovery -
</b><br />
A simple Discoverer class is shown below which uses the Curator library to obtain instances of registered Notes Services. Curator provides a <i>ServiceDiscovery</i> class that is made aware of a Service Registry. A <i>ServiceProvider</i> is then used to obtain registered service instances for a particular service.<br />
<pre class="java" name="code">public class ServiceDiscoverer {
private static final Logger LOG = Logger.getLogger(ServiceDiscoverer.class);
private static final String BASE_PATH = "/services";
private final CuratorFramework curatorClient;
private final ServiceDiscovery<InstanceDetails> serviceDiscovery;
private final ServiceProvider<InstanceDetails> serviceProvider;
private final JsonInstanceSerializer<InstanceDetails> serializer;
private final List<Closeable> closeAbles = Lists.newArrayList();
public ServiceDiscoverer(String zookeeperAddress, String serviceName) throws Exception {
curatorClient = CuratorFrameworkFactory.newClient(zookeeperAddress,
new ExponentialBackoffRetry(1000, 3)); // Ideally this would be injected
serializer = new JsonInstanceSerializer<>(InstanceDetails.class); // Payload Serializer
serviceDiscovery = ServiceDiscoveryBuilder.builder(InstanceDetails.class).client(curatorClient)
.basePath(BASE_PATH).serializer(serializer).build(); // Service Discovery
serviceProvider = serviceDiscovery.serviceProviderBuilder().serviceName(serviceName).build(); // Service Provider for a particular service
}
public void start() {
try {
curatorClient.start();
closeAbles.add(curatorClient);
serviceDiscovery.start();
closeAbles.add(0, serviceDiscovery);
serviceProvider.start();
closeAbles.add(0, serviceProvider);
}
catch (Exception e) {
throw new RuntimeException("Error starting Service Discoverer", e);
}
}
public void close() {
for (Closeable closeable : closeAbles) {
// Close all
...
}
}
public ServiceInstance<InstanceDetails> getServiceInstance() {
try {
return serviceProvider.getInstance();
}
catch (Exception e) {
throw new RuntimeException("Error obtaining service instance", e);
}
}
public void noteError(ServiceInstance<InstanceDetails> instance) {
// Mark the instance as down so it is not handed out for a period of time
serviceProvider.noteError(instance);
}
}
</pre>
The <i>ServiceProvider</i> is used in the Notes Service Client as follows:
<br />
<pre class="java" name="code">public class NotesClientImpl implements NotesClient {
private static final Logger LOG = Logger.getLogger(NotesClientImpl.class);
public static final String SERVICE_NAME = "notes";
private final Client webServiceClient; // JAX-RS Client
private final ServiceDiscoverer serviceDiscoverer; // Service Discoverer
/**
* @param zookeeperAddress The ZooKeeper connection string
* @throws Exception
*/
public NotesClientImpl(String zookeeperAddress) throws Exception {
serviceDiscoverer = new ServiceDiscoverer(zookeeperAddress, SERVICE_NAME);
serviceDiscoverer.start();
ClientConfig config = new ClientConfig();
webServiceClient = ClientBuilder.newClient(config);
}
@Override
public NoteResult create(Note note) {
ServiceInstance<InstanceDetails> instance = serviceDiscoverer.getServiceInstance();
try {
return webServiceClient
.target(UriBuilder.fromUri(instance.buildUriSpec()).path("/notes").build())
.request(MediaType.APPLICATION_XML)
.post(Entity.entity(note, MediaType.APPLICATION_XML_TYPE), NoteResult.class);
}
catch (ProcessingException e) {
// If a ProcessingException occurs, the current ServiceInstance is marked as down
serviceDiscoverer.noteError(instance);
throw e;
}
}
</pre>
In the above code, a <i>ServiceInstance </i>is obtained from the <i>ServiceDiscoverer</i>. It is important to '<b>not</b>' hold onto the instance locally but to use it during the processing of a call and then let go of it. If a <i><a href="http://docs.oracle.com/javaee/7/api/javax/ws/rs/ProcessingException.html">javax.ws.rs.ProcessingException</a></i> occurs, say due to a Service Instance dying abruptly during the processing of the request, then notifying Curator to not use that instance for a certain period of time is the recommended direction. There is a timeout delay between when the service instance dies and when ZooKeeper times out the ephemeral node associated with the service. So unless the instance is marked as being in an error state on the service client, the <i>ServiceDiscoverer</i> would continue to hand out the downed service instance for usage.<br />
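The 'use, fail, mark down, retry elsewhere' pattern described above is independent of Curator. A generic sketch of the idea (hypothetical names, simulated calls - not the post's actual client code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class FailoverCall {
    /** Try each candidate instance in turn until one call succeeds. */
    static <I, R> R callWithFailover(List<I> candidates, Function<I, R> call) {
        RuntimeException last = null;
        for (I instance : candidates) {
            try {
                return call.apply(instance); // success: stop here
            } catch (RuntimeException e) {
                last = e; // this instance is bad - a real client would also mark it down
            }
        }
        throw last != null ? last : new IllegalStateException("no instances available");
    }

    public static void main(String[] args) {
        List<String> instances = Arrays.asList("down-host", "live-host");
        String result = callWithFailover(instances, host -> {
            if (host.startsWith("down")) {
                throw new RuntimeException("connection refused"); // simulated dead instance
            }
            return "response from " + host;
        });
        System.out.println(result); // response from live-host
    }
}
```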
<br />
<b>Integration Test </b>-<br />
<br />
A simple Integration Test is shown below where a number of <i>Notes Server</i> instances are started up and register themselves with a test <i>ZooKeeper</i> instance. The <i>Notes Client</i> is able to round robin across those instances. Each of the servers is designed to randomly die after a period of time, leading to it being un-registered. When a <i>Notes Server</i> instance goes down, it will no longer be invoked as part of the test. There is a <i>Notes Server</i> started that runs for a long time and acts as a fallback for demonstration completeness.<br />
<pre class="java" name="code">public class SimpleIntegrationTest {
  private TestingServer testServer; // ZooKeeper Server
  private NotesServer longRunningNotesServer; // A Notes Server that does not die for the demo

  private static final int MAX_SERVERS = 10; // Max Servers to use

  @Before
  public void setUp() throws Exception {
    testServer = new TestingServer(9001); // Start the Curator ZooKeeper Test Server
  }

  @After
  public void tearDown() throws IOException {...}

  @Test
  public void integration() throws Exception {
    CyclicBarrier barrier = new CyclicBarrier(MAX_SERVERS + 1);

    for (Integer port : ports()) { // Start instances of Notes Server on different ports
      new NotesServer(port, barrier).start(); // Start an instance of the Notes Server at the provided port
      Thread.sleep(1000);
    }

    longRunningNotesServer = new NotesServer(9210, 500000L, null); // Start a long-running Notes Server
    longRunningNotesServer.start();
    barrier.await(); // Wait for all servers to start

    NotesClient notesClient = new NotesClientImpl(testServer.getConnectString());
    List<NoteResult> noteResults = Lists.newArrayList();

    int i = 0;
    int maxNotes = 10000;

    while (i < maxNotes) {
      try {
        Note note = new Note("" + i, "" + i);
        NoteResult result = notesClient.create(note);
        noteResults.add(result);
        i++;
      }
      catch (Exception e) {
        e.printStackTrace(); // If an error occurs on one instance, try again
      }
    }

    // Ensure that all Notes Servers have been utilized
    assertEquals(MAX_SERVERS + 1, notesClient.getServiceInstancesUsed().size());
  }

  private static Set<Integer> ports() {
    // Provide a unique set of ports for running multiple Notes Servers
  }
  ....
}
</pre>
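The coordination in the test relies on plain <i>java.util.concurrent.CyclicBarrier</i>: each <i>Notes Server</i> thread is a party, plus one for the test thread itself, so the test proceeds only once every server has reported in. A stripped-down, standalone sketch of just that pattern (no ZooKeeper involved; names are illustrative):

```java
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    // Start 'servers' worker threads and block until all of them have
    // reached the barrier - the same coordination the integration test uses.
    static int startAndAwait(int servers) throws Exception {
        // 'servers' parties plus the coordinating (test) thread
        CyclicBarrier barrier = new CyclicBarrier(servers + 1);

        for (int i = 0; i < servers; i++) {
            new Thread(() -> {
                try {
                    // ... server startup work would happen here ...
                    barrier.await(); // signal "I am up"
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }).start();
        }

        barrier.await(); // proceed only after every server has reported in
        return servers;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("all " + startAndAwait(3) + " servers up");
    }
}
```

Unlike a CountDownLatch, a CyclicBarrier can be reused for multiple rendezvous, although the test only needs a single one.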
<br />
<b>Thoughts -</b><br />
<ul style="text-align: left;">
<li>ZooKeeper is CP, so in the event of a network partition, writes can suffer.</li>
<li>ZooKeeper is a single point of failure (SPOF) for the ecosystem, so have a contingency strategy.</li>
<ul>
<li>LinkedIn's <a href="https://github.com/linkedin/rest.li/wiki/Dynamic-Discovery">Dynamic Discovery</a> architecture addresses something like this but has a lot of moving parts</li>
<li>The simplicity of a Load Balancer makes one think YAGNI</li>
<ul>
<li>Tools like <a href="http://puppetlabs.com/">Puppet Labs</a> might also lessen the burden of standing up environments</li>
</ul>
</ul>
<li>Service Discovery allows clients to be 'smart'. Support for service degradation, alerting, and monitoring go hand in hand.</li>
<li>Service Registration and De-Registration can be built smartly as well. For example, a service could determine that it is not healthy for whatever reason and de-register itself from the registry.</li>
<li>Supporting multiple environments where an ecosystem is self-discoverable is simpler than having to provide overrides across different environments.</li>
<li>Managing downed nodes with Curator needs to be handled carefully. Idempotence is your friend here.</li>
<li>Curator can definitely help with intelligent load balancing via different load-balancing strategies:</li>
<ul>
<li>Round Robin</li>
<li>Random</li>
<li>A custom one that involves Service Degradation</li>
</ul>
<li>Curator provides a <a href="http://curator.apache.org/curator-x-discovery-server/index.html">REST service</a> that helps non-Java clients wishing to participate in service registration and discovery.</li>
<li>If Netflix, LinkedIn and Amazon do something like this, it warrants looking into :-)</li>
</ul>
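To make the strategy bullet concrete: round-robin selection over the currently registered instances is conceptually tiny. The following plain-Java sketch mimics the spirit of Curator's RoundRobinStrategy, but it is not Curator's code; the class and the server addresses are made up for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Conceptual round-robin provider strategy: hand out the next live
// instance on every call, wrapping around at the end of the list.
class RoundRobin<T> {
    private final AtomicInteger index = new AtomicInteger();

    T next(List<T> liveInstances) {
        if (liveInstances.isEmpty()) {
            throw new IllegalStateException("no live instances registered");
        }
        // Math.floorMod keeps the index non-negative even after int overflow
        return liveInstances.get(Math.floorMod(index.getAndIncrement(), liveInstances.size()));
    }
}

public class RoundRobinDemo {
    public static void main(String[] args) {
        RoundRobin<String> strategy = new RoundRobin<>();
        // In the real setup this list would come from the ZooKeeper registry
        List<String> servers = Arrays.asList("notes:9200", "notes:9201", "notes:9202");

        for (int i = 0; i < 4; i++) {
            System.out.println(strategy.next(servers)); // 9200, 9201, 9202, then 9200 again
        }
    }
}
```

A Random strategy would simply replace the counter with a random index, and a degradation-aware strategy would filter or weight the instance list before selecting.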
<br />
<br /></div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com2tag:blogger.com,1999:blog-4667121987470696359.post-36627221939794230742014-08-22T21:05:00.001-06:002014-08-22T21:05:38.391-06:00JVM Native memory leak<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
The Problem</h2>
<div>
<br /></div>
<div>
There are times I learn something new about something that has been known for ages. Today was one of those times. I had a <a href="http://en.wikipedia.org/wiki/Java_virtual_machine">Java Virtual Machine</a> (JVM) whose <b>heap</b> usage looked great, but the JVM process kept growing <b>native</b> memory and finally died with an exception stating it was unable to allocate native memory. One such example might look like:</div>
<pre name="code">java.lang.OutOfMemoryError
    at java.util.zip.Inflater.init(Native Method)
    at java.util.zip.Inflater.<init>(Inflater.java:101)
    at java.util.zip.ZipFile.getInflater(ZipFile.java:450)
</pre>
<br />
Or
<br />
<pre name="code">#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 184 bytes for AllocateHeap
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
</pre>
<div>
<br /></div>
<div>
The application in question that manifested the above is a simple <a href="https://jersey.java.net/">jersey </a>web service that serves up and consumes JSON using <a href="http://docs.oracle.com/javaee/6/tutorial/doc/gkknj.html">JSON JAXB</a> (Clue...). Initial interrogation yielded the following:</div>
<div>
<ul style="text-align: left;">
<li>Thread usage is normal (-Xss settings had no benefit)</li>
<li>-XX:MaxDirectMemorySize - no difference</li>
<li>Heap usage is healthy</li>
<li>Heap dumps look healthy - no leaks</li>
<li>Perm Gen is looking good</li>
<li>Native file descriptors are within normal usage limits</li>
<li>java.net.Socket creation did not appear to cause issues</li>
<li>JDK version latest (1.7.X)</li>
<li>Happens on virtual machines and standalone boxes alike</li>
</ul>
<span style="font-family: Arial, Liberation Sans, DejaVu Sans, sans-serif;"><span style="font-size: 14px; line-height: 17.804800033569336px;">Some Stack Overflow and other assistance:</span></span></div>
<div>
<ul style="text-align: left;">
<li><span style="font-family: Arial, Liberation Sans, DejaVu Sans, sans-serif;"><span style="font-size: 14px; line-height: 17.804800033569336px;"><a href="http://stackoverflow.com/questions/10329546/java-process-memory-usage-much-more-than-what-runtime-totalmemory-et-al-methods">http://stackoverflow.com/questions/10329546/java-process-memory-usage-much-more-than-what-runtime-totalmemory-et-al-methods</a></span></span></li>
<li><span style="font-family: Arial, Liberation Sans, DejaVu Sans, sans-serif;"><span style="font-size: 14px; line-height: 17.804800033569336px;"><a href="http://stackoverflow.com/questions/4893192/process-memory-vs-heap-jvm">http://stackoverflow.com/questions/4893192/process-memory-vs-heap-jvm</a></span></span></li>
<li><span style="font-family: Arial, Liberation Sans, DejaVu Sans, sans-serif;"><a href="http://www.oracle.com/technetwork/java/javase/memleaks-137499.html">http://www.oracle.com/technetwork/java/javase/memleaks-137499.html</a></span></li>
</ul>
<ul style="text-align: left;">
</ul>
<span style="font-family: Arial, Liberation Sans, DejaVu Sans, sans-serif;"><span style="font-size: 14px; line-height: 17.804800033569336px;">What fixed the issue - </span></span></div>
<div>
<ul style="text-align: left;">
<li><span style="font-family: Arial, Liberation Sans, DejaVu Sans, sans-serif;"><span style="font-size: 14px; line-height: 17.804800033569336px;">Using the <a href="http://www.oracle.com/technetwork/tutorials/tutorials-1876574.html">G1 Garbage collector</a></span></span></li>
<li><span style="font-family: Arial, Liberation Sans, DejaVu Sans, sans-serif;"><span style="font-size: 14px; line-height: 17.804800033569336px;">Or setting -Xms and -Xmx to smaller values, i.e., lowering the heap size</span></span></li>
</ul>
<span style="font-family: Arial, Liberation Sans, DejaVu Sans, sans-serif;"><span style="font-size: 14px; line-height: 17.804800033569336px;">Right, so using <b>G1</b> Garbage collector worked, or setting max heap to a lower value also worked? WTF!</span></span></div>
<h2 style="text-align: left;">
What happened?</h2>
<div>
<br /></div>
<div>
<b>Creation of a JAXB Context on every request/response</b>. The application in question was not <a href="http://yetanothercoding.blogspot.com/2013/01/using-guava-cache-to-avoid-expensive.html">caching JAXB Contexts</a> but was creating them on every request/response. The initial thought is, yeah, that is a performance problem, but why would it count towards native process memory growth? Consider the following benign code:<br />
<br />
<br /></div>
<pre class="java" name="code">public class SimpleTest extends Thread {
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        // From jersey-json.jar; each new context indirectly allocates native memory
        JSONJAXBContext context = new JSONJAXBContext(SomeDTOClass.class);
      } catch (JAXBException e) {
        throw new RuntimeException(e);
      }
    }
  }

  public static void main(String args[]) {
    new SimpleTest().start();
  }
}
</pre>
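The remedy the application needed was to create a context once per class and reuse it. Here is a minimal caching sketch using ConcurrentHashMap.computeIfAbsent; the expensive context object is stood in for by a plain Object so the sketch is self-contained (in the real fix it would be the JSONJAXBContext):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class JaxbContextCache {
    // Counts how many times a context is actually built
    static final AtomicInteger BUILDS = new AtomicInteger();

    private static final Map<Class<?>, Object> CACHE = new ConcurrentHashMap<>();

    // Returns the one-and-only context for the given class, building it
    // at most once no matter how many requests come in
    static Object contextFor(Class<?> type) {
        return CACHE.computeIfAbsent(type, t -> {
            BUILDS.incrementAndGet();
            return new Object(); // placeholder for: new JSONJAXBContext(t)
        });
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            contextFor(String.class); // same cached instance every time
        }
        System.out.println("contexts built: " + BUILDS.get()); // prints 1, not 10000
    }
}
```

With the cache in place, the natively backed context objects are created once per DTO class instead of once per request, so the native allocations stop racing ahead of the garbage collector.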
<div>
<br /></div>
<div>
Well, the problem is that JAXB Contexts indirectly result in <b>native</b> memory allocations. How do they do it and what happens underneath? Well, that is an exercise for the reader to figure out and comment on this BLOG with findings ;-).<br />
<br />
The root cause can be traced to the following reported issue:<a href="http://bugs.java.com/view_bug.do?bug_id=4797189"> http://bugs.java.com/view_bug.do?bug_id=4797189</a>
<br />
<br />
<pre class="java" name="code">public class Bug {
  public static void main( String args[] ) {
    while ( true ) {
      /* If either of these two lines is present, the JVM runs out of native memory */
      final Deflater deflater = new Deflater( 9, true );
      final Inflater inflater = new Inflater( true );
    }
  }
}
</pre>
<br />
The same code, modified so that it might run without a native OOM:
<br />
<pre class="java" name="code">public class Bug {
  public static void main( String args[] ) {
    int count = 0;
    while ( true ) {
      final Deflater deflater = new Deflater( 9, true );
      final Inflater inflater = new Inflater( true );
      count++;
      if (count % 100 == 0) {
        System.gc(); // Don't, please don't do this!
      }
    }
  }
}
</pre>
<br />
As shown above, you might allocate '<b>small</b>' objects in the JVM, but underneath they might end up allocating native memory for their operations. The intention is noble: when the objects are finalized, the native operating system memory will be de-allocated as well. However, freeing the native memory is contingent on the Garbage Collector running and <b>finalizing</b> these 'small' objects. It's a matter of relative pace: the native memory grows far faster than the JVM heap, and the lack of garbage collection is what leads to running out of native space. When using the <b>G1</b> collector or <b>lowering</b> the max heap settings, garbage collection occurs more <b>often</b>, thus preventing the native heap from exploding over time.<br />
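For Inflater and Deflater specifically, there is also a deterministic fix that does not depend on GC pace at all: call end() when done, which releases the native zlib state immediately instead of waiting for finalization. A small self-contained sketch:

```java
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ZlibNoLeak {
    // Compress and decompress a buffer, releasing the native memory explicitly
    static byte[] roundTrip(byte[] input) {
        Deflater deflater = new Deflater();
        Inflater inflater = new Inflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            byte[] compressed = new byte[input.length * 2 + 64];
            int compressedLength = deflater.deflate(compressed);

            inflater.setInput(compressed, 0, compressedLength);
            byte[] out = new byte[input.length];
            inflater.inflate(out);
            return out;
        } catch (DataFormatException e) {
            throw new IllegalStateException(e);
        } finally {
            // end() frees the native zlib buffers right now,
            // with no reliance on GC or finalization
            deflater.end();
            inflater.end();
        }
    }

    public static void main(String[] args) {
        byte[] data = "native memory freed deterministically".getBytes();
        for (int i = 0; i < 10_000; i++) {
            roundTrip(data); // native footprint stays flat even with no GC
        }
        System.out.println(new String(roundTrip(data)));
    }
}
```

The Deflater/Inflater javadoc itself recommends calling end() once the object is no longer needed, precisely because the backing memory is native rather than heap.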
<br />
Peace!</div>
</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com1tag:blogger.com,1999:blog-4667121987470696359.post-84255617999265349472014-04-20T10:21:00.001-06:002015-12-20T08:03:11.206-07:00jersey/jaxrs 2.X example using spring JavaConfig and spring security<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div>
<h3 style="text-align: left;">
Introduction</h3>
<div style="text-align: left;">
<br />
I had blogged previously on using <a href="http://sleeplessinslc.blogspot.com/2012/10/jax-rs-20-jersey-20-preview-example.html">jersey/JAX-RS 2.0</a> when it was in pre-release. Since then, Jersey/JAX-RS 2.0 has been released and has undergone a few versions. I have recently been working with the jersey 2.7 Spring integration and Spring Security, and figured I'd share an example.</div>
<h3 style="text-align: left;">
Requirements</h3>
<div style="text-align: left;">
<br />
As is quite standard with me, I like to 'manufacture' requirements. The application being developed is a Notes application that allows one to perform <i>CRUD</i> operations for simple notes. So, for the Web Service being created, I have the following requirements:</div>
<ol style="text-align: left;">
<li>No XML - you get type safety during refactoring</li>
<li>Spring Dependency Injection with Java Config. The official <a href="https://github.com/jersey/jersey/tree/master/examples/helloworld-spring-webapp">jersey/spring3</a> example is very nice but does not demonstrate Java Config usage.</li>
<li>Jersey should manage Resource classes and have Spring service objects injected into them</li>
<li>Security should be enabled via <a href="http://projects.spring.io/spring-security/">Spring Security</a>. Only users with the ROLE '<i>notesuser</i>' should be able to create a note.</li>
</ol>
</div>
<h3 style="text-align: left;">
The Project</h3>
<h4 style="text-align: left;">
Look Ma, no XML</h4>
<div style="text-align: left;">
This example uses the Servlet 3.X specification, so we can eliminate web.xml. There are many ways one can eliminate XML using Servlet 3.X, but this example focuses on the usage of <a href="http://docs.oracle.com/javaee/6/api/javax/servlet/ServletContainerInitializer.html">javax.servlet.ServletContainerInitializer</a>. With a <i>ServletContainerInitializer</i> one can programmatically register Servlets, Filters and Listeners, thus giving you type safety that can be useful during refactoring. As we are in Spring land, rather than utilize <i>ServletContainerInitializer</i> directly, an extension of Spring's <a href="http://docs.spring.io/spring/docs/4.0.x/javadoc-api/org/springframework/web/WebApplicationInitializer.html">WebApplicationInitializer</a> is used as shown below:</div>
<pre class="java" name="code">@Order(Ordered.HIGHEST_PRECEDENCE)
public class NotesApplicationInitializer implements WebApplicationInitializer {
private static final String SPRING_SECURITY_FILTER_NAME = "springSecurityFilterChain";
@Override public void onStartup(ServletContext ctx) throws ServletException {
// Listeners
ctx.addListener(ContextLoaderListener.class);
ctx.setInitParameter(ContextLoader.CONTEXT_CLASS_PARAM,
AnnotationConfigWebApplicationContext.class.getName());
ctx.setInitParameter(ContextLoader.CONFIG_LOCATION_PARAM, SpringConfig.class.getName());
// Register Spring Security Filter chain
ctx.addFilter(SPRING_SECURITY_FILTER_NAME, DelegatingFilterProxy.class)
.addMappingForUrlPatterns(
EnumSet.<DispatcherType> of(DispatcherType.REQUEST, DispatcherType.FORWARD), false, "/*");
// Register Jersey 2.0 servlet
ServletRegistration.Dynamic servletRegistration = ctx.addServlet("Notes",
ServletContainer.class.getName());
servletRegistration.addMapping("/*");
servletRegistration.setLoadOnStartup(1);
servletRegistration.setInitParameter("javax.ws.rs.Application", NotesApplication.class.getName());
}
}
</pre>
<div style="text-align: left;">
The WebApplicationInitializer does the following:<br />
<ol style="text-align: left;">
<li>Sets up a Spring ContextLoaderListener</li>
<li>Tells the Context loader the location of the Spring Java Config that is to be used for Services</li>
<li>Registers the Spring Security Filter chain</li>
<li>Registers the Jersey Servlet Container by providing it the NotesApplication as the class to use for Resource and Provider management</li>
</ol>
</div>
<div style="text-align: left;">
Of note is the fact that the <i>NotesApplicationInitializer</i> has been set to the highest precedence, as that ensures it executes before any other <i>WebApplicationInitializer</i> provided by accompanying jars. If, for example, the <a href="https://jersey.java.net/apidocs/2.7/jersey/org/glassfish/jersey/server/spring/SpringWebApplicationInitializer.html">SpringWebApplicationInitializer</a> from jersey-spring3.jar gets loaded first, it attempts to find a Spring <i>applicationContext.xml</i> and will fail, as our example does not use Spring XML-style bean definitions.</div>
<h4>
Spring Dependency Injection</h4>
In conjunction with the registration of the Spring Java Config in the WebApplicationInitializer shown above, the Java Config enables the Services and Spring Security filter chain via the following:
<br />
<pre class="java" name="code">@Configuration
@EnableNotesService
@EnableNotesSecurity
public class SpringConfig {
}
</pre>
I have not delved into the details of the @Enable annotations, but they are in line with Spring's style of importing dependencies. For the scope of this example, you can assume that <i>@EnableNotesService</i> results in the importing of the Notes Java services and <i>@EnableNotesSecurity</i> imports the security configuration.
</div>
<h4>
Jersey Should Manage the Resource Classes</h4>
I wanted all my JAX-RS resources managed by Jersey and not Spring. I did not want to annotate my JAX-RS classes with <a href="http://docs.spring.io/spring/docs/4.0.0.RC1/javadoc-api/org/springframework/context/annotation/Configuration.html">@Component</a> + Classpath-scanning and/or have them defined in a Java Config which would then result in Spring managing them. All the Resources and relevant providers are registered with Jersey by extending the <a href="http://docs.oracle.com/javaee/6/api/javax/ws/rs/core/Application.html">javax.ws.rs.core.Application</a> class as shown below:
<br />
<pre class="java" name="code">public class NotesApplication extends Application {
  @Override
  public Set<Class<?>> getClasses() {
    return ImmutableSet.<Class<?>>of(NotesResource.class,
        HealthResource.class,
        NoteNotFoundExceptionMapper.class, LoggingFilter.class, AccessDeniedExeptionMapper.class);
  }
}
</pre>
Note that the 'service' java classes, such as NotesService.java (managed by Spring) are made available via dependency injection into the NotesResource via Jersey's <a href="https://hk2.java.net/spring-bridge/">Spring-HK2 bridge</a>.
</div>
<h4>
Security Should be enabled Via Spring Security</h4>
For this example, I have a very simple Spring Security filter chain set up that ensures a POST to create a Note can only be done by a user who has the ROLE '<i>notesuser</i>' provided in an HTTP header while making the call. It is trivial, but hey, this is an example :-)
The goal was to demonstrate the use of a <a href="http://docs.spring.io/spring-security/site/docs/3.1.6.RELEASE/apidocs/org/springframework/security/access/prepost/PreAuthorize.html">@PreAuthorize</a> annotation on the create-a-Note operation and how it can be made to work when Jersey, and not Spring, manages the resource. The NotesResource create method looks like the following:
<br />
<pre class="java" name="code">@Path("/notes")
@Produces({ MediaType.APPLICATION_XML })
@Consumes({ MediaType.APPLICATION_XML })
public class NotesResource {
  // Jersey object injection
  private final UriInfo uriInfo;

  // Spring object injection
  private final NotesService notesService;

  // Note that UriInfo is obtained from Jersey but the NotesService is a Spring dependency
  @Inject
  public NotesResource(@Context UriInfo uriInfo, NotesService notesService) {
    this.uriInfo = uriInfo;
    this.notesService = notesService;
  }

  @POST
  @Loggable
  @PreAuthorize("hasRole('notesuser')") // Only a user with the notesuser role may create a note
  public Response create(Note note) {
    LOG.debug("Creating a note:" + note);
    NoteResult result = createNote(note);
    return Response.created(result.getLocation()).entity(result).build();
  }
  .....
}
</pre>
As the NotesResource is managed by Jersey, the only way I could figure out to enforce the @PreAuthorize annotation was via <a href="http://eclipse.org/aspectj/">AspectJ</a> weaving of the resource class. This is accomplished by adding the AspectJ maven plugin, which is used during the build process to enhance the JAX-RS Resource classes:
<br />
<pre class="xml" name="code"> <plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.6</version>
<configuration>
...
<sources>
<source>
<basedir>${basedir}/src/main/java/com/welflex/notes/rest/resource</basedir>
<includes>
<include>**/*Resource.java</include>
</includes>
</source>
</sources>
....
<aspectLibraries>
<aspectLibrary>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-aspects</artifactId>
</aspectLibrary>
</aspectLibraries>
</configuration>
<executions>
<execution>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjrt</artifactId>
<version>1.7.4</version>
</dependency>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjtools</artifactId>
<version>1.7.4</version>
</dependency>
</dependencies>
</plugin>
</pre>
<h3 style="text-align: left;">
Running the Example</h3>
<div>
<br /></div>
The example is provided as a maven project. I am utilizing the concept of a maven '<i>bom</i>' for the first time for managing the related jersey dependencies. Execute a '<i>mvn install</i>' to see the example in action or execute a <i>'mvn jetty:run</i>' on the web project to start the service. <a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/Notes-jersey-jaxrs">The example itself can be downloaded from here</a>.
<br />
<h4 style="text-align: left;">
Client</h4>
<div>
The client will send the 'role' as a header parameter as shown below for creating a note:</div>
<div>
<pre class="java" name="code"> public NoteResult create(Note note, String role) {
return webServiceClient.target(UriBuilder.fromUri(baseUri).path("/notes").build())
.request(MediaType.APPLICATION_XML)
.header("role", role)
.post(Entity.entity(note, MediaType.APPLICATION_XML_TYPE), NoteResult.class);
}
</pre>
<h4 style="text-align: left;">
Integration Test</h4>
Integration test starts up a web container and executes the client operations. The following shows the spring-security role being honored:
<br />
<pre class="java" name="code"> @Test(expected = ForbiddenException.class)
public void roleAbsent() {
Note note = new Note();
note.setUserId("sacharya");
note.setContent("Something to say");
client.create(note, "Fake Role"); // Should throw a JAX-RS ForbiddenException
fail("Should not have created a note as the role provided is not supported");
}
</pre>
Running the equivalent of the above with the role '<i>notesuser</i>' will allow the creation of the note. The life-cycle integration test in the attached example demonstrates the same succeeding. Note that the example is entirely devoid of <i>web.xml</i>. If anyone coming across this blog has integrated Jersey/Spring/Spring Security in a different way, please share, I would love to hear about it.
That's all folks! Enjoy!
</div>
</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com4tag:blogger.com,1999:blog-4667121987470696359.post-51154568291831041362013-02-21T21:56:00.001-07:002017-06-23T19:54:21.604-06:00Netflix Hystrix - It's Hysterical...err Hystrixcal?<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
Introduction:</h2>
<div>
<br /></div>
<div>
<div>
In any distributed architecture, failures of services are inevitable. These could be due to network issues, bugs in code, unexpected load, failure of dependencies, what have you. When the failure of a single service starts affecting the availability of other services, that service becomes a single point of failure for the entire dependency graph associated with it. <a href="https://github.com/Netflix/Hystrix/wiki">Hystrix</a> is a library developed and subsequently open sourced by Netflix that is geared toward making a distributed architecture resilient while addressing some of the above concerns.</div>
</div>
<div>
Hystrix's function, as described by its creators: "<span style="background-color: white; color: #333333; font-family: "helvetica" , "arial" , "freesans" , "clean" , sans-serif; font-size: 14px; line-height: 22px;"><i>Hystrix is a library designed to control the interactions between these distributed services providing greater latency and fault tolerance. Hystrix does this by isolating points of access between the services, stopping cascading failures across them, and providing fallback options, all of which improve the system's overall resiliency</i>."</span></div>
<div>
<span style="background-color: white; color: #333333; font-family: "helvetica" , "arial" , "freesans" , "clean" , sans-serif; font-size: 14px; line-height: 22px;">I could talk more about Hystrix but reading their <a href="https://github.com/Netflix/Hystrix/wiki/Getting-Started">excellent WIKI</a> is definitely the more <a href="http://en.wikipedia.org/wiki/Don't_repeat_yourself">DRY </a>route. My goal in this BLOG is simply to have fun while providing you an example of Hystrix usage to play around with :-)</span><br />
<span style="background-color: white; color: #333333; font-family: "helvetica" , "arial" , "freesans" , "clean" , sans-serif; font-size: 14px; line-height: 22px;"><br /></span>
<br />
<h2>
Fictional Case Study</h2>
<div>
<br /></div>
<div>
Developers of company Acme had developed a <i>GreetingService </i>and had also provided a client for consuming the service. The client developed looked like:<br />
<br />
<pre class="java" name="code">public class GreetingJerseyClient implements GreetingClient {
private final String baseUri;
private final Client client;
public GreetingJerseyClient(String baseUri) {
this.baseUri = baseUri;
client = ClientFactory.newClient();
}
/**
* @return a Greeting in the specific language, for example "Hello" or "Hola"
*/
@Override
public String getGreeting(String languageCode) {
....
}
/**
* @return a Greeting for the individual in the specfied language, for example "Hello Sanjay", "Hola Sanjay".
* @throws InvalidNameException if null was provided as the language code.
*/
@Override
public String greet(String name, String languageCode) throws InvalidNameException {
...
}
}
</pre>
<br />
This <i>GreetingService </i>was a core service utilized by all their web applications and in some cases consumed by other services as well. The <i>GreetingService </i>itself was a beast; load tests showed that requests could take up to <i>1 second</i> to respond at times (probably full GCs kicking in). Accordingly, the developers configured the <i>read-timeout</i> for the connections to be 1 second so that any request taking longer than <i>1 second</i> would time out. Upon deployment all looked good for some time, but then they started encountering problems where the service started degrading, i.e., it was not the occasional request that took a second to respond but all requests in a particular time window that exhibited latency or failure. This resulted in all the application container threads blocking, leading to a denial of service.<br />
<br />
Bosses were mad, stake holders took to narcotics, stock values plummeted.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.xconomy.com/wordpress/wp-content/images/2009/10/WorstBossEver.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://www.xconomy.com/wordpress/wp-content/images/2009/10/WorstBossEver.jpg" height="320" width="227" /></a></div>
<br />
<br />
A postmortem of the incident was held. The cause of the problem was not singular; multiple things happened, a perfect storm if you may: a bug was introduced that increased response times, their load balancer ran into unexpected issues, a data center technician spilled his drink on some servers of the cluster... phew. Results of the postmortem were:<br />
<div style="text-align: left;">
</div>
<ul style="text-align: left;">
<li>Provide a default greeting for the <i>getGreeting()</i> call in the event the Greeting Service is experiencing degradation.</li>
<li>Deploy the Greeting Service with a cloud provider and pay them only for usage ($$$$$) in the event of failure of their existing, measly data center. Only the <i>greet(name, languageCode)</i> call would use it.</li>
<li>Provide a transparent means of switching to the cloud service in the event of degradation of the in-house Greeting Service, and switch back when the in-house Greeting Service is responsive again.</li>
<li>Be able to track and monitor service degradation so the network operations folk could react quickly.</li>
</ul>
Tasked with the above, the development team decided that Hystrix would help address the above concern really well. As changing the API was not an option, the developers decided to provide a <i>HystrixGreetingClient </i>that would wrap the service calls with a <i><a href="http://netflix.github.com/Hystrix/javadoc/">HystrixCommand</a></i>.<br />
<br />
The following is the enhancement they made to provide a default greeting for the <i>getGreeting()</i> call in the event of service degradation:
<br />
<pre class="java" name="code">public class HystrixGreetingClient implements GreetingClient {
  private final GreetingClient client;

  public HystrixGreetingClient(GreetingClient client) {
    this.client = client;
  }

  @Override
  public String getGreeting(String languageCode) {
    return new GetGreetingHystrixCommand(languageCode).execute();
  }

  // Default greeting agreed upon
  public static final String DEFAULT_GREETING = "Hello!";

  class GetGreetingHystrixCommand extends HystrixCommand<String> {
    private final String languageCode;

    public GetGreetingHystrixCommand(String languageCode) {
      super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("GreetingClient")));
      this.languageCode = languageCode;
    }

    @Override
    protected String run() throws Exception {
      // Attempt to obtain the greeting from the service
      return client.getGreeting(languageCode);
    }

    @Override
    protected String getFallback() {
      // Fallback provides a default greeting
      return DEFAULT_GREETING;
    }
  }
}
</pre>
<br />
The <i>GetGreetingHystrixCommand</i> is a <i>HystrixCommand</i> that will invoke the fallback greeting if Hystrix detects service degradation. For the <i>greet(name, languageCode)</i> operation, the goal being to invoke the cloud-deployed <i>Greeting Service</i> in the event of degradation of the in-house version, the developers provided a fallback client that could be invoked as shown below:
<br />
<pre class="java" name="code">public class HystrixGreetingClient implements GreetingClient {
  private final GreetingClient primary;
  private final GreetingClient fallback;

  /**
   * @param primary The primary client which communicates with the in-house data center
   * @param fallback The fallback client which communicates with the $$$$$ cloud provider
   */
  public HystrixGreetingClient(GreetingClient primary, GreetingClient fallback) {
    this.primary = primary;
    this.fallback = fallback;
  }

  @Override
  public String getGreeting(String languageCode) {
    ....
  }

  @Override
  public String greet(String name, String languageCode) {
    try {
      return new GreetHystrixCommand(name, languageCode).execute();
    }
    catch (HystrixBadRequestException e) {
      if (e.getCause() instanceof InvalidNameException) {
        throw InvalidNameException.class.cast(e.getCause());
      }
      throw e;
    }
  }

  class GreetHystrixCommand extends HystrixCommand<String> {
    private final String name;
    private final String languageCode;

    public GreetHystrixCommand(String name, String languageCode) {
      super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("GreetingClient"))
          .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
              .withExecutionIsolationThreadTimeoutInMilliseconds(500)));
      this.name = name;
      this.languageCode = languageCode;
    }

    @Override
    protected String run() throws Exception {
      try {
        // Attempt to invoke the primary service
        return primary.greet(name, languageCode);
      }
      catch (InvalidNameException e) {
        // Throw a HystrixBadRequestException as this is not a network or service issue
        throw new HystrixBadRequestException("Invalid Name", e);
      }
    }

    @Override
    protected String getFallback() {
      // If the primary failed or was short-circuited, invoke the secondary
      return new GreetHystrixFallbackCommand().execute();
    }

    /**
     * Fallback command designed to talk to the cloud provider service.
     * There is no further fallback for this call; fail fast.
     */
    class GreetHystrixFallbackCommand extends HystrixCommand<String> {
      public GreetHystrixFallbackCommand() {
        super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("GreetingClient")));
      }

      @Override
      protected String run() throws Exception {
        try {
          return fallback.greet(name, languageCode);
        }
        catch (InvalidNameException e) {
          // This is a user or programming error, not a network or service problem,
          // so throw a HystrixBadRequestException
          throw new HystrixBadRequestException("Invalid Name", e);
        }
        catch (RuntimeException e) {
          // Demonstrating that all other exceptions are thrown
          throw e;
        }
      }
    }
  }
}
</pre>
The call to<i> greet(name, language)</i> if experiencing service degradation will result in the fallback cloud provider equivalent being invoked. The cloud provider call is also wrapped in a <i>HystrixCommand </i>so that it also is subjected to SLA constraints. If the fall back provider cannot fulfill the request, then the there is no fallback for it and failure will occur fast.
One thing to note: if the name provided is invalid, for example <i>null</i>, the service throws an <i>InvalidNameException</i>. A caller error like this should neither trigger the fallback nor count toward the short-circuiting metrics. For this reason, a <a href="http://netflix.github.com/Hystrix/javadoc/"><i>HystrixBadRequestException</i></a> is thrown, which does not result in the fallback being invoked and does not contribute toward the failure metrics.
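The distinction can be sketched with plain JDK code. The following is a hypothetical, stdlib-only illustration of the semantics, not Hystrix itself; all class and method names here are made up for the example:

```java
import java.util.function.Supplier;

// Stdlib-only sketch of the fallback semantics: a "bad request" error
// propagates to the caller untouched, while any other failure triggers
// the fallback. Stands in for Hystrix behavior; not the Hystrix API.
class FallbackSemanticsSketch {

    // Stand-in for HystrixBadRequestException: a caller error, not a service failure.
    static class BadRequestException extends RuntimeException {
        BadRequestException(String msg) { super(msg); }
    }

    // Runs the primary call; falls back only on service-style failures.
    static String execute(Supplier<String> primary, Supplier<String> fallback) {
        try {
            return primary.get();
        } catch (BadRequestException e) {
            throw e;               // caller error: no fallback, no failure metric
        } catch (RuntimeException e) {
            return fallback.get(); // network/service trouble: use the fallback
        }
    }

    static String greet(String name) {
        return execute(
            () -> {
                if (name == null) throw new BadRequestException("Invalid Name");
                throw new RuntimeException("service timed out"); // simulated degradation
            },
            () -> "Hello, " + name + " (from fallback)");
    }
}
```

A degraded call lands in the fallback, while a <i>null</i> name surfaces the bad-request error directly to the caller.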
<br />
<br />
<div style="text-align: left;">
<h2 style="text-align: left;">
Results of their efforts</h2>
<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.highestfive.com/wp-content/uploads/happy-boss-man.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://www.highestfive.com/wp-content/uploads/happy-boss-man.jpg" height="320" width="211" /></a></div>
<br /></div>
Once the new client was released, every consumer that adopted it was able to make their applications resilient to the failure of the dreaded Greeting Service :-)<br />
<br />
Logs looked like:<br />
<pre name="code">INFO - com.welflex.example.client.HystrixGreetingClient$GetGreetingHystrixCommand.getFallback(62) | Fallback default greeting is being provided
INFO - com.welflex.example.client.HystrixGreetingClient$GetGreetingHystrixCommand.getFallback(62) | Fallback default greeting is being provided
INFO - com.welflex.example.client.HystrixGreetingClient$GetGreetingHystrixCommand.getFallback(62) | Fallback default greeting is being provided
.. More of the above
INFO - com.welflex.example.client.HystrixGreetingClient$GetGreetingHystrixCommand.getFallback(62) | Fallback default greeting is being provided
INFO - com.welflex.example.client.HystrixGreetingClient$GetGreetingHystrixCommand.run(56) | Obtained Greeting from Service
INFO - com.welflex.example.client.HystrixGreetingClient$GetGreetingHystrixCommand.getFallback(62) | Fallback default greeting is being provided
INFO - com.welflex.example.client.HystrixGreetingClient$GetGreetingHystrixCommand.getFallback(62) | Fallback default greeting is being provided
INFO - com.welflex.example.client.HystrixGreetingClient$GetGreetingHystrixCommand.getFallback(62) | Fallback default greeting is being provided
INFO - com.welflex.example.client.HystrixGreetingClient$GetGreetingHystrixCommand.run(56) | Obtained Greeting from Service
INFO - com.welflex.example.client.HystrixGreetingClient$GetGreetingHystrixCommand.run(56) | Obtained Greeting from Service
....
</pre>
<pre name="code">INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand.run(82) | Obtaining greeting from Primary service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand.run(82) | Obtaining greeting from Primary service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand.run(82) | Obtaining greeting from Primary service
.... More of the above
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand$GreetHystrixFallbackCommand.run(107) | Obtaining greeting from Secondary Service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand$GreetHystrixFallbackCommand.run(107) | Obtaining greeting from Secondary Service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand.run(84) | Obtained greeting from Primary service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand$GreetHystrixFallbackCommand.run(109) | Obtained greeting from Secondary Service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand$GreetHystrixFallbackCommand.run(109) | Obtained greeting from Secondary Service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand.run(82) | Obtaining greeting from Primary service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand$GreetHystrixFallbackCommand.run(109) | Obtained greeting from Secondary Service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand$GreetHystrixFallbackCommand.run(109) | Obtained greeting from Secondary Service
ERROR - com.netflix.hystrix.HystrixCommand.executeCommand(812) | Error executing HystrixCommand
java.lang.RuntimeException: Unable to greet:500
at com.welflex.example.client.GreetingJerseyClient.greet(GreetingJerseyClient.java:57)
at com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand.run(HystrixGreetingClient.java:83)
at com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand.run(HystrixGreetingClient.java:67)
at com.netflix.hystrix.HystrixCommand.executeCommand(HystrixCommand.java:764)
at com.netflix.hystrix.HystrixCommand.access$1400(HystrixCommand.java:81)
at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:706)
at com.netflix.hystrix.strategy.concurrency.HystrixContextCallable.call(HystrixContextCallable.java:45)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand$GreetHystrixFallbackCommand.run(107) | Obtaining greeting from Secondary Service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand$GreetHystrixFallbackCommand.run(107) | Obtaining greeting from Secondary Service
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand$GreetHystrixFallbackCommand.run(107) | Obtaining greeting from Secondary Service
..More of the above
INFO - com.welflex.example.client.HystrixGreetingClient$GreetHystrixCommand.run(84) | Obtained greeting from Primary service
</pre>
The <i>getGreeting()</i> command falls back to the agreed-upon default greeting when the primary service shows degradation. On the call to <i>greet()</i>, the cloud service is invoked when the primary service shows degradation, and the client reverts to the primary service when its health improves.
</div>
</div>
<span style="color: #333333; font-family: "helvetica" , "arial" , "freesans" , "clean" , sans-serif;"><span style="font-size: 14px; line-height: 22px;"><br /></span></span></div>
<h3 style="text-align: left;">
</h3>
<div>
<h2 style="text-align: left;">
Monitoring</h2>
</div>
<div>
For monitoring, the company set up all web applications using the Greeting service with the <a href="https://github.com/Netflix/Hystrix/tree/master/hystrix-contrib/hystrix-metrics-event-stream">Hystrix Event Stream Servlet</a> so that statistics and metrics on how the commands were performing could be obtained from each deployment of the application. They aggregated these metrics using <a href="https://github.com/Netflix/Hystrix/wiki/Dashboard">Turbine</a> and finally had a pretty <a href="https://github.com/Netflix/Hystrix/wiki/Dashboard">Hystrix Dashboard</a> at which the Network Operations folk spent hours admiring the animated graphs.<br />
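For reference, exposing the event stream from a web application amounts to registering the servlet in <i>web.xml</i>, roughly as below. This is a sketch assuming the hystrix-metrics-event-stream contrib jar is on the classpath; the <i>/hystrix.stream</i> mapping is simply the conventional one:

```xml
<!-- Register the Hystrix metrics event stream servlet -->
<servlet>
  <servlet-name>HystrixMetricsStreamServlet</servlet-name>
  <servlet-class>com.netflix.hystrix.contrib.metrics.eventstream.HystrixMetricsStreamServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>HystrixMetricsStreamServlet</servlet-name>
  <url-pattern>/hystrix.stream</url-pattern>
</servlet-mapping>
```

Turbine then aggregates each deployment's <i>/hystrix.stream</i> into a single stream for the dashboard.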
<br />
<h2 style="text-align: left;">
An Example</h2>
</div>
<div>
As always,<a href="http://home.comcast.net/~acharya.s/java/hystrix-example.zip"> herewith is a maven example of the above GreetingWebService and Hystrix for download</a>. It is a simple Jersey application with a test that demonstrates fallback and fail fast. It does not exercise all the configuration that Hystrix provides; that's for you to have fun with. You can tweak the REST resources to include further delays and configure the Hystrix commands with different SLAs. Simply run the following command and witness a simple test demonstrating the functionality:
<br />
<pre name="code">>mvn install
</pre>
</div>
<div>
<br />
<h2 style="text-align: left;">
Conclusion</h2>
</div>
<div>
Why would you want to adopt Hystrix? I ask, why not? Do you believe your network to be infallible, your software bug-free, your availability beyond question? Yes, there is a cost: the extra code one needs to wrap around network calls with Hystrix Commands. But compared to the cost of your leaders taking to narcotics, or more appropriately your site being non-responsive, it might be a small price to pay. Adoption is not easy, IMHO. Care must be taken to understand the exact SLA parameters when configuring a Hystrix Command, as getting them wrong might hurt rather than alleviate a problem. For this reason, gradual adoption is the preferred route.<br />
<br />
I have taken the liberty of creating a HystrixClient that is only partially configurable. I do not believe this to be the right approach and feel that a separate client that allows full configuration of the HystrixCommand by a service consumer is the correct approach.<br />
<br />
As always, I might have got things wrong, in which case I would love to learn the error of my ways :-) In the end, remember it's Hystrix, not Hysterics, as I often confused it to be. Enjoy!</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://farr3601.files.wordpress.com/2012/04/hysterical1.gif?w=500" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://farr3601.files.wordpress.com/2012/04/hysterical1.gif?w=500" /></a></div>
<div>
<br />
<h2 style="text-align: left;">
Resources</h2>
<div>
<ul style="text-align: left;">
<li><a href="http://techblog.netflix.com/2012/12/hystrix-dashboard-and-turbine.html">Hystrix Dashboard and Turbine</a></li>
<li><a href="http://techblog.netflix.com/2012/11/hystrix.html">Introducing Hystrix</a></li>
<li><a href="http://gigaom.com/2012/11/26/netflix-open-sources-tool-for-making-cloud-services-play-nice/">Netflix Open Sources Hystrix</a></li>
</ul>
</div>
<h2 style="text-align: left;">
Disclaimer</h2>
</div>
<div>
All images linked within this BLOG are NOT my own. I am simply referring to them for demonstration purposes. If owners have concerns, I will gladly remove them.<br />
<h2 style="text-align: left;">
Update</h2>
<div>
The video below is of me presenting at <a href="http://overstock.com/">Overstock.com</a> in 2015 on the benefits of using <a href="https://sleeplessinslc.blogspot.com/2013/02/netflix-hystrix-its-hystericalerr.html">Hystrix</a> for architectural resiliency. Since then, Overstock.com has embraced the concept and technology and grown to be a highly resilient web site. My content is inspired by many others on the subject, from SlideShare and elsewhere. I am sharing this, with permission from <a href="http://overstock.com/">Overstock.com</a>, in case it might benefit others.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe width="320" height="266" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/zGWN0fy_p-g/0.jpg" src="https://www.youtube.com/embed/zGWN0fy_p-g?feature=player_embedded" frameborder="0" allowfullscreen></iframe></div>
<br />
<br /></div>
</div>
</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com2tag:blogger.com,1999:blog-4667121987470696359.post-64693271696908126102012-10-05T21:26:00.001-06:002014-04-23T18:05:20.639-06:00JAX-RS 2.0 - Jersey 2.0 Preview - An Example<div dir="ltr" style="text-align: left;" trbidi="on">
<h3 style="text-align: left;">
Introduction</h3>
<br />
<a href="http://jcp.org/en/jsr/detail?id=339">JAX-RS 2.0 spec</a> has reached public review status. Implementors of the API have been hard at work. One among them is <a href="http://jersey.java.net/jersey20.html">Jersey</a>.
I have a vested interest in Jersey as I use it at work and therefore wanted to keep up to date on the changes in JAX-RS 2.0 and Jersey 2.0 and thus this BLOG.
Jersey 2.0 is not backward compatible with Jersey 1.0 per their site; it is instead a major re-write to correctly handle the new specification. When working with a previous REST provider, I found that when they created a 2.0 version of their product, they did not care to be backward "<b>sensitive</b>". I did not say compatible but sensitive :-). Backward sensitivity IMHO is a way to provide new functionality and refactor an existing API such that consumers of the API have a gradual migration path, without requiring an all or nothing upgrade.
<br />
<br />
When backward incompatible changes are made to an API without changing package names, binary incompatibilities surface when both versions of the API are required within the same class loader system. These incompatibilities can be easily absorbed by small-scale adopters who can ensure all dependencies are upgraded simultaneously and only the latest version of the API is present. This requirement of simultaneous upgrade is, however, an arduous challenge for larger and more complex deployments whose development and deployment windows are very separated. Sure, one can look at OSGi or write their own class loaders, but I am too dull-headed for such extravagance.
<br />
<br />
In short what I want to ensure with Jersey 2.0 is the following use case:
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhq_ouC8l_4Ruzt-rydbvLze0-T_6CYOG2YPGlE5g5EYR95smngpToxY_AlTtcKi8kPyahLo1JMOj28SK7T1hXE2aqyW_Q9UzHTYYUM6KEa4dEQ1JsN9QJdxJRbfA_92cGcHrM-EIUg3QN/s1600/services.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhq_ouC8l_4Ruzt-rydbvLze0-T_6CYOG2YPGlE5g5EYR95smngpToxY_AlTtcKi8kPyahLo1JMOj28SK7T1hXE2aqyW_Q9UzHTYYUM6KEa4dEQ1JsN9QJdxJRbfA_92cGcHrM-EIUg3QN/s400/services.PNG" height="152" width="400" /></a></div>
<br />
There are three Services (REST services running within a Servlet container) written in Jersey 1.X. Each of the Services have multiple consumers but we are currently interested in the following where Service A calls Service B and Service C using their respective clients. A Service client is nothing more than a friendly API/library wrapping Jersey Client calls that can be distributed to different consumers of the service.
<br />
<br />
For the sake of discussion, let us consider that Service B is upgraded to Jersey 2.0 but Service C is not. As part of the upgrade, the maintainers of Service B provide a client to consume the service, which is written in Jersey 2.0 as well. Will the Service clients of Service B and Service C be able to co-exist and function in Service A? Does the fact that Service A has not been upgraded to Jersey 2.0 cause problems for the clients of Service B and Service C to function in the container? In addition, if Service A also gets upgraded to Jersey 2.0, will the Service client for C that is using Jersey 1.X still be able to function within Service A? Clearly, one option is to provide multiple clients of the service, one for Jersey 1.X and a second for Jersey 2.0. The same is achievable but does involve additional development and maintenance work. So the question remains whether the compatibility discussed will hold true or not. The BLOG and example provided herewith will evaluate the same.
<br />
<br />
<h3 style="text-align: left;">
JAX-RS 2.0 and what to expect</h3>
<br />
<b>Client API</b>
<br />
<br />
With earlier versions of JAX-RS, the specification only accounted for a Server Side API for RESTful calls. The same led to different implementors of the Server API such as Jersey and <a href="http://www.jboss.org/resteasy">RESTEasy </a>independently developing a client side API (<a href="http://sleeplessinslc.blogspot.com/2010/02/rest-client-frameworks-your-options.html">See my BLOG post on REST Clients</a>). JAX-RS 2.0 will contain a Client API to support the server side specification; the JAX-RS 2.0 Client API largely incorporates the Jersey 1.X client API.<br />
<br />
<b>Asynchronous Support on Client and Server side</b>
<br />
<br />
Client Side asynchronous support will allow a request to be dispatched in a non-blocking manner with results made available asynchronously. On the server side, long running requests that are I/O bound can be dispatched away, thus releasing the Application server thread to service other requests.
<br />
<br />
<b>Validation</b>
<br />
<br />
The lack of validation of data received from Headers, Body etc is something that I <a href="http://sleeplessinslc.blogspot.com/2012/02/jersey-jax-rs-mvc-killed-spring-mvc.html">whined about in a previous BLOG</a>. JAX-RS 2.0 will provide support for validation of request data.<br />
<br />
<b>Filters and Handlers</b>
<br />
<br />
The JAX-RS 2.0 specification accounts for Client and Server Filters. As with the Client API, implementers previously provided their own versions of these filters for the client and server; the JAX-RS 2.0 specification absorbs the same into its API.
For a more detailed understanding of changes in JAX-RS 2.0, <a href="https://blogs.oracle.com/arungupta/entry/jax_rs_2_0_early">a read of Arun Gupta's blog is recommended</a>.
This BLOG will be leaning on the Notes Web Service example that I had previously utilized where Notes can be created, updated and deleted via a client.
<br />
<br />
<h3>
Jersey 2.0 and HK2</h3>
<br />
<br />
In previous versions of my examples, I have used either <a href="http://www.springsource.org/">Spring </a>or <a href="http://code.google.com/p/google-guice/">Guice </a>for injecting dependencies into Resources. At the time of this BLOG, I could not find the Spring or Guice support and have therefore used what Jersey 2.0 natively uses for its dependency injection and service location needs, i.e., <a href="http://hk2.java.net/">HK2</a>. HK2 is based on <a href="http://www.jcp.org/en/jsr/detail?id=330">JSR 330</a> standard annotations. In the following example, a service binder utilizes HK2 and binds interface contracts to implementors as shown below. This binder, which is defined in a "service" maven module, is then used in the web maven module to enable injection of the NotesService:
<br />
<pre class="java" name="code">public class ServiceBinder implements Binder {
@Override
public void bind(DynamicConfiguration config) {
config.bind(BuilderHelper.link(NotesServiceImpl.class).to(NotesService.class).build());
}
}
/**
* Application class
*/
public class NotesApplication extends ResourceConfig {
public NotesApplication() {
super(NotesResource.class);
// Register different Binders
addBinders(new ServiceBinder(), new JacksonBinder());
}
}
/**
* A Resource class injected with the Notes Service
*/
@Path("/notes")
public class NotesResource {
@javax.inject.Inject
private NotesService service; // Notes Service injection
...
}
</pre>
<br />
<br />
<h3>
Client</h3>
<br />
<br />
As previously mentioned, JAX-RS 2.0 supports a client API and Jersey provides an implementation of the same. At the time of writing this BLOG, I could not find hooks for the Apache HTTP Client and will therefore be using the standard HttpClient for HTTP calls.
<br />
<br />
<b>Client Creation</b>
<br />
<br />
The process of configuring a client and creating the same is very similar to Jersey 1.X.
<br />
<pre class="java" name="code">// Configuration for the Client
ClientConfig config = new ClientConfig();
// Add Providers
config.register(new JacksonJaxbJsonProvider());
config.register(new LoggingFilter());
// Instantiate the Client using the Client Factory
Client webServiceClient = ClientFactory.newClient(config);
</pre>
<br />
<b>Executing REST Request</b>
<br />
<br />
With Jersey 1.X, a typical POST to create a Note would have looked like-
<br />
<pre class="java" name="code">NoteResult result = client.resource(UriBuilder.fromUri(baseUri).path("/notes").build())
.accept(MediaType.APPLICATION_XML).entity(note, MediaType.APPLICATION_XML).post(NoteResult.class);
</pre>
The same request using the JAX-RS 2.0 client looks like:
<br />
<pre class="java" name="code">NotesResult result = client.target(UriBuilder.fromUri(baseUri).path("/notes").build())
.request(MediaType.APPLICATION_XML) // Request instead of accept
.post(Entity.entity(note, MediaType.APPLICATION_XML_TYPE), NoteResult.class);
</pre>
The primary difference seems to be that the Resource is identified as a "target" rather than a "resource". There is no "accept" method; it is replaced with "request" for the media type. The Entity is built from an Entity builder method which returns an instance of <i>javax.ws.rs.client.Entity</i>. Overall the API is quite fluent and does not seem to conflict with the Jersey 1.X client API.
<br />
<br />
<h3>
Server Changes</h3>
<br />
<br />
One of the changes I wanted to work with is Validation, but I do not believe it has been fully implemented as yet. Apart from the Async support mentioned earlier, one area of interest for me is the Filters and Name Binding features of JAX-RS.<br />
<br />
<b>Name binding and Filter</b>
<br />
<br />
For those who have used Jersey's ContainerRequest and ContainerResponse Filters, the JAX-RS 2.0 equivalents should be quite familiar. An example of a simple ContainerRequestFilter that logs header attributes is shown below:
<br />
<pre class="java" name="code">@Provider
@Loggable
public class LoggingFilter implements ContainerRequestFilter {
private static final Logger LOG = Logger.getLogger(LoggingFilter.class);
@Override
public void filter(ContainerRequestContext requestContext) throws IOException {
MultivaluedMap<String, String> map = requestContext.getHeaders();
LOG.info(map);
}
}
</pre>
The annotation @Loggable shown above is a custom one that I created to demonstrate the <b>Name Binding</b> feature of JAX-RS 2.0, where a Filter or Handler can be associated with a Resource class, a method, or an invocation on the client. In short, this means one has finer-grained control over which methods/resources the filter or handler is made available to. The annotation <i>@Loggable</i> is shown below:
<br />
<pre class="java" name="code">@NameBinding
@Target({ ElementType.TYPE, ElementType.METHOD })
@Retention(value = RetentionPolicy.RUNTIME)
public @interface Loggable {
}
</pre>
With the above, if we wish to apply the LoggingFilter only to a specific method of a Resource, one simply has to annotate the method with <i>@Loggable</i>:
<br />
<pre class="java" name="code">@Path("/notes")
public class NotesResource {
@POST
@Loggable // Will be logged
public Response create(Note note) {
...
}
@GET // No annotation @Loggable added
public Note get(Long noteId) {
...
}
}
</pre>
If one wishes to apply the filter at a global level, <b>Global Binding</b> is used: annotating the filter or handler with <i>@GlobalBinding</i> makes it applicable to all resources of an application.
<br />
<br />
<h3>
Asynchronous HTTP</h3>
<br />
<br />
This is where it's time for fun, at least for me :-). With <a href="http://today.java.net/pub/a/today/2008/10/14/introduction-to-servlet-3.html">Servlet 3.0</a>, one can utilize NIO features and thus free service threads from blocking IO tasks. Jersey supports suspending the incoming connection and resuming it at a later time. On the client side, the ability to dispatch requests asynchronously is a convenience.
<br />
<br />
<b>Fictitious Requirements of the Notes Service</b>
<br />
<ul>
<li>Notes Client should be able to submit a large number of Notes for creation. The Service should not block the JVM Server thread while it waits to submit but instead should background the same. Client should be notified of a Task ID asynchronously when it becomes available.</li>
<li>Notes Client should be able to obtain the status of Note creation asynchronously. The Service should not block the JVM Server thread while waiting for task completion. Upon obtaining the task result, the client should be notified of the same.</li>
<li>Notes Client should be able to register a listener that is notified of Note Creation events from the server</li>
</ul>
<br />
<b>Posting a list of Notes for Creation</b>
<br />
<br />
A list of Notes is submitted to the resource, which creates them asynchronously. The call itself is asynchronous, with a Task ID being delivered to the <a href="http://jersey.java.net/nonav/apidocs/snapshot/jersey/javax/ws/rs/client/InvocationCallback.html">InvocationCallback </a>when available. One could accomplish the same via an ExecutorService, but this feature simplifies the operation.
<br />
<pre class="java" name="code"> @Override
public void submitAsync(List<Note> notes, InvocationCallback<Long> taskCallback) {
webServiceClient.target(UriBuilder.fromUri(baseUri).path("/notes/async").build())
.request(MediaType.TEXT_PLAIN).async()
.post(Entity.entity(new NotesContainer(notes), MediaType.APPLICATION_XML), taskCallback);
}
</pre>
On the server side, the AsyncNotesResource handles the POST as shown below:
<br />
<pre class="java" name="code"> @POST
public void createAsync(@Suspended final AsyncResponse response, final NotesContainer notes) {
QUEUE_EXECUTOR.submit(new Runnable() {
public void run() {
try {
// This task of submitting notes can take some time
Long taskId = notesService.asyncCreate(notes.getNotes());
LOG.info("Submitted Notes for Creation successfully, resuming client connection with TaskId:"
+ taskId);
// Resume the connection
response.resume(String.valueOf(taskId));
}
catch (InterruptedException e) {
LOG.error("Waiting to submit notes creation interrupted", e);
response.resume(e);
}
}
});
}
</pre>
One can also obtain a <i>java.util.concurrent.Future</i> using the API if desired.
<br />
<br />
<b>Obtaining Result of Task Processing Asynchronously</b>
<br />
<br />
The <i>InvocationCallback </i>of the POST request will be provided a Task ID. Using this Task ID, an asynchronous GET request can be dispatched to obtain the results of the task. As the task might not be completed at the time of the request, the client issues the request in an asynchronous manner and is provided the results of the task when available through the <i>InvocationCallback</i>.
<br />
<pre class="java" name="code"> public void getAsyncNoteCreationResult(Long taskId,
InvocationCallback<NotesResultContainer> resultCallback) {
webServiceClient.target(UriBuilder.fromUri(baseUri).path("/notes/async/" + taskId).build())
.request(MediaType.APPLICATION_XML).async().get(resultCallback);
}
</pre>
The Server resource shown below will not block the JVM server thread; it backgrounds the process of obtaining the task results, and the backgrounded process resumes the client connection with the task result when available. Also, a timeout is set on the request: if the server is not able to respond within the allotted period of time, a <i>503</i> response code is sent back with a "<i>Retry-After</i>" header set to 60 seconds.
<br />
<pre class="java" name="code">@GET
@Path("{taskId}")
public void getTaskResult(@Suspended final AsyncResponse ar, @PathParam("taskId") final Long taskId) {
LOG.info("Received Request to Obtain Task Result for Task:" + taskId);
ar.setTimeout(10000L, TimeUnit.MILLISECONDS);
ar.setTimeoutHandler(new TimeoutHandler() {
@Override
public void handleTimeout(AsyncResponse asyncResponse) {
// Respond with 503 with Retry-After header set to 60 seconds
// when the service could become available.
asyncResponse.cancel(60);
}
});
QUEUE_EXECUTOR.submit(new Runnable() {
@Override public void run() {
try {
// This call will block until results are available.
List<Long> noteIds = notesService.getAsyncNoteCreationResult(taskId);
NotesResultContainer resultContainer = new NotesResultContainer();
....
LOG.info("Received Note Creation Result, resuming client connection");
ar.resume(resultContainer);
}
catch (InterruptedException ex) {
LOG.info("Interrupted Cancelling Request", ex);
ar.cancel();
}
}
});
}
</pre>
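On the client side, honoring the <i>503</i>/<i>Retry-After</i> contract described above might look roughly like the following. This is a hypothetical, stdlib-only sketch; the Response type and all names are illustrative and not part of the Jersey API:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch of a polling client that retries after the server-advertised interval.
class RetryAfterSketch {

    // Minimal stand-in for an HTTP response: status code, Retry-After header, body.
    static class Response {
        final int status;
        final long retryAfterSecs;
        final String body;
        Response(int status, long retryAfterSecs, String body) {
            this.status = status;
            this.retryAfterSecs = retryAfterSecs;
            this.body = body;
        }
    }

    // Polls until a 200 arrives, sleeping for the Retry-After period on each 503.
    static String pollForResult(Supplier<Response> call, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            Response r = call.get();
            if (r.status == 200) {
                return r.body;
            }
            if (r.status == 503) {
                try {
                    TimeUnit.SECONDS.sleep(r.retryAfterSecs);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw new IllegalStateException("task result not available after " + maxAttempts + " attempts");
    }
}
```

The point of the sketch is simply that the client treats a 503 as "the task is still running; ask again later", rather than as a hard failure.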
<b>Server Sent Events</b>
<br />
<br />
A feature of Jersey, though I believe not really a feature of JAX-RS, is the ability to listen to Server-Sent Events. The same is accomplished by the client obtaining an instance of <i>org.glassfish.jersey.media.sse.EventChannel</i> from the Service, which then receives events published by the server. The channel is kept alive until disconnected.
The Client code to obtain the EventProcessor is shown below:
<br />
<pre class="java" name="code"> WebTarget target = webServiceClient.target(UriBuilder.fromUri(baseUri)
.path("/notes/async/serverSent").build());
// Register a reader for the Event Processor
target.configuration().register(EventProcessorReader.class);
// Obtain an event Processor from the Service
EventProcessor processor = target.request().get(EventProcessor.class);
processor.process(new EventListener() {
public void onEvent(InboundEvent inboundEvent) {
Note inboundNote = inboundEvent.getData(Note.class);
for (NoteListener listener : serverEventListeners) {
listener.notify(inboundNote);
}
}
});
</pre>
On the service end, a thread is started that will dispatch Note creation events to the channel.
<br />
<pre class="java" name="code">@Produces(EventChannel.SERVER_SENT_EVENTS)
@Path("serverSent")
@GET
public EventChannel serverEvents() {
  final EventChannel channel = new EventChannel();
  new EventChannelThread(channel, notesService).start();
  return channel;
}

public class EventChannelThread extends Thread {
  ...
  public void run() {
    // While loop jargon omitted for lack of real estate
    channel.write(new OutboundEvent.Builder().name("domain-progress")
      .mediaType(MediaType.APPLICATION_XML_TYPE).data(Note.class, noteToDispatch).build());
  }
}
</pre>
<br />
<h3>
Example Project</h3>
<br />
<br />
<a href="http://home.comcast.net/~acharya.s/java/jersey-jax-rs.zip">An example project can be downloaded from HERE.</a>
<br />
<br />
As mentioned previously, the example uses the concept of CRUD as related to a Note. A multi-module maven project is provided herewith that demonstrates the Notes application as written with Jersey 1.X and Jersey 2.X. The Jersey 2.X version has some additional features such as ASYNC, Name Binding and Filter that are demonstrated. In addition, there is a compatibility-test project that evaluates the compatibility concerns that I had mentioned at the beginning of my BLOG. Please note that 2.0-SNAPSHOT of Jersey is being used. A brief description of the maven modules you will find in the example:
<br />
<ul>
<li><b>Notes-common</b>: Contains Data transfer objects and common code used by the Jersey 1.X and Jersey 2.X applications</li>
<li><b>Notes-service</b>: Contains Java classes that are functional services for managing the Notes. </li>
<li><b>Notes-jersey-1.X</b>: Contains Client, Webapp and Integration test modules that demonstrate Jersey 1.X.</li>
<li><b>Notes-jersey-2.X</b>: Contains Client, Webapp and Integration test modules that demonstrate Jersey 2.X features</li>
</ul>
In order to exercise the compatibility tests, two test webapps are created, one using Jersey 1.X and the second using Jersey 2.0. The Notes-clients of Jersey 1.X and Jersey 2.X are exercised within both mentioned test webapps to check for compatibility problems.
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggF8FxEKuQYmQvWfNIpv0HVnkxm_CGbqM9K7DkBKbB3p8KtLKhP73jASbC2J_QmABghp9XRU4GjhuqugpvJSfjt93WAtvcC1wPylKRNegjCQ_C-UvQzdPa8Gpe6T6mo-k_py5umLALy_Q1/s1600/compatiblity-integration.PNG" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggF8FxEKuQYmQvWfNIpv0HVnkxm_CGbqM9K7DkBKbB3p8KtLKhP73jASbC2J_QmABghp9XRU4GjhuqugpvJSfjt93WAtvcC1wPylKRNegjCQ_C-UvQzdPa8Gpe6T6mo-k_py5umLALy_Q1/s640/compatiblity-integration.PNG" height="179" width="640" /></a>
<br />
The compatibility integration test invokes a resource "/<i>compatibility</i>" on a test web app that is developed using Jersey. When the resource is invoked, both the Notes Jersey Clients are exercised to perform CRUD operations using RESTful calls to a Notes Web service Webapp. There are two versions of the test-jersey-webapp, one written in Jersey 1.X and the second in Jersey 2.0.
<br />
<br />
You can run the full suite of Integration tests merely by executing a <i>"mvn install"</i> from the root level of the project. If you wish to run individual tests, you can also drill down into the Notes-jersey-1.X and Notes-jersey-2.0 modules and execute the same command.
<br />
<br />
<h3>
Compatibility Test Results</h3>
<br />
<br />
So can Jersey 2.0 and Jersey 1.X co-exist in the same class loader system? The answer seems to be "<b>Yes</b>", but with some effort. jersey-client-1.X.jar depends on jersey-core-1.X.jar, and the latter bundles the JAX-RS 1.X (javax.ws.rs.*) packages within it. jersey-client-2.X depends on the javax.ws.rs.* packages that are part of JAX-RS 2.0 and therefore requires that the JAX-RS 2.0 API is available.
<br />
<br />
If an application that is running Jersey 1.X wishes to utilize both a Jersey 1.X and a Jersey 2.X client to call other web services, it is best to <b>upgrade the service</b> in question. The reasoning is best established by the following use case:<br />
<br />
Consider the <a href="http://jsr311.java.net/nonav/javadoc/javax/ws/rs/core/UriBuilder.html">javax.ws.rs.core.UriBuilder</a> from jersey-core-1.X.jar:
<br />
<br />
<pre class="java" name="code">public abstract class UriBuilder {
public abstract UriBuilder uri(URI uri) throws IllegalArgumentException;
}
</pre>
<br />
An implementation of the same by Jersey is <a href="http://jersey.java.net/nonav/apidocs/1.14/jersey/com/sun/jersey/api/uri/UriBuilderImpl.html">com.sun.jersey.api.uri.UriBuilderImpl</a>, which is included as part of jersey-core-1.X.jar.
<br />
<br />
Now the JAX-RS 2.0 version of <a href="http://jersey.java.net/nonav/apidocs/snapshot/jersey/javax/ws/rs/core/UriBuilder.html">UriBuilder </a>has additional abstract methods one of which is shown below:
<br />
<br />
<pre class="java" name="code">public abstract class UriBuilder {
public abstract UriBuilder uri(URI uri) throws IllegalArgumentException;
// New Method Added as part of JAX-RS 2.0
public abstract UriBuilder uri(String uri) throws IllegalArgumentException;
}
</pre>
An implementation of the same is <i>org.glassfish.jersey.uri.internal.JerseyUriBuilder</i>.
<br />
<br />
When a UriBuilder is loaded, it is important that an implementation that defines all the abstract methods of UriBuilder is loaded. If not, one would see an exception like:
<br />
<pre class="java" name="code">java.lang.AbstractMethodError: javax.ws.rs.core.UriBuilder.uri(Ljava/lang/String;)Ljavax/ws/rs/core/UriBuilder;
</pre>
The same problem could exist for other JAX-RS 2.0 classes for which Jersey 1.X does not have concrete implementations. As the JAX-RS 2.0 API has to exist in the classpath, one needs to ensure that JAX-RS 1.X API classes are not included as well.
<br />
<br />
The jersey-core-1.X.jar includes the javax.ws.rs.* packages from the JAX-RS 1.0 API. Re-bundling the jersey-core-1.X jar without these packages, and including the Jersey 2.0 and JAX-RS 2.0 API classes, ensures that the complete, latest API classes and implementations are always used.
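One way to produce such a re-bundled jar is the maven-shade-plugin with a filter that drops the bundled API packages. The sketch below is illustrative only; the artifact coordinates and exclusion pattern are my assumptions, not the build used in the example project:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <filters>
          <!-- Drop the JAX-RS 1.0 API packages bundled inside jersey-core -->
          <filter>
            <artifact>com.sun.jersey:jersey-core</artifact>
            <excludes>
              <exclude>javax/ws/rs/**</exclude>
            </excludes>
          </filter>
        </filters>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The JAX-RS 2.0 API jar is then added as an ordinary dependency so that only the 2.0 javax.ws.rs.* classes remain on the classpath.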
<br />
<br />
In addition, the implementation of <a href="http://jersey.java.net/nonav/apidocs/1.8/jersey/javax/ws/rs/ext/RuntimeDelegate.html">javax.ws.rs.ext.RuntimeDelegate</a> is responsible for instantiating a version of UriBuilder. If the Jersey 1.X version of RuntimeDelegate is chosen at runtime, then it will attempt to load <i>com.sun.jersey.api.uri.UriBuilderImpl</i>, which does not have the abstract methods defined as part of JAX-RS 2.0's UriBuilder. For this reason, the <i>org.glassfish.jersey.server.internal.RuntimeDelegateImpl</i> must be chosen so that the JAX-RS 2.0 implementation of UriBuilder, i.e., <i>org.glassfish.jersey.uri.internal.JerseyUriBuilder</i>, is always created. The same is accomplished by defining in META-INF/services a file named javax.ws.rs.ext.RuntimeDelegate that contains a single line, <i>org.glassfish.jersey.server.internal.RuntimeDelegateImpl</i>.
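Concretely, the provider-configuration file follows the standard java.util.ServiceLoader layout; the path below assumes a maven webapp's resources directory:

```
# src/main/resources/META-INF/services/javax.ws.rs.ext.RuntimeDelegate
org.glassfish.jersey.server.internal.RuntimeDelegateImpl
```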
<br />
<br />
The compatibility-test web applications exercise the above mentioned route in addition to including a custom version of jersey-core-1.X.jar, which is nothing more than a version stripped of the javax.ws.rs.* packages.
<br />
<br />
With the above changes, I found that my compatibility tests passed. Of course I am working under the assumption that the JAX-RS 2.0 API classes did not change method signatures in an incompatible way :-).
<br />
<br />
<h3>
Conclusion</h3>
<br />
<br />
Overall I feel that the JAX-RS 2.0 implementation, with the unification of the Client API, is definitely a move in the right direction. Jersey 2.0-SNAPSHOT also seemed to work well from an API and implementation perspective. It is heartening to know that co-existence can be achieved between Jersey 1.X and Jersey 2.0. The Jersey folks have also stated that they will be providing a migration path for Jersey 1.X adopters, which is really professional of them. <a href="http://jersey.java.net/nonav/documentation/2.0-m08/migration.html">Some work has already been done to that effect on their site</a>.
<br />
<br />
On the Async features of JAX-RS 2.0, I see potential value on the server side with NIO. On the client side, I find the API to be a convenience. It is arguable that in a clustered stateless environment client side ASYNC might provide limited value.
<br />
<br />
I have not covered all the JAX-RS 2.0 features, only those that interested me. There are many changes and I might investigate the remaining ones another day. Finally, if I have gotten some areas wrong or my understanding is flawed, I look forward to your corrections.<br />
<br />
<b>UPDATE</b> - An example of the released<a href="http://sleeplessinslc.blogspot.com/2014/04/jerseyjaxrs-2x-example-using-spring.html"> jersey 2/jax-rs 2.0 that uses Spring DI </a>is now available</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com5tag:blogger.com,1999:blog-4667121987470696359.post-74803523338794524342012-07-08T19:08:00.000-06:002012-07-08T19:08:52.111-06:00Spring MVC 3.1 Presentation and Tutorial<div dir="ltr" style="text-align: left;" trbidi="on">
A few months ago, I had a chance to present on <a href="http://static.springsource.org/spring/docs/current/spring-framework-reference/html/mvc.html">Spring MVC</a>. It was a simple 45 minute presentation but I had a blast doing it. A friend of mine had asked whether I could share the same and this BLOG post is exactly that.
<br />
<br />
My audience for the presentation had no experience with the framework and therefore my presentation was geared toward introducing the same to them. A Spring MVC presentation without supporting code is like having a cake without the icing. So included herewith is also the code I developed for the same.
<br />
<br />
The model object of interest for the demo code is an <i>Application</i> in some company's domain. It could be any application such as a web app, web service or whatever. The Application has a single property, its name. The final example demonstrates connectivity between applications using<a href="http://arborjs.org/"> arbor.js</a> to display the relationship.<br />
<br />
The examples are provided in the form of a multi-module maven project and are meant to gradually introduce a reader to Spring MVC.<br />
<h4 style="text-align: left;">
1. springMvcBasic
</h4>
This is the basic project and provides the simplest example of a Controller that renders a list of applications.<br />
<h4 style="text-align: left;">
2. springMvcFormValidation
</h4>
The springMvcFormValidation module demonstrates the use of JSR 303 validations of Forms and form submission.<br />
<h4 style="text-align: left;">
3. springMvcRequestResponses
</h4>
This project demonstrates the use of the REST support of Spring MVC by sending and receiving XML and JSON.<br />
<h4 style="text-align: left;">
4. springMvcDecorated
</h4>
The springMvcDecorated module is a conglomeration of the above examples and also demonstrates the following:
<br />
<br />
<b>a. </b>Internationalization
<br />
<b>b. </b>Spring Theme support using <a href="http://twitter.github.com/bootstrap/">Twitter Bootstrap</a> for the styling.
<br />
<b>c.</b> arbor.js for demonstrating application interaction<br />
<b>d. </b><a href="http://www.displaytag.org/1.2/">displayTag</a> and <a href="http://datatables.net/">dataTables.js</a> for table support
<br />
<b>e.</b> sitemesh for decorating pages
<br />
<br />
Enter each project and execute a <i>mvn jetty:run </i>to start the application. You can access each application on its corresponding port.<br />
<br />
The <a href="http://home.comcast.net/~acharya.s/java/springMvcPresentation.pdf">presentation is attached herewith</a>. The <a href="http://home.comcast.net/~acharya.s/java/springMvcDemo.zip">code samples can be downloaded from here</a>.<br />
<br />
Follow the presentation along with the provided examples to see it in action. I am grateful to the multitude of community presentations and blogs that helped me with my presentation. </div>Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com2tag:blogger.com,1999:blog-4667121987470696359.post-82714454715855031212012-06-29T20:59:00.000-06:002012-06-29T20:59:08.788-06:00Spring Data MongoDB Example<div dir="ltr" style="text-align: left;" trbidi="on">
Some time ago I had blogged about using <a href="http://sleeplessinslc.blogspot.com/2010/10/mongodb-with-morphia-example.html">Morphia with Mongo DB</a>. Since then I have come across the <a href="http://www.springsource.org/spring-data/">Spring Data</a> project and wanted to take their API for <a href="http://www.mongodb.org/">Mongo</a> for a ride. So this BLOG duplicates the functionality of the Morphia one, with the difference that it uses Spring Data and demonstrates Mongo Map-Reduce as well.
As with most of my recent BLOGs that use Spring, I am going to use a pure JavaConfig approach for the example.<br />
<br />
<h3 style="text-align: left;">
1. Setting up Spring Mongo </h3>
<br />
The Spring API provides an abstract Spring Java Config class, <a href="http://static.springsource.org/spring-data/data-document/docs/1.0.0.M2/api/org/springframework/data/document/mongodb/config/AbstractMongoConfiguration.html">org.springframework.data.mongodb.config.AbstractMongoConfiguration</a>. This class requires two methods to be implemented: <i>getDatabaseName()</i> and <i>mongo()</i>, which returns a Mongo instance. The class also has a method to create a <a href="http://static.springsource.org/spring-data/data-mongo/docs/current/api/org/springframework/data/mongodb/core/MongoTemplate.html">MongoTemplate</a>. Extending the mentioned class, the following is a Mongo Config:<br />
<pre class="java" name="code">@Configuration
@PropertySource("classpath:/mongo.properties")
public class MongoConfig extends AbstractMongoConfiguration {
private Environment env;
@Autowired
public void setEnvironment(Environment environment) {
this.env = environment;
}
@Bean
public Mongo mongo() throws UnknownHostException, MongoException {
// create a new Mongo instance
return new Mongo(env.getProperty("host"));
}
@Override
public String getDatabaseName() {
return env.getProperty("databaseName");
}
}
</pre>
<h3 style="text-align: left;">
2. Model Objects and Annotations </h3>
<br />
<span style="background-color: white;">As per my former example, we have four primary objects that comprise our domain. A <i>Product </i>in the system such as an XBOX, WII, PS3 etc. A <i>Customer </i>who purchases items by creating an <i>Order</i>. An Order has references to <i>LineItem</i>(s) which in turn have a quantity and a reference to a <i>Product </i>for that line.</span><br />
<h3 style="text-align: left;">
</h3>
<h4 style="text-align: left;">
2.1<span style="background-color: white;"> The Order model object looks like the following:</span></h4>
<pre class="java" name="code">// @Document to indicate the orders collection
@Document(collection = "orders")
public class Order {
// Identifier
@Id
private ObjectId id;
// DB Reference to a Customer. This is a Link to a Customer from the Customer collection
@DBRef
private Customer customer;
// Line items are part of the Order and do not exist independently of the order
private List<LineItem> lines;
...
}
</pre>
<span style="background-color: white;">The identifier of a POJO can be ObjectId, String or BigInteger. Note that Orders is its own rightful mongo collection however, as LineItems do not exist without the context of an order, they are embedded. A Customer however might be associated with multiple orders and thus the<a href="http://static.springsource.org/spring-data/data-document/docs/1.0.0.M4/api/org/springframework/data/mongodb/core/mapping/DBRef.html"> @DBRef</a> annotation is used to link to a Customer.</span><br />
<br/>
<h3 style="text-align: left;">
<span style="background-color: white;">
3. Implementing the DAO pattern </span></h3>
<br />
<span style="background-color: white;">One can use the Mongo Template directly or extend or compose a DAO class that provides standard CRUD operations. I have chosen the extension route for this example.
The Spring Mongo API provides an interface <a href="http://static.springsource.org/spring-data/data-commons/docs/1.1.0.RELEASE/api/org/springframework/data/repository/CrudRepository.html">org.springframework.data.repository.CrudRepository</a> that defines methods as indicated
by the name for CRUD operations. An extention to this interface is the <a href="http://static.springsource.org/spring-data/data-commons/docs/1.1.0.RELEASE/api/org/springframework/data/repository/PagingAndSortingRepository.html">org.springframework.data.repository.PagingAndSortingRepository</a> which provides methods for paginated access to the data. One implementation of these interfaces is the<a href="http://static.springsource.org/spring-data/data-document/docs/1.0.0.M4/api/org/springframework/data/mongodb/repository/SimpleMongoRepository.html"> SimpleMongoRepository</a> which the DAO implementations in this example extend: </span><br />
<pre class="java" name="code" style="text-align: left;">// OrderDao interface exposing only certain operations via the API
public interface OrderDao {
Order save(Order order);
Order find(ObjectId orderId);
List<Order> findOrdersByCustomer(Customer customer);
List<Order> findOrdersWithProduct(Product product);
}
public class OrderDaoImpl extends SimpleMongoRepository<Order, ObjectId> implements OrderDao {
public OrderDaoImpl(MongoRepositoryFactory factory, MongoTemplate template) {
super(new MongoRepositoryFactory(template).<Order, ObjectId>getEntityInformation(Order.class), template);
}
@Override
public List<Order> findOrdersByCustomer(Customer customer) {
// Create a Query and execute the same
Query query = Query.query(Criteria.where("customer").is(customer));
// Note the equivalent of Hibernate where one would do getHibernateTemplate()...
return getMongoOperations().find(query, Order.class);
}
@Override
public List<Order> findOrdersWithProduct(Product product) {
// Where the lines matches the provided product
Query query = Query.query(Criteria.where("lines.product.$id").is(product));
return getMongoOperations().find(query, Order.class);
}
}
</pre>
One of the quirks I found is that I was not able to use Criteria.where("lines.product").is(product) but had to instead resort to using the $id. I believe this is a BUG and will be fixed.
Another peculiarity I found, between Spring Data Mongo 1.0.2.RELEASE and the 1.1.0.M1 milestone, was in the <i>save()</i> method of SimpleMongoRepository:
<br />
<pre class="java" name="code">//1.0.2.RELEASE
public <T> T save(T entity) {
}
// 1.1.0.M1
public <S extends T> S save(S entity) {
}
</pre>
Although the above will not cause a Runtime error upon upgrading due to erasure, it will force a user to have to override the <i>save()</i> or similar methods during compile time. If upgrading from 1.0.2.RELEASE to 1.1.0.M1, you will have to add the following to the OrderDaoImpl in order for it to compile:<br />
<pre class="java" name="code">@Override
@SuppressWarnings("unchecked")
public Order save(Order order) {
return super.save(order);
}
</pre>
<br />
<h3 style="text-align: left;">
4. Configuration for the DAO's</h3>
<br />
A Java Config is set up that wires up the DAO's
<br />
<pre class="java" name="code" style="text-align: left;">@Configuration
@Import(MongoConfig.class)
public class DaoConfig {
@Autowired
private MongoConfig mongoConfig;
@Bean
public MongoRepositoryFactory getMongoRepositoryFactory() {
try {
return new MongoRepositoryFactory(mongoConfig.mongoTemplate());
}
catch (Exception e) {
throw new RuntimeException("error creating mongo repository factory", e);
}
}
@Bean
public OrderDao getOrderDao() {
try {
return new OrderDaoImpl(getMongoRepositoryFactory(), mongoConfig.mongoTemplate());
}
catch (Exception e) {
throw new RuntimeException("error creating OrderDao", e);
}
}
...
}
</pre>
<h3 style="text-align: left;">
5. Life Cycle Event Listening </h3>
<br />
The Order object has two properties, <i>createDate</i> and <i>lastUpdate</i> date, which are updated prior to persisting the object.
To listen for life cycle events, an implementation of <a href="http://static.springsource.org/spring-data/data-document/docs/1.0.0.M4/api/org/springframework/data/mongodb/core/mapping/event/AbstractMongoEventListener.html">org.springframework.data.mongodb.core.mapping.event.AbstractMongoEventListener</a> can be provided that defines methods for life cycle listening. In the example provided, we override the <i>onBeforeConvert()</i> method to set the creation and last update date properties.
<br />
<pre class="java" name="code" style="text-align: left;">public class OrderSaveListener extends AbstractMongoEventListener<Order> {
/**
* This method is responsible for any code before updating to the database object.
*/
@Override
public void onBeforeConvert(Order order) {
order.setCreationDate(order.getCreationDate() == null ? new Date() : order.getCreationDate());
order.setLastUpdateDate(order.getLastUpdateDate() == null ? order.getCreationDate() : new Date());
}
}
</pre>
<h3 style="text-align: left;">
6. Indexing </h3>
<br />
The Spring Data API for Mongo has support for indexing, including ensuring the presence of indices.
An index can be created using the MongoTemplate via:
<br />
<pre class="java" name="code">mongoTemplate.ensureIndex(new Index().on("lastName",Order.ASCENDING), Customer.class);</pre>
<h3 style="text-align: left;">
7. JPA Cross Domain or Polyglot Persistence</h3>
<br />
If you wish to re-use your JPA objects to persist to Mongo, then take a look at the following article for further information about the same.<br />
<a href="http://www.blogger.com/%C2%A0http://www.littlelostmanuals.com/2011/10/example-cross-store-with-mongodb-and.html"> http://www.littlelostmanuals.com/2011/10/example-cross-store-with-mongodb-and.html</a><br />
<br />
<h3 style="text-align: left;">
8. Map Reduce</h3>
<br />
<div>
The MongoTemplate supports common map reduce operations. I am leaning on the <a href="http://static.springsource.org/spring-data/data-document/docs/current/reference/html/#mongo.mapreduce.example">basic example from the Spring Data site</a> and enhancing it to work with the comments example I have used in all my M/R examples in the past. A collection is created for Comments and it contains data like:</div>
<pre name="code">
{ "_id" : ObjectId("4e5ff893c0277826074ec533"), "commenterId" : "jamesbond", "comment":"James Bond lives in a cave", "country" : "INDIA"] }
{ "_id" : ObjectId("4e5ff893c0277826074ec535"), "commenterId" : "nemesis", "comment":"Bond uses Walther PPK", "country" : "RUSSIA"] }
{ "_id" : ObjectId("4e2ff893c0277826074ec534"), "commenterId" : "ninja", "comment":"Roger Rabit wanted to be on Geico", "country" : "RUSSIA"] }
</pre>
The map reduce works off JavaScript files for the map and reduce functions. For the map function we have <i>mapComment.js</i>, which only maps certain words:
<br />
<pre name="code">function () {
var searchingFor = new Array("james", "2012", "cave", "walther", "bond");
var commentSplit = this.comment.split(" ");
for (var i = 0; i < commentSplit.length; i++) {
for (var j = 0; j < searchingFor.length; j++) {
if (commentSplit[i].toLowerCase() == searchingFor[j]) {
emit(commentSplit[i], 1);
}
}
}
}
</pre>
For the reduce operation, another javascript file <i>reduce.js</i>:
<br />
<pre name="code">function (key, values) {
var sum = 0;
for (var i = 0; i < values.length; i++) {
sum += values[i];
}
return sum;
}
</pre>
The <i>mapComment.js</i> and the <i>reduce.js</i> are made available in the classpath and the M/R operation is invoked as shown below:
<br />
<pre class="java" name="code">public List<ValueObject> mapReduce() {
MapReduceResults<ValueObject> results = getMongoOperations().mapReduce("comments", "classpath:mapComment.js" , "classpath:reduce.js", ValueObject.class);
return Lists.<ValueObject>newArrayList(results);
}
</pre>
Upon executing the map reduce, one would see results like:
<br />
<pre name="code">ValueObject [id=2012, value=119.0]
ValueObject [id=Bond, value=258.0]
ValueObject [id=James, value=241.0]
ValueObject [id=Walther, value=134.0]
ValueObject [id=bond, value=117.0]
ValueObject [id=cave, value=381.0]
</pre>
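For intuition, the map and reduce functions above amount to a count of the tracked words, keyed by the token exactly as it appears in the comment (which is why both "Bond" and "bond" show up in the results). A plain-Java sketch of the same logic, independent of Mongo — the class and method names here are mine, not part of the example project:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

public class CommentWordCount {

    // The words the map function searches for (compared lower-cased)
    private static final Set<String> TRACKED =
            Set.of("james", "2012", "cave", "walther", "bond");

    // "map" phase: emit (token, 1) for each tracked token;
    // "reduce" phase: sum the 1s per key -- here folded into merge()
    public static Map<String, Integer> count(List<String> comments) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String comment : comments) {
            for (String token : comment.split(" ")) {
                if (TRACKED.contains(token.toLowerCase())) {
                    counts.merge(token, 1, Integer::sum);
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> comments = List.of(
                "James Bond lives in a cave",
                "Bond uses Walther PPK");
        System.out.println(count(comments)); // {Bond=2, James=1, Walther=1, cave=1}
    }
}
```

The real M/R of course runs inside mongod across the whole collection; the sketch is only meant to show what the two JavaScript functions compute.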
<div>
<span style="background-color: white;"><br /></span>
<br />
<h3 style="text-align: left;">
<span style="background-color: white;">Conclusion</span></h3>
</div>
<div>
As always, the Spring folks keep impressing me with their API. Even with the change to their API, they preserved binary backward compatibility, thus making an upgrade easy. The MongoTemplate supports common M/R operations, sweet! I have not customized the M/R code to my liking but it's only a demo after all.</div>
<div>
<span style="background-color: white;">I quite liked the API, it is intuitive and easy to learn. I clearly have not explored all the options but then I am not really using Mongo at work to do the same ;-)</span></div>
<div>
</div>
<h3 style="text-align: left;">
Example</h3>
<br />
<div>
<a href="http://home.comcast.net/~acharya.s/java/mongoSpringExample.zip">Download the example from here</a>. It is a maven project that you can either import into Eclipse or simply run <i>mvn test</i> from the command line to see the simple unit tests in action. The tests themselves make use of an embedded mongo instance courtesy of <a href="https://github.com/michaelmosmann/embedmongo.flapdoodle.de">https://github.com/michaelmosmann/embedmongo.flapdoodle.de</a>.<br />
Enjoy!</div>
</div>Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com1tag:blogger.com,1999:blog-4667121987470696359.post-56047425006183463102012-02-19T19:39:00.001-07:002015-12-20T08:37:11.906-07:00Jersey JAX-RS MVC Killed the Spring MVC Star<div dir="ltr" style="text-align: left;" trbidi="on">
<a href="http://en.wikipedia.org/wiki/Java_API_for_RESTful_Web_Services">JAX-RS</a> is arguably the de-facto standard for creating RESTful web services in Java. Most JAX-RS providers have a way to implement a de-coupled MVC paradigm; much like <a href="http://static.springsource.org/spring/docs/current/spring-framework-reference/html/mvc.html">Spring MVC</a>. This BLOG will demonstrate how <a href="http://jersey.java.net/">Jersey</a> JAX-RS can be used as an MVC web framework and will spend some time discussing the difference between Jersey MVC and Spring MVC. As part of the investigation, <a href="http://sleeplessinslc.blogspot.com/2012/01/spring-31-mvc-example.html">the Flight Application that I have previously demonstrated using Spring MVC </a>will be translated to use Jersey MVC.<br />
<br />
Some of the additional goals of this BLOG are:<br />
<b>1.</b> Integrate with <a href="http://static.springsource.org/spring-security/site/">Spring Security</a> for authentication/authorization<br />
<b>2.</b> Use Spring for the Dependency injection<br />
<b>3.</b> Use <a href="http://twitter.github.com/bootstrap/">Twitter Bootstrap</a> for the view part<br />
<br />
<b>Front Controller:</b><br />
Both Spring MVC and Jersey have an implementation of the Front Controller design pattern with the <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/org/springframework/web/servlet/DispatcherServlet.html">DispatcherServlet</a> from Spring MVC and the <a href="http://jersey.java.net/nonav/apidocs/latest/jersey/">ServletContainer</a> from Jersey. The former dispatches to <i>Controller</i> classes while the latter dispatches to <i>Resource</i> classes.<br />
<br />
<b>Controller/Resource:</b><br />
In a typical Spring MVC Controller we have a <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/org/springframework/web/portlet/ModelAndView.html">ModelAndView </a>object returned with data for the view tier to digest:
<br />
<pre class="java" name="code">@Controller
@Request("/airports.html")
public class AirportsController {
@RequestMapping(method = RequestMethod.GET)
public ModelAndView getAirports() {
List<Airport> airports = airportService.getAirports();
ModelAndView mav = new ModelAndView("/airports");
mav.addObject("airports", airports);
return mav;
}
}
</pre>
Jersey provides a class equivalent to <i>ModelAndView </i>called <a href="http://jersey.java.net/nonav/apidocs/latest/jersey/com/sun/jersey/api/view/Viewable.html">Viewable </a>that defines the <i>view </i>to be used and also houses the model objects:
<br />
<pre class="java" name="code">@Path("/airports.html")
public class AirportsResource {
@GET
public Viewable getAirports() {
List<Airport> airports = airportService.getAirports();
Map<String, List<Airports>> modelMap = Maps.newHashMap();
modelMap.put("airports", airports);
return new Viewable("/airports", modelMap);
}
}
</pre>
<b>View Processing:</b> <br />
Spring MVC provides <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/org/springframework/web/servlet/ViewResolver.html">View Resolvers</a> to resolve the template name to a template reference.<br />
In the Jersey MVC land, implementations of the <a href="http://jersey.java.net/nonav/apidocs/latest/jersey/com/sun/jersey/spi/template/ViewProcessor.html">ViewProcessor</a> interface are used to resolve a template name to a template reference. In addition, they are also responsible for processing the template, the results of which are written to the output stream. One such processor available in the Jersey code base is for JSPs, i.e., the <i>JSPTemplateProcessor</i>. The processor requires the location of the JSP files, which is made available via the <i>JSPTemplatesBasePath</i> init parameter. The <i>Viewable</i> interface lends itself to providing alternate View Processors for other templating languages like FreeMarker and Velocity, but the same would need to be built:
<br />
<pre class="java" name="code"><filter>
<filter-name>jersey</filter-name>
<filter-class>com.sun.jersey.spi.spring.container.servlet.SpringServlet</filter-class>
<init-param>
<param-name>com.sun.jersey.config.property.JSPTemplatesBasePath</param-name>
<param-value>/WEB-INF/pages</param-value>
</init-param>
</filter>
</pre>
<b>View:</b><br />
In the JSP, all the model objects provided by the Resource tier are easily accessed by using a prefix of "it". For example<br />
<i>${it.airports}</i> as shown below:
<br />
<pre class="java" name="code"><display:table name="${it.airports}" class="table table-condensed"
requestURI="" id="a" export="true" pagesize="10">
<display:column title="Code" property="code" sortable="true"/>
<display:column title="Name" property="name" sortable="true" />
<display:column title="City" property="city" sortable="true"/>
</display:table>
</pre>
With the above, one has the exact same functionality between Spring MVC and Jersey. It is important to note, however, that with Spring MVC there are quite a few options for the return type by which the view is resolved. Spring MVC follows a convention based approach for its Controllers, and the return type can be any of:<br />
<ul style="text-align: left;">
<li>ModelAndView object</li>
<li>Model object, for example, a List of Airports</li>
<li>Map of Model objects</li>
<li>A View object</li>
<li>A String that represents a View name</li>
</ul>
I will not get into the details of how the above are achieved by Spring MVC as it is beyond the scope of this BLOG. It is sufficient to accept that Spring MVC does support these alternate return types and follows a convention based approach to view resolution.<br />
<br />
<b>Accessing Request Data:</b><br />
Web applications need to have access to request information like HTTP Headers, Cookies, Query Parameters and Form data (discussed later). Both Spring MVC and Jersey JAX-RS have annotations that help extract the same.<br />
<pre class="java" name="code">@RequestMapping(method=GET)
public void foo(@RequestParam("q") String q, @CookieValue("c") String c,
@RequestHeader("h") String h) {
// Spring MVC
}
@GET
@Path
public void foo(@QueryParam("q") String q, @CookieParam("c") String c,
@HeaderParam("h") String h) {
// JAX-RS
}
</pre>
<br />
<b>Form Data Management:</b><br />
Handling Form data and validating it is one area against which every MVC framework has to be tested. <br />
<br />
From a binding perspective, when a form is submitted one would like the form details to map to a POJO. With Spring MVC, a form POJO is automatically populated from the details of the HTML form; the contents of the form can additionally be validated, with the results made available via a BindingResult object as shown below:<br />
<pre class="java" name="code">public class FlightController {
@RequestMapping(method=POST)
public ModelAndView searchFlights(FlightSearchCriteria criteria, BindingResult result) {
...
if (result.hasErrors()) { ... }
..
}
}
</pre>
In the JAX-RS domain, there are a few ways to access Form parameters:<br />
<br />
<b>1. </b>Using the <a href="http://jersey.java.net/nonav/apidocs/latest/jersey/com/sun/jersey/api/representation/Form.html">Form</a><i> </i>object or a <i>MultiValuedMap </i>and explicitly extracting the form data by name:
<br />
<pre class="java" name="code"> public Viewable searchFlights(Form form) {
String fromAirport = form.getFirst("from");
String toAirport = form.getFirst("to");
...
}
</pre>
<b>2.</b> Using <a href="http://jersey.java.net/nonav/apidocs/latest/jersey/javax/ws/rs/FormParam.html">@FormParam</a> annotation for auto injection of Form parameters:
<br />
<pre class="java" name="code">public Viewable searchFlights(@FormParam("from") String fromAirport, @FormParam("to") String toAirport) {
...
}
</pre>
<b>3.</b> Using <a href="http://jersey.java.net/nonav/apidocs/latest/jersey/com/sun/jersey/api/core/InjectParam.html">@InjectParam</a> with a Java POJO whose attributes are annotated with <i>@FormParam</i>:<br />
<br />
Using <i>@FormParam</i> works well when there are a few Form parameters, but becomes unwieldy when there are many, as it translates to a very long method signature. In such cases a single object can house the individual parameters as shown below, with the object being injected automatically into the resource method thanks to Jersey's injection support. Read more about the same on <a href="http://blog.valotas.com/2011/01/resteasy-form-annotation-for-jersey.html">Valotas' BLOG</a>:
<br />
<pre class="java" name="code">public class FlightSearchCriteria {
@FormParam("from")
String fromAirport;
@FormParam("to")
String toAirport;
...
}
public class FlightResource {
public Viewable searchFlights(@InjectParam FlightSearchCriteria criteria) {
....
}
}
</pre>
With the above, Jersey MVC is able to map a form to a POJO, but it does involve adding a @FormParam annotation for each form parameter; with the Spring MVC binding one does not have to do the same. In addition, the above Jersey MVC examples are missing validation of the Form. To achieve the same, one might be able to create a <a href="http://codahale.com/what-makes-jersey-interesting-injection-providers/">custom Injection Provider</a> to validate and provide the validation results as part of the method signature. I did not find a ready way to do the automatic validation and thus leave that effort for a future BLOG post. That said, validation can be performed programmatically easily enough, using JSR 303 validation or a custom validator, within the Jersey MVC resource method:
<br />
<pre class="java" name="code">public Viewable searchFlights(@InjectParam FlightSearchCriteria criteria) {
...
Set<ConstraintViolation<FlightSearchCriteria>> violations = validator.validate(criteria);
if (violations.size() > 0) {
modelMap.put("errors", violations);
return ...;
}
...
}
</pre>
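Absent a Bean Validation provider on the classpath, the gist of the check can be sketched in plain Java. The hand-rolled <i>validate</i> below is merely a stand-in for JSR 303's <i>validator.validate(criteria)</i>, and the simplified <i>FlightSearchCriteria</i> mirrors the form object shown earlier; the violation messages are illustrative:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class FlightSearchValidation {
    // Simplified stand-in for the FlightSearchCriteria form object above
    static class FlightSearchCriteria {
        final String fromAirport;
        final String toAirport;
        FlightSearchCriteria(String fromAirport, String toAirport) {
            this.fromAirport = fromAirport;
            this.toAirport = toAirport;
        }
    }

    // Hand-rolled stand-in for JSR 303's validator.validate(criteria):
    // returns the set of violation messages, empty when the form is valid
    static Set<String> validate(FlightSearchCriteria criteria) {
        Set<String> violations = new LinkedHashSet<>();
        if (criteria.fromAirport == null || criteria.fromAirport.isEmpty()) {
            violations.add("from may not be empty");
        }
        if (criteria.toAirport == null || criteria.toAirport.isEmpty()) {
            violations.add("to may not be empty");
        }
        return violations;
    }

    public static void main(String[] args) {
        // As in the resource method: on violations, the model would receive the errors
        Set<String> violations = validate(new FlightSearchCriteria("SLC", ""));
        System.out.println(violations); // [to may not be empty]
    }
}
```

With a real Bean Validation provider the messages would instead come from constraint annotations on the POJO's fields.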
On the view, displaying the errors at a global level is pretty easy using JAX-RS, as one can simply loop over the violations and display each violation message. Displaying an error message next to the corresponding field is another matter: Spring MVC has tag support for the same, and while it is achievable by writing a custom tag, that is added effort for a Jersey MVC application.<br />
<br />
<b>Http Session Variables:</b><br />
If an application wishes to use the Http Session for managing state, Spring MVC has annotation-driven, convention-based support for the same. Consider the following example from the petclinic app where managing the lifecycle of the pet in the HTTP session is shown:<br />
<pre class="java" name="code">@Controller
@SessionAttributes("pet")
public class PetController {
@RequestMapping(method=GET)
public Pet get(@RequestParam int id) {
Pet pet = petService.getPet(id);
..
return pet; // Pet added to Session
}
@RequestMapping(method=POST)
public String update(Pet pet, BindingResult result, SessionStatus status) {
..
petService.update(pet);
status.setComplete(); // Remove Pet from session
...
}
}
</pre>
I did not see an equivalent offering with Jersey MVC.<br />
<br />
<b>Security:</b><br />
As the Spring Security framework is not tightly coupled to Spring MVC, integrating the framework to support the Jersey MVC application is trivial. On the view tier, Spring Security Tags readily provide access control. If one does not wish to use Spring Security on the web framework, one can easily roll out their own with access being restricted by Jersey filters or Servlet Filters.<br />
<br />
<b>Thoughts on Jersey JAX-RS MVC versus Spring MVC:</b><br />
JAX-RS is a great framework tailored toward creating RESTful web services and Jersey is a solid implementation of the same. As Jersey is a JAX-RS implementation, one could swap out Jersey for another provider like <a href="http://www.jboss.org/resteasy">RestEasy </a>if desired without much effort. Spring MVC on the other hand has REST web service support, but the same is not a JAX-RS implementation and therefore one is tied to Spring MVC. Read more about a RESTful comparison of Spring MVC and JAX-RS in a very nice <a href="http://www.infoq.com/articles/springmvc_jsx-rs">INFOQ BLOG</a>.<br />
<br />
From the perspective of an MVC application, as shown, creating a web application using Jersey is clearly achievable. It is important to note that one is leaving the JAX-RS specification and heading into custom Jersey land while doing so, though. On the controller tier, I find that the ease of binding Form objects to POJOs is not as full-fledged as Spring MVC's offering. The lack of JSR 303 support via automatic validation of Form objects with Jersey MVC is a bit of a downer but not a show stopper. Spring MVC's controller and convention-based routing allow for a multitude of return types from a Controller method that Jersey does not seem to have out of the box. Whether the same is a benefit or a pitfall is debatable. <br />
<br />
On the view tier, Spring MVC has strong support for different templating languages. Be it Free Marker, Velocity, Excel, Jasper Reports, what have you. In addition, Spring MVC out of the box provides a way to chain view resolvers, something that Jersey MVC does not have. Spring MVC also comes built in with a rich set of tag libraries supporting JSP, Free Marker and Velocity. There is no such support automatically available via Jersey JAX-RS. Spring MVC also has a strong user community and support in the form of books, forums and blogs working in its favor.<br />
<br />
I think back to the analogy of designing a bedroom. One can go to a furniture store and simply buy a bedroom set where all components mesh well together, i.e., Spring MVC, or one can choose to build the same by buying the components individually, i.e., Jersey MVC. Both approaches will lead one to the same bedroom, but how hard was it to get there? When you are in a hurry, the fastest path is welcomed, if you know what I mean ;-) All puns definitely intended! A colleague of mine also pointed out that Jersey JAX-RS would be driven to make strides in further releases on its core offering, i.e., RESTful services, while Spring MVC would make strides in enhancing its web application support. This is definitely a factor one must consider when deciding between Jersey MVC and Spring MVC.<br />
<br />
On the other hand, arguably, if one's application is not very big and a lot of the benefits provided by Spring MVC can be ignored in favor of a lightweight MVC application, Jersey MVC can easily fit that role. In particular, Jersey MVC will work quite well for web applications that are heavy on Ajax, use client-side validation and make partial round trips to the server, as they can utilize the strong RESTful support (JSON, XML etc.) from Jersey and use the MVC part of Jersey for page transitions.<br />
<br />
To summarize, I think Jersey JAX-RS is getting close on the MVC support, but for now the Spring MVC star is not extinguished by the Jersey JAX-RS/MVC juggernaut. If I have not understood the features of Jersey's MVC support correctly, please do comment and clarify the same. <br />
<br />
<b>The Jersey MVC Example: </b><br />
<a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/jaxrsFlightExample">The Jersey MVC Example can be downloaded from HERE</a>. After extracting the file, execute a "<i>mvn jetty:run</i>" from the root of the project and access the application at <a href="http://localhost:8080/">http://localhost:8080</a>. The user credentials are the same as in my<a href="http://sleeplessinslc.blogspot.com/2012/02/spring-security-stateless-cookie-based.html"> Spring MVC Security example</a>. Once you get in, play with the validation and scrutinize the source. The example uses Spring for DI.<br />
<br />
The styling uses <a href="http://twitter.github.com/bootstrap/">Twitter Bootstrap</a>, thanks to a good friend of mine pushing me to try it. I am sadly an average front-end developer and therefore offer my thanks to the supremo of web development, Matt Raible, and his customization of AppFuse, based on which I have been able to change the style of the Flight Application. On Twitter Bootstrap, I did like the ease it presented of creating a decent-looking web application, but I must lean on <a href="http://raibledesigns.com/rd/entry/refreshing_appfuse_s_ui_with">Matt's BLOG</a> and his assessment that although Twitter Bootstrap is a great starter template, it is by no means a substitute for a template created by a solid CSS developer. <br />
<br /></div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com11tag:blogger.com,1999:blog-4667121987470696359.post-45791687375596110692012-02-12T12:16:00.000-07:002015-12-20T08:24:53.888-07:00Spring Security - Stateless Cookie Based Authentication with Java Config<div dir="ltr" style="text-align: left;" trbidi="on">
It has been security time for me recently at work, single sign on and the likes. While at it, I stumbled upon my favorite framework Spring and its offering <a href="http://static.springsource.org/spring-security/site/">Spring Security</a>. In the words of the creators of the framework, <i>"Spring Security is a powerful and highly customizable authentication and access-control framework. It is the de-facto standard for securing Spring-based applications". </i>Spring Security has been around for some time now but I have not had a chance to use it. This is my first foray into the framework and my BLOG is targeted at accomplishing the following objectives:<br />
<br />
<b>1.</b> <a href="http://sleeplessinslc.blogspot.com/2012/01/spring-31-mvc-example.html">Enhance the Example Project I created to demonstrate Spring Java Config</a> with Authentication and Authorization powered by Spring Security.<br />
<b>2.</b> Demonstrate an embedded LDAP server in the project as the source of user authentication and user roles.<br />
<b>3. </b>Ensure that the application is stateless thus being able to scale out really easily. Use a Cookie instead of HttpSession to store the user's authenticated state.<br />
<b>4. </b>Have the application use Form Based Authentication. <br />
<b>5. </b>Use Java Config for Spring MVC and Spring Security<br />
<br />
The Example Flight project was augmented with the following ROLE based authorization:<br />
<br />
<b>1. Standard User</b> - A user who can view Airports, Search Flights and see reservations<br />
<b>2. Agent</b> - An agent who can perform all the operations a user can with the additional ability to make reservations on flights<br />
<b>3. Administrator -</b> An administrator who can do everything a standard user can do with the additional ability to create new Airports for the Flight application. They however cannot make a reservation, i.e., no Agent privilege.<br />
<br />
From an LDAP perspective, the following groups can be considered to match the above roles: <i>USER</i>, <i>AGENT </i>and <i>ADMIN</i>.<br />
<br />
<a href="http://directory.apache.org/">ApacheDS </a>is one example of an open source LDAP server that can be run in embedded mode. I started looking into how to do that, only to find out that the Spring Security project provides a very easy way to do the same. If using Spring Security, the standard way to have an embedded server in your application for demonstration or testing purposes is to simply have the following line defined in your Spring <i>beans.xml</i>. <a href="http://krams915.blogspot.com/2011/01/spring-security-mvc-using-embedded-ldap.html">krams' BLOG on using embedded LDAP with Spring MVC was a major help in getting the code functional</a>:<br />
<pre class="xml" name="code"> <!-- ldif file to import -->
<ldap-server root="o=welflex" ldif="classpath:flightcontrol.ldiff" />
</pre>
<br />
The claim seems exotic, but the presence of the above line and the magic of <a href="http://static.springsource.org/spring/docs/3.1.x/reference/html/extensible-xml.html">Spring's Extensible XML Authoring</a> gets an embedded LDAP server going with minimal developer effort. The Java Config equivalent is shown below, where the container is programmatically created:<br />
<pre class="java" name="code">@Configuration
public class SecurityConfig {
/**
* This bean starts an embedded LDAP Server. Note that <code>start</code>
* is not called on the server as the same is done as part of the bean life cycle's
* afterPropertySet() method.
*
* @return The Embedded Ldap Server
* @throws Exception
*/
@Bean(name = "ldap-server")
public ApacheDSContainer getLdapServer() throws Exception {
ApacheDSContainer container = new ApacheDSContainer("o=welflex",
"classpath:flightcontrol.ldiff");
container.setPort(EMBEDDED_LDAP_SERVER_PORT);
return container;
}
...
}
</pre>
With the above Java Config, the embedded LDAP server is primed with the contents of the <a href="http://home.comcast.net/%7Eacharya.s/java/flightcontrol.ldiff">flightcontrol.ldiff</a> file.
<br />
<br />
For the stateless requirement, the flight control application relies on a Cookie in the client browser to detect the logged-in user rather than using the HTTP Session. Spring Security tends to gravitate toward using the <a href="http://docs.oracle.com/javaee/5/api/javax/servlet/http/HttpSession.html">HTTP Session</a>. However, as of Spring Security 3.1, a "<i>stateless</i>" session-creation setting was introduced, which means an HTTP session will not be created by Spring Security. The same however does not translate into a Cookie based authentication mechanism being introduced. Using namespaces, one would bootstrap Spring Security for the application as shown below:<br />
<pre class="xml" name="code"> <http auto-config="true" create-session="stateless">
<!-- Allow anon access -->
<intercept-url pattern="/login*" access="IS_AUTHENTICATED_ANONYMOUSLY" />
<intercept-url pattern="/logoutSuccess*" access="IS_AUTHENTICATED_ANONYMOUSLY" />
<intercept-url pattern="/scripts*" access="IS_AUTHENTICATED_ANONYMOUSLY" />
<!-- Enforce user role -->
<intercept-url pattern="/*" access="ROLE_USER" />
<!-- Form login, error, logout definition -->
<form-login login-page="/login.html"
always-use-default-target="true" default-target-url="/home.html"
authentication-failure-url="/login.html?login_error=1" />
<logout logout-url="/logout" logout-success-url="/logoutSuccess.html" />
</http>
</pre>
In order to understand how the above Spring Security namespace is translated to Java Beans, <a href="http://blog.springsource.org/2010/03/06/behind-the-spring-security-namespace/">take a look at the BLOG by Luke Taylor on the same</a>.
The above will work fine if one is using <a href="http://stackoverflow.com/questions/8800855/create-session-stateless-usage">Basic or Digest authentication</a> to secure resources, but it does not work with form-based authentication in a stateless environment. If you wish to use Basic or Digest based authentication, then take a look at <a href="http://www.baeldung.com/2011/11/20/basic-and-digest-authentication-for-a-restful-service-with-spring-security-3-1/">baeldung's BLOG on the same</a>.<br />
<br />
As the requirement of the flight application is to use form based login via Cookie Support, a filter is created that is a substitute for the <a href="http://static.springsource.org/spring-security/site/apidocs/org/springframework/security/web/session/SessionManagementFilter.html">SessionManagementFilter</a>:<br />
<pre class="java" name="code">public class CookieAuthenticationFilter extends GenericFilterBean {
public CookieAuthenticationFilter(LdapUserDetailsService userDetailService,
CookieService cookieService) {...}
@Override
public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException,
ServletException {
HttpServletRequest request = (HttpServletRequest) req;
HttpServletResponse response = (HttpServletResponse) res;
SecurityContext contextBeforeChainExecution = loadSecurityContext(request);
// Set the Security Context for this thread
try {
SecurityContextHolder.setContext(contextBeforeChainExecution);
chain.doFilter(request, response);
}
finally {
// Free the thread of the context
SecurityContextHolder.clearContext();
}
}
private SecurityContext loadSecurityContext(HttpServletRequest request) {
final String userName = cookieService.extractUserName(request.getCookies());
return userName != null
? new CookieSecurityContext(userDetailService.loadUserByUsername(userName))
: SecurityContextHolder.createEmptyContext();
}
}
</pre>
In the above code, the <i>loadSecurityContext </i>method is responsible for loading user information from the authentication provider. The method attempts to obtain the authenticated principal from the custom user cookie and, if present, creates a Spring <a href="http://static.springsource.org/spring-security/site/apidocs/org/springframework/security/core/context/SecurityContext.html">SecurityContext</a><i> </i>and sets the same in the <a href="http://static.springsource.org/spring-security/site/apidocs/org/springframework/security/core/context/SecurityContextHolder.html">SecurityContextHolder</a>. Setting the Context ensures that no further authentication is performed and the requested resource is served up. If the user security cookie is not present, an <i>empty context</i> is provided so that filters further downstream can redirect the request to a <i>login </i>page.<br />
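A minimal sketch of what the <i>cookieService.extractUserName</i> call might boil down to is shown below in plain Java; <i>java.net.HttpCookie</i> stands in for the Servlet <i>Cookie</i> class, and the <i>USER</i> cookie name matches the cookie the application sets (the helper itself is illustrative, not the example's actual CookieService):

```java
import java.net.HttpCookie;
import java.util.List;

public class CookieUserExtraction {
    static final String USER_COOKIE = "USER";

    // Returns the user name carried by the USER cookie, or null when absent
    static String extractUserName(List<HttpCookie> cookies) {
        if (cookies == null) {
            return null;
        }
        for (HttpCookie cookie : cookies) {
            if (USER_COOKIE.equals(cookie.getName())) {
                return cookie.getValue();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<HttpCookie> cookies = List.of(
            new HttpCookie("JSESSIONID", "abc"),
            new HttpCookie(USER_COOKIE, "sadmin"));
        System.out.println(extractUserName(cookies)); // sadmin
    }
}
```

A null return is what drives the empty-context path and the eventual redirect to the login page.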
<br />
So where does the User Cookie get set? After a successful authentication, an <a href="http://static.springsource.org/spring-security/site/apidocs/org/springframework/security/web/authentication/AuthenticationSuccessHandler.html">AuthenticationSuccessHandler </a>is invoked by Spring Security's <a href="http://static.springsource.org/spring-security/site/apidocs/org/springframework/security/web/authentication/UsernamePasswordAuthenticationFilter.html">UsernamePasswordAuthenticationFilter </a>and the former is where the cookie gets set:<br />
<pre class="java" name="code">public class AuthSuccessHandler implements AuthenticationSuccessHandler {
...
@Override
public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response,
Authentication authentication) throws IOException, ServletException {
SecurityContext context = SecurityContextHolder.getContext();
Object principalObj = context.getAuthentication().getPrincipal();
String principal = ((LdapUserDetails) principalObj).getUsername();
// Create the User Cookie
response.addCookie(cookieService.createCookie(principal));
response.sendRedirect("/home.html");
}
}
</pre>
The <i>UserNamePasswordAuthenticationFilter </i>is notified of the custom <i>AuthenticationSuccessHandler </i>as shown below:
<br />
<pre class="java" name="code">@Configuration
public class SecurityConfig {
....
private UsernamePasswordAuthenticationFilter getUserNamePasswordAuthenticationFilter() {
UsernamePasswordAuthenticationFilter filter = new UsernamePasswordAuthenticationFilter();
filter.setAllowSessionCreation(false);
filter.setAuthenticationManager(getAuthenticationManager());
// Set the Auth Success handler
filter.setAuthenticationSuccessHandler(new AuthSuccessHandler(getCookieService()));
filter.setAuthenticationFailureHandler(new SimpleUrlAuthenticationFailureHandler("/login.html?login_error=1"));
filter.setFilterProcessesUrl("/j_spring_security_check");
return filter;
}
}
</pre>
The final chain of Filters is declared as:<br />
<pre class="java" name="code">@Configuration
public class SecurityConfig {
....
@Bean(name = "springSecurityFilterChain")
public FilterChainProxy getFilterChainProxy() {
SecurityFilterChain chain = new SecurityFilterChain() {
@Override
public boolean matches(HttpServletRequest request) {
// All goes through here
return true;
}
@Override
public List<Filter> getFilters() {
List<Filter> filters = new ArrayList<Filter>();
filters.add(getCookieAuthenticationFilter());
filters.add(getLogoutFilter());
filters.add(getUserNamePasswordAuthenticationFilter());
filters.add(getSecurityContextHolderAwareRequestFilter());
filters.add(getAnonymousAuthenticationFilter());
filters.add(getExceptionTranslationFilter());
filters.add(getFilterSecurityInterceptor());
return filters;
}
};
return new FilterChainProxy(chain);
}
}
</pre>
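The order of the filters above matters: each filter either completes the response itself or hands the request off to the next one in the list. Stripped of the Servlet API, the chain mechanics amount to the following sketch, where <i>Filter</i> and <i>Chain</i> are simplified stand-ins rather than Spring Security classes:

```java
import java.util.Iterator;
import java.util.List;

public class FilterChainSketch {
    interface Filter {
        // A filter may write to the trace and/or pass control down the chain
        void doFilter(StringBuilder trace, Chain chain);
    }

    // Mirrors the proxy's behavior: invokes filters in declaration order
    static class Chain {
        private final Iterator<Filter> it;
        Chain(List<Filter> filters) { this.it = filters.iterator(); }
        void proceed(StringBuilder trace) {
            if (it.hasNext()) {
                it.next().doFilter(trace, this);
            }
        }
    }

    public static void main(String[] args) {
        // An "authentication" filter that delegates, then a terminal "interceptor"
        Filter cookieAuth = (trace, chain) -> { trace.append("cookieAuth>"); chain.proceed(trace); };
        Filter interceptor = (trace, chain) -> trace.append("securityInterceptor");
        StringBuilder trace = new StringBuilder();
        new Chain(List.of(cookieAuth, interceptor)).proceed(trace);
        System.out.println(trace); // cookieAuth>securityInterceptor
    }
}
```

A filter that does not call <i>proceed</i> short-circuits the chain, which is exactly how a login redirect cuts off access to the protected resource.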
With the above configurations, the application has all the necessary ingredients to provide Cookie based authentication and Role based authorization. The following HTML snippets demonstrate how Spring Security's <i>taglibs </i>are used to honor the roles on the view tier of the architecture:
<br />
<pre class="xml" name="code"><-- Airports page -- >
<sec:authorize ifAllGranted="ROLE_ADMIN">
<-- Display the form to add an airport -- >
</sec:authorize>
<-- Flight Search page. Display column to book flight only for AGENTs -->
<display:table name="flightSearchResult.flights" class="table"
requestURI="" id="flight" export="true" pagesize="10">
.....
<sec:authorize ifAllGranted="ROLE_AGENT">
<display:column title="" sortable="false" href="/bookFlight.html"
paramId="id" paramProperty="id" titleKey="flightId">
Book
</display:column>
</sec:authorize>
</display:table>
</pre>
Individual methods of the Service classes can also be secured for authorization using Spring Security via annotations. The example does not get into the same.
<br />
<br />
<b>Conclusion</b><br />
<br />
The example provided uses Java Config in its entirety and can give an entrant into Spring Security an idea of how the request chain is set up simply by following the Java Config. The beauty of the Spring Security framework lies in the fact that authentication and authorization details are set up within the filter chain, clearly removing boilerplate from a developer working on the Controller tier. Spring Security does not have to stop at the view tier and can be used to authorize access to Service tier methods as well. <br />
<br />
The example obtains the user roles from the embedded LDAP server on every request. This can be a performance issue, but consider not optimizing prematurely, as LDAP lookups are supposed to be rapid; if you do face a performance problem, consider a backing cache. Also note that the user cookie created is not secure and can easily be spoofed. In an actual implementation one would use some way to guard against spoofed cookies. <br />
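As a sketch of one way to guard against spoofing (an illustration, not what the downloadable example implements), the cookie value could carry an HMAC computed with a server-side secret, and verification would recompute the signature before trusting the user name:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

public class SignedCookieSketch {
    private final SecretKeySpec key;

    SignedCookieSketch(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Cookie value layout: "<userName>|<base64 HMAC of userName>"
    String sign(String userName) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            byte[] sig = mac.doFinal(userName.getBytes(StandardCharsets.UTF_8));
            return userName + "|" + Base64.getEncoder().encodeToString(sig);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Returns the user name only when the signature checks out, else null
    String verify(String cookieValue) {
        int sep = cookieValue.lastIndexOf('|');
        if (sep < 0) {
            return null;
        }
        String userName = cookieValue.substring(0, sep);
        return sign(userName).equals(cookieValue) ? userName : null;
    }

    public static void main(String[] args) {
        SignedCookieSketch cookies =
            new SignedCookieSketch("server-side-secret".getBytes(StandardCharsets.UTF_8));
        System.out.println(cookies.verify(cookies.sign("sadmin"))); // sadmin
        System.out.println(cookies.verify("sagent|forged"));        // null
    }
}
```

A real implementation would additionally embed an expiry timestamp in the signed payload and use a constant-time comparison for the signature check.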
<br />
<a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/springSecurityExample">Download the full source code as a maven example from here</a> and once extracted, execute a "<i>mvn jetty:run</i>" to start the same. All the security configuration is set up in the package <i>com.welflex.web.security</i>. You will see an embedded LDAP server start and the corresponding schema imported into the server. Access the web application at <a href="http://localhost:8080/">http://localhost:8080</a>. <br />
<br />
When prompted to login use one of the following user name/password combinations to see the different roles and corresponding access control in action:<br />
<br />
<b>1. Admin User </b>- sadmin/pass <br />
<b>2. Agent User -</b> sagent/pass<br />
<b>3. Standard User -</b> suser/pass<br />
<br />
Investigate your browser's cookies to see that a USER cookie has been set by the application and that there is no sign of a <i>JSESSIONID </i>cookie.<br />
<br />
I hope this is of help to someone trying to integrate Spring Security into their application and wants to use form based login with Cookies to remember the user. There have been some <a href="http://stackoverflow.com/questions/5907594/spring-security-context-using-cookie">Stack Overflow</a> and <a href="http://forum.springsource.org/showthread.php?119899-Spring-Security-for-stateless-webapps-with-cookie&highlight=Cookie">Spring Security Forum</a> postings asking about the same.<br />
<br />
There are still areas to explore with the Spring Security domain like <a href="http://static.springsource.org/spring-security/oauth/">Spring Security Oauth</a>, <a href="http://static.springsource.org/spring-security/site/extensions/saml/index.html">Spring Security SAML (empty page)</a>, <a href="http://static.springsource.org/spring-security/site/docs/3.0.x/reference/cas.html">Spring Security CAS</a>.</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com12tag:blogger.com,1999:blog-4667121987470696359.post-75661042476079403962012-01-28T23:05:00.000-07:002015-12-20T08:09:49.836-07:00Spring MVC 3.1 with Jamon Templating<div dir="ltr" style="text-align: left;" trbidi="on">
Quite clearly, as my BLOGs show, I have been working with my favorite technology stack recently, i.e., Spring. In particular, I have been working on <a href="http://static.springsource.org/spring/docs/current/spring-framework-reference/html/mvc.html">Spring MVC</a> and am looking for the right view technology to use with Spring. I am biased toward <a href="http://freemarker.sourceforge.net/">FreeMarker </a>as it appears to be a superset of <a href="http://velocity.apache.org/">Velocity </a>and can work with JSPs if required. That said, I have noticed on some Spring Forum postings queries on using <a href="http://www.jamon.org/">Jamon </a>with Spring. There does not appear to be a concrete example of the same on the web; there are, however, many that talk about performance monitoring of Spring with Jamon's namesake, <a href="http://jamonapi.sourceforge.net/">JAMon</a>. So here goes my attempt at integrating Spring with Jamon.
<br />
<br />
So what is <a href="http://www.jamon.org/">Jamon</a>? Quoting the words of the creators of Jamon, "<i>Jamon is a text template engine for Java, useful for generating dynamic HTML, XML, or any text-based content. In a typical Model-View-Controller architecture, Jamon clearly is aimed at the View (or presentation) layer</i>".<br />
<br />
One of the constructs that is central to Jamon is<a href="http://en.wikipedia.org/wiki/Type_system"> type safety</a>. Jamon on the view tier provides compile time type safety that is quite lost when using for example, <a href="http://en.wikipedia.org/wiki/JavaServer_Pages">JSPs</a>, and the expression language libraries that are associated with it.<br />
<br />
With technologies like JSP, errors are detected by a developer at runtime and fixed as the application is developed. Detecting an error at compile time, however, is really beneficial if the cost of starting an application and testing a request path is significant. Jamon templating, with the compile-time type safety it provides the presentation tier, reduces the turnaround time for development and refactoring. I will not delve into the details of Jamon as the same can be <a href="http://www.jamon.org/tutorial/TutorialPath.html">read on the Jamon website</a>.
<br />
<br />
<a href="http://www.artima.com/lejava/articles/jamon.html">An interview on the web with one of the creators of Jamon</a> describes how templating languages like FreeMarker and Velocity only partially fill the void of type safety when working with a framework like Spring MVC, and in fact have a hole that can lead to non-type-safe behavior.<br />
<br />
Consider the following snippet using Spring MVC with FreeMarker for rendering the view:<br />
<pre class="java" name="code">@Controller
public class LoginController {
public ModelAndView login(User user, BindingResult result) {
ModelAndView mav = new ModelAndView("user");
......
if (result.hasErrors()) {
mav.addObject("user", user);
....
return mav;
}
...
}
}
</pre>
The corresponding FreeMarker template, <i>login.ftl</i>, might look like:
<br />
<pre class="xml" name="code"> <@spring.bind "user.*"/>
<@spring.formInput "user.username", id="username"/>
</pre>
With the above, all is well and the runtime binding works just fine. However, consider a developer who accidentally:<br />
<ul style="text-align: left;">
<li>Forgot to add the user object to the model</li>
<li>Misspelled the "key" to the user object, writing "<i>users</i>" instead of "<i>user</i>"</li>
<li>Misspelled "<i>user.username</i>" on the FreeMarker template as "<i>user.usrname</i>"</li>
<li>Placed a value of the wrong type in the <a href="http://static.springsource.org/spring/docs/current/api/org/springframework/web/servlet/ModelAndView.html">ModelAndView </a>instead of the User object</li>
</ul>
They would experience failures at runtime, and even a strong templating language like FreeMarker would be powerless to detect the issues mentioned due to the loose coupling between the Spring MVC controller and the Template. Objects are added to the ModelAndView with the signature <i>addObject(String, Object)</i>, i.e., quite a bit of room for accidents. This decoupled nature of Spring MVC would hurt even a presentation tier written using Jamon, right? Well, would it really? What if type safety could be maintained all the way from the Spring Controller tier to the view tier with Jamon?
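The hole can be reproduced in a few lines of plain Java, with a <i>Map</i> standing in for the ModelAndView's String-keyed model and a typed <i>render</i> method standing in for a Jamon-generated template class (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ModelKeyHole {
    static class User {
        final String userName;
        User(String userName) { this.userName = userName; }
    }

    // Stand-in for a Jamon-generated template's typed entry point:
    // the compiler rejects a missing or wrongly-typed argument outright
    static String render(User user) {
        return "<input name=\"userName\" value=\"" + user.userName + "\">";
    }

    public static void main(String[] args) {
        // The ModelAndView style: addObject(String, Object)
        Map<String, Object> model = new HashMap<>();
        model.put("users", new User("sanjay")); // typo: should have been "user"

        // The template's view of the world: a String key and a cast, checked only at runtime
        User user = (User) model.get("user");
        System.out.println(user); // null - the typo surfaces only when the view renders

        // The typed alternative: no key, no cast, no typo to make
        System.out.println(render(new User("sanjay")));
    }
}
```

The compiler catches the fourth failure mode (wrong type) and the first (missing argument) outright; the string-keyed model defers all four to runtime.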
Let's say, for the sake of discussion, that we desire to render a login page using Jamon with Spring MVC.
A <a href="http://www.jamon.org/UserGuide.html">Jamon Template</a> for the user login titled <i>LoginView.jamon</i> is shown below, in which the User object is explicitly passed in to the Jamon template. There are no iffs or <a href="http://teslkoreanews.com/wp-content/uploads/2010/09/ephelump-butt.jpg">butts </a>regarding what is available to the Template here: it is the User object and only the User object that is made available to the template. It is possible to pass in <i>null</i> due to a programming mistake or runtime error, in which case all bets are off, but this concern transcends compile-time safety. At compile time a LoginView.class is made available:
<br />
<pre class="xml" name="code"> <%import>
com.welflex.model.User;
</%import>
<%args>
User user;
</%args>
<form....
...
<input type="text" id="userName" name="userName" value="<% user.getUserName() %>">
....
<input type="submit" name="Login" value="Login"/>
</form>
</pre>
The Login Controller is designed to return a Jamon Renderer rather than a ModelAndView as shown below:
<br />
<pre class="java" name="code">@Controller
public class LoginController {
/**
* @return Return a Jamon Renderer
*/
@RequestMapping(value = "/login.html", method = RequestMethod.GET, produces = MediaType.TEXT_HTML_VALUE)
@ResponseBody
public Renderer login() {
User user = new User("Enter user name", "Enter password");
return new LoginView().makeRenderer(user);
}
}
</pre>
Nice and dandy, so how does one tell Spring MVC to handle the Jamon Renderer? Look no further than below where a custom <a href="http://static.springsource.org/spring/docs/current/api/org/springframework/http/converter/HttpMessageConverter.html">HttpMessageConverter </a>is created for the same:
<br />
<pre class="java" name="code">public class JamonTemplateHttpConverter extends AbstractHttpMessageConverter<Renderer> {
@Override
protected boolean supports(Class<?> clazz) {
// Only Renderer classes are supported by this HttpMessageConverter
return Renderer.class.isAssignableFrom(clazz);
}
@Override
public boolean canWrite(Class<?> clazz, MediaType mediaType) {
return supports(clazz) && MediaType.TEXT_HTML.equals(mediaType);
}
...
@Override
protected void writeInternal(Renderer t, HttpOutputMessage outputMessage) throws IOException,
HttpMessageNotWritableException {
Charset charset = getContentTypeCharset(outputMessage.getHeaders().getContentType());
t.renderTo(new OutputStreamWriter(outputMessage.getBody(), charset));
}
...
}
</pre>
Of particular note in the above class is the fact that it knows how to handle Jamon Renderers and writes the renderer's content to the HTTP response output stream. The next step involves letting Spring know of the converter. This operation is accomplished in the Web Configuration class as shown below:
<br />
<pre class="java" name="code">@EnableWebMvc
@Configuration
@Import({ControllerConfig.class})
public class WebConfig extends WebMvcConfigurerAdapter {
...
@Override
public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
converters.add(new JamonTemplateHttpConverter());
}
...
}
</pre>
The above is one way in which type safety is preserved using Jamon and Spring MVC. What if one wishes to use the ModelAndView paradigm from Spring, as is standard practice? Well, I could not do the same! At least not without some customizations. Spring's Dispatcher Servlet is strongly tied to the ModelAndView class for its functioning, and sadly ModelAndView is not an interface. However, the good thing is that ModelAndView is not "final" and neither are its methods. In addition, the <a href="http://static.springsource.org/spring/docs/current/api/org/springframework/web/servlet/View.html">View </a>part of ModelAndView is an interface, so we start by creating a custom implementation of the View whose primary purpose is to render the contents:
<br />
<pre class="java" name="code">public class JamonView implements View {
private final Renderer renderer;
public JamonView(Renderer renderer) { this.renderer = renderer; }
@Override
public String getContentType() { return MediaType.TEXT_HTML_VALUE; }
@Override
public void render(Map<String, ?> model, HttpServletRequest request, HttpServletResponse response) throws Exception {
// Note that the Model is never used at all here.
StringWriter writer = new StringWriter();
renderer.renderTo(writer);
response.getOutputStream().write(writer.toString().getBytes());
}
}
</pre>
The ModelAndView object is what is central to Spring's <a href="http://static.springsource.org/spring/docs/current/api/org/springframework/web/servlet/DispatcherServlet.html">DispatcherServlet </a>and as one would probably guess, the only option I had at my disposal (no cglib enhancements please) was to extend ModelAndView as shown below:
<br />
<pre class="java" name="code">public class JamonModelAndView extends ModelAndView {
public JamonModelAndView(Renderer renderer) { super(new JamonView(renderer)); }
public JamonModelAndView(JamonView view) { super(view); }
// Probably want to ensure that other methods are not invokable by throwing UnsupportedOperationException
// Sadness that ModelAndView is not an interface or one could use dynamic proxies! Also there is no
// higher level abstraction one can use
}
</pre>
One could also have <i>JamonModelAndView </i>be a factory class with static methods that create a ModelAndView from a Renderer or a JamonView. With the custom extension to ModelAndView, the resulting Controller class for the <i>Login </i>operation would look like the following, where two separate execution paths are demonstrated: one for a successful login and a second where validation errors were detected. God bless <a href="http://en.wikipedia.org/wiki/Liskov_substitution_principle">Liskov and his substitution principle</a>. Also note that the User object is automatically bound and its properties validated with values passed in from the form:
<br />
<pre class="java" name="code">@Controller
public class LoginController {
....
@RequestMapping(value="/login.html", method = RequestMethod.POST)
public ModelAndView login(@Valid User user, BindingResult result, Map<String, Object> model) {
return result.hasErrors() ? new JamonModelAndView(new LoginView().makeRenderer(user, result))
: new ModelAndView("redirect:/welcome.html");
}
@RequestMapping(value="/welcome.html")
public ModelAndView welcome() {
return new JamonModelAndView(new WelcomeView().makeRenderer());
}
}
</pre>
With the above, we are all but done with a type safe rendering approach using Spring MVC and Jamon. What remains is the part where form submission and the input variable mappings have to be defined in the Jamon Template. If one does the following, where the <i>userName </i>property is quoted as a non type safe string, we are left with a hole all over again in our type safe mantra:
<br />
<pre class="xml" name="code"><form....
...
<input type="text" id="userName" name="userName" value="<% user.getUserName() %>">
....
<input type="submit" name="Login" value="Login"/>
</form>
</pre>
I have racked my brain trying to figure out a type safe way to specify the input form mappings and ended up with a cheap solution: defining a constant somewhere that names the property. Please note that the constant itself is not safe from any refactoring done on the User bean, as it is not strongly tied to the actual method signature of <i>User.setUserName(String)</i>:
<br />
<pre class="xml" name="code"><form....
...
<input type="text" id="userName" name="<% User.USER_NAME_PROPERTY %>" value="<% user.getUserName() %>">
....
<input type="submit" name="Login" value="Login"/>
</form>
</pre>
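To make that concrete, here is a minimal sketch of a hypothetical <i>User</i> bean carrying such a constant; the constant name and accessors are my own for illustration and may differ from the accompanying download:

```java
// Hypothetical User bean illustrating the "cheap" constant-based workaround.
// Note that the compiler does NOT tie USER_NAME_PROPERTY to setUserName:
// renaming the property leaves the constant's value stale unless updated by hand.
public class User {
    /** Must match the bean property name that Spring binds from the form. */
    public static final String USER_NAME_PROPERTY = "userName";

    private String userName;

    public String getUserName() { return userName; }

    public void setUserName(String userName) { this.userName = userName; }
}
```

The Jamon template then references <i>User.USER_NAME_PROPERTY</i> instead of the quoted string, so at least both sides read from one definition.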
<b>Running the Example:</b>
<br />
<br />
A simplistic application <a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/springJamon">demonstrating a Login Form with validation is available for download here</a>. Extract the same and execute "<i>mvn jetty:run</i>" to launch the application. Access the application at <a href="http://localhost:9090/">http://localhost:9090</a> and log in using any username/password combination to reach a simple welcome page. If you are an Eclipse user, install the Eclipse plugin for Jamon for syntax highlighting and IDE support. I would recommend that anyone trying this example change the code and witness first-hand the benefits of the compile-time type safety that Jamon provides. Please note that if you use the J2EE perspective on your project, you sadly will not be able to see the Jamon configuration options. On the Java perspective however, all is visible when you go to <i>Project->Properties->Jamon</i>.
<br />
<br />
<b>Conclusion:</b>
<br />
<br />
The question remains whether Jamon is a suitable candidate for the presentation tier when using something like Spring MVC. My answer, as any smart architect would say, is: it depends! If the web application is primarily read-only, i.e., its focus is to render data for display, then you cannot go wrong with the benefits of Jamon. Where Jamon suffers in the view tier is when an application has a lot of forms requiring user input, since there is no way to guarantee type safety of the input variables without the use of constants as shown above. The mantra of type safety is only partially honored in such cases, and if this is something you are willing to tolerate, more power to you. Jamon cannot be faulted for this deficiency, as the Java language itself has no way of describing property names in a type safe manner. In addition, as far as I know, Jamon does not provide an expression language or tag-based support. Jamon has a lot going for it, and I have personally developed a couple of applications using it for my view tier. Features like the macros provided by FreeMarker make development easier, and if one is careful and thorough with good unit/automation/manual test coverage of one's application, the fear of runtime errors due to lack of type safety can be mitigated in favor of the powerful EL and macro support provided by the other view technologies for Spring MVC. There is also strong integration support in the Spring Framework for FreeMarker and Velocity, with provided macros etc. If anyone who stumbles on this BLOG has used FreeMarker or Velocity with Spring, I would like to hear of their experience. Likewise, if anyone has integrated their Spring MVC application with Jamon in a different way, I'd love to hear the approach.
<br />
<br />
Lets get the bottom line straight though, Jamon, FreeMarker, Velocity, whatever, in the end remember the following:
<br />
<br />
<i>Developing with Spring my friends is a fine thing....</i>
<br />
<i>
Your code will sing, and your projects will fly to successful wins as if they had wings....</i>
<br />
<i>There just ain't no going wrong with this thing....</i>
<br />
<i>It's not just some temporary fling....</i>
<br />
<i>Embrace it, and it will treat you like a king!</i>
<br />
<br />
Peace!
<br />
<br /></div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com4tag:blogger.com,1999:blog-4667121987470696359.post-37991261886178539272012-01-08T22:44:00.001-07:002012-07-08T19:11:32.069-06:00Spring 3.1 MVC Example<div dir="ltr" style="text-align: left;" trbidi="on">
I started Sleepless in Salt Lake City with an <a href="http://sleeplessinslc.blogspot.com/2008/02/introduction-software-development-and.html">example of Spring MVC that used auto-wiring</a> with <a href="http://www.springsource.org/node/561">Spring 2.5</a>. Sadly, I have lost that code. On the bright side however, Spring has made some advances that I am hoping to catch up with a re-vamp of the original BLOG. This BLOG primarily hopes to demonstrate the use of Spring Java Config spanning different tiers of an application with a simple <a href="http://blog.springsource.org/2011/06/13/spring-3-1-m2-spring-mvc-enhancements-2/">Spring MVC 3.1</a> example that one can use for learning as well.<br />
<br />
<a href="http://static.springsource.org/spring/docs/3.1.x/spring-framework-reference/html/beans.html#beans-java">Spring JavaConfig</a> is a means of configuring the Spring Container using pure-java, i.e., without XML if one so desired. It relies on the features of Java 5.X+ such as Generics and Annotations to express what was previously done via XML. Java Config has the following advantages:<br />
<br />
<b>1.</b> Injection as it should be done, using language features like inheritance, polymorphism, etc. <br />
<b>2.</b> Full control over the creation and initialization of beans.<br />
<b>3. </b>Makes refactoring easy without the need for something like Spring IDE<br />
<b>4.</b> The ability to refrain from classpath scanning, which can be expensive during container initialization<br />
<b>5.</b> XML or property support can be used in conjunction if desired <br />
<br />
Most of this BLOG is in the form of code, with a functional example provided as well for download and execution. The code shown below assumes familiarity with Spring MVC and does not delve into its basics. The examples used in this BLOG are written with the following objectives:<br />
<br />
<b>1.</b> Demonstrate Spring Java Config across the different tiers with no Spring XML usage<br />
<b>2.</b> Demonstrate transaction management<br />
<b>3.</b> Demonstrate a validator for the beans using JSR-303<br />
<b>4.</b> Demonstrate the REST features of Spring MVC by having the application double as a RESTful Web Service.<br />
<b>5.</b> Demonstrate how unit testing can be achieved across the tiers when using Java Config<br />
<b>6.</b> Demonstrate the use of RestTemplate in an integration-test of the Web Service<br />
<br />
In short, the making of a really long POST ;-)<br />
<br />
The app referenced in this BLOG is a simple flight reservation system where flights can be searched and booked. The web application itself is separated into three tiers: Web, Service and Data Access. Each tier has its own Java Configuration as shown below, with the <i>WebConfig </i>being the aggregator or top-level Config:
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzEmzUdo72TkPke1ExRsC0CydOydCNzuIz62LYxGgEnWLMqlKZcOyFnYqPxaMkMk0Zjchtyyn2Zyigs_9g14JwD6gBmfauQHCjuugynuD106D2z-AnFzUyKV72sq4Eb5jzNGM10j66YhKn/s1600/dependencies.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="140" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzEmzUdo72TkPke1ExRsC0CydOydCNzuIz62LYxGgEnWLMqlKZcOyFnYqPxaMkMk0Zjchtyyn2Zyigs_9g14JwD6gBmfauQHCjuugynuD106D2z-AnFzUyKV72sq4Eb5jzNGM10j66YhKn/s640/dependencies.png" width="640" /></a></div>
With the interest of making unit testing easier, the Config classes are separated via interface/impl.<br />
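One plausible shape for that interface/impl split is sketched below; the method names mirror the Configs shown in this post, but treat this as illustrative rather than the exact source from the download:

```java
// Sketch: the data access tier's Config contract. The real application wires
// a DefaultDaoConfig implementing this interface, while a test can supply an
// implementation returning Mockito mocks. The DAO types are those referenced
// elsewhere in this post.
public interface DaoConfig {
    FlightDao getFlightDao();
    TicketDao getTicketDao();
    AirportDao getAirportDao();
}
```

Because the consuming Configs depend only on this interface, swapping the real data access tier for a mocked one is a one-line change in the test's @ContextConfiguration.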
<br />
The data access tier defines a Java Config module as shown below:
<br />
<pre class="java" name="code">@Configuration
@EnableTransactionManagement
public class DefaultDaoConfig implements DaoConfig, TransactionManagementConfigurer {
// Dao Initialization
@Bean
public FlightDao getFlightDao() {
return new FlightDaoImpl(getSessionFactory());
}
...other DAOs
// Session Factory configuration for Hibernate
@Bean
public SessionFactory getSessionFactory() {
return new AnnotationConfiguration().addAnnotatedClass(Ticket.class)
.addAnnotatedClass(Flight.class).addAnnotatedClass(Airport.class)
.configure().buildSessionFactory();
}
// Transaction manager being used
@Override
public PlatformTransactionManager annotationDrivenTransactionManager() {
return new HibernateTransactionManager(getSessionFactory());
}
}
</pre>
<br />
Of interest in the above is the <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/org/springframework/transaction/annotation/EnableTransactionManagement.html">@EnableTransactionManagement</a> annotation, which enables Spring's annotation-driven transaction management capabilities for services annotated with @Transactional. The<a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/"> @Transactional</a> annotation instructs the Spring container to provide transactional semantics for the method. A service class annotated with @Transactional is shown below. Also note that the <i>FlightDao </i>and <i>TicketDao </i>beans are set explicitly via the constructor without the use of the <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/index.html?index-filesindex-1.html">@Autowired</a> annotation. If desired, @Autowired could be used instead with a no-args constructor, with the Spring Framework reflectively initializing the DAOs.<br />
<br />
<pre class="java" name="code">@Transactional(readOnly = true)
public class ReservationServiceImpl implements ReservationService {
private final FlightDao flightDao;
private final TicketDao ticketDao;
public ReservationServiceImpl(FlightDao flightDao, TicketDao ticketDao) {...}
@Override
@Transactional(rollbackFor = { NoSeatAvailableException.class }, readOnly=false)
public Ticket bookFlight(Reservation booking) throws NoSeatAvailableException {
...
}
@Override
public List<Ticket> getReservations() {...}
}
</pre>
<br />
The Services tier also has a corresponding Config where the Services of the application are defined:
<br />
<pre class="java" name="code">@Configuration
public class DefaultServiceConfig implements ServiceConfig {
@Autowired
private DaoConfig daoConfig;
@Bean
@Override
public AirlineService getAirlineService() {
return new AirlineServiceImpl(daoConfig.getFlightDao(), daoConfig.getAirportDao());
}
.........other services ...
}
</pre>
<br />
One might ask how one can mock out the <i>DaoConfig </i>for testing purposes given that it is set to be auto-wired; surely not via reflection? Well, on the Configs one cannot use constructor-based initialization, as Spring's container expects a no-arg constructor. One can, however, define a setter for the Config and mock the same out. The following sample of the <i>ControllerConfig</i> demonstrates this:
<br />
<pre class="java" name="code">@Configuration
public class ControllerConfig {
private ServiceConfig serviceConfig;
@Autowired
public void setServiceConfig(ServiceConfig serviceConfig) {
this.serviceConfig = serviceConfig;
}
@Bean
public FlightsController getFlightController() {
return new FlightsController(serviceConfig.getAirlineService(), serviceConfig.getAirportService());
}
... other controllers...
}
</pre>
The FlightController is shown below:
<br />
<pre class="java" name="code">@Controller
public class FlightsController {
private final AirlineService airlineService;
private final AirportService airportService;
public FlightsController(AirlineService airlineService, AirportService airportService) {...}
// Flight search page initialization
@RequestMapping(value = "/searchFlights.html", method = RequestMethod.GET)
public ModelAndView searchFlights() throws Exception {
List<Airport> airports = airportService.getAirports();
ModelAndView mav = new ModelAndView("searchFlights");
mav.addObject("airports", airports);
return mav;
}
// Searching for a flight. Note that the FlightSearchCriteria object is automatically bound with
// the corresponding form post parameters
@RequestMapping(value = "/searchFlights.html", method = RequestMethod.POST)
public ModelAndView searchFlights(FlightSearchCriteria criteria) throws Exception {
ModelAndView mav = new ModelAndView("searchFlights");
....
FlightSearchResults searchResult = airlineService.getFlights(criteria);
mav.addObject("flightSearchResult", searchResult);
return mav;
}
}
</pre>
<br />
Annotating a bean with<a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/index.html?index-filesindex-1.html"> @Controller</a> indicates to Spring that the same is part of the MVC family and to use the same for mapping request urls and servicing web requests. A few other things to note about the controller:<br />
<br />
<b>1.</b> Does not need to extend any particular base class<br />
<b>2.</b> The <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/index.html?index-filesindex-1.html">@RequestMapping</a> annotation defines the HTTP Method and resource serviced. In the example shown, "<i>/searchFlights</i>" responds differently to GET and POST requests<br />
<br />
All these tier configurations are wired together via <i>WebConfig</i>, which also defines the different components required to set up the Spring MVC Servlet. The <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/org/springframework/context/annotation/Import.html">@Import </a>annotation indicates to the Spring Container which Configs need to be included. Note that the same can also be done in the web.xml as part of the Servlet definition if so desired.
<br />
<pre class="java" name="code">@EnableWebMvc
@Configuration
@Import({ ControllerConfig.class, DefaultServiceConfig.class, DefaultDaoConfig.class })
public class WebConfig extends WebMvcConfigurerAdapter {
// Validator for JSR 303 used across the application
@Override
public Validator getValidator() {...}
// View Resolver..JSP, Freemarker etc
@Bean
public ViewResolver getViewResolver() {...}
// Mapping handler for OXM
@Bean
public RequestMappingHandlerAdapter getHandlerAdapter() {.. }
@Override
public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
configurer.enable();
}
}
</pre>
The annotation<a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/"> @EnableWebMvc </a>is required to tell Spring that this is an MVC application. Implementing the <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/">WebMvcConfigurerAdapter</a> allows for customization of the Spring Servlet and path handling. Alternatively, one could also extend <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/index.html?index-filesindex-1.html">WebMvcConfigurationSupport </a>directly and provide the necessary customization.<br />
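As a rough idea of what the elided bodies in <i>WebConfig</i> above might contain, here is a sketch; the JSP view resolver settings and the choice of LocalValidatorFactoryBean are my assumptions for illustration, not necessarily what the downloadable example uses:

```java
// Possible bodies for the methods elided in WebConfig above.
// LocalValidatorFactoryBean bridges the JSR-303 provider (e.g. Hibernate
// Validator) into Spring's Validator abstraction; the view prefix/suffix
// values below are illustrative.
@Override
public Validator getValidator() {
    return new LocalValidatorFactoryBean();
}

@Bean
public ViewResolver getViewResolver() {
    InternalResourceViewResolver resolver = new InternalResourceViewResolver();
    resolver.setPrefix("/WEB-INF/views/");
    resolver.setSuffix(".jsp");
    return resolver;
}
```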
<br />
<b>Validation of Requests using JSR-303 Bean Validation API:
</b>
<br />
<br />
<a href="http://jcp.org/en/jsr/detail?id=303">JSR-303</a> standardizes bean validation for the Java Platform. The annotations from the API are used to specify validation constraints, which the runtime then enforces. As this example utilizes Hibernate, adding Hibernate Validator, an implementation of the API, is a natural choice.
In the example provided the Reservation class is annotated with the annotations from the JSR to provide runtime validation and messages. The messages could be obtained from a resource bundle if desired:
<br />
<pre class="java" name="code">public class Reservation {
@XmlElement
@NotBlank(message = "must not be blank")
private String reservationName;
@Min(1)
@XmlElement
private int quantity = 1;
@XmlElement
@NotNull(message = "Flight Id must be provided")
private Long flightId;
....
}
</pre>
The web deployment descriptor is where the application converges and the <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/index.html?index-filesindex-1.html">DispatcherServlet </a>is notified of the Context class to use and the location of the <i>WebConfig</i>. As previously mentioned, it is possible to provide a comma separated list of Configs in the web.xml if desired versus importing them in the <i>WebConfig</i>:
<br />
<pre class="xml" name="code"><web-app>
<servlet>
<servlet-name>springExample</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet
</servlet-class>
<init-param>
<param-name>contextClass</param-name>
<param-value>org.springframework.web.context.support.AnnotationConfigWebApplicationContext
</param-value>
</init-param>
<!-- Tell Spring where to find the Config Class
This is the hook in to the application. -->
<init-param>
<param-name>contextConfigLocation</param-name>
<param-value>com.welflex.web.WebConfig</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
....
</web-app>
</pre>
<br />
<b>Unit Testing:</b><br />
<br />
Unit testing the tiers is very easily supported by the<a href="http://static.springsource.org/spring/docs/current/spring-framework-reference/html/testing.html"> Spring Testing framework</a>. When testing the service layer for example, one does not really need to integrate with the database and the corresponding interactions can be mocked out. For this reason, the objects in the DAO config are provided as mocks as shown below:
<br />
<pre class="java" name="code">@Configuration
@EnableTransactionManagement
public class MockDaoConfig implements DaoConfig {
@Bean
public FlightDao getFlightDao() {
return Mockito.mock(FlightDao.class);
}
..// Other mocks
}
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {MockDaoConfig.class,
DefaultServiceConfig.class }, loader = AnnotationConfigContextLoader.class)
public class AirportServiceTest {
@Autowired
private AirportDao airportDaoMock;
@Autowired
private ApplicationContext ctx;
@Test
public void getAirportCode() {
// Obtain Airport service from the context
AirportService a = ctx.getBean(AirportService.class);
// Alternatively, instantiate an AirportService by providing a mock
AirportService a2 = new AirportServiceImpl(airportDaoMock);
}
...
}
</pre>
In the above test, the Configs being used are the concrete implementation of <i>ServiceConfig </i>and a mock of <i>DaoConfig</i>. This enables testing of the Service tier while mocking the data access tier. Some people prefer to test each class in isolation by wiring it up with the required dependencies explicitly, while others prefer to obtain the class from the container with all dependencies wired up. The choice of testing style is a topic for another BLOG, but it is sufficient to note that either style is supported. Note also that one does not need to mock all the beans for testing and can define specific Config objects as required. Just as the Service tier is tested by mocking out the data access tier, the Controller tier can be tested by mocking out the Service tier objects.<br />
<br />
<b>Spring MVC Restful Services:</b><br />
<b> </b> <br />
One of the goals of this example was to demonstrate how the web application doubles as a service as well. Consider the following controller which provides representations for the web application and a service consumer:
<br />
<pre class="java" name="code">@Controller
public class ReservationsController {
private final AirlineService airlineService;
private final ReservationService reservationService;
public ReservationsController(ReservationService reservationService, AirlineService airlineService) {...}
@RequestMapping("/reservations.html")
public ModelAndView getReservationsHtml() {...}
@RequestMapping(value = "/reservations/{reservationId}.html", method = RequestMethod.GET)
public ModelAndView getReservationHtml(@PathVariable Long reservationId) {...}
// Web Service support to provide XML or JSON based on the Accept header
@RequestMapping(value = "/reservations", method = RequestMethod.GET, produces = {
MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE })
@ResponseBody
public Reservations getReservations() {...}
@RequestMapping(value = "/reservations/{reservationId}", method = RequestMethod.GET, produces = {
MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE })
@ResponseBody
public Ticket getReservation(@PathVariable Long reservationId) {...}
....
}
</pre>
In the above shown controller, <i>getReservationsHtml()</i> returns an HTML representation of the reservations while a call to<i> getReservations() </i>returns an XML or JSON representation. The type of representation returned depends on the Accept header provided to the service. The @ResponseBody annotation indicates that a view is not targeted but that the resulting object should be translated directly. The example itself uses <a href="http://static.springsource.org/spring-ws/site/reference/html/oxm.html">Spring OXM</a> for marshalling and unmarshalling XML via JAXB.
An integration test for the above controller using <a href="http://static.springsource.org/spring/docs/3.1.x/javadoc-api/index.html?index-filesindex-1.html">RestTemplate </a>is shown below:
<br />
<pre class="java" name="code">public class WebServiceIntegrationTest {
private RestTemplate template;
private static final String BASE_URL = "http://localhost:9091";
@Before
public void setUp() {
// Create Rest Template with the different converters
template = new RestTemplate();
List<HttpMessageConverter<?>> converters = ...;
template.setMessageConverters(converters);
}
private static final String FLIGHT_NUMBER = "LH 235";
@Test
public void reservations() {
Flight singleFlight = template.getForObject(BASE_URL + "/flights/" + FLIGHT_NUMBER,
Flight.class);
Reservation res = new Reservation();
res.setFlightId(singleFlight.getId());
res.setQuantity(10);
res.setReservationName("Sanjay Acharya");
// Post to create a ticket
Ticket ticket = template.postForObject(BASE_URL + "/bookFlight", res, Ticket.class);
assertNotNull(ticket);
System.out.println("Ticket reserved:" + ticket);
// All Reservations
Reservations reservations = template.getForObject(BASE_URL + "/reservations", Reservations.class);
assertTrue(reservations.getTickets().size() == 1);
// Single reservation
assertNotNull(template.getForObject(BASE_URL + "/reservations/" + ticket.getId(), Ticket.class));
}
}
</pre>
<br />
For more information on the RestTemplate, check out my other BLOG on<a href="http://sleeplessinslc.blogspot.com/2010/02/rest-client-frameworks-your-options.html"> Rest Client Frameworks</a>.<br />
<br />
<b>Side Note:</b><br />
<br />
In this example, I have used a framework called <a href="http://pojomatic.sourceforge.net/">Pojomatic</a> for the convenience it provides in implementing <a href="http://pojomatic.sourceforge.net/pojomatic/index.html">equals(), hashCode() and toString()</a> for my POJOs. I prefer its brevity over using the tools provided by the Apache Commons project which I have used in my previous BLOGs. Check the same out.<br />
<br />
<b>Running the Example:</b>
<br />
<br />
The example provided herewith demonstrates the Flight Spring MVC application. It uses an in-memory database which is primed with data on start up. The model package in the example is overloaded as far as its responsibilities go; the POJOs therein serve as both database model objects and data transfer objects. The example, unit tests and integration tests are by no means designed to be comprehensive and are present only for demonstration purposes.<br />
<br />
<b>1.</b><a href="http://home.comcast.net/~acharya.s/java/springMvcExample.zip"> Download the example from HERE</a><br />
<b>2.</b> Execute a<b> mvn jetty:run</b> to start the application.<br />
<b>3.</b> Execute a <b>mvn integration-test</b> to see the Web Service calls being exercised <br />
<b>4.</b> The Flight Web Application is available at <a href="http://localhost:8080/">http://localhost:8080</a>
<br />
<b>5.</b> Web service url's can be accessed directly via:
<a href="http://localhost:8080/reservations">http://localhost:8080/reservations</a>
, <a href="http://localhost:8080/reservations/%7Bid">http://localhost:8080/reservations/{id</a>}, <a href="http://locahost:8080/flights">http://locahost:8080/flights</a>, <a href="http://localhost:8080/airports">http://localhost:8080/airports</a><br />
<br />
<b>Conclusion:</b><br />
<br />
I find the clarity and control of object initialization and wiring really convenient with Spring Java Config. Walking the dependency tree is also very intuitive. In addition, due to the strong typing, refactoring is made easy even without a tool like<a href="http://www.springsource.org/springide/release-20"> Spring IDE</a>. What I like about Spring MVC versus other web frameworks is its simplicity with the Front Controller/Dispatcher pattern. One can use whatever Ajaxian technology one likes to enhance the user experience. The framework itself is very non-invasive, i.e., it does not require the use of Spring across all the tiers. The RESTful support facilitates easy interaction with Ajax-based technologies and makes the web app a true resource-based system. Sadly, I do not use Spring MVC on any production application. If anyone who does stumbles upon this BLOG, I would love to hear of their experience.<br />
<br />
Take a look at the<a href="http://sleeplessinslc.blogspot.com/2012/02/spring-security-stateless-cookie-based.html"> Spring Security Stateless Cookie Authentication BLOG</a> for an example of the same MVC application but using Twitter Bootstrap and integrated with Spring Security.<br />
<br />
Update - I have uploaded a simple presentation that I gave along with supporting code. You can view the same at <a href="http://sleeplessinslc.blogspot.com/2012/07/spring-mvc-31-presentation-and-tutorial.html">Spring MVC 3.1 Presentation/Tutorial</a>.</div>Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com9tag:blogger.com,1999:blog-4667121987470696359.post-45984829912305692392012-01-03T22:47:00.001-07:002015-12-20T08:18:11.123-07:00Apache ZooKeeper - A maven example<div dir="ltr" style="text-align: left;" trbidi="on">
Let me begin with - <b>Happy New Year</b>! I am starting this year's BLOG entry with some playing I did over the holidays with <a href="http://zookeeper.apache.org/">Apache ZooKeeper</a>. So what is Apache ZooKeeper? Broadly put, ZooKeeper provides a service for maintaining distributed naming, configuration and group membership. ZooKeeper provides a centralized coordination service which is distributed, consistent and highly available. In particular, it specializes in coordination tasks such as leader election, status propagation and rendezvous. Quoting the ZooKeeper documentation, "<i>The motivation behind ZooKeeper is to relieve distributed applications the responsibility of implementing coordination services from scratch.</i>"<br />
<br />
Zookeeper facilitates the coordination of distributed systems via a hierarchical name space called <a href="http://zookeeper.apache.org/doc/current/zookeeperOver.html#sc_dataModelNameSpace">ZNodes</a>. A ZNode can have children Znodes as well as have data associated with it (figure linked from official ZooKeeper documentation): <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://zookeeper.apache.org/doc/current/images/zknamespace.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://zookeeper.apache.org/doc/current/images/zknamespace.jpg" height="183" width="320" /></a></div>
<br />
ZooKeeper also has the concept of <i>Nodes </i>and<i> Ephemeral Nodes</i>. The former survives the death of the client that created it while the latter does not. Nodes can also be SEQUENTIAL, where a SEQUENCE number is associated with the Node. Sequences clearly lend themselves to queue-type or ordered behavior when required. ZNodes cannot be deleted as long as they have children under them. <br />
<br />
The <a href="http://zookeeper.apache.org/doc/current/zookeeperOver.html#Simple+API">ZooKeeper API </a>itself is a very simple programming interface that contains methods for CRUD of a ZNode and its data, along with a way to listen for changes to the node and/or its data.<br />
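A sketch of those CRUD calls using the raw ZooKeeper client is shown below. The connection string and paths are my assumptions for illustration, a ZooKeeper server running at localhost:2181 is presumed, and -1 for the version argument means "match any version":

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZNodeCrudSketch {
    public static void main(String[] args) throws Exception {
        // Connect; the watcher here ignores session events for brevity
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, event -> { });

        // Create a persistent ZNode carrying a small payload
        zk.create("/echo", "hello".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE,
                  CreateMode.PERSISTENT);

        // Read and update its data (-1 = any version)
        byte[] data = zk.getData("/echo", false, null);
        zk.setData("/echo", "world".getBytes(), -1);

        // List children, then delete the node and close the session
        System.out.println(zk.getChildren("/echo", false));
        zk.delete("/echo", -1);
        zk.close();
    }
}
```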
<br />
One can read quite a bit about Zookeeper from their documentation and I will not delve into details. My goal in this BLOG is to provide some simple examples of how Zookeeper can be used for:<br />
<br />
<b>1.</b> Leader Election<br />
<b>2.</b> Rendezvous or Barrier<br />
<b><br /></b>
<b>1. Leader Election Example:</b><br />
<br />
As mentioned previously, ZooKeeper specializes in <a href="http://en.wikipedia.org/wiki/Leader_election">Leader Election</a>, where a particular server of a type is elected the leader and, upon its demise, another stands up to take its place. The concept of ephemeral nodes is pretty useful in such a scenario, where a service upon start-up registers itself as a candidate in the pool of resources to be used. For the sake of discussion, consider an Echo Service with the fictional constraint that only one Echo Service can be servicing requests at any given time, but should that service die, another is ready to resume the responsibility of servicing clients. The EchoService on start-up registers an ephemeral node at "<i><b>/echo/n_000000000X</b></i>". It further registers a watcher on its predecessor Echo Service, denoted by the ZNode "<i><b>/echo/n_000000000X</b></i>-1", with the intent that should the predecessor die, the node in question can assume leadership if it is the logical successor, as shown in the picture below:<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9yUrAgjCz_3FEHDbNoJii3E9fTtjRhCTscHSvtSxew6D7oeaJBb23rc1-R_FWyWrqQJQ0FpnYMCVgcpdx76bD3CeKpr8TdZdYGIe3dX0WzQKUsP5HsoP_l8kNad0Aw5Nbam8eCJHoUs6m/s1600/leader.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="302" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9yUrAgjCz_3FEHDbNoJii3E9fTtjRhCTscHSvtSxew6D7oeaJBb23rc1-R_FWyWrqQJQ0FpnYMCVgcpdx76bD3CeKpr8TdZdYGIe3dX0WzQKUsP5HsoP_l8kNad0Aw5Nbam8eCJHoUs6m/s640/leader.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Leader Election of Echo Server</td></tr>
</tbody></table>
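The watch topology in the picture boils down to: the child with the lowest sequence is the leader, and every other child watches the entry immediately below its own. A minimal sketch of that selection logic, using a plain sorted set of hypothetical node names in place of the ZkUtils helpers from the example:

```java
import java.util.NavigableSet;
import java.util.TreeSet;

public class LeaderSelection {

    // Returns the predecessor znode this node should watch,
    // or null when the node is first in line, i.e. the leader
    static String nodeToWatch(NavigableSet<String> children, String self) {
        return children.lower(self);
    }

    public static void main(String[] args) {
        NavigableSet<String> children = new TreeSet<>();
        children.add("/echo/n_0000000001");
        children.add("/echo/n_0000000002");
        children.add("/echo/n_0000000003");

        // First child has no predecessor: it is the leader
        System.out.println(nodeToWatch(children, "/echo/n_0000000001")); // prints null
        // A follower watches only the node just below it
        System.out.println(nodeToWatch(children, "/echo/n_0000000003")); // prints /echo/n_0000000002
    }
}
```

When the watched predecessor disappears, only its immediate follower needs to re-read the children of the parent node.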
<br />
<br />
<br />
The reason for registering a watch on the predecessor rather than on the root "<b>/<i>echo</i></b>" node is to prevent <a href="http://zookeeper.apache.org/doc/current/recipes.html">herding, as described in the ZooKeeper recipes</a>. In other words, when a leader is decommissioned, we wouldn't want every Echo Server leadership contender to stress the system by asking ZooKeeper for the children of the "<i>/echo</i>" node to determine the next leader. With the chosen strategy, if the predecessor dies, only its follower executes the call to get the children of "<i>/echo</i>". The example provided herewith spawns two Echo Servers and has a client that calls the server. One of the two servers is considered the leader and assumes the responsibility of servicing the requests of the client. If the leader dies, the successor Echo Server assumes leadership and starts servicing client requests. The client will connect to the next leader automatically upon the death of the current leader. The Echo Server code is shown below:
<br />
<pre class="java" name="code">public class EchoServer extends Thread implements IZkDataListener, IZkStateListener {
private ZkClient zkClient;
// My ZNode
private SequentialZkNode zkNode;
// My leader ZNode
private SequentialZkNode zkLeaderNode;
private ServerSocket serverSocket;
// Port to start server on
private final Integer port;
private final List<SlaveJob> slaves;
private final Object mutex = new Object();
private final AtomicBoolean start = new AtomicBoolean(false);
private final Random random = new Random();
public EchoServer(int port, int zkPort) {
this.port = port;
this.zkClient = new ZkClient("localhost:" + zkPort);
this.slaves = new ArrayList<SlaveJob>();
}
@Override
public void run() {
try {
// Sleep briefly to randomize leader selection
Thread.sleep(random.nextInt(1000));
// Create an ephemeral sequential node
zkNode = ZkUtils.createEphermalNode("/echo", port, zkClient);
// Find all children
NavigableSet<SequentialZkNode> nodes = ZkUtils.getNodes("/echo", zkClient);
// Find my leader
zkLeaderNode = ZkUtils.findLeaderOfNode(nodes, zkNode);
// If I am leader, enable me to start
if (zkLeaderNode.getPath().equals(zkNode.getPath())) {
start.set(true);
} else {
// Set a watch on next leader path
zkClient.subscribeDataChanges(zkLeaderNode.getPath(), this);
zkClient.subscribeStateChanges(this);
}
synchronized (mutex) {
while (!start.get()) {
LOG.info("Server on Port:" + port + " waiting to spawn socket....");
mutex.wait();
}
}
// Start accepting connections and servicing requests
this.serverSocket = new ServerSocket(port);
Socket clientSocket = null;
while (!Thread.currentThread().isInterrupted() && !shutdown) {
clientSocket = serverSocket.accept();
// ... start slave job
}
}
catch (Exception e) {
....
}
}
private void electNewLeader() {
final NavigableSet<SequentialZkNode> nodes = ZkUtils.getNodes("/echo", zkClient);
if (!zkClient.exists(zkLeaderNode.getPath())) {
// My Leader does not exist, find the next leader above
zkLeaderNode = ZkUtils.findLeaderOfNode(nodes, zkNode);
zkClient.subscribeDataChanges(zkLeaderNode.getPath(), this);
zkClient.subscribeStateChanges(this);
}
// If I am the leader then start
if (zkNode.getSequence().equals(nodes.first().getSequence())) {
LOG.info("Server on port:" + port + " will now be notified to assume leadership");
synchronized (mutex) {
start.set(true);
mutex.notify();
}
}
}
.....
@Override
public void handleDataDeleted(String dataPath) throws Exception {
if (dataPath.equals(zkLeaderNode.getPath())) {
// Leader gone away
LOG.info("Received a notification that Leader on path:" + dataPath
+ " has gone away...electing new leader");
electNewLeader();
}
}
}
</pre>
<br />
The Echo Client, which connects to the leader of the Echo Server ensemble, is shown below:
<br />
<pre class="java" name="code">public class EchoClient implements IZkDataListener, IZkStateListener, IZkChildListener {
private Connection connection;
private Object mutex = new Object();
private final ZkClient zkClient;
private SequentialZkNode connectedToNode;
public EchoClient(int zkPort) {
this.zkClient = new ZkClient("localhost:" + zkPort);
}
private Connection getConnection() throws UnknownHostException, IOException, InterruptedException {
synchronized (mutex) {
if (connection != null) {
return connection;
}
NavigableSet<SequentialZkNode> nodes = null;
// Check to see if an Echo server exists; if not, listen on /echo for
// a server to become available
while ((nodes = ZkUtils.getNodes("/echo", zkClient)).size() == 0) {
LOG.info("No echo service nodes ...waiting...");
zkClient.subscribeChildChanges("/echo", this);
mutex.wait();
}
// Get the leader and connect
connectedToNode = nodes.first();
Integer port = zkClient.readData(connectedToNode.getPath());
connection = new Connection(new Socket("localhost", port));
// Subscribe to Server changes
zkClient.subscribeDataChanges(connectedToNode.getPath(), this);
zkClient.subscribeStateChanges(this);
}
return connection;
}
public String echo(String command) throws InterruptedException {
while (true) {
try {
return getConnection().send(command);
}
catch (InterruptedException e) {
throw e;
}
catch (Exception e) {
synchronized (mutex) {
if (connection != null) {
connection.close();
connection = null;
}
}
}
}
}
private static class Connection {
// Code that connects and sends data to Echo Server
public String send(String message) {
....
}
}
@Override
public void handleDataDeleted(String dataPath) throws Exception {
synchronized (mutex) {
if (dataPath.equals(connectedToNode.getPath())) {
LOG.info("Client Received notification that Server has died..notifying to re-connect");
if (connection != null) {
connection.close();
}
if (connectedToNode != null) {
connectedToNode = null;
}
}
mutex.notify();
}
}
@Override
public void handleChildChange(String parentPath, List<String> currentChilds) throws Exception {
synchronized (mutex) {
LOG.info("Got a notification about a Service coming up...notifying client");
mutex.notify();
}
}
}
</pre>
<br />
A simple test of the example starts an EchoClient and two servers. One of the servers assumes leadership and starts servicing requests while the other waits to assume leadership. The client in turn sends messages, which are serviced by the leader. When the leader terminates, messages are serviced by the successor that assumes leadership.
<br />
<pre class="java" name="code"> @Test
public void simpleLeadership() throws InterruptedException {
EchoClient client = new EchoClient(ZK_PORT);
EchoServer serverA = new EchoServer(5555, ZK_PORT);
EchoServer serverB = new EchoServer(5556, ZK_PORT);
serverA.start();
serverB.start();
// Send messages
for (int i = 0; i < 10; i++) {
System.out.println("Client Sending:Hello-" + i);
System.out.println(client.echo("Hello-" + i));
}
if (serverA.isLeader()) {
serverA.shutdown();
} else {
serverB.shutdown();
}
for (int i = 0; i < 10; i++) {
System.out.println("Client Sending:Hello-" + i);
System.out.println(client.echo("Hello-" + i));
}
serverA.shutdown();
serverB.shutdown();
}
</pre>
On running the test, you can see output similar to:
<br />
<pre class="java" name="code">20:18:49 INFO - com.welflex.zookeeper.EchoClient.getConnection(44) | No echo service nodes ...waiting...
20:18:49 INFO - com.welflex.zookeeper.EchoClient.handleChildChange(171) | Got a notification about a Service coming up...notifying client
20:18:49 INFO - com.welflex.zookeeper.EchoServer.run(85) | Server on Port:5556 is now the Leader. Starting to accept connections...
Client Sending:Hello-0
Echo:Hello-0
Client Sending:Hello-1
Echo:Hello-1
.....
20:18:49 INFO - com.welflex.zookeeper.EchoServer.run(122) | Server on Port:5556 has been shutdown
20:18:49 INFO - com.welflex.zookeeper.EchoClient.getConnection(44) | No echo service nodes ...waiting...
Server has died..notifying to re-connect
20:18:49 INFO - com.welflex.zookeeper.EchoClient.getConnection(44) | No echo service nodes ...waiting...
20:18:49 INFO - com.welflex.zookeeper.EchoClient.handleChildChange(171) | Got a notification about a Service coming up...notifying client
20:18:49 INFO - com.welflex.zookeeper.EchoServer.run(85) | Server on Port:5555 is now the Leader. Starting to accept connections...
20:18:49 INFO - com.welflex.zookeeper.EchoClient.handleChildChange(171) | Got a notification about a Service coming up...notifying client
....
Echo:Hello-7
Client Sending:Hello-8
Echo:Hello-8
Client Sending:Hello-9
Echo:Hello-9
....
</pre>
<br />
<b>2. Rendezvous/Barrier Example: </b><br />
<br />
Distributed systems can use the concept of a <a href="http://zookeeper.apache.org/doc/current/zookeeperTutorial.html#sc_barriers">barrier</a> to block the processing of a certain task until other members of the system ensemble are available, after which all the participating systems can proceed with their tasks. Having a little fun here :-) with a fictional case where <a href="http://en.wikipedia.org/wiki/00_Agent">Double O agents</a> of <a href="http://en.wikipedia.org/wiki/Secret_Intelligence_Service">MI-6</a> meet to present their reports, and reports are not edited here if you know what I mean ;-). Agents join the meeting but cannot start presenting until all the expected agents have joined, after which they are free to present in parallel (agents are designed to grasp from multiple presenters; after all, the 007 variety are trained at such things). Agents can present but cannot leave the briefing until all presenters have completed their presentations. An agent upon start up creates a ZNode under the meeting identifier, say "<i><b>/M-debrief</b></i>", and then waits for all other agents to join before beginning its presentation, i.e., <i><b>barrier entry</b></i>. After its presentation, the agent removes its ZNode and then waits for the other agents to do the same before leaving the briefing, i.e., <i><b>barrier exit</b></i>. With that said, the Agent code looks like:
<br />
<pre class="java" name="code">public class Agent extends Thread implements IZkChildListener {
// Number of agents total
private final int agentCount;
private final String agentNumber;
private final Object mutex = new Object();
private final Random random = new Random();
private SequentialZkNode agentRegistration;
private final String meetingId;
public Agent(String agentNumber, String meetingId, int agentCount, int zkPort) {
this.agentNumber = agentNumber;
this.meetingId = meetingId;
this.agentCount = agentCount;
this.zkClient = new ZkClient("localhost:" + zkPort);
zkClient.subscribeChildChanges(meetingId, this);
}
private void joinBriefing() throws InterruptedException {
Thread.sleep(random.nextInt(1000));
agentRegistration = ZkUtils.createEpheremalNode(meetingId, agentNumber, zkClient);
LOG.info("Agent:" + agentNumber + " joined Briefing:" + meetingId);
// Wait for all other agents to join the meeting
while (true) {
synchronized (mutex) {
List<String> list = zkClient.getChildren(meetingId);
if (list.size() < agentCount) {
LOG.info("Agent:" + agentNumber + " waiting for other agents to join before presenting report...");
mutex.wait();
}
else {
break;
}
}
}
}
private void presentReport() throws InterruptedException {
LOG.info("Agent:" + agentNumber + " presenting report...");
Thread.sleep(random.nextInt(1000));
LOG.info("Agent:" + agentNumber + " completed report...");
// Completion of a report is identified by deleting their ZNode
zkClient.delete(agentRegistration.getPath());
}
private void leaveBriefing() throws InterruptedException {
// Wait for all agents to complete
while (true) {
synchronized (mutex) {
List<String> list = zkClient.getChildren(meetingId);
if (list.size() > 0) {
LOG.info("Agent:" + agentNumber
+ " waiting for other agents to complete their briefings...");
mutex.wait();
}
else {
break;
}
}
}
LOG.info("Agent:" + agentNumber + " left briefing");
}
@Override
public void run() {
joinBriefing();
presentReport();
leaveBriefing();
}
@Override
public void handleChildChange(String parentPath, List<String> currentChilds) throws Exception {
synchronized (mutex) {
mutex.notifyAll();
}
}
}
</pre>
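The join-present-leave protocol above is a distributed double barrier: one barrier on entry, one on exit. Within a single JVM the same shape can be expressed with two java.util.concurrent.CyclicBarrier instances; this sketch is only an analogy, since ZooKeeper is what extends the idea across processes:

```java
import java.util.concurrent.CyclicBarrier;

public class BriefingSimulation {
    public static void main(String[] args) {
        final int agents = 3;
        // Barrier entry: nobody presents until everyone has joined
        final CyclicBarrier entry = new CyclicBarrier(agents);
        // Barrier exit: nobody leaves until everyone has presented
        final CyclicBarrier exit = new CyclicBarrier(agents);

        for (int i = 0; i < agents; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    entry.await();
                    System.out.println("Agent " + id + " presenting");
                    exit.await();
                    System.out.println("Agent " + id + " left briefing");
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```

Every "presenting" line is guaranteed to appear before any "left briefing" line, mirroring the ZNode count checks in joinBriefing() and leaveBriefing().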
Running the test, one would see something like the following, where agents join the meeting, wait for the others before presenting, present, and then leave after everyone completes. Note that the output is edited in the interest of real estate:
<br />
<pre class="java" name="code">21:02:17 INFO - com.welflex.zookeeper.Agent.joinBriefing(38) | Agent:003 joined Briefing:/M-debrief
21:02:17 INFO - com.welflex.zookeeper.Agent.joinBriefing(45) | Agent:003 waiting for other agents to join before presenting report...
21:02:17 INFO - com.welflex.zookeeper.Agent.joinBriefing(38) | Agent:005 joined Briefing:/M-debrief
21:02:17 INFO - com.welflex.zookeeper.Agent.joinBriefing(45) | Agent:005 waiting for other agents to join before
.....
:17 INFO - com.welflex.zookeeper.Agent.joinBriefing(45) | Agent:005 waiting for other agents to join before presenting report...
......
21:02:17 INFO - com.welflex.zookeeper.Agent.joinBriefing(45) | Agent:006 waiting for other agents to join before presenting report...
21:02:18 INFO - com.welflex.zookeeper.Agent.joinBriefing(38) | Agent:007 joined Briefing:/M-debrief
21:02:18 INFO - com.welflex.zookeeper.Agent.joinBriefing(45) | Agent:007 waiting for other agents to join before presenting report...
21:02:18 INFO - com.welflex.zookeeper.Agent.presentReport(57) | Agent:002 presenting report...
21:02:18 INFO - com.welflex.zookeeper.Agent.presentReport(57) | Agent:005 presenting report...
21:02:18 INFO - com.welflex.zookeeper.Agent.presentReport(57) | Agent:007 presenting report...
21:02:18 INFO - com.welflex.zookeeper.Agent.presentReport(57) | Agent:003 presenting report...
21:02:18 INFO - com.welflex.zookeeper.Agent.presentReport(57) | Agent:001 presenting report...
....
21:02:18 INFO - com.welflex.zookeeper.Agent.leaveBriefing(69) | Agent:002 waiting for other agents to complete their briefings...
21:02:18 INFO - com.welflex.zookeeper.Agent.leaveBriefing(69) | Agent:002 waiting for other agents to complete their briefings...
21:02:18 INFO - com.welflex.zookeeper.Agent.presentReport(59) | Agent:000 completed report...
21:02:18 INFO - com.welflex.zookeeper.Agent.leaveBriefing(69) | Agent:000 waiting for other agents to complete their briefings...
21:02:18 INFO - com.welflex.zookeeper.Agent.leaveBriefing(69) | Agent:002 waiting for other agents to complete their briefings...
21:02:18 INFO - com.welflex.zookeeper.Agent.leaveBriefing(69) | Agent:000 waiting for other agents to complete their briefings...
21:02:18 INFO - com.welflex.zookeeper.Agent.presentReport(59) | Agent:007 completed report...
21:02:18 INFO - com.welflex.zookeeper.Agent.leaveBriefing(69) | Agent:007 waiting for other agents to complete their briefings...
....
21:02:19 INFO - com.welflex.zookeeper.Agent.leaveBriefing(78) | Agent:007 left briefing
21:02:19 INFO - com.welflex.zookeeper.Agent.leaveBriefing(78) | Agent:003 left briefing
21:02:19 INFO - com.welflex.zookeeper.Agent.leaveBriefing(78) | Agent:002 left briefing
...
21:02:19 INFO - com.welflex.zookeeper.Agent.leaveBriefing(78) | Agent:000 left briefing
</pre>
<br />
<b>Running the code:</b><br />
<br />
<a href="https://github.com/sanjayvacharya/sleeplessinslc/tree/master/zkexample">Download a Maven example of the above code from HERE</a>. Execute "<b>mvn test</b>" to see the example in action, or import the project into Eclipse to dissect the code. The example runs its tests against a single ZooKeeper server, but there is no reason the same cannot be expanded to a larger ZooKeeper ensemble.<br />
<br />
<b>Conclusion and Credits:
</b><br />
<br />
The examples shown are highly influenced by the <a href="http://zookeeper.apache.org/doc/current/zookeeperTutorial.html">examples provided at the ZooKeeper</a> site. The code does not use the ZooKeeper API directly but instead a wrapper around it, <a href="https://github.com/sgroschupf/zkclient">ZkClient</a>, which provides a much easier experience than the default ZooKeeper API. The examples also rely heavily on the <a href="http://techblog.outbrain.com/2011/07/leader-election-with-zookeeper/">excellent article and supporting code by Erez Mazor</a>, who utilizes the Spring Framework very nicely to demonstrate Leader Election and control of the same. I simply loved his code and images and have used parts of them in the above demonstration. I am sure I have only touched the surface of ZooKeeper and am hoping to learn more. If my understanding of any of the concepts mentioned above is flawed, I would love to hear about it. Again, a very Happy 2012! Keep the faith, the world won't end on December 21st.</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com1tag:blogger.com,1999:blog-4667121987470696359.post-42794488691949615572011-12-28T20:52:00.000-07:002011-12-28T20:52:41.192-07:00WebLogic JMS Partitioned Distributed Topics and Shared Subscriptions<div dir="ltr" style="text-align: left;" trbidi="on">
I had previously blogged about the challenges of <a href="http://sleeplessinslc.blogspot.com/2009/06/scaling-in-jms-worldin-particular.html">scaling topics with durable subscribers</a>. In JMS, with topics, one can typically have a single topic subscriber for a particular [<b>Client ID, Subscription Name</b>] tuple.<br />
<br />
In the example shown below the Order Topic has two durable subscribers, the <b>Order Processor</b> and the <b>Auditor</b>. Both of these get the message sent to the topic, but at any given time there can be exactly one of each consuming the message. There cannot be competing durable subscribers as in the case of queues. This translates to a scalability and availability challenge:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhv56m2IcOboJTN6xeCNGn3fVHmvdiYcG1ZEwUc-0zBTYCYvC0CTm2uRpuNk-cgdwtRA50jbaopdAMu37aLCvuPVpZA5vzgs3Q5xlSr9i3o47kHXjH25AS5HkDHOStsRcI9rpbKOHybblcu/s1600/simpleTopic.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="299" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhv56m2IcOboJTN6xeCNGn3fVHmvdiYcG1ZEwUc-0zBTYCYvC0CTm2uRpuNk-cgdwtRA50jbaopdAMu37aLCvuPVpZA5vzgs3Q5xlSr9i3o47kHXjH25AS5HkDHOStsRcI9rpbKOHybblcu/s640/simpleTopic.png" width="640" /></a></div>
<a href="http://www.oracle.com/us/products/middleware/application-server/index.html">Oracle WebLogic</a> has introduced the concept of a <a href="http://docs.oracle.com/cd/E21764_01/web.1111/e13727/dds.htm">Partitioned Distributed Topic</a> and shared connection subscriptions, which together allow more than one durable subscriber for a given subscription. In the figure shown below, with a shared subscription, consumers of an "<b>application</b>" compete for messages of a Topic, which promotes scalability and availability: there are two Order Processors competing for messages, and likewise two Auditors. At no given time will a message be delivered to more than one consumer of the same "application":<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgG0hf9wEcYfBueeb_6wXe5HEB8NxYY1OdyAZiNL6RiaoQCGxHvjfa-Bkf7S6LCqgJKVWYRRFdx9fyp_Pnwu0snkpbj-N7CuKUJ1xdDZ1K9u4-ZNAICvAk_ahLPmSVm0T2b1_uBO0Dtqcre/s1600/pdistTopicSimple.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="307" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgG0hf9wEcYfBueeb_6wXe5HEB8NxYY1OdyAZiNL6RiaoQCGxHvjfa-Bkf7S6LCqgJKVWYRRFdx9fyp_Pnwu0snkpbj-N7CuKUJ1xdDZ1K9u4-ZNAICvAk_ahLPmSVm0T2b1_uBO0Dtqcre/s640/pdistTopicSimple.png" width="640" /></a></div>
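The semantics can be pictured without any WebLogic at all: a shared subscription effectively gives each application its own competing-consumer queue fed by the topic. The in-process sketch below (plain java.util.concurrent; the names are made up) fans each published message out once per application, and workers within an application then compete for it:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SharedSubscriptionSketch {
    public static void main(String[] args) throws InterruptedException {
        // One queue per "application", i.e. per shared subscription
        BlockingQueue<String> orderProcessors = new LinkedBlockingQueue<>();
        BlockingQueue<String> auditors = new LinkedBlockingQueue<>();

        // Publishing to the "topic" fans the message out once per application
        for (int i = 0; i < 4; i++) {
            String msg = "order-" + i;
            orderProcessors.put(msg);
            auditors.put(msg);
        }

        // Within one application, consumers compete: each message is
        // taken by exactly one of the two workers
        Runnable worker = () -> {
            String msg;
            while ((msg = orderProcessors.poll()) != null) {
                System.out.println(Thread.currentThread().getName() + " processed " + msg);
            }
        };
        Thread w1 = new Thread(worker, "processor-1");
        Thread w2 = new Thread(worker, "processor-2");
        w1.start();
        w2.start();
        w1.join();
        w2.join();

        // The Auditor application still holds its own copy of every message
        System.out.println("auditor backlog: " + auditors.size()); // prints auditor backlog: 4
    }
}
```

Each order is processed exactly once by the Order Processor application, while the Auditor application retains its own copies, which is the behavior the shared subscription feature provides across a cluster.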
WebLogic introduces the concept of an <b>UNRESTRICTED </b>Client ID policy. With this setting on a topic, more than one connection in a cluster can share the same Client ID. The standard option of <b>RESTRICTED </b>enforces that only a single connection with a particular Client ID can exist in a cluster.<br />
<br />
<b>Sharing Subscriptions:</b><br />
There are two policies for Subscription sharing:<br />
<br />
<b>1. Exclusive</b> - This is the default and all subscribers created using the connection factory cannot share subscriptions with any other subscribers.<br />
<br />
<b>2. Sharable</b> - Subscribers created using this connection factory can share their subscriptions with other subscribers. Consumers can share a durable subscription if they have the same [<b>Client ID, Client ID policy and subscription name</b>].<br />
<br />
To promote HA, set the Subscription on the Topic as Shareable.<br />
<br />
When creating a Topic in WebLogic, set the Forwarding Policy to Partitioned. This causes a message sent to a partitioned distributed topic to be sent to a single physical member. In addition, the message will not be forwarded to other members of the cluster even if there are no consumers on the current physical member.<br />
<br />
If we want an HA solution, then listeners need to connect to each and every physical member across the cluster. Consider the following figure: on Consumer Machine 1 there are two consumers of Type A and two consumers of Type B, and the same applies to Consumer Machine 2. It is important to note that consumers of a given type, or application, connect to both physical members of the cluster. Doing so ensures that if one consumer machine dies unexpectedly, the other can continue to function, ensuring availability:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgigP4N3x-s5y9maV9PsBh3yKs3X89wzZs5_WgxtpHwmoc5pD0uZpq_E8Vf0p0NbopsdH-JOyF9txwQH_yJbplrxvmuzY3e28gYLBOibDwRTdDD4iNyixsbeHanTc21kYp92XHS6DQ4cBvr/s1600/pdisttopic.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgigP4N3x-s5y9maV9PsBh3yKs3X89wzZs5_WgxtpHwmoc5pD0uZpq_E8Vf0p0NbopsdH-JOyF9txwQH_yJbplrxvmuzY3e28gYLBOibDwRTdDD4iNyixsbeHanTc21kYp92XHS6DQ4cBvr/s400/pdisttopic.png" width="400" /></a></div>
<br />
The above can be achieved using <a href="http://static.springsource.org/spring/docs/2.5.x/reference/jms.html">Spring Framework's Message Listener Container</a> with some wrapper code as shown below where the PartitionedTopicContainer is a container of containers connecting to each physical member of the topic with the same client ID and subscription name:<br />
<pre class="java" name="code">public class PartitionedTopicContainer {
private final List<DefaultMessageListenerContainer> containers;
public PartitionedTopicContainer(String clientID, String subscriptionName, ConnectionFactory connectionFactory, Destination ...physicalTopicMembers) {
this.containers = Lists.newArrayList();
for (Destination physicalMember : physicalTopicMembers) {
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory);
container.setDestination(physicalMember);
container.setClientId(clientID);
container.setDurableSubscriptionName(subscriptionName);
// Track the container so the listener, start and shutdown reach every member
this.containers.add(container);
}
}
public void setMessageListener(Object listener) {
for (DefaultMessageListenerContainer container : containers) {
container.setMessageListener(listener);
}
}
public void start() {
for (DefaultMessageListenerContainer container : containers) {
container.start();
}
}
public void shutdown() {
for (DefaultMessageListenerContainer container : containers) {
container.shutdown();
}
}
}
</pre>
The above container could then be used as follows:
<br />
<pre class="java" name="code">
// Obtain each physical member of the partitioned distributed topic
Destination physicalMember1 = (Destination) context.lookup("Server1@orderTopic");
Destination physicalMember2 = (Destination) context.lookup("Server2@orderTopic");
// Container for the Order Processor
PartitionedTopicContainer subscriber1 = new PartitionedTopicContainer("orderConnectionId", "orderProcSubscription", connectionFactory, physicalMember1, physicalMember2);
subscriber1.setMessageListener(new SessionAwareMessageListener<TextMessage>() {
public void onMessage(...) {
System.out.println(Thread.currentThread().getId() + " of subscriber order processor got a message...");
...
}
});
// Container for the Auditor
PartitionedTopicContainer subscriber2 = new PartitionedTopicContainer("auditorConnectionId", "auditorSubscription", connectionFactory, physicalMember1, physicalMember2);
subscriber2.setMessageListener(new SessionAwareMessageListener<TextMessage>() {
public void onMessage(...) {
System.out.println(Thread.currentThread().getId() + " of subscriber auditor got a message...");
...
}
});
subscriber1.start();
subscriber2.start();
</pre>
<br />
<br />
The parts of the code above where a consumer of the API has to look up each and every physical member and provide them to the container are a lot of boilerplate, and do not account well for physical members becoming available or unavailable. Luckily, WebLogic provides the <a href="http://docs.oracle.com/cd/E17904_01/web.1111/e13727/dahelper.htm#CHDJFAEJ">JmsDestinationAvailabilityHelper </a>API, a way to listen to events relating to physical member availability and unavailability. The <i>PartitionedTopicContainer </i>shown above could easily be augmented with the availability helper API to be notified of physical destination availability and unavailability, and correspondingly start and stop the internal container for that physical destination. Pseudo-code of how this can be achieved with the above container is shown below:<br />
<br />
<pre class="java" name="code">public class PartitionedTopicContainer implements DestinationAvailabilityListener {
private final String partDistTopicJndi;
private final ConnectionFactory connectionFactory;
@GuardedBy("containerLock")
private final Map<String, DefaultMessageListenerContainer> containerMap;
private final Object containerLock = new Object();
// WebLogic handle used to unregister the availability listener
private RegistrationHandle registrationHandle;
private final String clusterUrl;
private final String clientID;
private final String subscriptionName;
private final CountDownLatch startLatch = new CountDownLatch(1);
public PartitionedTopicContainer(String clientID, String subscriptionName, String clusterUrl,
ConnectionFactory connectionFactory, String partDistTopicJndi) {
this.clusterUrl = clusterUrl;
this.clientID = clientID;
this.subscriptionName = subscriptionName;
this.partDistTopicJndi = partDistTopicJndi;
this.containerMap = Maps.newHashMap();
this.connectionFactory = connectionFactory;
}
public void start() throws InterruptedException {
Hashtable<String, String> jndiProperties = ...;
jndiProperties.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
jndiProperties.put(Context.PROVIDER_URL, clusterUrl);
JMSDestinationAvailabilityHelper dah = JMSDestinationAvailabilityHelper.getInstance();
// Register this container as a listener for destination events
registrationHandle = dah.register(jndiProperties, partDistTopicJndi, this);
// Wait for listener notification to start container
startLatch.await();
}
@Override
public void onDestinationsAvailable(String destJNDIName, List<DestinationDetail> physicalAvailableMembers) {
synchronized (containerLock) {
// For all Physical destinations, start a container
for (DestinationDetail detail : physicalAvailableMembers) {
Destination physicalMember = lookupPhysicalTopic(detail.getJNDIName());
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory);
container.setDestination(physicalMember);
container.setClientId(clientID);
container.setDurableSubscriptionName(subscriptionName);
System.out.println("Starting Container for physical Destination:" + detail);
container.start();
containerMap.put(detail.getJNDIName(), container);
}
}
startLatch.countDown();
}
@Override
public void onDestinationsUnavailable(String destJNDIName, List<DestinationDetail> physicalUnavailableMembers) {
synchronized (containerLock) {
// Shutdown all containers whose physical members are no longer available
for (DestinationDetail detail : physicalUnavailableMembers) {
DefaultMessageListenerContainer container = containerMap.remove(detail.getJNDIName());
container.shutdown();
}
}
}
@Override
public void onFailure(String destJndiName, Exception exception) {
// Looks like a cluster wide failure
shutdown();
}
public void shutdown() {
// Unregister for events about destination availability
registrationHandle.unregister();
// Shut down containers
synchronized (containerLock) {
for (Iterator<Map.Entry<String, DefaultMessageListenerContainer>> i = containerMap.entrySet().iterator(); i
.hasNext();) {
Map.Entry<String, DefaultMessageListenerContainer> entry = i.next();
System.out.println("Shutting down container for:" + entry.getKey());
entry.getValue().shutdown();
i.remove();
}
}
}
}
</pre>
<br />
Some of the things to remember when creating a partitioned distributed topic and shared subscription:
<br />
<br />
<b>1.</b> Connection Factory being used should have "Subscription Sharing Policy" set as "Shareable"
<br />
<b>2.</b> Forwarding policy on the Partitioned Distributed Topic should be set as "Partitioned"
<br />
<b>3.</b> Message forwarding will not occur, so subscribers must ensure connections exist to every physical member, else messages can pile up for the subscription on that topic<br />
<b>4.</b> If the server hosting a physical member is unavailable, then messages from that physical topic will be unavailable until the server is back up.<br />
<br />
Partitioned Distributed Topics and Shared Subscriptions look promising. One thing I still need to sort out is how one handles error destinations at a per-subscription level with WebLogic. Any passer-by with thoughts, please do shoot them my way.</div>Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com0tag:blogger.com,1999:blog-4667121987470696359.post-1576972698704021092011-10-04T20:02:00.000-06:002011-10-04T20:22:48.034-06:00Jersey JAX-RS and JAXB Schema Validation<div dir="ltr" style="text-align: left;" trbidi="on">
This BLOG is about <a href="http://jersey.java.net/nonav/documentation/latest/user-guide.html">Jersey</a> web services and Schema based validation with the same. I have also been playing with <a href="http://code.google.com/p/google-guice/">Google Guice</a> (I hear it, Spring Traitor, err not quite) :-) and figured I'd use Guice instead of Spring for once.
<br />
<br />
When un-marshalling XML using JAXB, schema-based validation facilitates stricter input checking. The <a href="http://jersey.java.net/nonav/documentation/latest/user-guide.html#d4e873">Jersey recommended approach</a> to enable schema-based validation is to create a <i>javax.ws.rs.ext.ContextResolver</i>. I have seen examples using a <i>javax.ws.rs.ext.MessageBodyReader</i> as well. The code demonstrated is largely based on and influenced by the <a href="http://jersey.java.net/nonav/documentation/latest/user-guide.html#d4e873">discussion on XSD validation between Andrew Cole and Paul Sandoz</a>.
The goals of the resolver are:
<br />
<ol style="text-align: left;">
<li>Enable schema based validation if desired</li>
<li>Provide the ability to enable a custom validation event handler</li>
<li>Enable formatted JAXB and the ability to set the character encoding</li>
</ol>
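As an aside, the core mechanism the resolver enables, validating an XML payload against an XSD, can be sketched with nothing but the JDK's <i>javax.xml.validation</i> API. The note XSD and payloads below are hypothetical stand-ins for illustration, not the schema from the example application:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class SchemaValidationSketch {

  // Hypothetical minimal XSD: a "note" element with a single required "title".
  static final String XSD =
      "<?xml version='1.0'?>"
      + "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
      + "  <xs:element name='note'>"
      + "    <xs:complexType><xs:sequence>"
      + "      <xs:element name='title' type='xs:string'/>"
      + "    </xs:sequence></xs:complexType>"
      + "  </xs:element>"
      + "</xs:schema>";

  public static boolean isValid(String xml) {
    try {
      Schema schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
          .newSchema(new StreamSource(new StringReader(XSD)));
      Validator validator = schema.newValidator();
      validator.validate(new StreamSource(new StringReader(xml)));
      return true;
    }
    catch (Exception e) { // SAXException on a schema violation
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(isValid("<note><title>Buy milk</title></note>"));    // true
    System.out.println(isValid("<note><body>no title here</body></note>")); // false
  }
}
```

This is exactly what an Unmarshaller does internally once a Schema has been set on it.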
It is said that JAXB contexts are better off cached, as they are expensive to create. I am only going by what I have read in various posts and discussions and do not have metrics of my own to back the claim.
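The caching in question reduces to "create the JAXBContext once per package and reuse it". A minimal stand-in sketch using the JDK's <i>ConcurrentHashMap</i> follows (the resolver below uses Guava's <i>MapMaker</i>, which filled this role before Java 8's <i>computeIfAbsent</i>); the expensive <i>JAXBContext.newInstance</i> call is faked with a counter so the sketch stays self-contained:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ContextCacheSketch {
  // Counts how many times the "expensive" factory actually runs.
  static final AtomicInteger CREATIONS = new AtomicInteger();

  // Stand-in for JAXBContext.newInstance(packageName), which is costly to create.
  static String expensiveContextFor(String packageName) {
    CREATIONS.incrementAndGet();
    return "context:" + packageName;
  }

  static final ConcurrentMap<String, String> CACHE = new ConcurrentHashMap<>();

  // Computes the context at most once per package; subsequent calls are cache hits.
  static String contextFor(String packageName) {
    return CACHE.computeIfAbsent(packageName, ContextCacheSketch::expensiveContextFor);
  }

  public static void main(String[] args) {
    contextFor("com.example.notes");
    contextFor("com.example.notes"); // cache hit; the factory is not re-run
    System.out.println(CREATIONS.get()); // prints 1
  }
}
```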
A generic JAXB Context resolver is shown here that accomplishes the above:
<br />
<pre class="java" name="code">public class JaxbContextResolver implements ContextResolver&lt;JAXBContext&gt; {
  static final ConcurrentMap&lt;String, JAXBContext&gt; CONTEXT_MAP = new MapMaker()
      .makeComputingMap(new Function&lt;String, JAXBContext&gt;() {
        @Override
        public JAXBContext apply(String fromPackage) {
          try {
            return JAXBContext.newInstance(fromPackage);
          }
          catch (JAXBException e) {
            throw new RuntimeException("Unable to create JAXBContext for Package:" + fromPackage, e);
          }
        }
      });

  private Schema schema;
  private ValidationEventHandler validationEventHandler;

  public JaxbContextResolver withSchema(Schema schema) {
    this.schema = schema;
    return this;
  }
  ...
  public JaxbContextResolver withValidationEventHandler(ValidationEventHandler validationEventHandler) {
    this.validationEventHandler = validationEventHandler;
    return this;
  }

  @Override
  public JAXBContext getContext(Class&lt;?&gt; type) {
    return new ValidatingJAXBContext(CONTEXT_MAP.get(type.getPackage().getName()),
        schema, formattedOutput, encoding, validationEventHandler);
  }
  ...
  public static class ValidatingJAXBContext extends JAXBContext {
    private final JAXBContext contextDelegate;
    ....
    private final ValidationEventHandler validationEventHandler;

    @Override
    public javax.xml.bind.Unmarshaller createUnmarshaller() throws JAXBException {
      javax.xml.bind.Unmarshaller unmarshaller = contextDelegate.createUnmarshaller();

      // Set the Validation Handler
      if (validationEventHandler != null) {
        unmarshaller.setEventHandler(validationEventHandler);
      }

      // Set the Schema
      if (validatingSchema != null) {
        unmarshaller.setSchema(validatingSchema);
      }

      return unmarshaller;
    }

    @Override
    public javax.xml.bind.Marshaller createMarshaller() throws JAXBException {
      javax.xml.bind.Marshaller m = contextDelegate.createMarshaller();
      m.setProperty("jaxb.formatted.output", formattedOutput);

      if (encoding != null) {
        m.setProperty("jaxb.encoding", encoding);
      }

      return m;
    }

    @Override
    public Validator createValidator() throws JAXBException {
      return contextDelegate.createValidator();
    }
  }
}
</pre>
The Context resolver itself is registered as a Singleton with Jersey in an Application class. For example:
<br />
<pre class="java" name="code">public class NotesApplication extends Application {
  private Set&lt;Object&gt; singletons;

  public NotesApplication() {
    ....
    Schema schema = null;

    try {
      schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI).newSchema(
          getClass().getResource("/notes.xsd"));
    }
    catch (SAXException e) {
      throw new RuntimeException("Error obtaining Schema", e);
    }

    JaxbContextResolver resolver = new JaxbContextResolver().withSchema(schema)
        .formatOutput(true);

    Set&lt;Object&gt; single = Sets.newHashSet();
    single.add(resolver);
    singletons = Collections.unmodifiableSet(single);
  }

  public Set&lt;Object&gt; getSingletons() {
    return singletons;
  }
}
</pre>
With the above, Unmarshallers are provided that utilize the supplied schema when unmarshalling a received payload. When an error occurs during unmarshalling, Jersey catches the <i>javax.xml.bind.UnmarshalException</i>, wraps it in a <i>javax.ws.rs.WebApplicationException</i> and throws the same. If one desires to customize the response sent back to the consumer, the only option appears to be to create a <i>javax.ws.rs.ext.ExceptionMapper</i> for <i>javax.ws.rs.WebApplicationException</i> and interrogate the cause to determine whether it was a <i>javax.xml.bind.UnmarshalException</i>. I could not find a way to map the <i>UnmarshalException</i> thrown by JAXB directly to an <i>ExceptionMapper</i>. If anyone has done so, I would love to hear about their solution.
<br />
<pre class="java" name="code">
@Provider
class WebApplicationExceptionMapper implements ExceptionMapper&lt;WebApplicationException&gt; {
  public Response toResponse(WebApplicationException e) {
    if (e.getCause() instanceof UnmarshalException) {
      return Response.status(400).entity("Tsk Tsk, XML is horrid and you provide the worst possible one?").build();
    }
    else {
      return Response.....
    }
  }
}
</pre>
Sometimes, one does not need the full-fledged validation benefits of a schema and can make do with a <i>ValidationEventHandler</i>. In such a case, one can provide the JaxbContextResolver with an instance of a <i>javax.xml.bind.ValidationEventHandler</i>. The handler can then be configured to throw a custom exception, which can in turn be mapped to a custom response using an <i>ExceptionMapper</i> as shown below. This approach is what appears in the <a href="http://jersey.java.net/nonav/documentation/latest/user-guide.html#d4e873">Jersey mailing list entry</a>:
<br />
<pre class="java" name="code">
JaxbContextResolver resolver = new JaxbContextResolver()
    .withValidationEventHandler(new ValidationEventHandler() {
      public boolean handleEvent(ValidationEvent event) {
        if (event.getSeverity() == ValidationEvent.WARNING) {
          // Log and return
          return true;
        }
        throw new CustomRuntimeException(event);
      }
    });

@Provider
class CustomRuntimeExceptionMapper implements ExceptionMapper&lt;CustomRuntimeException&gt; {
  public Response toResponse(CustomRuntimeException e) {
    return Response.status(400).entity("Oooops:" + e).build();
  }
}
</pre>
Note that throwing a custom Exception and catching the same via an <i>ExceptionMapper</i> will work only if one does NOT provide a Schema. Once a schema is in place, the exception thrown by the handler is caught and swallowed by JAXB, and one has to catch <i>WebApplicationException</i> and provide a custom response as described earlier.
<br />
<br />
An example provided herewith demonstrates a simple Notes service that manages the life cycle of a Note. It employs Guice to inject dependencies into Resource and Sub-Resource classes. The application also demonstrates the JaxbContextResolver and the registration of a schema for validating a received Note payload. Quite sweet actually. The details of integrating Guice and Jersey are not being detailed in this BLOG as there is already a pretty thorough <a href="http://blog.iparissa.com/google%E2%80%99s-app-engine-java/jersey-guice-on-google-app-engine-java/">BLOG by Iqbal Yusuf</a> that describes the same.
<br />
<br /><a href="http://home.comcast.net/~acharya.s/java/Notes.zip">
Download the maven example by clicking HERE</a>, extract the archive and simply execute <i>"mvn install"</i> at the root level to see a client-server interaction. If any passer-by is a JAXB Guru or has any tips on the Resolver, I would love to hear them. Enjoy!</div>
Sanjay Acharyahttp://www.blogger.com/profile/01933976956977901677noreply@blogger.com3