Saturday, September 9, 2017

Service mesh examples of Istio and Linkerd using Spring Boot and Kubernetes

Introduction

When working with Microservice Architectures, one has to deal with concerns like Service Registration and Discovery, Resilience, Invocation Retries, Dynamic Request Routing and Observability. Netflix pioneered a number of frameworks like Eureka, Ribbon and Hystrix that enabled Microservices to function at scale while addressing the mentioned concerns. Below is an example of communication between two services written in Spring Boot that utilize Netflix Hystrix and other components for resiliency, observability, service discovery, load balancing and other concerns.
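To make the resiliency concern concrete, here is a minimal plain-Java sketch of the circuit-breaker pattern that Hystrix popularised. This is deliberately not Hystrix's actual API (there you would extend HystrixCommand and override run() and getFallback()); the class and names below are invented for illustration:

```java
import java.util.function.Supplier;

// Minimal illustration of the circuit-breaker pattern: after a threshold of
// consecutive failures the breaker "opens" and the fallback is returned
// immediately, shielding the caller from a misbehaving dependency.
public class CircuitBreaker<T> {

    private final int failureThreshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public T call(Supplier<T> primary, Supplier<T> fallback) {
        if (consecutiveFailures >= failureThreshold) {
            return fallback.get();           // breaker open: short-circuit the call
        }
        try {
            T result = primary.get();
            consecutiveFailures = 0;         // a success closes the breaker again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;           // count the failure, degrade gracefully
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        CircuitBreaker<String> breaker = new CircuitBreaker<>(2);
        Supplier<String> failing = () -> { throw new RuntimeException("service down"); };
        for (int i = 0; i < 3; i++) {
            // the first two calls fail and fall back; the third short-circuits
            System.out.println(breaker.call(failing, () -> "cached-notes"));
        }
    }
}
```

Hystrix adds a lot on top of this (thread isolation, metrics, half-open probing), but the essential idea that the service mesh later absorbs is exactly this wrap-and-fall-back behavior.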



While the above architecture does solve some problems around resiliency and observability, it is still not ideal:

a. The solution does not apply to a Polyglot service environment, as the libraries mentioned cater to a Java stack. For example, one would need to find or build equivalent libraries for Node.js to participate in service discovery and support observability.

b. The service is burdened with 'communication-specific concerns' that it really should not have to deal with, like service registration/discovery, client-side load balancing, network retries and resiliency.

c. It introduces a tight coupling with the chosen technologies around resiliency, load balancing, discovery etc., making them very difficult to change in the future.

d. Due to the many dependencies of libraries like Ribbon, Eureka and Hystrix, there is an invasion of your application stack. If interested, I discussed this in my BLOG around Shared Service Clients.

Sidecar Proxy

Instead of the above, what if you could do the following, where a Sidecar is introduced that takes on all the responsibilities around Resiliency, Registration, Discovery, Load Balancing, Reporting client metrics and Telemetry?

The benefit of the above architecture is that it facilitates a Polyglot Service environment with low friction, as a majority of the concerns around networking between services are abstracted away to a sidecar, allowing the service to focus on business functionality. Lyft's Envoy is a great example of a Sidecar Proxy (or Layer 7 Proxy) that provides resiliency and observability to a Microservice Architecture.

Service Mesh

"A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application. In practice, the service mesh is typically implemented as an array of lightweight network proxies that are deployed alongside application code, without the application needing to be aware. (But there are variations to this idea, as we’ll see.)"  - Buoyant.io

In a Microservice Architecture deployed on a Cloud Native model, one would deal with hundreds of services, each running multiple instances that are created and destroyed by the orchestrator. Having the common cross-cutting functionality like Circuit Breaking, Dynamic Routing, Service Discovery, Distributed Tracing and Telemetry abstracted into a fabric with which services communicate appears to be the way forward.

Istio and Linkerd are two Service Meshes that I have played with, and I will share a bit about each of them below.

Linkerd


Linkerd is an open source service mesh by Buoyant, developed primarily using Finagle and Netty. It can run on Kubernetes, DC/OS and also on a simple set of machines.

The Linkerd service mesh offers a number of features like:
  • Load Balancing
  • Circuit Breaking
  • Retries and Deadlines
  • Request Routing
It instruments top-line service metrics like Request Volume, Success Rates and Latency Distribution. With its Dynamic Request Routing, it enables Staging Services, Canaries and Blue-Green Deploys with minimal configuration, using a powerful routing language called DTABs.
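As an illustration of the dtab idea, a routing table is a list of prefix-rewrite rules. The sketch below follows the shape used in Linkerd's Kubernetes configurations; the namer and namespace names are placeholders, not taken from this project's linkerd-config.yaml:

```
/srv => /#/io.l5d.k8s/default/grpc;
/svc => /srv;
```

A logical name like /svc/product-gateway is rewritten step by step until it reaches the Kubernetes namer (/#/io.l5d.k8s), which resolves it to the service's pod endpoints; shifting a percentage of traffic to a canary is just an additional, more specific rule.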
There are a few ways that Linkerd can be deployed in Kubernetes. This blog will focus on deploying Linkerd as a Kubernetes DaemonSet, running a pod on each node of the cluster.



Istio


Istio (Greek for 'sail') is an open platform sponsored by IBM, Google and Lyft that provides a uniform way to connect, secure, manage and monitor Microservices. It supports Traffic Shaping between microservices while providing rich telemetry.

Of note:
  • Fine grained control of traffic behavior with routing rules, retries, failover and fault injection
  • Access Control, Rate Limits and Quota provisioning
  • Metrics and Telemetry
At this point Istio supports only Kubernetes, but the goal is to support additional platforms in the future. An Istio service mesh can be considered as logically consisting of:
  • A Data Plane of Envoy Sidecars that mediate all traffic between services
  • A Control Plane whose purpose is to manage and configure proxies to route and enforce traffic policies.

Product-Gateway Example

The Product Gateway that I have used in many previous posts is used here as well to demonstrate the Service Mesh. The one major difference is that the services use GRPC instead of REST. From a protocol perspective, an HTTP 1.X REST call is made to /product/{id} of the Product Gateway service. The Product Gateway service then fans out to the base product, inventory, price and reviews services using GRPC to obtain the different data points that represent the end Product. Proto schema elements from the individual services are used to compose the resulting Product proto. The same gateway example is used for both the Linkerd and Istio examples. The project provided does not explore all the features of the service mesh but instead gives you enough of an example to try Istio and Linkerd with GRPC services using Spring Boot.
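The fan-out itself can be sketched in plain Java. The four fetch methods below are stand-ins for the proto-generated GRPC stubs, and the names and values are illustrative rather than the project's actual API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ProductGatewaySketch {

    // Stand-ins for the proto-generated GRPC stubs of each backing service.
    static String fetchBaseProduct(long id)   { return "Plaid Button-down Shirt"; }
    static int    fetchInventory(long id)     { return 93; }
    static double fetchPrice(long id)         { return 38.99; }
    static List<String> fetchReviews(long id) { return Arrays.asList("Very Beautiful", "My husband loved it"); }

    // Fan out to the four services in parallel and join the results into one
    // response, mirroring what /product/{id} does over GRPC.
    static String getProduct(long id) {
        CompletableFuture<String>       base      = CompletableFuture.supplyAsync(() -> fetchBaseProduct(id));
        CompletableFuture<Integer>      inventory = CompletableFuture.supplyAsync(() -> fetchInventory(id));
        CompletableFuture<Double>       price     = CompletableFuture.supplyAsync(() -> fetchPrice(id));
        CompletableFuture<List<String>> reviews   = CompletableFuture.supplyAsync(() -> fetchReviews(id));

        return CompletableFuture.allOf(base, inventory, price, reviews)
                .thenApply(v -> base.join() + " | inventory=" + inventory.join()
                        + " | price=" + price.join() + " | reviews=" + reviews.join().size())
                .join();
    }

    public static void main(String[] args) {
        System.out.println(getProduct(9310301L));
    }
}
```

In the real gateway each supplier would be a non-blocking stub call; the composition pattern (fire all four, then join) is the same.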

The code for this example is available at: https://github.com/sanjayvacharya/sleeplessinslc/tree/master/product-gateway-service-mesh

You can import the project into your favorite IDE and run each of the Spring Boot services. Invoking http://localhost:9999/product/931030.html will call the Product Gateway, which will then invoke the other services using GRPC to finally return a minimalistic product page.

For running the project on Kubernetes, I installed Minikube on my Mac. Ensure that you can dedicate sufficient memory to your Minikube instance. I did not set up a local Docker registry but chose to use my local Docker images. Stack Overflow has a good posting on how to use local Docker images. In the Kubernetes deployment descriptors, you will notice that for the product gateway images, imagePullPolicy is set to Never. Before you proceed, to be able to use Minikube's Docker daemon, ensure that you have executed:

>eval $(minikube docker-env)

Install the Docker images for the different services of the project by executing the below from the root folder of the project:
>mvn install

Linkerd


In the sub-folder titled linkerd there are a few yaml files that are available.  The linkerd-config.yaml will set up linkerd and define the routing rules:
>kubectl apply -f ./linkerd-config.yaml

Once the above is completed you can access the Linkerd Admin application by doing the following:
>ADMIN_PORT=$(kubectl get svc l5d -o jsonpath='{.spec.ports[?(@.name=="admin")].nodePort}')
>open http://$(minikube ip):$ADMIN_PORT

The next step is to install the different product gateway services. The Kubernetes definitions for these services are present in the product-linkerd-grpc.yaml file.
>kubectl apply -f ./product-linkerd-grpc.yaml

Wait for a bit for Kubernetes to spin up the different pods and services. You should be able to execute 'kubectl get svc' and see something like the below showing the different services up:

We can now execute a call on the product-gateway and see it invoke the other services via the service mesh. Execute the following:

>HOST_IP=$(kubectl get po -l app=l5d -o jsonpath="{.items[0].status.hostIP}")
>INGRESS_LB=$HOST_IP:$(kubectl get svc l5d -o 'jsonpath={.spec.ports[2].nodePort}')
>http_proxy=$INGRESS_LB curl -s http://product-gateway/product/9310301

The above also demonstrates linkerd as an HTTP proxy. With http_proxy set, curl sends the proxy request to linkerd, which will then look up product-gateway via service discovery and route the request to an instance.
The above call should result in a JSON representation of the Product as shown below:
{"productId":9310301,"description":"Brio Milano Men's Blue and Grey Plaid Button-down Fashion Shirt","imageUrl":"http://ak1.ostkcdn.com/images/products/9310301/Brio-Milano-Mens-Blue-GrayWhite-Black-Plaid-Button-Down-Fashion-Shirt-6ace5a36-0663-4ec6-9f7d-b6cb4e0065ba_600.jpg","options":[{"productId":1,"optionDescription":"Large","inventory":20,"price":39.99},{"productId":2,"optionDescription":"Medium","inventory":32,"price":32.99},{"productId":3,"optionDescription":"Small","inventory":41,"price":39.99},{"productId":4,"optionDescription":"XLarge","inventory":0,"price":39.99}],"reviews":["Very Beautiful","My husband loved it","This color is horrible for a work place"],"price":38.99,"inventory":93}
You can subsequently install linkerd-viz, a monitoring application based on Prometheus and Grafana that provides metrics from Linkerd. There are three main categories of metrics visible on the dashboard: Top Line (cluster-wide success rate and request volume), Service Metrics (a section for each deployed service covering success rate, request volume and latency) and Per-instance Metrics (success rate, request volume and latency for every node in the cluster).
>kubectl apply -f ./linkerd-viz.yaml
Wait for a bit for the pods to come up and then you can view the Dashboard by executing:
>open http://$HOST_IP:$(kubectl get svc linkerd-viz -o jsonpath='{.spec.ports[0].nodePort}')

The dashboard will show you top-line service metrics and also metrics around the individual monitored services, as shown in the screenshots below:


You could also go ahead and install linkerd-zipkin to capture tracing data.

Istio


The Istio installation page is pretty thorough. It presents a few installation options; select the ones that make sense for your deployment. For my demonstration case, I selected the following:
>kubectl apply -f $ISTIO_HOME/install/kubernetes/istio-rbac-beta.yaml
>kubectl apply -f $ISTIO_HOME/install/kubernetes/istio.yaml

You can install metrics support by installing Prometheus, Grafana and Service Graph.
>kubectl apply -f $ISTIO_HOME/install/kubernetes/addons/prometheus.yaml
>kubectl apply -f $ISTIO_HOME/install/kubernetes/addons/grafana.yaml
>kubectl apply -f $ISTIO_HOME/install/kubernetes/addons/servicegraph.yaml

Install the product-gateway artifacts using the below command which uses istioctl kube-inject to automatically inject Envoy Containers in the different pods:
>kubectl apply -f <(istioctl kube-inject -f product-grpc-istio.yaml)

>export GATEWAY_URL=$(kubectl get po -l istio=ingress -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o 'jsonpath={.spec.ports[0].nodePort}')

If your configuration is deployed correctly, it should resemble something like the below:

You can now view an HTML version of the product by going to http://${GATEWAY_URL}/product/931030.html to see a primitive product page, or access a JSON representation by going to http://${GATEWAY_URL}/product/931030

Ensure that you have ServiceGraph and Grafana port-forwarding set up as described in the installation instructions and also shown below:

>kubectl port-forward $(kubectl get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
>kubectl port-forward $(kubectl get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088 &

To see the service graph and Istio Viz dashboard, you might want to send some traffic to the service.

>for i in {1..1000}; do echo -n .; curl -s http://${GATEWAY_URL}/product/9310301 > /dev/null; done

You should be able to access the Istio-Viz dashboard for top-line and detailed service metrics, as shown below, at http://localhost:3000/dashboard/db/istio-dashboard:



Similarly, you can access the Service Graph at http://localhost:8088/dotviz to see a graph, similar to the one shown below, depicting service interaction:

Conclusion & Resources


A service mesh architecture appears to be the way forward for Cloud Native deployments. The benefits provided by the mesh do not need re-iteration. Linkerd is more mature when compared to Istio, but Istio, although newer, has the strong backing of IBM, Google and Lyft to take it forward. How do they compare with each other? The BLOG post by Abhishek Tiwari comparing Linkerd and Istio features is a great read on the topic of service mesh and comparisons. Alex Leong has a nice YouTube presentation on Linkerd that is a good watch. Kelsey Hightower has a nice example-driven presentation on Istio.

Wednesday, August 16, 2017

SpringBoot Microservice Example using Docker, Kubernetes and fabric8

Introduction

Kubernetes is a container orchestration and scaling system from Google. It provides the ability to deploy, scale and manage container-based applications, with mechanisms such as service discovery, resource sizing, self healing etc. fabric8 is a platform for creating, deploying and scaling Microservices based on Kubernetes and Docker. As with all frameworks nowadays, it follows an 'opinionated' view of how it does these things. The platform also has support for Java-based (Spring Boot) microservices. OpenShift, the hosted container platform by RedHat, is based on fabric8. fabric8 promotes the concept of micro 'Service Teams' and independent CI/CD pipelines, if required, by providing microservice supporting tools like Logging, Monitoring and Alerting as readily integratable services.
From Gogs (git) and Jenkins (build/CI/CD pipeline) to ELK (Logging) and Grafana (Monitoring), you get them all as add-ons. There are many other add-ons available apart from the ones mentioned.

There is a very nice presentation on youtube around creating a Service using fabric8. This post looks at providing a similar tutorial by using the product-gateway example I had shared in a previous blog.

SetUp

For the sake of this example, I used Minikube version v0.20.0, and as my demo was done on a Mac, I used the xhyve driver, as I found it the most resource-friendly of the options available. The fabric8 version used was 0.4.133. I first installed and started Minikube and ensured that everything was functional by issuing the following commands:
>minikube status
minikube: Running
localkube: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.64.19
I then proceeded to install fabric8 by installing gofabric8. If for whatever reason the latest version of fabric8 does not work for you, you can try to install the version I used by going to the releases site. Start fabric8 by issuing the command
>gofabric8 start
The above will result in the downloading of a number of packages and then the launching of the fabric8 console. Be patient as this takes some time. You can issue the following command in a different window to see the status of the fabric8 setup:
>kubectl get pods -w
NAME                                      READY     STATUS    RESTARTS   AGE
configmapcontroller-4273343753-2g03x      1/1       Running   2          6d
exposecontroller-2031404732-lz7xs         1/1       Running   2          6d
fabric8-3873669821-7gftx                  2/2       Running   3          6d
fabric8-docker-registry-125311296-pm0hq   1/1       Running   1          6d
fabric8-forge-1088523184-3f5v4            1/1       Running   1          6d
gogs-1594149129-wgsh3                     1/1       Running   1          6d
jenkins-56914896-x9t58                    1/1       Running   2          6d
nexus-2230784709-ccrdx                    1/1       Running   2          6d
Once fabric8 has started successfully, you can validate it by issuing:
>gofabric8 validate
If all is good you should see something like the below:
Also note that if your fabric8 console did not launch, you can launch it by issuing the following command, which will result in the console being opened in a new window:
>gofabric8 console

At this point, you should be presented with a view of the default team:

Creating a fabric8 Microservice

There are a few ways you can generate a fabric8 microservice using Spring Boot. The fabric8 console has an option to generate a project that pretty much does what Spring Initializr does. Following that wizard will take you through to completion of your service. In my case, I was porting the existing product-gateway example over, so I followed a slightly different route. I did the following for each of the services:

Setup Pom


Run the following command on the base of your project:
>mvn io.fabric8:fabric8-maven-plugin:3.5.19:setup
This will result in your pom being augmented with the fabric8 plugin (f8-m-p). The plugin itself is primarily focused on building Docker images and creating Kubernetes resource descriptors. If you are simply using the provided product-gateway as an example, you don't need to run the setup step, as it has been pre-configured with the necessary plugin.
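For reference, after the setup goal runs, your pom's build section should contain a plugin entry along these lines (using the version from the command above; any additional configuration the goal generates is omitted here):

```xml
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.19</version>
</plugin>
```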

Import Project


From the root level of your project, issue the following:
>mvn fabric8:import
You will be asked for the git user name and password. You can provide gogsadmin/RedHat$1. Your project should now be imported into fabric8. You can access the Gogs(git) repository from the fabric8 console or by issuing the following on the command line:
>gofabric8 service gogs
You can login using the same credentials mentioned above and see that your project has been imported into git.

Configuring your Build


At this point on the fabric8 console if you click on Team Dashboard, you should see your project listed.
Click on the hyperlink showing the project to open the tab where you are asked to configure secrets.
For the scope of this example select the default-gogs-git and click Save Selection on the top right. You will then be presented with a page that allows you to select the build pipeline for this project. Wait for a bit for this page to load.

Select the last option with Canary test and click Next. The build should kick off, and during the build process you should be prompted to approve the next step to go to production, something like the below:

Once you select 'Proceed', your build will move to production. At this point what you have is a project out in production, ready to receive traffic. Once you import all the applications in the product-gateway example, you should see a view like the below showing the deployments across environments:


Your production environment should look something like the below showing your application and other supporting applications:


If your product-gateway application is deployed successfully, you should be able to open it up and access the product resource of the one product that the application supports at:
http://192.168.64.19:31067/product/9310301. The above call will result in the invoking of the base product, reviews, inventory and pricing services to provide a combined view of the product.
So how is the product-gateway discovering the base product and other services to invoke?
Kubernetes provides for service discovery natively using DNS or via environment variables. For the sake of this example I used environment variables, but there is no reason you could not use the former. The following is a code snippet of the product-gateway showing the different configurations being injected:
@RestController
public class ProductResource {
  @Value("${BASEPRODUCT_SERVICE_HOST:localhost}")
  private String baseProductServiceHost;
  @Value("${BASEPRODUCT_SERVICE_PORT:9090}")
  private int baseProductServicePort;

  @Value("${INVENTORY_SERVICE_HOST:localhost}")
  private String inventoryServiceHost;
  @Value("${INVENTORY_SERVICE_PORT:9091}")
  private String inventoryServicePort;

  @Value("${PRICE_SERVICE_HOST:localhost}")
  private String priceServiceHost;
  @Value("${PRICE_SERVICE_PORT:9094}")
  private String priceServicePort;

  @Value("${REVIEWS_SERVICE_HOST:localhost}")
  private String reviewsServiceHost;
  @Value("${REVIEWS_SERVICE_PORT:9093}")
  private String reviewsServicePort;

  @RequestMapping(value = "/product/{id}", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
  @ResponseBody
  .....
}
At this point, what you have is a working pipeline. What about Logging, Monitoring etc.? These are available as add-ons to your environment. For Logging, I went ahead and installed the Logging template, which sets up fluentd plus the ELK stack for searching through logs. You should then be able to search for events through Kibana.

Conclusion

The example should have given you a good start into how dev pipelines can work with Kubernetes and containers, and how fabric8's opinionated approach facilitates the same. The ability to push out immutable architectures using a DevOps pipeline like fabric8 is great for a team developing Microservices with Service Teams. I ran this on my laptop and must admit, it was not always smooth sailing. Seeing the forums around fabric8, the folk seem to be really responsive to the different issues surfaced. The documentation around the product is quite extensive as well. Kudos to the team behind it. I wish I had the time to install Chaos Monkey and the Chat support to see those add-ons in action. I am also interested in understanding what OpenShift adds on top of fabric8 for their managed solution. The code backing this example is available to clone from GitHub.

Friday, August 11, 2017

Shared Client Libraries - A Microservices Anti-Pattern, a study into why?

Introduction


As I continue my adventures in demystifying Microservices for myself, I keep hearing of this anti-pattern: 'Shared Service Client', 'Shared Binary Client' or even 'Client Library'. Sam Newman, in his book on Microservices, says, "I've spoken to more than one team who has insisted that creating libraries for your services is an essential part of creating services in the first place". I must admit that I have seen and implemented this pattern right through my career. The driver for generating a client has been DRY (Don't Repeat Yourself). The creation of Service Clients and making them available for consumption by other applications and services is a prevalent pattern. An example is shown below where a shareable Notes client is created by a team as part of the project for the Notes Web Service. They might also provide a DTO (Data Transfer Object) library representing the exchanged payload.

This BLOG is my investigation into why this concept of a Service Client or Client Library causes heartburn for Microservice advocates.

Service Clients over time


Feel free to skip this section totally, it's primarily me reminiscing.

In my university days, when we used to work on network assignments, the interaction between a server and client would primarily involve creating a Socket, connecting to the server and sending/receiving data. There really was no explicit contract that someone wishing to communicate with the server could use to generate a client. The remote invocation however was portable across language/platform. Any client that can open a socket could invoke the server method.

I subsequently worked with Java and RMI. One would take a stub and then generate client and server code from it. Callers higher in the stack would invoke 'methods' using the client, which would then run across the wire and be invoked on the server. From a client's perspective, it was as if the call was happening in the local JVM. Magical! A contract of sorts was established. Heterogeneous clients did not really have a place in this universe unless you got dirty with JNI. Another model existed at roughly the same time with CORBA and ORBs. The IDL, or Interface Definition Language, served to provide a contract from which client and server could be generated for any language and platform with corresponding code generation support. Heterogeneous clients could thrive in this ecosystem. Later down the chain came WS*, SOAP, Axis and WSDL-to-whatever generation tools like RAD (Rational Application Developer) that created a client for a SOAP service. RAD was a four-CD install which would fail often, but when it worked, you could click on a WSDL and see it generate these magical code snippets for the client.

The WS* bubble was followed by the emergence of an architecture style known as REST and its poster child, HTTP. Multiple representation types were a major selling point. To define resources and their supported operations, there was WADL. Think of WADL as a poor cousin of WSDL that even the W3C preferred not to standardize. REST, like it or not, is many a time used for making remote procedure calls rather than walking a hypermedia graph via links as prescribed by the Richardson Maturity Model and HATEOAS. Many developers of a REST service create a client library that communicates via HTTP to the REST resources, and due to the simplicity of REST, writing the client code is not arduous and does not require generation tools. REST itself was a contract. WADL, if you want to stretch it, and HATEOAS, if you want to argue about it, made their way to provide something like an IDL. Heterogeneous clients thrived with REST. The point to note is that the IDL seems to have taken a back seat here. Enter GRPC, a high-performance open source framework from Google. It has an IDL via .proto files and supports heterogeneous/polyglot clients. It's just fascinating how IDLs are here to stay.

Reasons why Service Clients get a bad name


To facilitate the discussion of Service Clients and why they do not work, consider the following Customer Service Application written in Java that uses Service Clients for Notes, Order Management, Product Search and Credit Card. These clients are provided by the individual service maintainers.

The Not-So-Thin Service Client or The Very "Opinionated Client"


Service Clients many times start serving a higher purpose than what they were originally designed for. Consider for example the following hypothetical Notes Client:


The Notes Client seems to do a lot of things:
  • Determining strategy around whether to call the Notes Service or fall back to the Database
  • Determining whether to invoke calls in parallel or serial
  • Providing a Request Cache and caching data
  • Translating the response from the service into something the client will understand and absorb
  • Providing functionality to Audit note creation (stretching here:-))
  • Providing Configuration defaults around Connection pools, Read Timeouts and other client properties
The argument is, "Who but the service owner knows best how to interact with the service and the controls to provide?". A few things are happening with this design:
  • It does not account for non-Java clients
  • Bundles in a bunch of logic into the client around Notes (yes stretching it here with a Notes example :-))
  • Imposes a Request Cache on the Consumer. Resource imposition.
  • Assumes that it knows best regarding how you would like to invoke the service from a serial/parallel perspective
  • Makes resource decisions for you regarding threads etc
  • Provides defaults that it thinks are best without making you even think of your SLAs
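To make the contrast concrete, here is an illustrative comparison of an opinionated client surface versus a thin one. All types and method names are hypothetical, not from any real Notes service:

```java
import java.util.Arrays;
import java.util.List;

public class ClientContrast {

    // The "opinionated" client: caching, fallback and resource policy are baked
    // into the library, opting every consumer in whether they want it or not.
    interface OpinionatedNotesClient {
        List<String> getNotes(long customerId);
        void setFallbackToDatabase(boolean enabled);  // strategy decision leaks into the API
        void setRequestCacheSize(int entries);        // resource imposition on the consumer
    }

    // The thin client: one remote call per method, no policy. The consumer owns
    // retries, caching and concurrency and can size them to its own SLAs.
    interface ThinNotesClient {
        List<String> getNotes(long customerId);
    }

    // Trivial stand-in implementation so the sketch is runnable.
    static class StubThinNotesClient implements ThinNotesClient {
        public List<String> getNotes(long customerId) {
            return Arrays.asList("note-1", "note-2");
        }
    }

    public static void main(String[] args) {
        System.out.println(new StubThinNotesClient().getNotes(42L));
    }
}
```

The thin surface keeps the contract small enough that a consumer on another stack (or language) can reimplement it without reverse-engineering embedded policy.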

Logic Creep into the Client

I have seen this happen a few times where business logic creeps into the binary client. This is just bad design that negates the benefits of the services architecture. Sometimes the logic is around things like fallbacks, counters, request caching etc. While these make sense to abstract away from a DRY perspective, they introduce a coupling or dependency on the client from the consumer's perspective, such that making changes could require all consumers to update. There are some clients that have 'modes' of operation and logic around what will be done in a given mode. These can be painful if they need to change.

Resource Usage

The library provides a Request Cache to cache request data. It will use threads, memory, connection pools and many other resources while performing its job. The Service Consumer might not need a multi-threaded connection pool; it might only need to obtain Notes every once in a while and does not even need to keep the connection alive. The Service Consumer might not even care about parallelizing calls to obtain data; it might be fine with serial. The consumer is, however, automatically opted in to this invasive resource usage.

Black Box

The Service Client is not something the Service Consumer wrote. On one hand, service consumers are grateful that they have a library that takes away the effort around remote communication, but on the other hand, they have absolutely no control over what the library is doing. When something goes wrong, the consumer team needs to start debugging the Service Client logic and other details, something they never owned or have in-depth knowledge about. Phew!

Overloaded Responsibility

The Service Owner's perspective is really to build a service that fulfills the business purpose. Their job is not to provide clients for the multitude of consumers that might want to talk to them. Writing client code to provide resiliency, monitoring and logging around communicating with the service just does not appear to be the responsibility of the service owner.

Hydra Client and Stack Invasion


Consider the following figure for this discussion:
A service client often brings along with it dependencies on libraries that the service maintainers have chosen for their stack. Consumers of the service client are often forced to manage dependencies that they would probably never have wanted in their stack but are forced to absorb. As an example, consider a team that is invested in Spring and using Spring MVC and RestTemplate as their stack, but has to suffer dependency-management hell because they need to interface with a team whose Service Client is written using Jersey, and so must deal with dependencies that might not be compatible. Multiply this with other Service Clients and the dependencies they bring in, or go ahead and run with Jersey 1 and Jersey 2 client-dependent libraries in your classpath to experience first hand how it feels :-).

Heterogeneity and Polyglot Clients


"First you lose true technology heterogeneity. The library typically has to be in the same language, or at the very least run the same platform" - Sam Newman

Expecting the team generating a service to have to maintain service clients for different languages and platforms is a maintenance hell and simply not scalable. If they do not provide a client, then the ability to support a polyglot environment is stifled. When service clients start having business logic in them, it becomes really hard for service consumers not using the service team provided client to generate their own due to the complexity of logic involved. This can be a serious problem as it curbs innovation and technological advances.

Strong Coupling


The Service Client when done wrong can lead to a strong coupling between the service and its consumer. This coupling would mean that making even the smallest of changes to the service would involve significant upgrade effort from the consumers. What this does is stymie innovation on the service and taxes the service consumers into some upgrade cycle advocated by the service providers.

Benefits of the Service Client


The primary benefit that Service Clients provide is DRY (Don't Repeat Yourself). A team that develops a service might quite well develop a client to test their service. They might also provide integration tests with multiple versions of binary clients to ensure backward compatibility of their APIs. Their responsibility, though, really ends there; they should have no obligation to distribute this client to others. But the question we find a service consumer asking is: if this client is already created by the service owners and is available for free, why should I need to build one?

Remember that the service creators know the expected SLAs of their service endpoints, and the client they provide has meaningful defaults and optimizations for them. A service owner might augment the client library with Circuit Breakers, Bulkheading, Metrics, Caching and sensible defaults that make the client work really well with the service.

If a company's stack is very opinionated (for example, a non-polyglot Java and Spring Boot stack where every team uses RestTemplate for HTTP invocation, with discipline around approved versions of third-party jars and backward compatibility), then this model could very well hum along.

Client Generation


To give service consumers the choice of generating their own client, an IDL needs to be made available. Tools like Swagger Codegen and gRPC code generation are great examples of how this can be done. Auto-generated clients are good, but as a word of caution, they aren't always optimized for high load and often need customization to send company-specific data anyway, diminishing their value as generated clients. Examples of company-specific data are custom trace identifiers, security info and the like. Generated client code also will not have Circuit Breaking, Bulkheading and monitoring features, effectively requiring each service consumer to build their own with the tools available to them.
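To make concrete what a consumer would have to bolt on around a generated client, here is a toy, dependency-free sketch of a circuit breaker. This is illustrative only (it is not Hystrix); real implementations also handle timeouts, half-open probing and metrics publishing.

```java
import java.util.function.Supplier;

// Toy circuit breaker: after `threshold` consecutive failures the circuit
// opens and all calls go straight to the fallback without touching the action.
class SimpleCircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    SimpleCircuitBreaker(int threshold) { this.threshold = threshold; }

    boolean isOpen() { return consecutiveFailures >= threshold; }

    <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (isOpen()) {
            return fallback.get(); // open: skip the remote call entirely
        }
        try {
            T result = action.get();
            consecutiveFailures = 0; // a success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("service down"); };
        for (int i = 0; i < 3; i++) {
            System.out.println(breaker.call(failing, () -> "fallback"));
        }
        System.out.println("open=" + breaker.isOpen());
    }
}
```

Each consumer writing and maintaining variations of this around every generated client is exactly the duplicated effort the post is cautioning about.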

Parting Thoughts and Resources


We want to promote a loosely coupled architecture in our distributed systems, i.e., between services and their consumers. This lets a service evolve without impacting consumers. A service itself must NOT be platform/language biased and should support a polyglot consumer ecosystem.

If service owners design their services with the mentality that they control nothing about the consumer's stack beyond the transport protocol, it promotes better service design where logic does not creep into the client.

Service clients, if created by service owners, must be 'thin' libraries devoid of any business logic so they can safely be absorbed by consumers. If the client libraries are doing too much, you ought to re-think your service responsibilities and, if needed, consider introducing an intermediary service. I would also recommend that if a service client is created for consumption by consumers, you limit the third-party dependencies it uses to prevent the Hydra Client effect.
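A 'thin' client in this sense can be little more than request construction over the JDK's own HTTP support (java.net.http, Java 11+). The sketch below is hypothetical; the endpoint path mirrors the baseProduct example used elsewhere on this blog. Note that it carries no business logic and no third-party dependencies, so it is safe for consumers to absorb.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical 'thin' client: it only knows the transport details of one
// endpoint and hands back the raw payload. Mapping to domain types, retries
// and resilience are left to the consumer's own stack.
class ThinBaseProductClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    ThinBaseProductClient(String baseUrl) { this.baseUrl = baseUrl; }

    // Request construction is the only 'logic' the client owns.
    HttpRequest request(long productId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/baseProduct/" + productId))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    String getBaseProductJson(long productId) throws Exception {
        return http.send(request(productId), HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) {
        ThinBaseProductClient client = new ThinBaseProductClient("http://localhost:9090");
        System.out.println(client.request(42L).uri());
    }
}
```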

It is also important that if a service team creates a client, they do not force consumers to use it. If the choice of a different stack exists, the consumer is empowered to either use the client provided by the service maintainers or develop one independently. To a great degree, service consumer teams maintaining their own clients keeps the service maintainers honest regarding compatibility and prevents logic creeping into the client tier.
  • Ben Christensen has a great presentation on the Distributed Monolith that is a must see for anyone interested in this subject. 
  • Read the Microservices book by Sam Newman. Pretty good data!
Keep following these postings as we walk into the next generation of services and tools to enable them!

If you have thoughts for or against Service Clients, please do share in the comments. 

Sunday, July 16, 2017

Interviewing -The undiscovered and maybe never found guide for job changers

Introduction


I recently changed employers. As an individual who had spent a long time (9+ years) in one organization, I was very unsure how best to prepare for the interview process and what to expect. I was employed in a role that required me to conduct interviews for many people from different backgrounds, like Developers, Testers and Software Directors. These experiences should have made me feel quite knowledgeable and confident entering an interview, but truth be told... not really. I wanted to share my thoughts around this process with you and figured, should you find yourself in a similar boat, you might find this a tad useful.

I have included some links to videos as I feel they capture my sentiments as I stepped through the process.

Please note I am an NOT an authority on interviewing, just someone sharing his experience and thoughts around leaving a job and what to expect in interviews.

The Sphere of Workplace Quan


If you are currently employed, before you consider going through interview processes, you might want to evaluate your Sphere of Workplace Quan (defined here!)

As an employee of any organization, you will face forces that influence your decision to stay or look elsewhere. These forces are not independent but tightly tug at each other. It is very important that you see the below video to understand what I am saying about Quan:


Now that you are well educated on the Quan, or simply Quan, let's delve into the factors that affect your Quan.

Compensation

If the market is offering considerably more for what you bring to the table or you see your counterparts exceeding your income, you need to either get your current employer to own up and bring your compensation to market or you should consider market options, your Quan is disturbed!

Work Duties and Responsibilities

When you take on a role, you sign on to a profile of what is expected of that role. Many a time, due to different forces, these start to diverge. Every one of us would like to step in and help and is ready to wear different hats as a team player. If the duties and responsibilities grow and you see potential for your extra contributions to be rewarded, you should definitely take on the added responsibilities and contribute. If, on the other hand, you find the job responsibilities have somehow morphed into something you are not comfortable with or not interested in, you should bring it up with your manager, and if that is not fruitful, you have a case for looking elsewhere; your Quan is disturbed!

Career Trajectory and Growth

Having a career trajectory is very important. An upward trend is healthy both for your own growth and for the company that invests in its employees. Avenues for learning, and knowing that your skill set and experience are evolving with the industry, are really important. If you desire to grow and do not see a trajectory or opportunity ahead (assuming you have done the needful), your Quan is disturbed!

The Mission or Vision

Working in an organization is not like going at something alone in your free time. It is team work. In order to do something, you need to believe in the mission. You need to commit; when you start to have doubts and are unable to control the direction, frustration has a way of creeping in. This frustration can be debilitating. For this reason, if you do not believe in the Mission as presented by your leaders, you should consider trying to influence your leaders to change course, else your Quan is disturbed!

Relationship with Your Boss

Many a website ranks the relationship with your boss as the primary reason someone leaves. A boss who is unreasonable, not appreciative, not invested in you, unable to articulate a vision, not consistent with their decisions, not honest or just difficult to work with can negatively impact an employee's morale. Start by trying to make it work with your boss; have a conversation, and if that does not work, try to find another position that would work for you in the same organization. Else your Quan is disturbed and you should be in the market; it's not short of good bosses...

Relationship with Peers

Sometimes in a work environment, one meets difficult peers or colleagues. If there is a peer who is difficult, try to understand why the relationship is strained and try to work it out with them. You might find, many a time, it is some inconsequential misunderstanding that is causing the unwanted tensions. Spend time making your peer understand your viewpoint and come to some consensus. If this does not work, escalate to your boss and try other avenues like HR. Hopefully your boss can defuse the problem, but if he is unable to, and you have exhausted all avenues, and the environment stresses you enough to compromise your health and work, your Quan is disturbed... the market awaits. While this sounds like a cop-out, I have seen people quit for this very reason and find a better home.

Work/Life Balance

Some jobs are close to satisfying all the above but just seem to push the boundaries of your time. You are unable to balance your personal life with your work life. Long work hours, dealing with production issues, dealing with unreasonable deadlines, all these take away your personal moments with family or things you want to do. An occasional late night or late week is totally reasonable, one always wants to help move the needle, but when it becomes a pattern, frustration creeps in and your Quan is disturbed...

Valued and Appreciated

Feeling valued is what drives armies that are hopeless of success to fight a battle. They know their contributions matter. They feel valued and appreciated by their superiors and peers for what they bring to the table. Appreciation does not have to be compensation related. A pat on the back, being called out explicitly in email, a gift card for lunch: all can be quite morale boosting. If you are in an environment where you don't see this happening, it can be one of the deadliest killers of your Quan and you need to address it soon... your Quan is disturbed...

The above are some of the forces that affect your Quan, requiring you to look elsewhere. It does not have to be a single item from the list; it could be a combination of factors that affect your Quan.

The Push-Pull Principle When Changing Jobs


Someone I once knew would always ask employees when they turned in their resignation, "Are you running away from this job or running to your new job?" It is a very valid question and can make someone retrospective. I have used that same question many a time, and most answers have been that they are running to a new job. In my naivety, I always assumed it was a binary answer, which I have come to realize was a flawed assumption. An existing job might be just great, but a part of it might be disturbing the Quan in a way a new position might fulfill.

Are you running TO something or running AWAY from something? The Acid Test that helps you answer is really below...



As you get to work every day, ask yourself one question, are you dancing or close to it? You should be! If you are not,  your Quan is disturbed. Every day that you go to work, it has to be one where you are motivated and happy in your sphere of Quan, if that is not the case, you are doing yourself and the company you are working for an injustice.  A disturbed Quan bites!

Please note that sometimes it's all about putting bread on the table, and/or circumstances might require you to suck it up with the hope that the situation will change, or it is the best you've got. You are a survivor and you will fight to "Die Another Day":

Interviewing

If you are here, your Quan is sufficiently disturbed that you are looking elsewhere or you simply are someone who wants to take your Quan to the next level. Welcome!
Below are some thoughts I am sharing. If you choose to use it, it is at your own risk!

Preparation


Interviews are hard. Yes they are; don't let someone convince you with: "Just be you and you'll be fine". It's you making a case about your desire to join an organization and how you will be the best fit for a job profile. The interviewers are judging you against all the other candidates they are working with to fill the role. Prepare, even Rocky did!


It is very important that you prepare the right things, though. There is no use building your biceps for a spelling bee competition, right?


So how do you prepare?


Understand what is expected of your targeted new job and assess whether you have the chops to do it, or have it in you to pick up the skills to deliver. It's very important that you understand what will be expected of you at your target job, whether you feel you have the aptitude to learn and deliver, and most importantly, will it make you dance?

Now that you know what is expected of you in the new job and you want to do the job, start your preparation:
  • Read through the Job Description in Detail. Determine compensation, trajectory etc etc associated with the job.
  • Learn about your target organization. It is quite important that you know why you want to join wherever you are applying. Wikipedia it, Google it, LinkedIn it, see YouTube videos, call a friend who works there... etc.
  • Glassdoor and other company-focused sites - Never say no to a gift horse. Sites like Glassdoor provide information on interviews that you must familiarize yourself with. Sometimes Glassdoor interviews and reviews might even be enough to act as a deterrent. 
  • Understand what the job skills are and that they match your interest. Hone them or refresh them. For example, if the job requires strong Data Structures and Algorithms knowledge and you want to be in that area, you should definitely spend some time refreshing those skills.
Don't target a job profile but target what you want to be doing.

Your preparation is complete. In Rocky terms, you can sprint up the Philadelphia museum of art without breaking a sweat. 

The Interview


You will face many challengers along the way that you need to overcome like the 36th Chamber of Shaolin:

Chamber One - The Recruiter


Someone has reached out to you. Your first contact is usually a recruiter tasked with filling the position. These folk can range from those just wanting to know your basics to deep divers who want to know:
  • What your passions are?
  • What you bring to the table?
  • Why you want to join the organization?
  • Why are you looking to leave?
  • Do you meet the checklist expected of this position?
The smarter recruiters will engage you in a conversation rather than run you through a bullet-point question list. These folk are the sentries into the interview process. They are very good at what they do. Take this time to understand the role, the company, the benefits, and any asks from your end, like 'relocation', 'Work from Home', 'Commute' etc. This is also a great place to get some basic questions answered that are important to determine whether you want to move ahead with this job.

Chamber Two - The Phone Screeners


This is where you will arguably meet some folk who might well be the people you will be directly working with. These folk will be evaluating your chops to see whether you ring true to your resume and how well you would fit the job and team. They are also the folk who usually determine whether you are invited to a face-to-face interview. The screens are usually under an hour long and are either over the phone or via video chat. While their goal is to determine whether you are a potential fit and whether to proceed to the next level, consider that this is also where you get to chat with folk you might be working with, and to glean whether to invest more time in this opportunity. As with every step in an interview process, knowledge is half the battle. If you are able to get the names of your interviewers, take some time to research them: LinkedIn, a Google search, etc. The simple steps to approach this are:
  • Answer questions to the point without meandering too much. Let them drive.
  • Be honest. These are people who know their job and it's in your best interest to stay true. If you don't know something, it might be better to say, "I am not very familiar in that area but..."
  • Demonstrate why you will be a fit for the job. Highlight something you feel is pertinent from your experience.
  • They are looking to see how you would fit into the team(s). Inject your personality here and gauge how they work on a day to day basis and whether its a place you want to be at.
  • Get your list of questions answered. It might range from company/team life and culture to asking interviewers about their tenure at the company, etc.


Chamber X - The interview

Preparing for the Interview

You have passed the most important gate now. You have an invitation into the building. Take a moment to congratulate yourself on your accomplishment. Your goals, drivers and skills seem to be in line with what your target company desires.
As part of your preparation, ask your recruiter for the interview schedule and the interviewers you will meet. It is very important that you understand your audience when presenting your stance in an interview. For example, explaining the details of a core dump file to a business person might not be a good use of interview time. Researching the persons you will be interviewing with ahead of the interview on LinkedIn, Facebook, Google etc. will arm you with information to assist during your interviews.
Let's get to the site. Ask ahead about the attire; people have been denied just because they looked too stiff. I am not sure I agree with the rationale behind that decision, but there is usually nothing wrong with putting your best foot forward. Remember that one of the most important areas interviewers will evaluate you on is how much the job would mean to you. 
You are dressed to kill now, go tiger!!!!! Don't forget a copy of your resume...

Execution

It's almost always down to this. If your preparation was good, this should be about translating that into a days work. As you enter into your day take these thoughts with you...
  • These are people you will be working with. While they are judging you, remember you are also  judging them to see if you will be a fit into their team and culture.
  • Relax. This goes without saying and is the hardest to do but stress and nervousness can impair your ability to project your full potential. Understand that these are just people like you and most likely will be the folk you will have your morning coffee with in a few weeks. So chill!
  • Be honest on what you bring to the table
    • What you are good at
    • What you want to do
    • What you can do
    • What is expected of you
  • Look for a...holes, they pop up like pimples and are obvious to spot...count the number, it matters, less is better. 


  • Ask questions about the work, the company, the culture, the vision, the target. These will help you understand more about the people and whether you want to work for the company.
  • Signs - Keep your radar up at all times. Be attentive to the environment around you and see the signs: a boss shouting at an employee, terminals left unlocked, you being left alone in the interview process, interviewers not prepared... While those are negative, there are many positive signs you can witness as well, like courteous people, collaborative work, ease of access, quality time for employees, prepared interviewers etc.
  • Honesty - As mentioned right through this process, your best bet is being honest and up front.
  • Pace of Interview - The pace of the interview is really the employer's show. However, you do have an influence on it. Don't fear to ask and assert appropriately. 
  • The Boss - Hopefully you are able to get some time with your would-be boss. Ask about his management style and try to get an idea of what it would be like working for him. Try to determine if he is receptive to inputs and ideas. Look for data around his interaction with his reports.
  • The Team - If you are able to meet the team you will be working with, that is great as you will be able to visualize how you would fit with them and get some idea of the dynamics as they might play out.
  • Rock on - Be open, be free, be focused. Answer questions to the point; don't wander into unwanted areas. 
This is where you get a chance to decide whether or not you want to be part of this organization if an offer were to be made. For this reason, you should take your time, ask what you need to, get as much data as you can. 

The Offer


You are here because the previous chambers have gone positively. You have made a positive impression and are ready to receive that call. If you have the luxury of refusing an offer that is lower than what you are making, you should, unless your Quan is so significantly disturbed that you need to opt out of your current job.
Things to focus on in the offer:
  • The Base: Your base salary is what you take home. That is the one that puts the bread on the table for sure.  Bonuses and other promises are nice but not guaranteed unless contractual.
  • The Benefits: Very important perspective that can either make or break. You should consider health, dental, retirement contribution etc etc.
  • The Bonus: Bonuses are promises that might pay out well or never materialize. For this reason, its very important to get your 'Base' where you are comfortable with. That said, in some cases, companies are known to go lower on the base but have a large variable bonus component. If you see that model working for you, get some historical data of how that worked for others to help your decision making process.
  • The Perks: Perks are things a company goes above and beyond with. These matter a lot to some folk. Understand what these are and see how they impact you.
  • The RSUs: If your target company can offer RSUs, ensure you negotiate a good grant. Working for your employer is a mutually beneficial relationship. You want the share price to climb and they do as well; RSUs are a mechanism that makes you feel like an invested owner.
  • The Relo: If you need to move to join the new gig, ask about the relocation expenses your employer would bear. 
  • Sign On: Many employers will give you a sign-on bonus that represents good faith from their end. Ask for this, else you might be leaving money on the table.

The Commitment


This is where you say "Yes or No!!!!". It's a major decision. You need to be as sure as you can; you need to understand the change involved for you and yours and make a call. Hopefully it's like below:

If you are now here, congratulations....you are entering your 'Honeymoon period' with your new company with a hope that it lasts as long as possible ;-)...

As mentioned at the beginning, this is me sharing an experience. I am at my new employer with whom I am having a blast but I thoroughly enjoyed working at my former employer. I leave you with some fond  memories of my time with my former employer.


Some Good References

Sunday, December 6, 2015

Maven Integration Testing of Spring Boot Cloud Netflix Eureka Services with Docker

Introduction


My previous blog was about using Spring Cloud Netflix and my experiences with it. In this BLOG, I will share how one can run a Maven integration test involving multiple Spring Boot services that use Netflix Eureka as a service registry. I have used an example similar to the one in my BLOG about Reactive Programming with Jersey and RxJava, where a product is assembled from different JAX-RS microservices. For this BLOG though, each of the microservices is written in Spring Boot, using Spring Cloud Netflix Eureka for service discovery. A service I am calling the Product Gateway assembles data from the independent product-based microservices to hydrate a Product.

Product Gateway


A mention of the Product Gateway is warranted for completeness. The following represents the ProductGateway classes where the ObservableProductResource uses a ProductService which in turn utilizes the REST clients of the different services and RxJava to hydrate a Product.



The ObservableProductResource Spring Controller class is shown below which uses a DeferredResult for Asynchronous processing:
@RestController
public class ObservableProductResource {  
  @Inject
  private ProductService productService;
  
  @RequestMapping("/products/{productId}")
  public DeferredResult<Product> get(@PathVariable Long productId) {
    DeferredResult<Product> deferredResult = new DeferredResult<Product>();
    Observable<Product> productObservable = productService.getProduct(productId);

    productObservable.observeOn(Schedulers.io())
    .subscribe(productToSet -> deferredResult.setResult(productToSet), t1 -> deferredResult.setErrorResult(t1));
    
    return deferredResult;
  }
}
Each microservice client is declaratively created using Netflix Feign. As an example of one such client, the BaseProductClient is shown below:
@FeignClient("baseproduct")
public interface BaseProductClient {
  @RequestMapping(method = RequestMethod.GET, value = "/baseProduct/{id}", consumes = "application/json")
  BaseProduct getBaseProduct(@PathVariable("id") Long id);
}

What does the Integration Test do?

The primary purpose is to test the actual end-to-end integration of the Product Gateway Service. As a Maven integration test, the expectation is that it would entail:
  • Starting an instance of Eureka
  • Starting Product related services registering them with Eureka
  • Starting Product Gateway Service and registering it with Eureka
  • Issuing a call to Product Gateway Service to obtain said Product
  • Product Gateway Service discovering instances of Product microservices like Inventory, Reviews and Price from Eureka
  • Product Gateway issuing calls to each of the services using RxJava and hydrating a Product
  • Asserting the retrieval of the Product and shutting down the different services
The test itself would be a maven JUnit integration test.
As the services are bundled as JARs with embedded containers, they present a challenge to start up and tear down during an integration test. 
One option is to create equivalent WAR based artifacts for testing purposes only and use the maven-cargo plugin to deploy each of them under a separate context of the container and test the gateway. That however does mean creating a WAR that might never really be used apart from testing purposes.
Another option to start the different services is using the exec maven plugin and/or some flavor(hack) to launch external JVMs.
Yet another option is to write custom class loader logic [to prevent stomping of properties and classes of individual microservices] and launch the different services in the same integration test JVM.
All these are options, but what appealed to me was using Docker containers to start each of these microservice JVMs and running the integration test. So why Docker? Docker is a natural fit for composing an application and distributing it across a development environment as a consistent artifact. What I find appealing is the benefit during microservice-based integration testing, where one can simply pull in Docker images of specific versions for services, data stores and so on without dealing with environment-based conflicts.

Creating Docker Images 


As part of building each of the web services, it would be ideal to create a Docker image. There are many Maven plugins out there to create Docker images [actually, too many]. In this example, I have used the one from Spotify. Building a Docker image for Spring Boot applications with the Spotify plugin is nicely explained in the BLOG from spring.io, Spring Boot with Docker.
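For reference, the Spotify plugin configuration in each service's pom.xml looks roughly like the following. The version, image name and resource paths here are illustrative (taken in the spirit of the spring.io post mentioned above); the Dockerfile itself is assumed to live under src/main/docker:

```xml
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.4.13</version>
  <configuration>
    <!-- Tag matches the image names used in the integration test below -->
    <imageName>docker/${project.artifactId}</imageName>
    <dockerDirectory>src/main/docker</dockerDirectory>
    <resources>
      <resource>
        <targetPath>/</targetPath>
        <directory>${project.build.directory}</directory>
        <include>${project.build.finalName}.jar</include>
      </resource>
    </resources>
  </configuration>
</plugin>
```

Binding the plugin's build goal to the package phase means every `mvn package` leaves a fresh image behind for the tests to run against.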
What I would see happening is that, as part of the build process, the Docker image is published to an organization-internal Docker repository and then made available to other consumers.

Integration Test


As part of the pre-integration-test phase of Maven, we would like to start the Docker containers representing the different services. In order for the gateway container to work with the other service containers, we need to be able to link the Docker containers. I was not able to find a way to do that using the Spotify plugin. What I found myself doing instead was utilizing another Maven plugin for Docker, by Roland Huß, which has much better documentation and more features. Shown below is the plugin configuration for the integration test.

<plugin>
  <groupId>org.jolokia</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.13.6</version>
  <configuration>
    <logDate>default</logDate>
    <autoPull>true</autoPull>
    <images>
      <image>
        <!-- Eureka Server -->
        <alias>eureka</alias>
        <name>docker/eureka</name>
        <run>
          <wait>
            <http>
              <url>http://localhost:8761</url>
              <method>GET</method>
              <status>200</status>
            </http>
            <time>30000</time>
          </wait>
          <log>
            <prefix>EEEE</prefix>
            <color>green</color> <!-- Color the output green -->
          </log>
          <ports>
            <port>8761:8761</port> <!-- Local to container port mapping -->
          </ports>
          <env>
            <eureka.instance.hostname>eureka</eureka.instance.hostname> <!-- Override host name property -->
          </env>
        </run>
      </image>
      <image>
        <alias>baseproduct</alias>
        <name>docker/baseproduct</name>
        <run>
          <wait>
            <http>
              <url>http://localhost:9090</url>
              <method>GET</method>
              <status>200</status>
            </http>
            <time>30000</time>
          </wait>
          <log>
            <prefix>EEEE</prefix>
            <color>blue</color>
          </log>
          <ports>
            <port>9090:9090</port>
          </ports>
          <links>
            <link>eureka</link> <!-- Link to Eureka Docker image -->
          </links>
          <env>
            <!-- Notice the system property overriding of the eureka service Url -->
            <eureka.client.serviceUrl.defaultZone>http://eureka:8761/eureka/</eureka.client.serviceUrl.defaultZone>
          </env>
        </run>
      </image>
      <!--....Other service containers like price, review, inventory-->
      <image>
        <alias>product-gateway</alias>
        <name>docker/product-gateway</name>
        <run>
          <wait>
            <http>
              <url>http://localhost:9094</url>
              <method>GET</method>
              <status>200</status>
            </http>
            <time>30000</time>
          </wait>
          <log>
            <prefix>EEEE</prefix>
            <color>blue</color>
          </log>
          <ports>
            <port>9094:9094</port>
          </ports>
          <links>
            <!-- Links to all other containers -->
            <link>eureka</link>
            <link>baseproduct</link>
            <link>price</link>
            <link>inventory</link>
            <link>review</link>
          </links>
          <env>
            <eureka.client.serviceUrl.defaultZone>http://eureka:8761/eureka/</eureka.client.serviceUrl.defaultZone>
            <!-- Setting this property to prefer ip address, else Integration will fail as it does not know host name of product-gateway container-->
            <eureka.instance.prefer-ip-address>true</eureka.instance.prefer-ip-address>
          </env>
        </run>
      </image>
    </images>
  </configuration>
  <executions>
    <execution>
      <id>start</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>start</goal>
      </goals>
    </execution>
    <execution>
      <id>stop</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>
One of the nice features is the colored log prefix for each container's messages; this gives one a sense of visual separation among the multitude of containers that are started. The integration test itself is shown below:
import static org.junit.Assert.assertNotNull;

import javax.inject.Inject;

import org.apache.log4j.Logger;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalancerClient;
import org.springframework.cloud.netflix.feign.EnableFeignClients;
import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// Product is the domain class defined in the example project
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { ProductGatewayIntegrationTest.IntegrationTestConfig.class })
public class ProductGatewayIntegrationTest {
  private static final Logger LOGGER = Logger.getLogger(ProductGatewayIntegrationTest.class);
  
  /**
   * A Feign Client to obtain a Product
   */
  @FeignClient("product-gateway")
  public static interface ProductClient {
    @RequestMapping(method = RequestMethod.GET, value = "/products/{productId}", consumes = "application/json")
    Product getProduct(@PathVariable("productId") Long productId);
  }

  @EnableFeignClients
  @EnableDiscoveryClient
  @EnableAutoConfiguration
  @ComponentScan
  @Configuration
  public static class IntegrationTestConfig {}

  // Ribbon Load Balancer Client used for testing to ensure an instance is available before invoking call
  @Autowired
  LoadBalancerClient loadBalancerClient;

  @Inject
  private ProductClient productClient;

  static final Long PRODUCT_ID = 9310301L;

  @Test(timeout = 30000)
  public void getProduct() throws InterruptedException {
    waitForGatewayDiscovery();
    Product product = productClient.getProduct(PRODUCT_ID);
    assertNotNull(product);
  }

  /**
   * Waits for the product gateway service to register with Eureka
   * and be available on the client.
   */
  private void waitForGatewayDiscovery() {
    while (!Thread.currentThread().isInterrupted()) {
      LOGGER.debug("Checking to see if an instance of product-gateway is available..");
      ServiceInstance choose = loadBalancerClient.choose("product-gateway");
      if (choose != null) {
        LOGGER.debug("An instance of product-gateway was found. Test can proceed.");
        break;
      }
      try {
        LOGGER.debug("Sleeping for a second waiting for service discovery to catch up");
        Thread.sleep(1000);
      }
      catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }
}
The test uses Ribbon's LoadBalancerClient to ensure an instance of 'product-gateway' can be discovered via Eureka before it uses the Feign ProductClient to invoke the gateway service and retrieve a product.

Running the Example


The first thing you need to do is make sure Docker is installed on your machine. Once it is, clone the example from GitHub (https://github.com/sanjayvacharya/sleeplessinslc/tree/master/product-gateway-docker) and execute mvn install from the root of the project. This will build the Docker images and run the Docker-based integration tests of the product gateway. Cheers!