Getting rid of monoliths: #lagom and microservice hooha

The road to ridding yourself of application monoliths is one fraught with many obstacles, mostly organisational. Whatever your reason behind this brave move, you have convinced yourself that you will fix this big ball of mud, somehow… You will no longer stand for this shanty town of spaghetti code, riddled with nonsensical donkey poo. The code no longer speaks to you; it has surpassed the point of having a smell. This abomination has a cyclomatic “flavour” that you can almost taste at the tip of your tongue. No amount of Ballmer’s peak is going to help. So you are going to make good on that promise to your linting tool, probably Sonar, to whom you have whispered hours of sweet nothings to the tune of Bob Marley, “Don’t worry ‘bout a thing ’cause every little thing gonna be alright…” What once was a good idea has hideously mutated into a giant walking, talking abomination that crawls under your skin and haunts your every step. Under the guise of some sound architectural decisions, you are going to pay down this technical debt, hard. You are going to do microservices, and you are going to ship Docker containers. If this is you, read on.

The biggest modern-day monolith of them all: J2EE

Tasked with ripping apart functional applications into microservices, you will battle bikeshedders whilst debating your silly asses off with armies of monolith fandom. During what seems to be a very long coup d’état, this attempt to evangelise a newer, leaner architecture and approach is riddled with skeptics. “What is frontend without JSF or JSP?! How dare you even question server-push technology like XYZFaces? Then how about the JNDI lookups and EJBs?! Surely you cannot replace these things?”

So six years ago, I had the very pleasant experience of pulling out Glassfish in favour of embedded Jetty, replacing I-Hate-My-Faces with some simple JS (nowadays we call it the MEAN stack, or one of its many permutations), and starting to build APIs with microservice principles. Turns out that is what Spotify did too at the time. So there you go, haters. Bottom line: do not use J2EE, no matter what, if you care about having a competitive advantage. But if you need some good reasons, here are a few:

  • Most J2EE containers are grounded in the notion of vertical scalability, last I checked. Clustering should be idempotent and stateless, and scaling should be horizontal.
  • J2EE containers are not cloud native. Just look at their clustering! Unless you feel like having VPNs and private networks across different public clouds or data centers, you can probably just forget it.
  • So let’s put it behind the load balancer? No, most J2EE containers don’t do shared session persistence out of the box.
  • Let’s not kid ourselves, you will customise the crap out of this J2EE container, dropping WAR file upon WAR file to fix all its shortcomings.
  • Your sysadmins do not work with war/jar/ear files. They are *nix gurus who deserve to be treated as such. Ship your product like an actual product, sir! Apt/yum/brew is your friend, and please follow the Filesystem Hierarchy Standard (FHS) for goodness sake.

Decide exactly what you are going to piecemeal

Have a very clear idea of the different categories or lifecycles of the applications you are going to transform. Timing is important, as is showing results. After all, this is naturally the minimum viable product (MVP) approach. I highly recommend avoiding systems of record to begin with; those are definitely not quick wins and you will be in an arduous match, going the full 5 rounds. Your future competitive advantage does not sit in your ERP or CRM systems. If it does, then um, yeah.

  • Clearly isolate and separate the functionality you are transforming or building
  • Ensure the isolation goes all the way down to infra, this is devops after all
  • Think of how to horizontally scale
  • Think of elasticity
  • Think of shared persistence across network

Java is dead, long live Java!

The trend a few years back was to relentlessly hate on Java and Java devs. The evidence shows otherwise: Java is still around and it is going to be around for quite some time to come. No need to switch it out just yet, even after the programming nuclear winter imposed by object orientation and Java/.NET alike. Good code, a good ecosystem based on tooling, and good solid design patterns go a long way, regardless of application domain or programming language.

The truth is the same could have been said about node.js. I recall, a number of years back, a few colleagues quoting Hacker News regarding the state of node.js, how immature it was, and how Java was the “preferred” choice, even though the sysadmin community bagged the crap out of Java at the time. If you make strategic decisions predominantly based on the whims of HN, then you are just as much of a plonker as the next troll. What your node.js/Java boys and girls need to remember:

  • Repeatable, testable build pipelines. Think CI/CD.
  • Coding standards and linting; no-brainers there
  • Packaging. Do the right thing, treat sysadmins as your end customers.
  • Separate out load balancing and clustering to other applications like nginx or haproxy. A TCP stack makes more sense written in C.
  • Lord forbid you try to do TLS termination in Java. This is really not cool, bro. You have a number of other choices, so do not add this complexity to the landscape. There is no OpenSSL implementation in Java, and OpenSSL is already difficult enough to maintain as it is.
  • Good monitoring and logging practices go a long way
  • Think network. Think TCP and HTTP.
  • Your JVM will live on top of a kernel. Know them both. Tune that JVM and tune that kernel if needed (see the sketch after this list).
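
On the monitoring and JVM points, here is a minimal sketch, assuming nothing beyond the standard JDK, that pulls basic heap and GC numbers out of the platform MXBeans. Where you ship them (logs, StatsD, a dashboard) is up to you; the class name is made up.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public final class JvmStats {

        // Dump heap usage and GC counters; wire this into your logging or metrics pipeline.
        public static void dump() {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);

            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("gc %s: count=%d time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }

        public static void main(String[] args) {
            dump();
        }
    }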

So, microservices. Heard of #lagom?

So you have found a good set of business functional requirements to transform into a set of microservices. Heard of #lagom? Maybe Dropwizard or Springboot? The choices are probably all OK, and when doing microservices, there are simply no bad choices in my opinion. The gains outweigh the means here. The kicker is that there are probably a number of customised endpoints you will need to integrate with. This could be HTTP, something-else-over-TCP, or whatever. There could also be JPA-backed relational stores or other NoSQL data stores you need to use. Pick your microservice framework components knowing that this is a framework and it can easily grow. The microservice strategy can easily bloat into “milliservice” or “services” (SOA?) if you are not careful. So just how do you stop the size of the code base from expanding? Keep distinct business functionalities as separate services and code bases. The sizing is up to you. Also, split common functionalities out into submodules. Both Dropwizard and Springboot have a bunch. Lagom, for example, has recently been introduced as a microservice framework for Java, and it has quite a lot of these connectors already in place. For me, I opted to homebrew our own microservice framework for maximum flexibility, ownership, and performance tuning.
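
For a taste of what Lagom looks like, here is a minimal service descriptor sketch using Lagom’s Java API; the service name, path and greet call are made up for illustration.

    import akka.NotUsed;
    import com.lightbend.lagom.javadsl.api.Descriptor;
    import com.lightbend.lagom.javadsl.api.Service;
    import com.lightbend.lagom.javadsl.api.ServiceCall;

    import static com.lightbend.lagom.javadsl.api.Service.named;
    import static com.lightbend.lagom.javadsl.api.Service.pathCall;

    // Hypothetical service interface; Lagom generates the plumbing from the descriptor.
    public interface GreetingService extends Service {

        // One business capability, one call. A plain String payload keeps the sketch short.
        ServiceCall<NotUsed, String> greet(String name);

        @Override
        default Descriptor descriptor() {
            // Maps the call onto an HTTP path: smart endpoints, dumb pipes.
            return named("greeting").withCalls(
                    pathCall("/api/greet/:name", this::greet)
            );
        }
    }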

Either way, armed with your chosen messiah of a framework, the idea here is to rain down free non-functional requirements across multiple projects and dev teams. Cost leadership for all!

  • Ease of hooking up to modern monitoring tools with a configurable set of metrics: JVM memory, vmstat, iostat, CPU, JVM GC, etc etc (see the metrics sketch after this list)
  • Ease of pulling logs out into, say, InfluxDB or something similar.
  • Connectors to DBs should be submoduled and shared for future projects. Polyglot persistence ftw.
  • API documentation is super important. Do not assume your API users know your API, and make a point of maintaining backward compatibility
  • Follow semver.
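
As a sketch of the first bullet, here is roughly what wiring up JVM metrics looks like with the Dropwizard Metrics library (metrics-core plus the metrics-jvm module); the metric names and the console reporter are arbitrary choices for illustration, and in practice you would swap in a reporter for your monitoring backend.

    import com.codahale.metrics.ConsoleReporter;
    import com.codahale.metrics.MetricRegistry;
    import com.codahale.metrics.jvm.GarbageCollectorMetricSet;
    import com.codahale.metrics.jvm.MemoryUsageGaugeSet;
    import com.codahale.metrics.jvm.ThreadStatesGaugeSet;

    import java.util.concurrent.TimeUnit;

    public final class ServiceMetrics {

        public static MetricRegistry bootstrap() {
            MetricRegistry registry = new MetricRegistry();

            // Free JVM metrics: heap and non-heap memory, GC counts and times, thread states.
            registry.register("jvm.memory", new MemoryUsageGaugeSet());
            registry.register("jvm.gc", new GarbageCollectorMetricSet());
            registry.register("jvm.threads", new ThreadStatesGaugeSet());

            // Console reporter for the sketch; use a Graphite/InfluxDB reporter in anger.
            ConsoleReporter.forRegistry(registry)
                    .convertDurationsTo(TimeUnit.MILLISECONDS)
                    .build()
                    .start(1, TimeUnit.MINUTES);

            return registry;
        }
    }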

Keeping your head above water is number one. Hang in there, the good days will come. Just remember, nothing in software has been new since 1970; it just gets modern marketing hype. And lastly, your fancy new microservice is legacy as soon as you launch. So please do consider the future generations. Let’s end this cycle of spaghetti monster code through old-fashioned craftsmanship.

Microservice API on Elastic Beanstalk with Jetty, Jersey, Guice, and Mongodb

This blog aims to outline how one can very easily ship microservice APIs on Elastic Beanstalk with Jetty, Jersey, Guice and Mongodb. This piece of work, written in Java, was a weekend’s inspiration, where I finally found a coherent red thread through some of the work I did in the past. So if you are reading this, be fully aware that this stack is no longer cool and hip since the conception of RxJava and Akka 😀 http://blog.circleci.com/its-the-future/

Code lives here. https://github.com/yveshwang/api-example

Why Elastic Beanstalk

Platform as a service (PaaS) is rather nice. A little like Heroku, an Elastic Beanstalk (EB) environment can be tailored to run Docker containers, amongst other things. Where EB truly shines is that the platform is accessible and completely configurable as if it were IaaS. This is made possible by ebextensions. For example, each EB instance by default ships with a local Nginx instance as designed by AWS. If you really want to switch it out, in theory you can do so by hijacking the initialisation process through ebextensions, uninstalling Nginx, and installing something like Varnish instead 😀 Yep, you can well and truly extend and tweak the platform to your heart’s content, and potentially break everything.

Shipping to Elastic Beanstalk is fairly straightforward, as it mostly involves a Dockerfile or a Docker image hosted in a public or private docker.io repository. If you have a Dockerfile or a Docker image, and are a bit savvy with the eb CLI tool, you are sorted and ready to deploy to Elastic Beanstalk.

Having a Docker container also makes testing repeatable and standardised. See the previous build pipeline blog series. The infrastructure part of setting up EB is intentionally left out of this post for now, as I believe Elastic Beanstalk deserves a blog post of its own. So let’s keep it old school and talk UML and code a little bit in this blog post.

Why Jetty

Jetty is a lightweight container, and reaching for it is a naive attempt to defeat the monolithic J2EE containers. Let’s be honest, most of us do not use half the functionality in J2EE, and the clustering of these J2EE containers goes against the very principles of microservices in my opinion. Instead, we should adhere to HTTP and RESTful-API everything! Note that Jetty is most certainly cool, but it is not RxNetty or Akka-http cool.
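
To show just how little ceremony is involved, here is a minimal bootstrap sketch of embedded Jetty handing requests to Jersey’s servlet container. It assumes Jetty 9 and Jersey 2; the port, context path and resource package are made up.

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;
    import org.glassfish.jersey.servlet.ServletContainer;

    public final class Main {

        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);

            // No sessions: keep the service stateless and let a load balancer do its job.
            ServletContextHandler context = new ServletContextHandler(ServletContextHandler.NO_SESSIONS);
            context.setContextPath("/");

            // Jersey scans this (hypothetical) package for @Path annotated resources.
            ServletHolder jersey = context.addServlet(ServletContainer.class, "/api/*");
            jersey.setInitParameter("jersey.config.server.provider.packages", "com.example.api.resources");

            server.setHandler(context);
            server.start();
            server.join();
        }
    }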

Why Guice

Inversion of control is neat. On a grander scale, Guice can be used to inject mock layers en masse. For example, you can use Mockito to configure an entire mock data access layer and inject it in the context of unit or integration testing, thereby allowing more tests to be written with fewer dependencies. Guice is also a nice way to help address the separation of concerns by keeping configuration away from business logic. Lastly, being able to do @Inject anywhere is powerful and allows us to construct a lot of templates and basically scale out horizontally through scaffolding code. When used properly, this is the little unsung hero of the Java world in my opinion.
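
A minimal sketch of that idea, with hypothetical UserDao/UserFacade names: production bindings live in a module, business logic receives its dependencies via @Inject, and a test overrides just the data access layer with a Mockito mock.

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;
    import com.google.inject.util.Modules;

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    public final class GuiceWiring {

        // Hypothetical data access layer.
        public interface UserDao {
            String findEmail(String userId);
        }

        public static class MongoUserDao implements UserDao {
            @Override
            public String findEmail(String userId) {
                // A real implementation would hit Mongodb via Morphia.
                throw new UnsupportedOperationException("not wired to a database in this sketch");
            }
        }

        // Business logic only; persistence arrives via injection.
        public static class UserFacade {
            private final UserDao dao;

            @Inject
            public UserFacade(UserDao dao) {
                this.dao = dao;
            }

            public boolean hasEmail(String userId) {
                return dao.findEmail(userId) != null;
            }
        }

        // Production bindings: configuration stays out of business logic.
        public static class ProductionModule extends AbstractModule {
            @Override
            protected void configure() {
                bind(UserDao.class).to(MongoUserDao.class);
            }
        }

        // In a test, override only the data access layer with a mock.
        public static void main(String[] args) {
            final UserDao fakeDao = mock(UserDao.class);
            when(fakeDao.findEmail("42")).thenReturn("someone@example.com");

            Injector injector = Guice.createInjector(
                    Modules.override(new ProductionModule()).with(new AbstractModule() {
                        @Override
                        protected void configure() {
                            bind(UserDao.class).toInstance(fakeDao);
                        }
                    }));

            System.out.println(injector.getInstance(UserFacade.class).hasEmail("42")); // true
        }
    }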

Why Mongodb

Expect endless devops discussion on this very topic. Ever since the Hacker News trolls came out of the woodwork against 10gen, the discussion has never ended. I like Mongo. I like it because it is fast to bang out a prototype.

DBs can vastly differ in their ACID properties and thus address different combinations of CAP. I think I will save my opinion on Mongodb for another blog post, another time. For now, Morphia is nice to work with in Java.
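
A small sketch of what Morphia feels like, assuming the older org.mongodb.morphia 1.x API and a mongod listening on localhost; the entity, collection and database names are invented.

    import com.mongodb.MongoClient;
    import org.bson.types.ObjectId;
    import org.mongodb.morphia.Datastore;
    import org.mongodb.morphia.Morphia;
    import org.mongodb.morphia.annotations.Entity;
    import org.mongodb.morphia.annotations.Id;

    public final class MorphiaExample {

        // Each entity lives in its own collection.
        @Entity("users")
        public static class User {
            @Id
            private ObjectId id;
            private String email;

            User() {
                // Morphia needs a no-arg constructor
            }

            User(String email) {
                this.email = email;
            }
        }

        public static void main(String[] args) {
            Morphia morphia = new Morphia();
            morphia.map(User.class);

            Datastore ds = morphia.createDatastore(new MongoClient(), "api_example");
            ds.save(new User("someone@example.com"));

            User found = ds.createQuery(User.class)
                    .field("email").equal("someone@example.com")
                    .get();
            System.out.println(found.id);
        }
    }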

Why Jersey

Jersey is a pretty well-structured way to write RESTful endpoints.
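
By way of example, here is a minimal JAX-RS resource of the kind Jersey picks up; the path and the hand-rolled JSON payload are made up for the sketch.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    // Hypothetical endpoint: GET /status/{name} returns a tiny JSON blob.
    @Path("/status")
    @Produces(MediaType.APPLICATION_JSON)
    public class StatusResource {

        @GET
        @Path("/{name}")
        public Response status(@PathParam("name") String name) {
            // Hand-rolled JSON for brevity; a real resource would return a mapped entity.
            return Response.ok("{\"service\":\"" + name + "\",\"status\":\"ok\"}").build();
        }
    }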

Putting it all together

Busting out some sick UML skills here.

Class diagram for api-example

Some basic principles by convention are as follows:

  • Each entity lives in its own collection in Mongodb
  • Each entity has one data access object (DAO)
  • The Facade pattern is applied; facades should only contain business logic (no DB-related operations)
  • Each DAO then, at the very least, has its own facade that can be extended to support business logic
  • You can freely inject other facades into one another.
  • Each facade maps to one HTTP resource supporting the typical CRUD routines for that entity’s RESTful interface: GET, PUT, POST, DELETE and PATCH (ha!)
  • Caching headers, ETag, IMF headers can live in filters
  • Basic auth is also supported here as an example; that should live in filters too (see the sketch after this list)
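
As a sketch of that last point, here is a basic-auth filter in JAX-RS/Jersey terms; the hard-coded credentials are for illustration only, and java.util.Base64 assumes JDK 8.

    import javax.annotation.Priority;
    import javax.ws.rs.Priorities;
    import javax.ws.rs.container.ContainerRequestContext;
    import javax.ws.rs.container.ContainerRequestFilter;
    import javax.ws.rs.core.HttpHeaders;
    import javax.ws.rs.core.Response;
    import javax.ws.rs.ext.Provider;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Hypothetical filter: rejects any request without valid basic-auth credentials.
    @Provider
    @Priority(Priorities.AUTHENTICATION)
    public class BasicAuthFilter implements ContainerRequestFilter {

        @Override
        public void filter(ContainerRequestContext request) throws IOException {
            String header = request.getHeaderString(HttpHeaders.AUTHORIZATION);
            if (header == null || !header.startsWith("Basic ")) {
                request.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
                return;
            }
            String decoded = new String(
                    Base64.getDecoder().decode(header.substring("Basic ".length())),
                    StandardCharsets.UTF_8);
            if (!"admin:changeme".equals(decoded)) {
                request.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
            }
        }
    }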

Benefits

  • Loose coupling between the layers. You can replace Mongo quickly by just replacing the DAO implementations.
  • Most code is scaffolding or pure business logic. All connector code and basic CRUD support, including PATCH, lives in respective base classes for the entity, DAO, facade and resource layers, which can easily be extended and reused.
  • Easy to test. All layers are tested, all entities are tested, and the test code can be easily extended.
  • You can ship this code to an offshore team and expect that they can easily create new entities and new HTTP endpoints in a short time by simply copy-pasting some scaffolding code, reusing the templated CRUD classes, following the basic CRUD routines, and pumping out some basic business logic 🙂 Good times!
  • If you actually have a good team to work with, then this stack is very easy to extend by simply following the Facade pattern. Build cool stuff like your own in-memory RRDs for statistics, then inject those statistics into other business logic!
  • Clustering is easy because this stack speaks HTTP and you would simply need a load balancer and some minor (or major?) Mongodb config.

Cons

  • At the core, the Facade pattern is used liberally. This is not an event-driven or reactive approach at all. When using facades, think shared memory, which means threading and parallelism will require due diligence. This is one reason I believe an event-driven, message-based approach would improve on the Facade pattern.
  • The stack compiles against JDK 7, though it would work fine with JDK 8.
  • Not reactive, and by definition, not hipster enough.

#leetspeak ’14 for me: big ball of mud, devops, architectural validation

Nothing in I.T. is new. We merely learn to call it different names. We continue to battle with the big ball of mud. As this guerilla warfare thickens, we learn to scope down and manage smaller balls of mud through tighter collaboration and empathy between dev and ops. Devops anyone? The war against object-oriented programming rages on, backed by experts and Hacker News alike. It seems that we need a scapegoat, but even as objects are denounced as the new evil, one thing remains constant: Python is a great language with a shit runtime and Java is a shit language with a great runtime. Oh, and there is still no love for Visual Basic or Sharepoint developers. Actors, messages and events are the new hipster, but have they not been around since Erlang or Cobol? We give it a new name, a new stage like C10K and the internet and mobility, forgetting the lessons of old. Nothing in I.T. is new.

It is not often that one gets total validation of a previous architectural design and implementation from the likes of Spotify. I have long defended and championed my Java expertise in the land of sysadmin dinosaurs, where empathy seems to be part of a bikeshedding exercise. From this baptism of fire I realised it is not the language; it is the approach and architectural design that matter. A choice we make that will determine the construct and the size of this ball of mud. I walked out of Niklas Gustavsson’s talk thrilled, overwhelmed with joy and kinship. For I am not alone. For the stuff I architected in the past is composed of the same technical choices, principles and components. A big pat on my own back, and we need to do that from time to time 🙂

Then this brings us to devops, collaboration and throwaway code. Well, if Gartner says it is here in 2015, then the pitch to convince the execs is even easier.

Devops is not a position, a title or an engineering degree. I have written in the past that devops is culture. It is empathy and collaboration. For us grunts and peons navigating the corporate waters, this is the ammunition we need to break down silos and poor communication, ridding ourselves of these monolithic, non-scalable balls of mud. This is a grass-roots movement that will change the face of I.T. departments worldwide.

Indeed, it is a rare occasion to be surrounded by software craftsmen of similar background who are driven, passionate and willing to share their experiences. For that I am grateful to both Tretton37 and Gothenburg. If you missed leetspeak 2014, fear not, it was recorded. So thank you again, and I will definitely be back next year.

Now time to assemble my devops Avengers. We got work to do.

Microservice architecture and startups: a thought exercise

All buzzwords aside, microservice architecture, or microservices, refers to a style of software architecture where components are built as small, self-contained services. These services, when integrated as part of a larger stack, can be arranged to deliver functionality akin to a traditional monolithic application. This blog post is part of the devops discourse, an exploration and a thought exercise to further examine the best fit of microservice architecture for startups.

Quick microservice summary

Martin Fowler is an authoritative voice on this topic, and he has a series of blog posts outlining the finer details of microservices. Microservice architecture and the taxonomy of devops (see https://macyves.wordpress.com/2014/03/08/taxonomy-of-devops-in-startups/) are two peas in a pod, where in addition the former exhibits the following characteristics:

  • Componentization via services, service for everything!
  • Organised around business capabilities, cross functional teams
  • Products not projects, full SDLC responsibilities from teams
  • Smart endpoints and dumb pipes, common integration protocols
  • Decentralised governance, common components/libraries
  • Decentralised data management, data redundancy and distribution
  • Infrastructure automation, scalable platforms
  • Design for failure, resilience and high QoS
  • Evolutionary design, continuous integration

Mass adoption?

These aforementioned characteristics are in fact not exclusive to microservice architectures. They can be found in monoliths too. For example, I believe you can have smart endpoints and dumb pipes even when running a large cluster of heavyweight J2EE containers (J2EE is arguably the epitome of a monolithic platform). You can certainly build decentralised reusable components, which can be put together to form new WAR/EAR packages that can be deployed on elastic cloud infrastructure. The point here is that enterprises and monoliths are not the mark of the beast, much like microservices and devops are not the messiah. In fact, Fowler stamps his cautious optimism by saying

…you shouldn’t start with a microservice architecture. Instead begin with a monolith, keep it modular, and split it into microservices once the monolith becomes a problem.

So for a startup?

Assuming that resources are scarce and VC funding aplenty, the following are some of the common grounds that could help in the endeavour of moving from a monolithic architecture to a microservice one.

  • Cross-functional teams. Build a devops mentality by avoiding silos from the get-go. This extends even to mixing individuals with varying programming-language backgrounds at times.
  • Product overheads (release, packaging, maintenance, testing, etc). Avoid pushing out too many tiny products that can lead to overwhelming overheads and potential technical debt. Naturally do not build competing products.
  • If several product lines exist, the implication is that technical teams do not spend time contributing to the same code base. Camaraderie may be amiss and teams may be at loggerheads. Promote better cross-team communication and be aware of unintended silos.
  • A cross-functional team requires quite a wide spread of competencies. Invest in your resources and promote accountability and dogfooding.
  • Build monoliths and utilise common integration protocols (HTTP is always a good choice). Commonly adopted protocols tend to be easier to debug.
  • Keep the size of the teams small. Large teams build larger systems.

Take-aways

Microservice architecture attempts to programmatically address devops via architectural design, giving it a name that resonates with enterprise developers and architects. I predict that adoption of microservices will not be hampered by its inherent complexity or costs, but by human factors. Furthermore, microservices currently read like a silver bullet for everything, but underneath the surface there looms a fair amount of overhead and potential technical debt. I believe that a technical organisation needs to reach a certain critical mass and attain a certain level of competency in order to fully adopt microservices and reap the full benefits.

Indeed, I do not believe a startup company is fully capable of building a microservice architecture from the get-go. I would advise a transition strategy from monolith to microservices for small technical organisations. Naturally, startups can easily build components that can be used as microservices too. Perhaps that is where microservices shine the most for a startup.