Getting rid of monoliths: #lagom and microservice hooha

The road to ridding yourself of application monoliths is fraught with obstacles, mostly organisational. Whatever your reasons behind this brave move, you have convinced yourself that you will fix this big ball of mud, somehow… You will no longer stand for this shanty town of spaghetti code, riddled with nonsensical donkey poo. The code no longer speaks to you; it has surpassed the point of merely having a smell. This abomination has a cyclomatic “flavour” that you can almost taste at the tip of your tongue. No amount of Ballmer Peak is going to help. So you are going to make good on that promise to your linting tool, probably Sonar, to whom you have whispered hours of sweet nothings to the tunes of Bob Marley: “Don’t worry ‘bout a thing ’cause every little thing gonna be alright…” What once was a good idea has hideously mutated into a giant walking, talking abomination that crawls under your skin and haunts your every step. Under the guise of some sound architectural decisions, you are going to pay down this technical debt, hard. You are going to do microservices, and you are going to ship Docker containers. If this is you, read on.

The biggest modern day monoliths of them all, J2EE

Tasked with ripping apart functional applications into microservices, you will battle bikeshedders whilst debating your silly ass off against armies of the monolith fandom. During what seems like a very long coup d’état, this attempt to evangelise a newer, leaner architecture and approach is riddled with skeptics. “What is the frontend without JSF or JSP?! How dare you even question server-push technology like XYZFaces? And what about the JNDI lookups and EJBs?! Surely you cannot replace these things?”

So six years ago, I had the very pleasant experience of pulling out Glassfish in favour of embedded Jetty, replacing I-Hate-My-Faces with some simple JS (nowadays we would call it the MEAN stack, or one of its many permutations), and building APIs along microservice principles. Turns out that is what Spotify was doing at the time too. So there you go, haters. Bottom line: do not use J2EE, no matter what, if you care about having a competitive advantage. But if you need some good reasons, here are a few:

  • Most J2EE containers are grounded in the notion of vertical scalability, last I checked. Clustering should be idempotent and stateless, and scaling should be horizontal.
  • J2EE containers are not cloud native. Just look at their clustering! Unless you feel like having VPNs and private networks across different public clouds or data centers, you can probably just forget it.
  • So let’s put it behind the load balancer? No, most J2EE containers don’t do shared session persistence out of the box.
  • Let’s not kid ourselves, you will customise the crap out of this J2EE container; dropping war files upon war files to fix all its shortcomings.
  • Your sysadmins do not work with war/jar/ear files. They are *nix gurus and deserve to be treated like it. Ship your product like an actual product, sir! Apt/yum/brew is your friend, and please follow the Filesystem Hierarchy Standard (FHS) for goodness sake.
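To make the embedded-server point concrete: the JDK alone is enough to stand up a tiny HTTP endpoint, no container required. This is a minimal sketch using the built-in com.sun.net.httpserver package (not Jetty, and not any particular production stack), purely to illustrate the shape of a self-contained service you could ship as a plain runnable jar; the class and endpoint names are illustrative.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthService {
    // Response body for the health endpoint; kept as a method so it is testable.
    static String healthBody() {
        return "{\"status\":\"UP\"}";
    }

    public static void main(String[] args) throws IOException {
        int port = args.length > 0 ? Integer.parseInt(args[0]) : 8080;
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        // A single context; a real service would register one per resource.
        server.createContext("/health", (HttpExchange exchange) -> {
            byte[] body = healthBody().getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Listening on port " + port);
    }
}
```

Package that up as a deb/rpm with an init script and your sysadmins get a real daemon, not an ear file.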

Decide exactly what you are going to piecemeal

Have a very clear idea of the different categories or lifecycles of the applications you are going to transform. Timing is important, as is showing results. After all, this is naturally the minimum viable product (MVP) approach. I highly recommend avoiding systems of record to begin with; those are definitely not quick wins, and you will be in an arduous match going the full five rounds. Your future competitive advantage does not sit in your ERP or CRM systems. If it does, then um, yeah.

  • Isolate and separate clearly the functionality you are transforming or building
  • Ensure the isolation goes all the way down to infra, this is devops after all
  • Think of how to horizontally scale
  • Think of elasticity
  • Think of shared persistence across network
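The horizontal-scaling and shared-persistence points boil down to one rule: a service instance holds no session state of its own, so any replica behind the load balancer can serve any request. A minimal sketch, with a hypothetical SessionStore interface standing in for whatever external store (Redis, Memcached, a database) you would actually use:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical abstraction: session state lives outside the service process.
interface SessionStore {
    void put(String sessionId, String value);
    String get(String sessionId);
}

// In-memory stand-in for tests; production would back this with a networked store.
class InMemorySessionStore implements SessionStore {
    private final Map<String, String> data = new ConcurrentHashMap<>();
    public void put(String sessionId, String value) { data.put(sessionId, value); }
    public String get(String sessionId) { return data.get(sessionId); }
}

public class StatelessHandler {
    private final SessionStore store;

    public StatelessHandler(SessionStore store) { this.store = store; }

    // Every request round-trips through the shared store; no instance fields
    // carry per-user state, so the handler scales horizontally.
    public String greet(String sessionId) {
        String user = store.get(sessionId);
        return user == null ? "hello, stranger" : "hello, " + user;
    }
}
```

Swap the in-memory store for a networked one and the elasticity bullet above comes for free.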

Java is dead, long live Java!

The trend a few years back was relentlessly hating on Java and Java devs. Evidence shows otherwise: Java is still around, and it is going to be around for quite some time to come. No need to switch it out just yet, even after the programming nuclear winter imposed by object orientation and Java/.NET alike. Good code, a good ecosystem of tooling, and good solid design patterns go a long way, regardless of application domain or programming language.

The truth is, the same could have been said about node.js. I recall, a number of years back, a few colleagues quoting Hacker News regarding the state of node.js, how immature it was, and how Java was the “preferred” choice, even though the sysadmin community bagged the crap out of Java at the time. If you make strategic decisions predominantly based on the whims of HN, then you are as much of a plonker as the next troll. What your node.js/Java boys and girls need to remember:

  • Repeatable, testable build pipelines. Think CI/CD.
  • Coding standards and linting: no-brainers there
  • Packaging. Do the right thing, treat sysadmins as your end customers.
  • Separate out load balancing or clustering to other applications like nginx or haproxy. TCP stack makes more sense when written in C.
  • Lord forbid you try to do TLS termination in Java. This is really not cool, bro. You have a number of other choices, so do not add this complexity to the landscape. There is no OpenSSL implementation in Java, and OpenSSL is already difficult enough to maintain as it is.
  • Good monitoring and logging practices go a long way
  • Think network. Think TCP and HTTP.
  • Your JVM will live on top of a kernel. Know them. Tune that JVM and tune that kernel if needed.
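On the “know your JVM” point: before reaching for tuning flags, it helps to see what the JVM is actually running with. A small sketch using the standard java.lang.management API to report the effective heap ceiling (what -Xmx resolved to) and which garbage collectors are in play:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class JvmInspect {
    // Maximum heap the JVM will grow to, in megabytes (the effective -Xmx).
    static long maxHeapMb() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getMax() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Max heap: " + maxHeapMb() + " MB");
        // Lists the active collectors, e.g. young-gen and old-gen pairs.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC: " + gc.getName()
                    + ", collections so far: " + gc.getCollectionCount());
        }
    }
}
```

Run it under different flag sets and you will quickly see whether your tuning actually took effect before you start blaming the kernel.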

So, microservices. Heard of #lagom?

So you have found a good set of business functional requirements to transform into a set of microservices. Heard of #lagom? Maybe Dropwizard or Spring Boot? The choices are probably all OK; when doing microservices, there are simply no bad choices in my opinion, as the gains outweigh the means. The kicker is that there are probably a number of customised endpoints you will need to integrate with. These could speak HTTP, something-else-over-TCP, or whatever. There could also be JPA or other NoSQL data stores you need to use. Pick your microservice framework knowing that it is a framework, and it can easily grow. The microservice strategy can easily bloat into “milliservices” or plain “services” (SOA?) if you are not careful. So just how do you stop the code base from expanding? Keep distinct business functionalities as separate services and code bases; the sizing is up to you. Also, split common functionalities into submodules. Both Dropwizard and Spring Boot have a bunch. Lagom, for example, has recently been introduced as a microservice framework for Java, and it already has quite a lot of these connectors in place. I opted to homebrew our own microservice framework for maximum flexibility, ownership, and performance tuning.

Either way, armed with your chosen messiah of a framework, the idea here is to rain down free non-functional requirements across multiple projects and dev teams. Cost leadership for all!

  • Ease of hooking up to modern monitoring tools with a configurable metrics set: JVM memory, vmstat, iostat, CPU, JVM GC, etc.
  • Ease of pulling out logs into, say, InfluxDB or something similar.
  • Connectors to DBs should be submoduled and shared for future projects. Polyglot persistence ftw.
  • API documentation is super important. Do not assume your API users know your API, and make a point of maintaining backward compatibility.
  • Follow semver.
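To make the semver point concrete, here is a rough sketch of comparing versions under semantic versioning rules, where only a major-version bump signals a breaking API change (the class and method names are illustrative, and pre-release/build suffixes are ignored for brevity):

```java
public class SemVer implements Comparable<SemVer> {
    final int major, minor, patch;

    // Parses a plain "MAJOR.MINOR.PATCH" string; no pre-release tags handled.
    SemVer(String version) {
        String[] parts = version.split("\\.");
        major = Integer.parseInt(parts[0]);
        minor = Integer.parseInt(parts[1]);
        patch = Integer.parseInt(parts[2]);
    }

    // Per semver: bump major for breaking API changes, minor for
    // backward-compatible features, patch for backward-compatible fixes.
    boolean breaksCompatibilityWith(SemVer older) {
        return this.major != older.major;
    }

    @Override
    public int compareTo(SemVer o) {
        if (major != o.major) return Integer.compare(major, o.major);
        if (minor != o.minor) return Integer.compare(minor, o.minor);
        return Integer.compare(patch, o.patch);
    }
}
```

Your API consumers can then treat anything within the same major version as a drop-in upgrade, which is the whole point of the backward-compatibility bullet above.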

Keeping your head above water is number one. Hang in there; the good days will come. Just remember, nothing in software has been new since 1970; it all just gets modern marketing hype. And lastly, your fancy new microservice is legacy as soon as you launch, so please do consider the future generations. Let’s end this cycle of spaghetti-monster code through old-fashioned craftsmanship.


Taxonomy of DevOps in startups

Software engineering is changing, and DevOps is at the heart of it. There is a myriad of DevOps blogs and conference talks on transforming enterprises toward DevOps, usually centred around incorporating operational concerns early in the software development lifecycle (SDLC).

An Enterprise Transformation

On a larger scale, Mike Kavis, VP/Principal Architect at Cloud Technology Partners, puts it quite bluntly in a post titled No, You Are Not a DevOps Engineer.

“Enterprises are struggling with DevOps. They all want DevOps even though many do not know what it is. In many cases, I see infrastructure teams who are calling themselves DevOps leading a grassroots initiative. When I ask them where the development team is, they often say either “we did not invite them,” or even worse, “we don’t talk to them.””

Kavis goes on to say:

“Another reoccurring pattern I see is that a “DevOps” team’s first step is often to figure out if they are going to use Chef or Puppet (or Salt or Ansible or whatever else is hot). They have not even defined the problems that they are setting out to solve, but they have the tools in hand to solve them. Often these teams wind up building thousands of lines of the scripts, which raises the question, “are we in the business of writing Chef scripts or in the business of getting to market faster with better quality and more reliability?” Too often, these teams code themselves into a corner with mountains of proprietary scripts that actually add more waste to the system, instead of removing waste from the system, which is what the driving forces behind the DevOps movement are all about.”

Arguably these are symptoms specific to big enterprises, where traditionally massive engineering silos exist with minimal communication, coupled with ticketing/queueing systems between departments. But are startups immune to these symptoms?

Definition of DevOps

I agree with Kavis that DevOps is not a role or job title; it is not an engineering discipline, nor a methodology or process framework. I also agree with the post titled What Is (Not) DevOps, and How Do We Get There, that DevOps is a culture shift. It is a movement that encourages and improves communication and collaboration, and fosters quality software product delivery through tight-knit teamwork.

Damon Edwards, Co-Founder of DTO Solutions, stated at QCon London 2014, in a talk titled Dev “Programming” Ops for DevOps Success, that to him DevOps is

  • a way of seeing your problems
  • a way of evaluating solutions
  • a way of communicating these things
  • always evolving

I wholeheartedly agree.

DevOps taxonomy in startups

Arguably the size of startups and their limited resources require every technical member to be all hands on deck, participating in all aspects of the SDLC. Often those slightly more interested in Ops take the lead in addressing the many -ilities in requirements and release management, while those more interested in Dev take the lead on churning out code and test automation. Given the lack of resources and the inherent drive and focus of a startup, the taxonomy of DevOps in startups would be the following:

  • -ilities are addressed as backlog items during development
  • Release/production concerns are on the table from day one
  • Teams have a tighter bond, and communication channels are transparent
  • All efforts are tightly mapped with tangible values (be it business or customer)

The DevOps sweet spot

It would appear that startups are DevOps havens. That is not quite the case. Startups are not immune to the aforementioned problems of larger enterprises; silos do exist in startups, be it a one-man team or hundreds. There are plenty of pitfalls along the way. In my experience, there is a DevOps sweet spot, and the points below are my own guidelines for chasing that utopian DevOps Shangri-La.

  • Non-functional requirements, -ilities as features residing in backlogs
  • Build systems that make managing them easy for Ops
  • Foster a nurturing company environment for thriving Devs
  • Treat Ops as customer facing, sales/consultancy alignment
  • Treat Devs as business value delivery, marketing/R&D alignment
  • Adopt lean/agile principles
  • Last but not least: stop, collaborate, and listen! (no wiser words have come out of the 90s)

Vagrant, Varnish and vmods

Development environments have been plaguing my product development department for a while. From dependency hell to complex operational setups, our development environment has gone through the usual gauntlet of pains and complaints.

This has changed with Vagrant. It is the single tool that gels the devs with the ops; the quintessential DevOps tool, if you will. Not only has Vagrant helped eliminate the “works on my machine” bugs, we also use it for automated integration tests. In addition, this one tool has made our development environment setup quick and simple for our HCI guys too.

We do a lot of integration work with Varnish Cache, so I thought I would take this opportunity to share a simple Vagrantfile as an example, to help get started with installing Varnish and the libvmod-digest vmod from source.

Note that the provisioning process is rather crude in this example; the intention here is to outline the steps required to get Varnish and vmods installed and running via Vagrant. For production and future maintainability, do use Chef or Puppet, as they can be seamlessly integrated within the Vagrantfile.


# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.define :varnish do |varnish|
    varnish.vm.box = "varnish"
    varnish.vm.box_url = "http://files.vagrantup.com/precise64.box"
    $script_varnish = <<SCRIPT
echo Installing dependencies, curl
sudo apt-get update
sudo apt-get install curl -y
sudo apt-get install git -y
curl http://repo.varnish-cache.org/debian/GPG-key.txt | sudo apt-key add -
echo "deb http://repo.varnish-cache.org/ubuntu/ precise varnish-3.0" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://repo.varnish-cache.org/ubuntu/ precise varnish-3.0" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
echo ==== Compiling and installing Varnish from source ====
sudo apt-get install build-essential -y
sudo apt-get build-dep varnish -y
apt-get source varnish
cd varnish-3.0.4
./autogen.sh
./configure
make
sudo make install
cd ..
echo done
echo ==== Compiling and installing lib-digest vmod from source ===
git clone https://github.com/varnish/libvmod-digest.git
sudo apt-get install libmhash-dev libmhash2 -y
cd libvmod-digest
./autogen.sh
./configure VARNISHSRC=/home/vagrant/varnish-3.0.4 VMODDIR=/usr/local/lib/varnish/vmods
sudo make install
cd ..
echo ===== done ====
echo ===== firing up varnish via commandline ====
sudo varnishd -a :80 -T :6081 -f /vagrant/test.vcl
touch varnish_vm
SCRIPT

    varnish.vm.provision :shell, :inline => $script_varnish
  end

end
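The varnishd command at the end of the script expects a VCL file at /vagrant/test.vcl (Vagrant mounts the project directory at /vagrant by default). A minimal Varnish 3 VCL to pair with it might look like the following sketch; the backend address is an assumption for whatever origin you are testing against, and the digest call simply exercises the freshly built vmod:

```vcl
import digest;

backend default {
  .host = "127.0.0.1";
  .port = "8080";
}

sub vcl_recv {
  # Example use of the digest vmod: stamp each request with a hash of its URL.
  set req.http.X-Url-Digest = digest.hash_sha256(req.url);
}
```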