Thank you #devoxxpl! – Oh and today I saw the Donald Drumpf of devops: make devops great again?!

#Devoxx and #devoxxpl, what a fantastic experience in the wonderfully geeky city that is Krakow. The conference was a massive eye opener, from talks on the bleeding edge to discussions not far from old-school governance and architecture (yes, people still care about these discussions apparently). Indeed I felt very grateful to have presented my talk at #devoxx this year, mostly sharing our fun enterprise transformation journey and repping the Avengers (shout out to Warsaw yo!). I found my session very enjoyable, particularly thanks to a very receptive audience. Thank you from the bottom of my heart 🙂 You all rocked!

One talk that did stand out for me today was about development culture, particularly about the lack of business sense developers supposedly have, and how these nerds are considered cost centers because all they want is to waste funding trying out new tech. Seriously dude… you are the Donald Drumpf of devops and I am sub-blogging you (I guess this is a thing now). It's not like the business side can't be a cost center either, and it is pure naivete to put them up on an undeserved pedestal.

I hope that particular message was not taken to heart by the audience. Instead I would say this to my fellow devops, developers and admins alike: thirst for knowledge. There is nothing wrong with chasing the bleeding edge. This is the core of innovation and startup mentality.

Disruptive tech and business models like Bitcoin, Uber and AirBnB should be evidence enough that you do not need antiquated F.U.D. and this strange fixation on so-called “business sense”. Grow as technologists and blossom into creative innovators. Give back to tech and your open source communities. Do not listen to the Donald Drumpf of devops and his spiel based on self-doubt and guilt trips. Chase knowledge.

Devops is culture.


#kubecon ’16 for me: inspiration, community, and strategic validation

In a short span of time, companies will no longer own, and probably never again buy, new hardware. Gone are the days of PXE booting racks and racks of the stuff. These classical and mysterious oompa loompas are being replaced by automation and tooling. Hardware vendors are cannibalising and consolidating their businesses all at the same time. Skeptics remain, and haters are always going to hate. Then Kubernetes flew in. Like a moth to a flame I was completely drawn in. And just beneath the surface lies a giant of a community, oozing with great ideas at an amazing pace. Armed with my own fanboy enthusiasm, I ventured into my first #kubecon this year. One thing is for sure, I cannot wait to move stuff to CoreOS and rkt!

These days projects are executed by highly agile guru devops fullstack teams, and we are shipping containers. Containers are a fun new way to package software, as devs are always keen to rewrite SysV init or systemd. I too am guilty of this in the past, at times mustering the courage to create these unique rainbow-unicorn snowflakes of bash scripts. Yes, it appears that the dark days are back… maybe… and some of us might be enjoying this a little too much 😀

Containerisation is the essence of devops, as it breaks down the barriers between sysadmins and devs. However, Kubernetes does not solve the fact that we are still maintaining, or are currently churning out, shitty applications despite the improved process around them. The usual remedy is hammering down on non-functional requirements to level out most of the technical debt. Naturally, Kubernetes does not give you free non-functional requirements. Truth is, you never got non-functional requirements for free. This was the clear dividing line between craftsmanship and sipping soup from the big ball of mud. What Kubernetes does give you is elasticity, optimisation and scalability.

Self-serviced infra is cool. It was a common theme throughout #kubecon. Having Kubernetes as an integral part of the development process will bring about the next level of devops. Empowering these teams seems to be the magic sauce, and I wholeheartedly agree. Finally we can ditch the last of Elastic Beanstalk.

The biggest validating moment for me centered on enterprise transformation: automation, build pipelines and ChatOps. I knew my strategy was awesome, but this completely made my day. We put all three in place within a short time, and we are not alone in this hipster movement! Not to mention we are bloody good at this too! Right up there with the best. A proud moment as I quietly gave myself a pat on the back, swiftly jumped on Slack, shared the good news and congratulated everybody on the team.

#kubecon this year was just an amazing experience for me. It has been a while since I have attended a conference for the sole purpose of soaking up knowledge from the superheroes that I look up to. It was nerdinus maximus, happy hour all day every day, and I can't wait to do it all over again.

Microservice API on Elastic Beanstalk with Jetty, Jersey, Guice, and Mongodb

This blog aims to outline how one can very easily ship microservice APIs on Elastic Beanstalk with Jetty, Jersey, Guice and Mongodb. This piece of work, written in Java, was a weekend inspiration where I finally found a coherent red thread through some of the work I did in the past. So if you are reading this, be fully aware that this stack is no longer cool and hip since the advent of RxJava and Akka 😀

Code lives here.

Why Elastic Beanstalk

Platform as a service (PaaS) is rather nice. A little like Heroku, an Elastic Beanstalk (EB) environment can be tailored to run Docker containers, amongst other things. Where EB truly shines is that the platform is accessible and completely configurable as if it were IaaS. This is made possible by ebextensions. For example, each EB instance by default ships with a local Nginx instance, as designed by AWS. If you really want to switch it out, in theory you can do so by hijacking the initialisation process through ebextensions, uninstalling Nginx, and installing something like Varnish instead 😀 Yep, you can well and truly extend and tweak the platform to your heart's content, and potentially break everything.
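To give a flavour of the mechanism, an ebextensions file is just YAML dropped under .ebextensions/ in your app bundle. The sketch below is untested and the file name, package set and command are all my own assumptions, but it shows the shape of such an override:

```yaml
# .ebextensions/01_varnish.config (hypothetical; sketches swapping Nginx for Varnish)
packages:
  yum:
    varnish: []            # install Varnish via yum during provisioning
commands:
  01_stop_nginx:
    command: service nginx stop
    ignoreErrors: true     # nginx may not be running yet on first deploy
```

Commands in these files run as root during instance provisioning, which is exactly why you can break everything.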

Shipping to Elastic Beanstalk is fairly straightforward, as it mostly involves a Dockerfile or a Docker image hosted in a public or private repository. If you have one of those and are a bit savvy with the eb cli tool, you are sorted and ready to deploy to Elastic Beanstalk.

Having a Docker container also makes testing repeatable and standardised; see the previous build-pipeline blog series. The infrastructure part of setting up eb is intentionally left out of this post for now, as I believe Elastic Beanstalk deserves a blog post of its own. So let's keep it old school and talk UML and a little bit of code in this blog post.

Why Jetty

Jetty is a lightweight container, a naive attempt to defeat the monolithic J2EE containers; because let's be honest, most of us do not use half the functionality in J2EE, and the clustering of these J2EE containers goes against the very principle of microservices in my opinion. Instead, we should adhere to HTTP and RESTful-API everything! Note that Jetty is most certainly cool, but it is not RxNetty or Akka-http cool.
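For the curious, standing up an embedded Jetty with the Jersey servlet takes a handful of lines. Class names below are from Jetty 9 and Jersey 2, and the resource package name is an assumption for illustration:

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class EmbeddedServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.NO_SESSIONS);
        context.setContextPath("/");
        // Hand everything under /api/* to Jersey; the package name is hypothetical.
        ServletHolder jersey = new ServletHolder(org.glassfish.jersey.servlet.ServletContainer.class);
        jersey.setInitParameter("jersey.config.server.provider.packages", "com.example.api.resources");
        context.addServlet(jersey, "/api/*");
        server.setHandler(context);
        server.start();   // serve until interrupted
        server.join();
    }
}
```

No web.xml, no EAR, no container clustering; the whole server is just another object in your main method.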

Why Guice

Inversion of control is neat. On a grander scale, Guice can be used to inject mock layers en masse. For example, use Mockito to configure an entire mock data access layer and inject it in the context of unit or integration testing, thereby allowing more tests to be written with fewer dependencies. Guice is also a nice way to address separation of concerns by keeping configuration away from business logic. Lastly, being able to @Inject anywhere is powerful and allows us to construct a lot of templates and basically scale out horizontally through scaffolding code. When used properly, this is the little unsung hero of the Java world in my opinion.
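As a sketch of that mock-injection idea (the DAO interface and facade here are hypothetical, not from the repo):

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Hypothetical DAO interface and facade, names are illustrative.
interface UserDao {
    String findName(String id);
}

class UserFacade {
    private final UserDao dao;

    @Inject
    UserFacade(UserDao dao) { this.dao = dao; }

    String greet(String id) { return "hello " + dao.findName(id); }
}

public class TestModuleExample {
    // Build the facade against a Mockito mock DAO instead of a Mongo-backed one.
    static UserFacade facadeWithMockDao() {
        final UserDao mockDao = mock(UserDao.class);
        when(mockDao.findName("42")).thenReturn("alice");
        return Guice.createInjector(new AbstractModule() {
            @Override protected void configure() {
                // Swap the whole data access layer for the mock in one binding.
                bind(UserDao.class).toInstance(mockDao);
            }
        }).getInstance(UserFacade.class);
    }

    public static void main(String[] args) {
        // The facade never knows it is talking to a mock.
        System.out.println(facadeWithMockDao().greet("42")); // prints "hello alice"
    }
}
```

One test module like this and every facade in the graph gets the mock layer for free.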

Why Mongodb

Expect endless devops discussion on this very topic. Ever since the Hacker News trolls came out of the woodwork against 10gen, the discussion has never ended. I like Mongo. I like it because it is fast to bang out a prototype.

DBs can vastly differ in ACID properties and thus address different combinations of CAP. I think I will save my opinion on Mongodb for another blog post another time. For now, Morphia is nice to work with in Java.
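To give a flavour of Morphia, here is a sketch using the legacy org.mongodb.morphia API. The entity and DAO names are made up for illustration, and the save assumes a local mongod on the default port:

```java
import com.mongodb.MongoClient;
import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.Morphia;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;
import org.mongodb.morphia.dao.BasicDAO;

// Hypothetical entity: one entity, one collection ("users"), one DAO.
@Entity("users")
class User {
    @Id ObjectId id;
    String name;
}

class UserDao extends BasicDAO<User, ObjectId> {
    UserDao(Datastore ds) { super(ds); }
}

public class MorphiaExample {
    public static void main(String[] args) {
        Morphia morphia = new Morphia();
        morphia.map(User.class);
        // Assumes a local mongod; MongoClient connects lazily.
        Datastore ds = morphia.createDatastore(new MongoClient(), "example");
        UserDao dao = new UserDao(ds);
        User u = new User();
        u.name = "alice";
        dao.save(u);                 // writes to the "users" collection
        System.out.println(dao.count());
    }
}
```

POJO in, document out; this is the "fast to bang out a prototype" part.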

Why Jersey

Jersey is a pretty well structured way to write RESTful endpoints.
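A minimal resource to show the shape of it (the path and payload are hypothetical, and real code would return a mapped entity rather than a hand-built JSON string):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource: one facade maps to one HTTP resource like this.
@Path("/users")
public class UserResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String get(@PathParam("id") String id) {
        // Illustration only; normally you would delegate to the facade here.
        return "{\"id\": \"" + id + "\"}";
    }
}
```

Annotations describe the routing and content negotiation; the method body stays plain Java, which keeps the resource layer thin.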

Putting it all together

Busting out some sick UML skills here.

Class diagram for api-example


Some basic principles by convention are as follows:

  • Each entity lives in its own collection in Mongodb
  • Each entity has one data access object (DAO)
  • The Facade pattern is applied, and facades should only contain business logic (no DB-related operations)
  • Each DAO then at the very least has its own facade, which can be extended to support business logic
  • You can freely inject facades into one another
  • Each facade maps to one HTTP resource supporting the typical CRUD routines for that entity's
    RESTful interface: GET, PUT, POST, DELETE and PATCH (ha!)
  • Caching headers, ETag and IMF headers can live in filters
  • Basic auth is also supported here as an example; that should live in filters too


The pros:

  • Loose coupling between the layers. You can replace Mongo quickly by just replacing the DAO implementations.
  • Most code is scaffolding or pure business logic. All connector code and basic CRUD support, including PATCH, lives in its respective base classes for the entity, DAO, facade and resource layers, which can be easily extended and reused.
  • Easy to test. All layers are tested, all entities are tested, and the test code can be easily extended.
  • You can ship this code to an offshore team and expect that they can easily create new entities and new HTTP endpoints in a short time by simply copy-pasting some scaffolding code, reusing some templated CRUD classes, following the basic CRUD routines, and pumping out some basic business logic 🙂 Good times!
  • If you actually have a good team to work with, then this stack is very easy to extend by simply following the Facade pattern. Build cool stuff like your own in-memory RRDs for statistics, then inject those statistics into other business logic!
  • Clustering is easy because this stack speaks HTTP, and you would simply need a load balancer and some minor (or major?) Mongodb config.


The cons:

  • At the core, the Facade pattern is used liberally. This is not an event-driven or reactive approach at all. When using facades, think shared memory, which means threading and parallelism will require due diligence. This is one reason I believe an event-driven, message-based approach would improve on the Facade pattern.
  • The stack compiles against JDK7. It would work fine with JDK8.
  • Not reactive, and by definition, not hipster enough.

Websocket: the road to hell is paved with good intentions

“Server-push technology” was an elusive sales pitch half a decade ago. COMET, Flash and websockets promised ways for servers to “push” messages onto clients. We sold our souls to heavyweight libraries like Icefaces, GWT, or some other JSF abomination that included hixie-75/hixie-76 support. We became oblivious to this devil's promise and started blaming browsers when stuff did not work. Now, with RFC 6455, the reality of websocket for me is still as disjointed as ever. I started off as an evangelist and a staunch supporter of websocket, but as Captain Hindsight would say, “you are doing it wrong”, and I quickly learnt from my mistakes.

This blog is my personal journey with websocket. In the era of devops and cloud (just as elusive and sales-pitchy as ever), I find it really hard to see how this protocol fits elegantly into an elastic, edge-side, microservice cloud architecture/infrastructure.


Websocket does not work out of the box for most load balancers. Furthermore, when you upgrade the connection directly to the backend (see this piece I wrote for Varnish a while back) you lose edge-side load balancing and caching, essentially piping the backend directly to the browser, one connection per client at a time. Without some additional clever connection-multiplexing components between the load balancer and the websocket server, like websocket-multiplex or some tricked-out node-http-proxy, the websocket server will not scale. For those that prefer sales and marketing lingo, this means it is not “cloud enabled” or “web scale.” Furthermore, websocket's binary protocol, implemented by libraries such as jWebsocket, is extremely hard to debug in production. Unless you are super savvy with Wireshark, regularly eat TCP dumps for breakfast, and are a bit of a masochist, I highly recommend staying away from websocket altogether at the time of writing.
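If you do end up elbow-deep in Wireshark, one small mercy is that the RFC 6455 opening handshake is deterministic: the server's Sec-WebSocket-Accept header is just SHA-1 plus Base64 over the client's key and a fixed GUID, so you can verify captured handshakes by hand. A plain-JDK sketch (Java 8 for java.util.Base64):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class WsHandshake {
    // Fixed GUID defined by RFC 6455 for the opening handshake.
    static final String GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    // Compute the expected Sec-WebSocket-Accept for a given Sec-WebSocket-Key.
    static String accept(String secWebSocketKey) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest((secWebSocketKey + GUID).getBytes(StandardCharsets.US_ASCII));
        return Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        // Test vector from RFC 6455, section 1.3.
        System.out.println(accept("dGhlIHNhbXBsZSBub25jZQ==")); // s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
    }
}
```

Handy when a proxy in the middle is mangling headers and you need to know which hop broke the upgrade.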

Websocket in practice, and asking some hard devops questions

In the past, I have had the displeasure of working with Icefaces and GWT. These Java front-end frameworks abstract away the nitty-gritty of the network and protocols (websocket versions, handshakes, messaging formats and error handling) behind elegant MVC models. This is all well and good on the drawing board, but MVC is a design pattern for separation of concerns at the UI level. It is not exactly applicable when talking about the complexity and reality of running a websocket server in this internet- and mobile-driven world.

I have spent the past 4 years developing and supporting an application that utilised websocket from early on. I have to admit, it was fun building directly on top of jWebsocket, but it was painful and nearly impossible to debug in production. This was alleviated when we went full-blown backbone.js, require.js and d3.js. Keeping things simple pays off in the long run. From that experience, I have devised a checklist for any future websocket projects, to help avoid the same situation from happening again.

  • Is there any authentication with this websocket implementation? If so, how does it tie into the user model? (jWebsocket, for example, requires you to specify a couple of system users for authentication and namespaces/roles for authorisation. These are separate from the existing authentication/authorisation model used by the webapp.)
  • If it runs within a browser, can you tweak the client-side retry logic or connection timeout variables?
  • If this runs outside of a browser (and this gets weird fast), are there any TCP clients that can be used for automated testing or health checks in production?
  • How do you test this automatically? Is it supported by Selenium or phantom.js?
  • Can this websocket server be plugged into an existing load balancer? Are any additional tweaks and settings required?
  • Does this need multiplexing?
  • How do you debug this on the client side in production? This is usually not possible because the connection is now elevated to a TCP connection and the browser no longer cares about it.
  • How do you debug this on the server side in production? This gets even more tricky as you include multiplexers, load balancers and various other nodes that speak this binary protocol.
  • How do you debug this at all? Not possible if the browser gives up the connection and everything in between is a mystery.
  • OK, so we can do TCP dumps and Wireshark. Are you ready to do a TCP dump between somebody's browser, the origin server and everything else in between?
  • Catering to older browsers means a Flash fallback. Are you prepared to open an additional Flash port and start supporting this? (And repeat the set of debug and test questions for the Flash fallback.)
  • Does this thing scale? Yes, if you multiplex the connections.
  • How does it handle reconnection and stateful connections?
  • How does the server side handle connections, and the thread pools running the logic behind each connection?
  • Does the network infrastructure block Upgrade headers? Are there any other firewall rules that might break websocket?
  • Lastly, you must be prepared to give up caching.

Benefits with websocket

  • True server-push! Yes, this is true. Long-polling or COMET is not exactly “pushing” messages by definition. You are safe from the definition police.
  • You get to play with fun new websocket multiplexing code. This is quite cool actually. Mucking around with node.js is always fun. Perhaps you are thinking about building a Varnish VMOD to support websocket multiplexing. Perhaps you are thinking about building some kind of HTTP cacheable stream for websocket messages before stitching them back out as websocket payloads? This is all very exciting indeed!
  • Ability to build cool HipChat-like applications

Jenkins, jenkins-job-builder and Docker in 2 steps

So here is a simple example that will help provision a VM with Jenkins, jenkins-job-builder and Docker all in one with Vagrant.

Gone are those embarrassing git commits with those pesky jenkins-job-builder yaml files! Throwing in a Docker install in Vagrant for Mac users who shun boot2docker.

Some bonus stuff, like examples of provisioning Jenkins plugins via the cli interface and creating the first-time Jenkins user, is chucked in too.

Check it out!

The 2 steps are as below

  1. vagrant up
  2. point your browser to http://localhost:38080 and enjoy

and then ..

jenkins-jobs --conf jenkins-job-builder-localhost.ini test your-jobs.yaml -o output/

3 steps really… and for the keen learners wanting some intro material for docker, go here, and here for jenkins-job-builder.
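For reference, a jenkins-job-builder job definition is just YAML. A minimal your-jobs.yaml could look something like this (the job name and build step are hypothetical):

```yaml
# your-jobs.yaml -- hypothetical minimal job definition
- job:
    name: example-build
    description: 'Managed by jenkins-job-builder, do not edit via the UI'
    builders:
      - shell: |
          echo "hello from jenkins-job-builder"
```

Running the test command above renders this into Jenkins XML under output/ so you can inspect it before pushing anything to the server.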

#leetspeak ’14 for me: big ball of mud, devops, architectural validation

Nothing in I.T. is new. We merely learn to call it different names. We continue to battle with the big ball of mud. As this guerilla warfare thickens, we learn to scope down and manage smaller balls of mud through tighter collaboration and empathy between dev and ops. Devops anyone? The war against object-oriented programming rages on, backed by experts and Hacker News alike. It seems that we need a scapegoat, but even as objects are denounced as the new evil, one thing remains constant: Python is a great language with a shit runtime, and Java is a shit language with a great runtime. Oh, and there is still no love for Visual Basic or Sharepoint developers. Actors, messages and events are the new hipster, but have they not been around since Erlang or Cobol? We give them a new name and a new stage, like C10K and the internet and mobility, forgetting the lessons of old. Nothing in I.T. is new.

It is not often that one gets total validation of a previous architectural design and implementation by the likes of Spotify. I have long defended and championed my Java expertise in the land of sysadmin dinosaurs, where empathy seems to be part of a bike-shedding exercise. From this baptism of fire I realised it is not the language, it is the approach and architecture design that matters; a choice we make that will determine the construct and the size of this ball of mud. I walked out of Niklas Gustavsson's talk thrilled, overwhelmed with joy and kinship. For I am not alone. The stuff I architected in the past is composed of the same technical choices, principles and components. A big pat on my own back, and we need to do that from time to time 🙂

This brings us to devops, collaboration and throwaway code. Well, if Gartner says it is here in 2015, then the pitch to convince the execs gets even easier.

Devops is not a position, a title or an engineering degree. I have written in the past that devops is culture. It is empathy and collaboration. For us grunts and peons navigating the corporate waters, this is the ammunition we need to break down silos and poor communication, ridding ourselves of these monolithic, non-scalable balls of mud. This is a grass-roots movement that will change the face of I.T. departments worldwide.

Indeed, it is an even rarer occasion to be surrounded by software craftsmen of similar background who are driven, passionate and willing to share their experiences. For that I am grateful to both Tretton37 and Gothenburg. If you missed leetspeak 2014, fear not, it was recorded. So thank you again, and I will definitely be back next year.

Now time to assemble my devops Avengers. We got work to do.

Hipsterising Windows: cygwin vs babun vs git bash vs powershell – the Onion scale

It is clear with Azure that Microsoft is adopting a tool-chaining strategy based on Git and other useful terminal tools. Gone are the days of drawing pretty boxes in a CASE tool and expecting the stuff you cooked up in thin air to be high-performance, scalable and easily provisioned through cloud providers. The first step is to get yourself tools that can actually help your everyday devops life. Hark! Let us hipsterise Windows and bring on the terminals!

So if you take pride in your .dotfiles, git-foo and devops craftsmanship, but got stuck navigating the treacherous GUI waters of Redmond county, then this is for you. This blog post aims to be an opinionated and cynical evaluation of the 4 major terminal options available on Windows for running Git and other common everyday *NIX tools. The scale is in units of Onions, because you will be weeping when you start working with these abominations. Command Prompt did not make the list. It died with DOS 6.22.


Use Mac or Linux. Save yourself while you can. If you absolutely cannot avoid using Windows as the main platform, or are unable to run it through Virtualbox, then install Babun; though the experience is nowhere near the level of “epic unicorns riding waves of candy rainbow”, it is still very solid. Be prepared to deal with stuff like BLODA (Big List of Dodgy Apps), to question the existence of antivirus as a marketing and vendor lock-in strategy, and to get some real hands-on practice with copious amounts of “let me reboot my laptop to see if it fixes the problem. Oh, it did.” Start rubbing your ears, say “Wooooosa”, and remember the fault is not with Babun, but with Windows.


powershell
Git can be installed via chocolatey. This is a package manager that can be installed in Powershell with this monstrosity of a command.

PS:\> iex ((new-object net.webclient).DownloadString(''))

Chocolatey is billed as apt-get for Windows, but you need Powershell know-how, and you had best be a C# developer. Guessing if you do devops on a *nix/*BSD platform, this is not for you. Naturally, you will bloat your xbox PC, since none of the UNIX tools exist. Forget about curl, because in the world of Powershell the curl equivalent looks like…

PS F:\> (New-Object System.Net.WebClient).DownloadString("")

Ships with Windows and has a fancy blue background. Quite possibly awesome with stuff like Sharepoint. Network proxies work without additional configuration. But why Sharepoint… why…

You need to know your C#. Pipelining involves objects, not strings. My guess is that sed and awk would not work without a heap of ToString() method calls. Conceptually it is very powerful to have a debug console for everything Microsoft, but this is extreme vendor lock-in.

4/5 Onions. Bawling my eyes out reading MSDN documentation just trying to add headers to the WebClient. In contrast, this would be the perfect platform if you do know your C#. I do not see sharp, if you would pardon the pun.


cygwin
This is old school. Both Git and package managers are available; the options here are apt-cyg and pact. Cygwin is POSIX-compliant and can be the portable platform for most things *NIX.

Copying and pasting is made slightly easier by using the default clipboard. No need to fiddle with pbcopy or "*y and "*p. Yes, I can't think of any other pros besides the fact that most tools are available here.

Be ready to start using dos2unix and unix2dos all over the shop. Cygwin looks dated, but a little .dotfiles TLC will spruce it up real nice. Be prepared to download the internet for the tools you want, and you will probably come across a bunch of quirks and workarounds to get stuff going. Note that node.js no longer supports Cygwin either. You need to explicitly set the HTTP_PROXY env variable for network proxies.

3/5 Onions. It feels like I am back in university, trying to learn things instead of producing results.


babun
Best in class here. It is essentially Cygwin minus the quirks and ugliness. Has plenty of out-of-the-box tools and a nice package manager named pact that seems to work alright. pact is a bit like brew for Mac.

Purrrty and super fast to get going. zsh is my new fav shell.

Copy-pasting is a bit of a pain. Babun is installed in its own directory, sort of like a mini chroot environment. This makes accessing your Windows stuff feel a little jail-breaky. But why would you want to do that anyway when you've got a perfectly good shell? You need to explicitly set the HTTP_PROXY env variable for network proxies.

1/5 Onions. It is not all that bad and has most things you need for a flying start. It is, however, not an integrated terminal environment like the ones on Mac and Ubuntu, and there are quirks. For example, to get gvim to pop up in Windows (instead of using pact install gvim, which uses xterm and lord knows what would happen), the following script was added to make it work, provided that gvim.bat lives happily in the %SystemRoot% folder.

cmd /c gvim.bat "$@"

git bash

Not POSIX and not a derivative of Cygwin. This is MSYS, a collection of GNU utilities based on MinGW. Being non-POSIX, it uses the native Windows C runtime directly. It is as bare metal as Windows can go, if you ignore the antivirus and el crapo boot time.

Great for git. That is about it. It also doesn't care if files are not UNIX-format delimited. Pretty chillax, this tool.

Ugly as hell. Needs some TLC from your .dotfiles to make it visually appealing. You need to explicitly set the HTTP_PROXY env variable for network proxies. If you want other tools, you are shit out of luck: no package manager as far as I know.

2/5 Onions. It's not all that bad again, but it is really not ideal. Copying and pasting is a pain with this marking business.


I use my Mac for real work. I am too afraid to install Node, Ruby, Perl or Python stuff on these xbox laptops. Virtualbox and Vagrant are a must-have for sanity.