#kubecon ’16 for me: inspiration, community, and strategic validation

In a short span of time, companies will no longer own, and will probably never again buy, new hardware. Gone are the days of PXE booting racks and racks of the stuff. The classical and mysterious oompa loompas of the datacenter are being replaced by automation and tooling. Hardware vendors are cannibalising and consolidating their businesses all at the same time. Skeptics remain, and haters are always going to hate. Then Kubernetes flew in, and like a moth to a flame I was completely drawn to it. Just beneath the surface lies a giant of a community, oozing great ideas at an amazing pace. Armed with my own fanboy enthusiasm, I ventured into my first #kubecon this year. One thing is for sure: I cannot wait to move stuff to CoreOS and rkt!

These days projects are executed by highly agile guru devops fullstack teams, and we are shipping containers. Containers are a fun new way to package software, as devs are always keen to rewrite SysV init or systemd. I too have been guilty of this in the past, at times mustering the courage to create one of those unique-snowflake, rainbow-unicorn bash scripts. Yes, it appears that the dark days are back… maybe… and some of us might be enjoying this a little too much 😀

Containerisation is the essence of devops as it breaks down the barriers between sysadmins and devs. However, Kubernetes does not change the fact that we are still maintaining, and still churning out, shitty applications despite the improved process around them. The usual remedy is hammering down on non-functional requirements to level out most of the technical debt. Naturally, Kubernetes does not give you non-functional requirements for free. Truth is, you never got non-functional requirements for free. That was always the clear dividing line between craftsmanship and sipping soup from the big ball of mud. What Kubernetes does give you is elasticity, optimisation and scalability.

Self-service infra is cool. It was a common theme throughout #kubecon. Having Kubernetes as an integral part of the development process will bring about the next level of devops. Empowering these teams seems to be the magic sauce, and I wholeheartedly agree. Finally we can ditch the last of Elastic Beanstalk.

The biggest validating moment for me centred on enterprise transformation built around automation, build pipelines and ChatOps. I knew my strategy was awesome, but this completely made my day. We put all three in place within a short time, and we are not alone in this hipster movement! Not to mention we are bloody good at this too! Right up there with the best. A proud moment as I quietly gave myself a pat on the back, swiftly jumped on Slack, shared the good news and congratulated everybody on the team.

#kubecon this year was just an amazing experience for me. It has been a while since I attended a conference for the sole purpose of soaking up knowledge from the superheroes I look up to. It was nerdinus maximus, happy hour all day every day, and I can’t wait to do it all over again.

Yay! My first “enterprisey” blog post!

Not that the world needs any more “enterprisey” blogging, but I thought I would throw in my two cents in this arena anyway, as it is a new genre for me. Consider this my feeble attempt to disrupt Gartner from the perspective of the t-shirt brigade 😀 And thus I have started this new blog series, aptly labeled “enterprisey.”

This is a slightly more architectural and strategic take than my usual blogging style. Naturally this is entirely my own opinion and nobody else’s.

For those who are already part of this hipster movement and need a monolith-battle-hardened sparring partner, or for those keen to jump on for the sake of carrying out tactics such as devops, microservices or polyglot persistence, stay tuned and watch this space!

Microservice API on Elastic Beanstalk with Jetty, Jersey, Guice, and Mongodb

This blog aims to outline how one can very easily ship microservice APIs on Elastic Beanstalk with Jetty, Jersey, Guice and Mongodb. This piece of work, written in Java, was a weekend inspiration, where I finally found a coherent red thread through some of the work I did in the past. So if you are reading this, be fully aware that this stack is no longer cool and hip since the conception of RxJava and Akka 😀 http://blog.circleci.com/its-the-future/

Code lives here. https://github.com/yveshwang/api-example

Why Elastic Beanstalk

Platform as a service (PaaS) is rather nice. A little similar to Heroku, an Elastic Beanstalk (EB) environment can be tailored to run Docker containers, amongst other things. Where EB truly shines is that the platform is accessible and completely configurable, as if it were IaaS. This is made possible by ebextensions. For example, each EB instance ships by default with a local Nginx instance, as designed by AWS. If you really want to switch it out, in theory you can do so by hijacking the initialisation process through ebextensions, uninstalling Nginx, and installing something like Varnish instead 😀 Yep, you can well and truly extend and tweak the platform to your heart’s content, and potentially break everything.

Shipping to Elastic Beanstalk is fairly straightforward, as it mostly involves a Dockerfile or a Docker image hosted in a public or private docker.io repository. If you have either of those and are a bit savvy with the eb cli tool, you are sorted and ready to deploy to Elastic Beanstalk.

Having a Docker container also makes testing repeatable and standardised; see the previous build pipeline blog series. The infrastructure side of setting up EB is intentionally left out of this post, as I believe Elastic Beanstalk deserves a blog post of its own. So let’s keep it old school and talk a little UML and code in this post.

Why Jetty

Jetty is a lightweight container, and choosing it is my naive attempt to defeat the monolithic J2EE containers. Because let’s be honest, most of us do not use half the functionality in J2EE, and clustering these J2EE containers goes against the very principle of microservices in my opinion. Instead, we should adhere to HTTP and RESTful-API everything! Note that Jetty is most certainly cool, but it is not RxNetty or Akka-http cool.
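To make this concrete, here is a minimal embedded-Jetty bootstrap, a sketch assuming Jetty 9 and Jersey 2 on the classpath; the resource package name is a placeholder, not something from the api-example repo.

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.glassfish.jersey.servlet.ServletContainer;

public class Main {
    public static void main(String[] args) throws Exception {
        // one embedded Jetty server, no WAR, no J2EE container
        Server server = new Server(8080);
        ServletContextHandler context =
                new ServletContextHandler(ServletContextHandler.NO_SESSIONS);
        context.setContextPath("/");

        // mount Jersey as a plain servlet and point it at our resources
        ServletHolder jersey =
                context.addServlet(ServletContainer.class, "/api/*");
        jersey.setInitParameter(
                "jersey.config.server.provider.packages",
                "com.example.resources"); // placeholder package

        server.setHandler(context);
        server.start();
        server.join();
    }
}
```

Run it and any JAX-RS resources under the placeholder package are served from /api/*, with no container deployment in sight.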

Why Guice

Inversion of control is neat. On a grander scale, Guice can be used to inject mock layers en masse. For example, you can use Mockito to configure an entire mock data access layer and inject it in the context of unit or integration testing, thereby allowing more tests to be written with fewer dependencies. Guice is also a nice way to address separation of concerns by keeping configuration away from business logic. Lastly, being able to @Inject anywhere is powerful and allows us to construct a lot of templates and basically scale out horizontally through scaffolding code. When used properly, this is the little unsung hero of the Java world in my opinion.
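Here is a minimal sketch of that mock-layer trick, assuming Guice and Mockito on the classpath; UserDao and UserFacade are hypothetical names made up for illustration, not classes from the repo.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;

interface UserDao {
    String findName(String id);
}

class UserFacade {
    private final UserDao dao;

    @Inject
    UserFacade(UserDao dao) {
        this.dao = dao;
    }

    String greet(String id) {
        return "hello " + dao.findName(id);
    }
}

public class MockDataLayerExample extends AbstractModule {
    @Override
    protected void configure() {
        // swap the entire data access layer for a mock in one binding
        UserDao mockDao = mock(UserDao.class);
        when(mockDao.findName("42")).thenReturn("yves");
        bind(UserDao.class).toInstance(mockDao);
    }

    public static void main(String[] args) {
        UserFacade facade = Guice.createInjector(new MockDataLayerExample())
                .getInstance(UserFacade.class);
        System.out.println(facade.greet("42")); // "hello yves", no Mongodb needed
    }
}
```

Swap the module for one that binds the real Mongodb-backed DAO and the facade never knows the difference.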

Why Mongodb

Expect endless devops discussion on this very topic. Ever since the Hacker News trolls came out of the woodwork against 10gen, the discussion has never ended. I like Mongo. I like it because it is fast to bang out a prototype.

DBs differ vastly in their ACID properties and thus address different combinations of CAP. I think I will save my opinion on Mongodb for another blog post, another time. For now, Morphia is nice to work with in Java.
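As a taste of Morphia, here is a minimal sketch, assuming Morphia 1.x (the org.mongodb.morphia packages) and a local mongod; the entity, DAO and database names are made up for illustration.

```java
import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.Morphia;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;
import org.mongodb.morphia.dao.BasicDAO;

import com.mongodb.MongoClient;

@Entity("users") // each entity lives in its own collection
class User {
    @Id
    ObjectId id;
    String name;
}

// one DAO per entity, built on Morphia's BasicDAO
class UserDao extends BasicDAO<User, ObjectId> {
    UserDao(Datastore ds) {
        super(ds);
    }
}

public class MorphiaExample {
    public static void main(String[] args) {
        Datastore ds = new Morphia().map(User.class)
                .createDatastore(new MongoClient("localhost"), "api-example");

        UserDao dao = new UserDao(ds);
        User user = new User();
        user.name = "yves";
        dao.save(user); // Morphia fills in the ObjectId on save

        System.out.println(dao.get(user.id).name); // prints "yves"
    }
}
```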

Why Jersey

Jersey is a pretty well-structured way to write RESTful endpoints.
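As a flavour of that structure, here is an illustrative JAX-RS resource, assuming Jersey 2; the endpoint is hypothetical and not part of the api-example repo.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// annotations map HTTP verbs and paths straight onto plain Java methods
@Path("/ping")
public class PingResource {

    @GET
    @Path("/{name}")
    @Produces(MediaType.TEXT_PLAIN)
    public String ping(@PathParam("name") String name) {
        return "pong " + name;
    }
}
```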

Putting it all together

Busting out some sick UML skills here.

Class diagram for api-example

Some basic principles, by convention, are as follows:

  • Each entity lives in its own collection in Mongodb
  • Each entity has one data access object (DAO)
  • The Facade pattern is applied; facades should only contain business logic (no DB-related operations)
  • Each DAO then has, at the very least, its own facade that can be extended to support business logic
  • You can freely inject other facades into one another (see the sketch after this list)
  • Each facade maps to one HTTP resource supporting the typical CRUD routines for that entity’s RESTful interface: GET, PUT, POST, DELETE and PATCH (ha!)
  • Caching headers such as ETag and If-Modified-Since can live in filters
  • Basic auth, also supported here as an example, should live in filters too
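To illustrate the facade-into-facade injection mentioned above, here is a minimal sketch; StatsFacade and ReportFacade are hypothetical names, and the DAO layer is stubbed out to keep it short.

```java
import com.google.inject.Guice;
import com.google.inject.Inject;

// facades hold business logic only; DB access stays behind the DAOs
class StatsFacade {
    int countUsers() {
        return 42; // stand-in for a DAO-backed count
    }
}

// one facade freely injected into another
class ReportFacade {
    private final StatsFacade stats;

    @Inject
    ReportFacade(StatsFacade stats) {
        this.stats = stats;
    }

    String dailyReport() {
        return "users: " + stats.countUsers();
    }
}

public class FacadeExample {
    public static void main(String[] args) {
        ReportFacade report =
                Guice.createInjector().getInstance(ReportFacade.class);
        System.out.println(report.dailyReport()); // prints "users: 42"
    }
}
```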

Benefits

  • Loose coupling between the layers. You can replace Mongo quickly by just replacing the DAO implementations.
  • Most code is scaffolding or pure business logic. All connector code and basic CRUD support, including PATCH, lives in respective base classes for the entity, DAO, facade and resource layers that can be easily extended and reused.
  • Easy to test. All layers are tested, all entities are tested, and the test code can be easily extended.
  • You can ship this code to an offshore team and expect that they can easily create new entities and new HTTP endpoints in a short time by simply copy-pasting some scaffolding code, reusing some templated CRUD classes, following the basic CRUD routines, and pumping out some basic business logic 🙂 Good times!
  • If you actually have a good team to work with, then this stack is very easy to extend by simply following the Facade pattern. Build cool stuff like your own in-memory RRDs for statistics, then inject those statistics into other business logic!
  • Clustering is easy because this stack speaks HTTP and you would simply need a load balancer and some minor (or major?) Mongodb config.

Cons

  • At the core, the Facade pattern is used liberally. This is not an event-driven or reactive approach at all. When using facades, think shared memory, which means threading and parallelism will require due diligence. This is one reason I believe an event-driven, message-based approach would improve on the Facade pattern.
  • The stack compiles against JDK7. It would work fine with JDK8.
  • Not reactive, and by definition, not hipster enough.

Websocket: the road to hell is paved with good intentions

“Server-push technology” was an elusive sales pitch half a decade ago. COMET, Flash and websockets promised ways for servers to “push” messages onto clients. We sold our souls to heavyweight libraries like Icefaces, GWT, or some other JSF abomination that included hixie-75/hixie-76 support. We became oblivious to this devil’s promise and started blaming browsers when stuff did not work. Now, with RFC 6455, the reality of websocket for me is still as disjointed as ever. I started off as an evangelist and a staunch supporter of websocket, but as Captain Hindsight would say, “you are doing it wrong,” and I quickly learnt from my mistakes.

This blog is my personal journey with websocket. In the era of devops and cloud (just as elusive and sales-pitchy as ever), I find it really hard to see how this protocol fits elegantly into an elastic, edge-side, micro-service cloud architecture/infrastructure.

tldr;

Websocket does not work out of the box for most load balancers. Furthermore, when you upgrade the connection directly to the backend (see this piece I wrote for Varnish a while back) you lose edge-side load balancing and caching, essentially piping the backend directly to the browser, one connection per client at a time. Without some additional clever connection-multiplexing component between the load balancer and the websocket server, like websocket-multiplex or some tricked-out node-http-proxy, the websocket server will not scale. For those who prefer sales and marketing lingo, this means it is not “cloud enabled” or “web scale.” Furthermore, websocket’s binary protocol, implemented by libraries such as jWebsocket, is extremely hard to debug in production. Unless you are super savvy with Wireshark, regularly eat TCP dumps for breakfast, and are a bit of a masochist, I highly recommend staying away from websocket altogether at the time of writing.

Websocket in practice, and asking some hard devops questions

In the past, I have had the displeasure of working with Icefaces and GWT. These Java front-end frameworks abstract away the nitty gritty of the network and protocol, such as websocket versions, handshakes, messaging formats and error handling, behind elegant MVC models. This is all well and good on the drawing board, but MVC is a design pattern for separation of concerns at the UI level. It is not exactly applicable to the complexity and reality of running a websocket server in this internet- and mobile-driven world.

I have spent the past 4 years developing and supporting an application that utilised websocket from the start. I have to admit, it was fun building directly on top of jWebsocket, but it was painful and nearly impossible to debug in production. This was alleviated when we went full-blown backbone.js, require.js and d3.js. Keeping things simple pays off in the long run. From that experience, I have devised a checklist for any future websocket project, to help avoid the same situation from happening again.

  • Is there any authentication with this websocket implementation? If so, how does it tie into the user model? (jWebsocket, for example, requires you to specify a couple of system users for authentication and namespaces/roles for authorisation. These are separate from the existing authentication/authorisation model used by the webapp.)
  • If it runs within a browser, can you tweak the client-side retry logic or connection timeout variables?
  • If this runs outside of a browser (and this gets weird fast), is there any TCP client that can be used for automated testing or health checks in production? (see the sketch after this list)
  • How do you test this automatically? Is it supported by Selenium or phantom.js?
  • Can this websocket server be plugged into an existing load balancer? Are any additional tweaks and settings required?
  • Does this need multiplexing?
  • How do you debug this on the client side in production? This is usually not possible because the connection has been elevated to a TCP connection and the browser no longer cares for it.
  • How do you debug this on the server side in production? This gets even more tricky as you add multiplexers, load balancers and various other nodes that speak this binary protocol.
  • How do you debug this at all? Not possible if the browser gives up the connection and everything in between is a mystery.
  • Ok, so we can do TCP dumps and Wireshark. Are you ready to run a TCP dump between somebody’s browser, the origin server, and everything else in between?
  • Catering to older browsers means a Flash fallback. Are you prepared to open an additional Flash port and start supporting this? (and repeat the set of debug and test questions for the Flash fallback)
  • Does this thing scale? Yes, if you multiplex the connections.
  • How does it handle reconnection and stateful connections?
  • How does the server side handle connections, and the thread pools running the logic behind each connection?
  • Does the network infrastructure block Upgrade headers? Are there any other firewall rules that might break websocket?
  • Lastly, you must be prepared to give up caching.
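On the automated health-check question in the list above, here is a minimal sketch of a standalone probe, assuming Jetty 9’s websocket client on the classpath; the ws://localhost:8080/ws endpoint and the ping/echo behaviour are assumptions about your server, not a given.

```java
import java.net.URI;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;
import org.eclipse.jetty.websocket.client.WebSocketClient;

@WebSocket
public class WsHealthCheck {
    private final CountDownLatch replied = new CountDownLatch(1);

    @OnWebSocketConnect
    public void onConnect(Session session) throws Exception {
        // assumes the server echoes something back for this message
        session.getRemote().sendString("ping");
    }

    @OnWebSocketMessage
    public void onMessage(String message) {
        replied.countDown();
    }

    public static void main(String[] args) throws Exception {
        WebSocketClient client = new WebSocketClient();
        client.start();
        WsHealthCheck probe = new WsHealthCheck();
        // hypothetical endpoint; swap in your own
        client.connect(probe, URI.create("ws://localhost:8080/ws"))
                .get(5, TimeUnit.SECONDS);
        boolean healthy = probe.replied.await(5, TimeUnit.SECONDS);
        client.stop();
        System.exit(healthy ? 0 : 1);
    }
}
```

Wire the exit code into your monitoring and you at least know when the handshake itself stops working.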

Benefits with websocket

  • True server-push! Yes, this is true. Long-polling or COMET is not exactly “pushing” messages by definition. You are safe from the definition police.
  • You get to play with fun new websocket multiplexing code. This is quite cool actually. Mucking around with node.js is always fun. Perhaps you are thinking about building a Varnish VMOD to support websocket multiplexing. Perhaps you are thinking about building some kind of HTTP cacheable stream for websocket messages before stitching them back out as websocket payloads? This is all very exciting indeed!
  • Ability to build cool HipChat-like applications

Jenkins, jenkins-job-builder and Docker in 2 steps

So here is a simple example that will provision a VM with Jenkins, jenkins-job-builder and Docker, all in one go, with Vagrant.

Gone are those embarrassing git commits with those pesky jenkins-job-builder yaml files! A Docker install is thrown into the Vagrant setup for Mac users who shun boot2docker.

Some bonus stuff is chucked in too, like examples of provisioning Jenkins plugins via the cli interface and creating the first-time Jenkins user.

Check it out! https://github.com/yveshwang/jenkins-docker-2step

The 2 steps are as follows:

  1. vagrant up
  2. point your browser to http://localhost:38080 and enjoy

and then ..

jenkins-jobs --conf jenkins-job-builder-localhost.ini test your-jobs.yaml -o output/

3 steps really… and for the keen learners wanting some intro material for Docker, go here, and here for jenkins-job-builder.

The realities of trying to do Agile in a CMMI environment

Outsourcing is hard. Trying to do agile in a Capability Maturity Model Integration (CMMI) level 5 environment is damn near impossible. Without even discussing lean, let’s get the basics down first and examine how one can start thinking about doing Agile with SCRUM in an enterprise development organisation before tackling outsourcing. CMMI defines the what, whereas Agile lets you in on the how. This blog post aims to highlight some of the challenges you will face when bundling Agile with CMMI. To help you stay the course and hit the ground running, a simple summary of my key takeaways, based on previous experience, is provided. Most importantly, go sit the Scrum Master/Product Owner training. It is worth the 2 days.

Assumption

The assumption here is that most people get CMMI wrong. A quote from the father of CMM, Watts Humphrey: “Work on your quality and processes first, the levels will come by themselves.” This is by nature agile, an implicit continuous improvement if you will. Unfortunately, the pursuit of certification has landed most adopters and practitioners of CMMI in the Waterfall model as we understand it today. As an interesting side note, the Waterfall model since its conception is not, nor has it ever been, a single rigid software development life cycle. Most people misunderstand it and simply ignore the rest of the paper after the abstract; see “The art of destroying software.”

For me, CMMI level 5 conjures up an image of Jedi masters/ninja coders, industry thought leaders who are constantly optimising, learning, sharing, and challenging the status quo. My experience thus far has been the exact opposite. Unfortunately for me, CMMI level 5 has been an environment with a large number of unnecessary restrictions, heavyweight processes and meetings, coupled with documentation that nobody reads or maintains. The worst case is documentation that exists but that nobody understands. Let’s not even begin to explore the notion of devops or fullstack, as CMMI is a siloed model that ties everything together through processes and, you guessed it, meetings. A man I respect very much once said, “Meetings keep you busy, and not spending money.” The reality is that CMMI is typically applied completely wrongly, but complacency or fear of change makes CMMI the prime roadblock for doing anything Agile. In a devops world where even Agile is considered way too heavy, CMMI appears to be out of the stone age.

An interesting fusion of CMMI and Agile started to surface nearly half a decade ago. Supported by some academics, CMMI is slowly incorporating and adapting to Agile. However, the practical aspect of transforming a rigid siloed team into a somewhat agile setup is a monumental paradigm shift, all too often hampered by organisational pushback. CMMI is not going to catch up anytime soon as it continues to waddle in the back of the bus with its late adoption of Agile. Currently CMMI 1.3 for development has incorporated Agile into its processes. Services and acquisition have been left outside alone (it’s cold out here). Overall, one does not have to look very far to get a sense of the magnitude and complexity of CMMI 1.3.

The Agile Manifesto: the values

Let’s take a breather from CMMI and rehash the Agile Manifesto. Hark! Let us bask in their wisdom.

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

The Agile Manifesto: the principles

The following are the principles of Agile. Truth be told, it is arguably easier for startups or development houses to be Agile than for enterprises. Limited budgets and human resources usually breed clever ways to address ever-changing requirements. A light organisational structure also helps. However, the bottom line is that the Agile principles are the same regardless of budget and team size. These principles are proven and battle-hardened, and are well suited to both large and small organisations.

  1. Satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity–the art of maximizing the amount of work not done–is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

What Agile doesn’t tell you – key takeaways

All agile methodologies will help highlight the pain points quickly and early. However, Agile is not a silver bullet for solving these pain points; it is still up to you to address them. The really difficult barriers are usually organic, organisational, and cross-functional.

Expect change. A lot of change. This is the essential first step towards lean, and it is going to be a painful one for some. On a positive note, being fluid and adapting to change is all part of being Agile.

Juggling SCRUM with regular traditional project management status updates will be tricky. Try Commitment-Driven SCRUM.

CMMI usually requires a fixed-price contract based on some fixed estimation. In order to estimate the work involved correctly, the tools and platforms will be rather monolithic and probably inflexible. New technologies are not part of the CMMI game. This is the exact opposite of Agile. Though CMMI is suitable for certain application domains, it is important to find the balance and not let CMMI swallow up every facet of I.T. Marketing and sales would rather have new prototypes than a mini ERP.

Lastly, Agile does not dictate which methodology best fits your organisation or team. Remember, the team should pick the methodology appropriate for how they wish to work and interact.

Outsourcing

If outsourcing is part of the gig, have a read of the following 3-part blog series on outsourcing and SCRUM I wrote nearly two years ago. It is still applicable today.

#leetspeak ’14 for me: big ball of mud, devops, architectural validation

Nothing in I.T. is new. We merely learn to call it different names. We continue to battle the big ball of mud. As this guerilla warfare thickens, we learn to scope down and manage smaller balls of mud through tighter collaboration and empathy between dev and ops. Devops, anyone? The war against object-oriented programming rages on, backed by experts and Hacker News alike. It seems that we need a scapegoat, but even as objects are denounced as the new evil, one thing remains constant: Python is a great language with a shit runtime and Java is a shit language with a great runtime. Oh, and there is still no love for Visual Basic or Sharepoint developers. Actors, messages and events are the new hipster, but have they not been around since Erlang or Cobol? We give it a new name and a new stage, like C10K and the internet and mobility, forgetting the lessons of old. Nothing in I.T. is new.

It is not often that one gets total validation of a previous architectural design and implementation by the likes of Spotify. I have long defended and championed my Java expertise in the land of sysadmin dinosaurs, where empathy seems to be part of a bike-shedding exercise. From this baptism of fire I realised it is not the language, it is the approach and the architectural design that matter. These are the choices we make that determine the construct and the size of this ball of mud. I walked out of Niklas Gustavsson’s talk thrilled, overwhelmed with joy and kinship. For I am not alone. The stuff I architected in the past is composed of the same technical choices, principles and components. A big pat on my own back, and we need to do that from time to time 🙂

This brings us to devops, collaboration and throwaway code. Well, if Gartner says it is here in 2015, then the pitch to convince the execs is even easier.

Devops is not a position, a title or an engineering degree. I have written in the past that devops is culture. It is empathy and collaboration. For us grunts and peons navigating the corporate waters, this is the ammunition we need to break down silos and poor communication, ridding ourselves of these monolithic, non-scalable balls of mud. This is a grassroots movement that will change the face of I.T. departments worldwide.

Indeed, it is a rare occasion to be surrounded by software craftsmen of similar background who are driven, passionate and willing to share their experiences. For that I am grateful to both Tretton37 and Gothenburg. If you missed leetspeak 2014, fear not, it was recorded. So thank you again, and I will definitely be back next year.

Now time to assemble my devops Avengers. We got work to do.