Hipsterising Windows: cygwin vs babun vs git bash vs powershell – the Onion scale

It is clear with Azure that Microsoft is adopting a tool chaining strategy based on Git and other useful terminal tools. Gone are the days of drawing pretty boxes in a CASE tool and expecting the stuff you cooked up in thin air to be high performance, scalable and easily provisioned through cloud providers. The first step is to get yourself tools that can actually help your everyday devops life. Hark! Let us hipsterise Windows and bring on the terminals!

So if you take pride in your .dotfiles, git-foo and devops craftsmanship, but got stuck navigating the treacherous GUI waters of Redmond county, then this is for you. This blog post aims to be an opinionated and cynical evaluation of 4 major terminal options available in Windows for running Git and other common everyday *NIX tools. The scale is in units of Onions, because you will be weeping when you start working on these abominations. Command Prompt did not make the list. It died with DOS 6.22.


tl;dr

Use Mac or Linux. Save yourself while you can. If you absolutely cannot avoid using Windows as the main platform, or are unable to run it through Virtualbox, then install Babun; though the experience is nowhere near the level of “epic unicorns riding waves of candy rainbow”, it is still very solid. Be prepared to deal with stuff like BLODA (Big List of Dodgy Apps), to question the existence of antivirus as a marketing and vendor lock-in strategy, and to get some hands-on practice with copious amounts of “let me reboot my laptop to see if it fixes the problem. Oh, it did.” Start rubbing your ears, say “Wooooosa”, and remember the fault is not with Babun, but with Windows.


Powershell

Git can be installed via chocolatey, a package manager that can itself be installed from Powershell with this monstrosity of a command.

PS:\> iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

Chocolatey is billed as apt-get for Windows, but you need Powershell know-how, and you had best be a C# developer. If you do devops on *nix/*BSD platforms, this is probably not for you. Naturally, you will bloat your xbox PC since none of the UNIX tools exist. Forget about curl, because in the world of Powershell, the curl equivalent looks like…

PS F:\> (New-Object System.Net.WebClient).DownloadString("http://google.com")

Ships with Windows and has a fancy blue background. Quite possibly awesome with stuff like Sharepoint. Network proxy works without additional configuration. But why Sharepoint… why…

You need to know your C#. Pipelining involves objects, not strings. My guess is that sed and awk would not work without a heap of ToString() method calls. Conceptually it is very powerful to have a debug console for everything Microsoft, but this is extreme vendor lock-in.

4/5 Onions. Bawling my eyes out reading MSDN documentation just to add headers to the WebClient. In contrast, this would be the perfect platform if you do know your C#. I do not see very sharp, if you would pardon the pun.


Cygwin

This is old school. Both Git and package managers are available; the options here are apt-cyg and pact. Cygwin is POSIX compliant and can be the portable platform for most things *NIX.

Copying and pasting is made slightly easier by using the default clipboard. No need to fiddle with pbcopy or “*y and “*p. Beyond that, I can’t think of any other pros besides the fact that most tools are available here.

Be ready to use dos2unix and unix2dos all over the shop. Cygwin looks dated, but a little .dotfiles TLC will spruce it up real nice. Be prepared to download the internet for the tools you want, though; you will probably come across a bunch of quirks and workarounds to get stuff going. Note that node.js no longer supports Cygwin either. You need to explicitly set the HTTP_PROXY environment variable for network proxies.
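If the line-ending dance is new to you, here is a tiny taste of it. dos2unix does this properly; plain tr works in a pinch. The file names here are made up for illustration.

```shell
# Make a CRLF-delimited file, Windows style (/tmp/report.txt is a made-up name)
printf 'all good\r\n' > /tmp/report.txt

# Strip the carriage returns, i.e. a poor man's dos2unix
tr -d '\r' < /tmp/report.txt > /tmp/report.unix.txt

# Inspect the bytes; the \r is gone
od -c /tmp/report.unix.txt
```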

3/5 Onions. It feels like I am back in university, trying to learn things instead of producing results.


Babun

Best in class here. It is essentially Cygwin minus the quirks and ugliness. It has plenty of out-of-the-box tools and a nice package manager named pact that seems to work alright. pact is a bit like brew for Mac.

Purrrty and super fast to get going. zsh is my new fav shell.

Copy pasting is a bit of a pain. Babun is installed in its own directory, sort of like a mini chroot environment. This makes accessing your Windows stuff feel a little jail-breaky. But why would you want to do that anyway when you have a perfectly good shell? You need to explicitly set the HTTP_PROXY environment variable for network proxies.
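For the record, the proxy dance mentioned above looks something like this in your shell rc file. The proxy host and port are placeholders; substitute your own.

```shell
# Placeholder proxy address, not a real host; set this in e.g. ~/.zshrc
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="$HTTP_PROXY"

# Hosts that should bypass the proxy
export NO_PROXY="localhost,127.0.0.1"
```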

1/5 Onions. It is not all that bad and has most things you need for a flying start. It is, however, not an integrated terminal environment like the ones on Mac and Ubuntu. There are quirks. For example, to get gvim to pop up in Windows (instead of using pact install gvim, which uses xterm and lord knows what would happen), the following script was added to make it work, provided that gvim.bat lives happily in the %SystemRoot% folder.

cmd /c gvim.bat "$@"

git bash

Not POSIX. This is MSYS, a collection of GNU utilities built around MinGW. Being non-POSIX, it uses the native Windows C runtime directly. It is as bare metal as Windows can go, if you would ignore the antivirus and el crapo boot time.

Great for git. That is about it. It also doesn’t care if files are not UNIX-format delimited. Pretty chillax, this tool.

Ugly as hell. It needs TLC from your .dotfiles to make it visually appealing. You need to explicitly set the HTTP_PROXY environment variable for network proxies. If you want other tools, then you are shit out of luck: there is no package manager, as far as I know.

2/5 Onions. Again, it is not all that bad, but it is really not ideal. Copying and pasting is a pain with this marking business.


I use my Mac for real work. I am too afraid to install Node, Ruby, Perl or Python stuff on these xbox laptops. Virtualbox and Vagrant are a must-have for sanity.


Release management, part 3: Supporting and scaling idempotent, isolated, and parallel jobs in Jenkins build pipeline with Docker

This blog post aims to illustrate how Docker is introduced into an existing Jenkins build pipeline for the purpose of performing test automation in parallel whilst guaranteeing isolated and idempotent environments that are scalable.


In short, this is a truly exciting example of real-world devops tool chaining with Docker as part of a build pipeline. Needless to say, incorporating Docker into our build pipeline has trimmed at least 70% off the build pipeline completion time by enabling lockless and parallel job execution. Since each job can run in parallel, with Docker guaranteeing an isolated, idempotent environment for test automation, scaling out the build pipeline is as simple as adding more Jenkins slaves. With clever use of Docker, Vagrant, Git and Jenkins together, this kind of tool chaining provides a Platform as a Service (PaaS) that empowers developers and operations alike, a shining example of devops.

In the first part of the Release Management blog series, the design of the build pipeline was conceived and outlined. Verification of the build pipeline design was covered in part two of the series. Permeating throughout this series are testability and automation. To ensure the correctness of test results and outcomes, the test execution environment must be isolated and test execution idempotent.

Guaranteeing an idempotent and isolated job execution

Ensuring a clean slate and a controlled environment when executing tests is paramount to the correctness of the test results. The first blog post discussed Vagrant as the tool for creating VMs, then running said tests on those VMs. Each job would perform vagrant up, followed by a number of vagrant ssh commands with specific -c parameters, then vagrant destroy for tear-down. These commands guaranteed an idempotent and isolated environment for each test execution in a Jenkins job. However, there was a catch: vagrant up cannot be executed in parallel. See this discussion. This meant jobs that utilised Vagrant had to be specified as a protected resource, accessible by one job, on one slave, at a time. This locking mechanism is supported by the Throttle Concurrent Builds Jenkins plugin, where jobs wait in a queue for a given resource to be freed before execution. The kicker is that this plugin has been broken since October 2013 for Jenkins 1.536 and Throttle Concurrent Builds 1.8. See Jenkins issues #19986 and #20211. Suffice to say, as of October last year, the build pipeline ran on a single Jenkins executor, taking a full hour to complete one pipeline build.

Speeding up and scaling out with parallel job execution

Whilst Vagrant definitely offers isolated and idempotent environments, at the time of writing, firing up parallel Vagrant boxes with VirtualBox was buggy at best. It was later confirmed that VirtualBox as a provider does not support parallel provisioning in Vagrant. Naturally, if AWS were utilised as a Vagrant provider, this would be a non-issue. Alas, that was not the case.

Secondly, provisioning each box takes a long time. Vagrant startup times are measured in minutes and increase significantly when compounded by copious amounts of apt-get or yum installs during provisioning. This can be mitigated by having pre-built Vagrant boxes with all dependencies installed, but that becomes an added maintenance task.

The solution here is Docker. Unlike Vagrant, Docker creates lightweight containers, akin to LXC. Each container can be pulled and pushed, similar to the Git model, with docker.io offering a centralised repository of Docker images. With the recent Docker 1.0 release, it is ready for prime time and production usage. Indeed, using Docker is quick and easy: creating the necessary Dockerfile with the same provisioning steps as the Vagrantfile is a breeze. When the same user as the Jenkins slave is included in the Dockerfile, then upon successfully building the image, the Jenkins slave can mount any directory onto a container and have that same user write to the mounted directory from inside the container. This is central for reporting test results back.
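To make the user trick above concrete, a minimal sketch of such a Dockerfile might look like the following. Everything here is an assumption for illustration: the base image, the packages, and the uid 1000 of the Jenkins slave user are not taken from the actual pipeline.

```dockerfile
# Sketch only: base image, packages and uid are assumptions
FROM ubuntu:14.04

# Same provisioning steps the Vagrantfile would otherwise run
RUN apt-get update && apt-get install -y git make curl

# Mirror the Jenkins slave user so files written to a mounted volume
# keep the ownership the slave expects (uid 1000 assumed here)
RUN useradd -u 1000 -m jenkins
USER jenkins
WORKDIR /workspace
```

A job would then run something along the lines of docker run --rm -v "$WORKSPACE:/workspace" test-image /workspace/run-tests.sh (hypothetical image and script names), with the test report landing back in the Jenkins workspace under the right owner.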

Docker images, which act as a baseline for containers, provide an idempotent and isolated environment with the major benefit of being able to run containers in parallel, even if they are based on the same image.

Vagrant vs Docker – a moot comparison

The biggest question is: does Docker completely replace Vagrant? The answer is a very clear no. Vagrant is extremely useful during development, essentially giving developers the ability to run as many varied platforms as possible. Docker is a container, an isolated kernel namespace running on top of a shared Linux kernel. The difference may seem minute, but depending on the testing requirements, when full VMs are required, Vagrant is still a valid choice over Docker.

Platform as a service with Git, Jenkins, Docker and Vagrant! Devops to the rescue

Keeping the Vagrantfile and Dockerfile in their own repo is clever. With Git and Jenkins, provisioning updated sets of Vagrant boxes or Docker containers from the Vagrantfile and Dockerfile respectively is easy and gives you the ability to track changes to these VMs and containers. When changes are detected, Vagrant boxes can be rebuilt and the list of available boxes updated. Likewise, Docker images can be rebuilt and pushed as the latest tag. This both enables and empowers the development team to maintain their own production VMs and containers for development, testing and releases. This is PaaS with a liberal sprinkling of devops goodness.

Release management, part2: Designing scalable and testable build pipelines with BPMN and Jenkins Job Builder

This blog post aims to illustrate the process of designing build pipelines with Business Process Modeling Notation (BPMN) that are both scalable and testable. The central premise of this blog post is that Jenkins build pipelines are business processes, and BPMN closely aligns with Jenkins and its plugin ecosystem. For example, with some clever use of Jenkins plugins, gateways and swimlanes in BPMN can be supported and realised. In the context of release management, Jenkins provides the executable platform for business processes, which benefit tremendously from automation.

In addition, scaling, testing, version controlling and deploying BPMN build pipelines as Jenkins jobs can be achieved by adopting the Jenkins API, Git, and Jenkins Job Builder. All Jenkins jobs in the pipeline are written and templated in YAML and deployed via Git (say, via an update-hook) when changes are detected. This significantly sped up the process of implementing and stringing together complex Jenkins jobs. This approach allows for early pipeline verification and ultimately enables devops teams to utilise Jenkins far more efficiently, with greater freedom and control over their own build pipelines.

A quick BPMN primer

Designing build pipelines for release management is akin to doing any other business process modeling. BPMN offers a rich visual language to kick-start this process. An example of BPMN is shown below. See http://tynerblain.com/blog/2006/07/27/bpmn-gateways/ or http://www.bizagi.com/eng/downloads/BPMNbyExample.pdf for more information. The figure below is from the first link.

BPMN with joins and forks

Nodes in BPMN represent events, activities and gateways. The edges represent relationships and flow. This is fairly simple, yet offers a rich set of modeling tools to capture a complete business process. In addition, swimlanes represent multiple parallel flows. There is a strong similarity to UML activity diagrams.

Events capture the start and end of a process, as well as intermediate occurrences (such as timer events). Activities are split into tasks or subprocesses; they are user defined. Finally, gateways are a predefined set of logical operators designed to control the flow of tasks: for example, the exclusive, parallel and inclusive gateways.

Designing your own build pipelines

In the context of release management, a build pipeline refers to a series of test automation, compilation and deployment steps. These are tasks carried out prior to making the product available to the intended audience.

In the previous blog post in the series, https://macyves.wordpress.com/2014/05/20/release-management-part1-test-automation-with-vagrant-and-jenkins-build-pipeline/, test automation made up the bulk of the release management process at the start. Each test-oriented join job triggered multiple parallel downstream jobs. Only upon successful completion of all downstream jobs can the join job proceed to the next step. This is modeled by the parallel gateway, which is used to define forking and joining points in BPMN, as below.

Forked unit testing jobs

Jenkins and plugins

While the previous blog post in the series outlined the required plugins, no real explanation was given as to why and when to utilise them. There are two key plugins that enable forking and joining, and the locking of common resources: the Join trigger plugin and the Throttle Concurrent Builds plugin respectively. For example, the Join trigger plugin is key to implementing parallel gateways.

Once the plugin is installed, using it is as easy as accessing the usual Jenkins UI. The join plugin wiki page contains all the detailed information. https://wiki.jenkins-ci.org/display/JENKINS/Join+Plugin

For more information on the throttle plugin, see https://wiki.jenkins-ci.org/display/JENKINS/Throttle+Concurrent+Builds+Plugin

Roll out your build pipelines with Jenkins UI? NO!!

Armed with Jenkins and a complete pipeline design in BPMN, one is ready to start defining Jenkins jobs in the UI. However, avoiding the UI altogether is strongly recommended at this stage. The reason is that despite the convenience supplemented by the Jenkins UI, fiddling around with it is both time consuming and error prone. It is most certainly not scalable if you have multiple people working on the same build pipeline model in a clunky UI environment.

One solution to the problem is to use the Jenkins API and define all jobs in XML. However, this is arguably just as clumsy and error prone. The XML model supported by Jenkins maps to its very own internal object model, and to utilise it fully one would require a complete understanding of the underpinning relational model used by Jenkins.

Roll out your build pipelines with Git, and Jenkins Job Builder? Yes!

As with all things in software development life cycle, release management processes need to be repeatable, scalable and resilient. The following are the non-functional requirements for the build pipeline.

  • Testability and debugging support on non-production Jenkins setup
  • Tracking changes and rollback
  • Scalability

Enter Jenkins Job Builder, a fantastic tool for writing Jenkins jobs. The supported format is YAML, which is human readable. Furthermore, templating is supported, which translates to common job definitions and job reuse. For example, you could roll out multiple build pipelines based on branches of the source code. Ultimately, you can very easily set up a personalised build pipeline for each developer and their own personal branch with the same set of job templates. The power of Jenkins Job Builder is astounding, as it supports a myriad of Jenkins plugins. Moving away from the UI to a text-file-oriented way of interacting with Jenkins means productivity increases dramatically and the build pipeline skeleton can be tested.

Coupled with Vagrant, using Jenkins Job Builder allows the same set of Jenkins jobs to be deployed on any Jenkins server. The implication is that, in theory, Jenkins-related issues or test failures can be easily provisioned and reproduced in a VM. This means that anybody on the team can fire up a fully fledged release management platform with almost all the production trimmings. See the previous blog entry for the Vagrantfile that is capable of provisioning the exact Jenkins master node required for building and testing the packages. For more information on Jenkins Job Builder, see http://ci.openstack.org/jenkins-job-builder/

Last but not least, integrating Jenkins Job Builder with Git via the update-hook mechanism enables automated deployment of newly pushed YAML to production Jenkins. Upon failure, one can simply roll back or discard that commit. Complete freedom and control.
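A minimal sketch of such an update hook might look like the following. The jobs/ directory, the branch name and the deployment step are all assumptions for illustration, and error handling is omitted; jenkins-jobs update is the real Jenkins Job Builder deployment command.

```shell
#!/bin/sh
# Sketch of a Git update hook: redeploy Jenkins jobs when pushed commits
# touch the YAML job definitions. Paths and branch are assumptions.
refname="${1:-}"
oldrev="${2:-}"
newrev="${3:-}"

# Only react to pushes on master (assumption: jobs deploy from master)
if [ "$refname" = "refs/heads/master" ]; then
    if git diff --name-only "$oldrev" "$newrev" | grep -q '^jobs/.*\.yaml$'; then
        # jenkins-jobs reads the target Jenkins URL and credentials
        # from its own configuration file
        jenkins-jobs update jobs/
    fi
fi
```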

Testability and debugging support on non-production Jenkins setup

Before mapping out all the detailed scripts and various packaging steps, create an overarching build pipeline template. Run it through and see that all the join jobs trigger the correct downstream jobs. This is essentially functional testing and verification of the build pipeline itself. The template can of course be easily reused. Furthermore, a Vagrantfile would help the developers create their own build server.

Tracking changes and rollback

By utilising Git, one would immediately gain the ability to track changes in the YAML files and rollback if needed. The content of the Git update-hook script is entirely up to you.


Scalability

Write the YAML in as templated a form as possible. Use the parameters and variables supported by Jenkins Job Builder. Scaling out different variants of the same build pipeline should be as easy as defining a new “project” or “job-group” in Jenkins Job Builder.
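As a sketch of what that templating looks like, here is a hypothetical Jenkins Job Builder fragment. The job names, branch list and shell step are made up, not taken from the actual pipeline; only the job-template and project mechanics are Jenkins Job Builder's own.

```yaml
# Hypothetical template: one unit-test job stamped out per branch
- job-template:
    name: '{name}-unit-tests-{branch}'
    builders:
      - shell: |
          git checkout {branch}
          ./run-unit-tests.sh

- project:
    name: myproduct
    branch:
      - master
      - develop
    jobs:
      - '{name}-unit-tests-{branch}'
```

A nice property is that jenkins-jobs test can render these templates to XML locally, so the pipeline skeleton can be eyeballed before anything touches a real Jenkins.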


If you have read this far, you must be wondering if this is all too good to be true. The answer is yes. However, since each build pipeline is very specific to the product and its release processes, these samples from OpenStack are a great starting point: https://github.com/openstack-infra/jenkins-job-builder/tree/master/samples

dotfiles and Java ninjutsu

All the work mentioned below is open and done by true ninjas heralding the dark embrace of shells and terminals. I am but a mere follower of the art. This blog post acts as a harrowing reminder that one should constantly strive for ninpo perfection.

My dotfiles project, https://github.com/yveshwang/dotfiles, is very much set to my personal liking. It is but a tiny subset of, and based on, https://github.com/mathiasbynens/dotfiles/ and https://github.com/paulirish/dotfiles.

In addition to some git and vim tweaks, the added bonus here is that you can switch Java versions with the setjdk command, similar to the update-java-alternatives command in Ubuntu. This is brilliant!

setjdk magic!

This sterling little pearler lives in the .extra file as

# switch JDK version for maverick
# http://www.jayway.com/2014/01/15/how-to-switch-jdk-version-on-mac-os-x-maverick/
function setjdk() {
  if [ $# -ne 0 ]; then
    removeFromPath '/System/Library/Frameworks/JavaVM.framework/Home/bin'
    if [ -n "${JAVA_HOME+x}" ]; then
      removeFromPath $JAVA_HOME
    fi
    export JAVA_HOME=`/usr/libexec/java_home -v $@`
    export PATH=$JAVA_HOME/bin:$PATH
  fi
}

function removeFromPath() {
  export PATH=$(echo $PATH | sed -E -e "s;:$1;;" -e "s;$1:?;;")
}

setjdk 1.7

edit 03.04.2014: github loves dotfiles! http://dotfiles.github.io/