Awesome sauce gleaned from #retailcode London 2018

#Retailcode London 2018 is over and it was such a unique hangout with fellow speakers from the retail vertical. I am honoured and thankful to have been invited. Having this opportunity to share my two cents and get feedback from like-minded individuals was awesome!

Jam-packed within 2 days, information and knowledge sharing flowed and tech round tables ensued. We homed in on the real issues that plague us all, and discussed devops both in detail and in a broader contextual sense. These big guns of retail IT, who were either born in the digital API monetisation era or have completely transformed, shared their heartfelt stories. Along with some new ideas, we traded our current pain points and hard-earned titbits without hesitation. Long before I embarked on this journey, they already were, and still are, the source from which we draw our inspiration. For me personally, it was also a great hands-on opportunity to evaluate and compare our existing CI/CD stack and approach.

I will summarise my raw key reflections below, in no specific order and anonymised.

  • Breaking down silos is still a thing, and some of us organise by functions.
  • Devops is trust, empowerment and an appetite for change.
  • Devops, when done correctly, reduces TCO.
  • Devops is culture, a view shared by all. And it is time to move on as it is becoming a bit of a red herring.
  • Instead of simply being the “king makers” of a bygone era, we are now a symbiotic part of the business with a firm seat at the table.
  • Encourage the notion of Citizen Developers and map out how to get non-tech colleagues on board as citizen devs.
  • Know ALL your business processes because once you do, you can pick and choose which to automate and optimise and thus ending business case guess-work.
  • You want to monitor and correlate business KPIs with infrastructure metrics.
  • You want this monitoring to be transparent across all departments.
  • One of the retailers has a strict policy of keeping transaction data out of their microservice stack because of the speed they gain in all other areas. Instead, they maintain a macro and microservice separation. A bit like a pace-layered architecture.
  • Some retailers have a matrix organisation where C-level people have technical debt and automation as part of their KPIs, thus driving quality and firmly grounding IT goals in their day-to-day.
  • Innovation is based on trust.
  • Procuring Selenium as a service is a novel idea and worth considering.
  • Feature toggles are king. They enable AB testing and drive even quicker deployment.
  • Having an approval green button for launching/deployment equates to “you are doing it wrong.”
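To make the feature-toggle point concrete, here is a minimal sketch of a percentage-based toggle; the registry, feature names and rollout numbers are all made up for illustration, and real setups would back this with a config service rather than an in-memory dict:

```python
import hashlib

# Hypothetical in-memory toggle registry; a real setup would back this
# with a config service or database so it can change without redeploying.
TOGGLES = {
    "new-checkout": {"enabled": True, "rollout_percent": 20},
    "dark-mode": {"enabled": False, "rollout_percent": 100},
}

def is_enabled(feature: str, user_id: str) -> bool:
    """Deterministically bucket a user into a percentage rollout."""
    toggle = TOGGLES.get(feature)
    if not toggle or not toggle["enabled"]:
        return False
    # Stable hash so the same user always lands in the same bucket,
    # which is what makes A/B comparisons meaningful.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < toggle["rollout_percent"]

# Ship the code dark, then ramp rollout_percent without redeploying.
variant = "B" if is_enabled("new-checkout", "user-42") else "A"
```

The deterministic bucketing is the key design choice: ramping a rollout from 20% to 50% only adds users to the treatment group, it never flip-flops existing ones.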

I have learnt a lot over the two days and am now an eager beaver, keen to hack and piecemeal some of these ideas when I get back. Thanks again for having me #retailcode London! Awesome stuff!


#GDPR + #DDoS = #GDPRDDoS

Ever asked yourself what would Jesus do with your personal and private data (the answer might be here)? And what happens when mere mortals revolt against such an entity by requesting rights of access and rights to be forgotten, perhaps all at once? How about a sustained revolt, say over a period of months after the 25th? Could lesser companies be on the verge of a new C10k problem with real world consequences? And dare I say, a GDPR flavoured DDoS? GDPR + DDoS = GDPRDDoS perhaps?

Often the greatest ideas pop up when casually discussing tech amongst close friends. A couple of beers and rounds of board games later over a quiet weekend, we realised just how extensively one’s personal data permeates all the digital services we take for granted today. Seeing that none of my immediate circle of tech friends presently works for a God-like entity capable of predicting our behaviours and futures, we started wondering what the mere deities of the world would do. In case you are wondering, God has a name, and it’s GAFAMA (Google, Amazon, Facebook, Apple, Microsoft, Alibaba).

This is my prediction: I believe an active, distributed and organised effort to retrieve personal data will occur, and companies that have not automated anything will no doubt get hit the worst. I am coining this very scenario the GDPRDDoS.

Let’s do a little thought exercise together

Let’s assume a single piece of personal data is gathered in one place. This one system will have no problem retrieving such data (with consent of course). Now let’s assume this data is distributed to 3 other services, all equipped with different persistence layers. Perhaps the first is structurally backed by a blockchain, another a vanilla SQL store but done in an immutable and event-driven fashion, the last a rebellious NoSQL document store. This level of integration is not uncommon for a microservice landscape and often considered fairly manageable. Now let’s assume 1 of the 3, the blockchain oriented service, is procured as a SaaS. The NoSQL backed service is hosted in an EU cloud elsewhere, but its operation and development are outsourced, with the vendor pretty much only responding to email based change requests. Finally, let the good old SQL backed service be hosted and maintained internally with your own teams. ’Cos you know, we play it safe with SQL. We also assume that since the first 2 services are outsourced, they potentially have additional systems that the centralised place has no integration towards.
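As a sketch of what automating this fan-out might look like, here is a toy coordinator; the three handlers are hypothetical stubs standing in for the SaaS API call, the vendor's change-request queue, and the internal SQL query:

```python
# Hypothetical stubs for the three persistence layers in the thought
# exercise; a real system would call the blockchain SaaS API, file the
# vendor's change request, and run the SQL query respectively.
def fetch_from_blockchain_saas(subject_id):
    return {"source": "blockchain-saas", "records": [f"block-ref:{subject_id}"]}

def fetch_from_nosql_vendor(subject_id):
    return {"source": "nosql-vendor", "records": [f"doc:{subject_id}"]}

def fetch_from_internal_sql(subject_id):
    return {"source": "internal-sql", "records": [f"row:{subject_id}"]}

def subject_access_request(subject_id):
    """Fan a single subject-access request out to every system
    holding personal data, collecting partial failures as we go."""
    handlers = [fetch_from_blockchain_saas, fetch_from_nosql_vendor,
                fetch_from_internal_sql]
    results, failures = [], []
    for handler in handlers:
        try:
            results.append(handler(subject_id))
        except Exception as exc:  # one vendor timing out must not sink the SAR
            failures.append({"handler": handler.__name__, "error": str(exc)})
    return {"subject": subject_id, "results": results, "failures": failures}
```

The point is not the stubs but the shape: once the fan-out is code, handling 1 request or 10,000 is the same loop, which is exactly what the scenarios below hinge on.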

With that in mind, and assuming you’re responsible for these systems, let’s say 1 individual digitally asks for their data on the 25th.

Scenario 1 – When old school case handling works

Now let’s assume that the system is fairly static and that no dynamic streams of data about users are ever pumped into your systems. Your blockchained service will be in trouble if you have baked personal information into the blocks. Nonetheless, we assume they are somehow issued centrally and you are in luck here. Assuming also you have a fairly decent relationship with the NoSQL backed vendor, you can turn around a change request in 2 days. You pay a minor fixed price each time. Lastly, you have a fantastic team managing your SQL backed service and running a quick query against the existing system is no problem. Great! The cost of retrieving this data is pretty much the time and material cost of the first two vendors plus some hours for the third. Your customer service burns the data onto a carton full of 5” floppy disks and hands it over to the customer with a smile on their faces.

Scenario 2 – Scaling up to 10 requests

Same setup, but now 10 requests hit. No problem! With glee, your customer service bundles the 10 requests into the same email to the vendors, perhaps waits the 2 days, pays for the 2 days and burns 10 cartons’ worth of floppy disks. Assuming you still have an amicable relationship with your tech vendors, you’re fine. Your team is still OK because dude, c’mon, 10 additional queries? Your turnaround time is slightly increased because handling 10 independent cases is hard, but your customer service manages. You sleep soundly at night.

Scenario 3 – 100 requests

Now things get a little interesting. Your blockchain vendor is saying that they might have to invest in some mechanism to replicate the chains and essentially re-write and verify history. Now your MS Access die-hard event-sourcing CQRS SQL team is saying the same! “This is supposed to be immutable! We are gonna have to find a way to re-create all histories and projections somehow… But 100 requests is not that bad, we can re-create the DB each time to the nth degree.” Your NoSQL vendor is fine and scales up the bills accordingly. You are getting a little edgy because the cost of retrieving the data is no longer an email or a simple SQL join-select, no longer a simple time and material calculation. You hope that no more requests come in and that the nation is finally satisfied with the knowledge that you haven’t done sweet FA to their personal data. Your customer support might be threatening to quit if they have to keep coordinating this manually, and you are fast running out of floppy disks to procure. You are twiddling your thumbs, wondering if you should hit up your back-alley hookup for some floppy disks.
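A toy illustration of what that projection rebuild could look like; the event shapes and the simple filter-then-replay erasure are assumptions for the sketch, and real systems might instead crypto-shred a per-user encryption key and leave the ciphertext in place:

```python
# A toy event store: erasing a data subject means dropping (or redacting)
# their events and replaying the rest to rebuild every projection.
EVENTS = [
    {"seq": 1, "user": "alice", "type": "signup", "email": "a@example.com"},
    {"seq": 2, "user": "bob", "type": "signup", "email": "b@example.com"},
    {"seq": 3, "user": "alice", "type": "purchase", "sku": "floppy-disk"},
]

def erase_subject(events, user):
    """Drop a subject's events entirely; crypto-shredding is the
    common alternative when the log itself must stay append-only."""
    return [e for e in events if e["user"] != user]

def rebuild_projection(events):
    """Replay the (possibly redacted) log into a per-user read model."""
    view = {}
    for e in sorted(events, key=lambda e: e["seq"]):
        view.setdefault(e["user"], []).append(e["type"])
    return view

remaining = erase_subject(EVENTS, "alice")
projection = rebuild_projection(remaining)  # only bob survives the replay
```

At 100 requests this replay-everything approach is already painful, which is exactly the team's complaint above: every erasure invalidates every projection built from the log.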

Scenario 4 – 1000 requests

At this rate, you know you are out of floppy disks, your customer support is on the verge of quitting, and your blockchain vendor and your internal teams are no longer speaking to you. You generate 1000 emails yourself in the hope that somebody else internally will be able to piece together some kind of data to be handed over to somebody.

Scenario 5 – 10000 requests

You rage quit and attempt to talk yourself into a job with GAFAMA, because you know they have probably invested a bunch in automating all of this stuff, and you have interests other than sending out emails trying to get hold of data!

Scenario 6 – A sporadic spike in requests

Let’s say 1% of your national population requests this on the 25th… #fml
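For a rough sense of scale, here is the back-of-envelope arithmetic with assumed numbers: a population about the size of the UK's in 2018 and a guessed 30 minutes of manual handling per case.

```python
# Back-of-envelope, with assumed numbers throughout.
population = 66_000_000        # roughly the UK, 2018
request_rate = 0.01            # 1% exercise their rights on day one
minutes_per_case = 30          # assumed manual handling effort per request

requests = int(population * request_rate)           # 660,000 cases
person_days = requests * minutes_per_case / 60 / 8  # 8-hour workdays
```

That works out to 660,000 cases and roughly 41,250 person-days of purely manual handling, which is the whole argument for automating this before the 25th.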

Key takeaways

Though the scenarios are purely theoretical and fictional, they were nevertheless a joy to write. Our data belongs to us and it is time that all consumers take back their rightful ownership. If businesses are not automating, are not sure how, or have systems and vendors that are vastly difficult to integrate with, then brace yourselves: your winter is coming.

#GDPR, #devops, and some honest advice

Here is some free and honest advice on the topic of the EU General Data Protection Regulation (GDPR) – because frankly we are a little tired of the corporate jargon.

Being a good custodian of personal data does not have to be scary or difficult. I happen to think it is quite straightforward, though it can be a fair amount of work. The most important piece of the puzzle for data controllers or processors is that you will require a devops/fullstack team. If you are an old school enterprise, then you should have gone through some kind of I.T. transformation to embrace and put in place the principles of devops. This is because during the process of working through the various controls surrounding personal data, chances are things will be missed, particularly if you have a very large portfolio. The only sure defence is that, as an organisation, you can work in an iterative, lean and agile, and collaborative manner.

Some unsorted advice in bullet points below.

  • You should have a crew of fullstack devops superheroes for your customer facing application stack.
  • The definition of a customer facing application stack includes the full spectrum of interactions between consumers and your organisation. This includes direct user interactions all the way down to the back office. If you need personal data somewhere along the chain, then it is in scope.
  • As data controller or data processor, or both, you will not sell, distribute or own said data without clearly informing the users and obtaining consent at the very least.
  • You have evaluated if you need a Data Protection Officer (DPO), and if so, you have appointed or sourced one.
  • Customer facing applications have clear privacy policies and terms and conditions. Naturally your systems follow and honour these terms and conditions and privacy policies.
  • You may even have content or communiqué explaining how you have secured your customers’ data.
  • Safe harbour and privacy shield are not settled yet. Avoid storing data in physical locations in the US to play it safe.
  • You should probably have infrastructure as code (Terraform or Puppet etc.) and be a little lean, to be able to move data and applications between various public clouds or onprem setups when required.
  • Encryption matters. Don’t be an idiot. Do it at rest, do it in transit, and don’t use dated algorithms.
  • Use bcrypt or scrypt for password hashes, salted of course.
  • Exercising sound ITSec principles is a no-brainer.
  • Spread the use of a unique user ID across all systems, as a pseudonymisation and standardisation effort.
  • OAuth2 has a great selection of grant types that support different ways of authenticating and authorising towards different systems and user-agents.
  • Build and customise your consent process around OAuth2.
  • Nuke data when your users want out, including in the source and integrated systems, even with OAuth.
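On the scrypt point above, here is a minimal sketch using Python's standard library; the cost parameters (n=2**14, r=8, p=1) are commonly cited defaults, not a recommendation tuned for your hardware:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Salted scrypt hash; cost parameters are commonly cited defaults."""
    salt = salt if salt is not None else os.urandom(16)  # fresh random salt
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(digest, expected)

# Store salt and digest together; never store the password itself.
salt, stored = hash_password("correct horse battery staple")
```

Bcrypt works just as well; the point is a deliberately slow, salted, memory-hard hash rather than a bare SHA-256.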

The hard truth is, GDPR is not in any way, shape or form finite or deterministic enough to warrant an engineering approach or a scientific model as the basis for discussion. It is more likely that the process of addressing GDPR will be personalised and unique to each company. Ironically, it is a little like personal data itself. Most importantly, if you do not meet some of these devops requirements, you might want to start there first, and fast.

<rant>
I have been racking my brains trying not to sound like a giant multi level marketing douche on the topic of the EU General Data Protection Regulation (GDPR). This is my nth attempt at drafting this blog post, and the writing has literally gone from selling fear and greed to regurgitating some hallelujah self-help Secret-esque scheme, rehashing the same old concepts like “consent”, “transparency” and “pseudonymisation” that are supposed to concretely address “privacy by design”, “obligations of data controllers”, “data subject rights” etc. Well, at least I will portray an image of misguided confidence whilst oozing a ton of leadership, if I may say so myself. Nevertheless, I personally feel some simple, down-to-earth steer is needed on the subject matter and I have decided to put it out to the universe.
</rant>

What #WoW has taught me about building a better #devops team in an enterprise

One does not need to look any further than World of Warcraft (WoW) to appreciate the beautifully enacted artistry that is teamwork. And it is exactly what enterprise I.T. needs. Picture a cross functional and multidisciplinary team of rockstars working towards the same goal. This blog post draws a comparison between staffing up a devops team for product development and running a team through an instance in WoW, dispelling any F.U.D. along the way, and getting that epic win for the enterprise.

Why gamers, why WoW?

Suffice to say, as a developer one spends countless hours in front of the PC. It is fair to say that some of us would at times play games together, or WoW to be exact. As Jane McGonigal spelled it out for us at TED 2010, gamers are people who have picked up the skill of solving extremely difficult problems in a collaborative and multidisciplinary setting. They are self motivated and extremely committed. The rationale can be mapped to the addictive feeling of an epic win. Achieving the insurmountable, or being constantly on the verge of an epic win, is something quite familiar in the context of software development. I would argue that a typical software craftsman shares a number of qualities with a successful WoW player:

  • continuous learning
  • application and adaptation of the knowledge they have gained throughout the years
  • mastery of the tools they use
  • deep understanding of the ecosystem they are working with
  • pride in their work and being extremely mindful towards quality

Why do software craftsmen matter? Because quite frankly, most businesses are going to be software houses. Digitisation is about doing right by I.T., and your nerds are now your core business.

Gamers always believe that an epic win is possible, and that it’s always worth trying, and trying now. Gamers don’t sit around. Gamers are virtuosos at weaving a tight social fabric.
Jane McGonigal, Feb 2010 @ TED

Epic wins

The trick here is to make our daily work meaningful and as important as any other world-saving quest. Provided time is not spent in meetings or corporate risk management theatre, I would argue most of what we do conjures up the same feeling of achievement. For example:

  • finding that extremely annoying bug and fixing it
  • putting together a super slick build automation pipeline
  • creating that killer UI, designing that awesome architecture then building it
  • ChatOps
  • deploying something automagically 20 to 50 times a day, etc.

These activities are all mini epic wins for me personally. We are your hipster superheroes after all.

This is a load of piss take. How about PCI-DSS, SOX, ITIL or COBIT?

If you are distracted by PCI-DSS, SOX and the like, start writing some code and run a few instances in WoW. Experience the magic first hand. And don’t forget to read this and this during your down time. Get over the smoke screens and get back on track with the epic wins.

“And this argument – that collaboration between silos, or even cross-functional teams, is forbidden by regulation or “best practice” – is an example of what we in the consulting industry call a bullshit smokescreen.”
Jez Humble, 19 October 2012

OK I’m onboard! Let’s go!

Assuming you are being innovative and building things from scratch: typically in WoW you need 1x tank, 1x healer and 3x DPS (damage per second dealers). Translated into software terms, you will need 1x tech lead, 3x fullstack devs, and 1x sysadmin. A five-person multidisciplinary team goes a long way. Let me break this down.

Tank/Tech lead: In WoW terms, your role is to take all direct damage for the team and be the leader of the pack. You draw and keep the attention of the epic boss on yourself. You shield the team from a complete wipe. In software terms, you are a SCRUM master or the spiritual tech leader of the team. You take the unnecessary meetings and shield the team from distractions. You might code a little or contribute to testing and test automation, or you might be a UX specialist. One of your main tasks is to free up time and space to enable your rockstar coders to code; aka DPS down that epic boss.

DPS/Rockstar coders: In WoW terms, you deal damage, hard and fast and consistently over a period of time. You are much squishier than the tank and you should not pull aggro onto yourselves and burden the healer. In software terms, you are the rockstar coder with minimal meetings booked in your calendar. You churn out code, good quality code of course. Any rubbish spaghetti code is considered “drawing aggro” and will probably piss off your resident healer and tank. I would naturally recommend fullstack developers for all 3 roles, with specific focus/spec on API/backend, frontend/UI, and automation.

Healer/Sysadmin: In WoW terms, you heal the team members, but mostly the tank. You do little damage yourself but are fully capable of keeping the entire team alive throughout the instance. In software terms, you are the sysadmin. You deal with the technical debt directly produced by the team members. You try to ship their work, at times knowing it was produced way past their Ballmer Peak. You try to tune and configure the software to keep it alive. You care about performance and high availability. A healer is arguably the most critical role in the team. You must be more devops than the rest of the devops boys and girls, if you get my drift.

But an enterprise is more like a 40-man Onyxia raid back in the vanilla days?!

This is absolutely true. Depending on the goal at hand, coming up with a functional governance model to work with 40 rockstars will be difficult. At a much larger scale, it is more about culture than anything else, in my opinion. Your guild and guild master will be important, as their insight and coordination play a critical role in the success. However, the fundamentals are the same. You still need small, agile individual tribes or teams to perform certain tasks.

In summary: teamwork

It is fairly obvious that you absolutely require a diverse and cross functional team. No single role alone can build a winning product, or get that epic win, not even Linus Torvalds.

“I can’t do UI to save my life. I mean if I was stranded on an island and the only way to get off that island was to make a pretty UI, I’d die there.”
Linus Torvalds, 13 April 2016 @ TED

At the end of the day, working together is the essence of devops. In the meantime, I had a great time writing this post 😀 Big it up to the Avengers, our resident craftsman guild.

❤ Cap-A

Getting rid of monoliths: #lagom and microservice hooha

The road to ridding yourself of application monoliths is one fraught with many obstacles, mostly organisational. Whatever your reason behind this brave move, you have convinced yourself that you will fix this big ball of mud, somehow… You will no longer stand for this shanty town of spaghetti code, riddled with nonsensical donkey poo. The code no longer speaks to you; it has surpassed the point of merely having a smell. This abomination has a cyclomatic “flavour” that you can almost taste at the tip of your tongue. No amount of Ballmer Peak is going to help. So you are going to make good on that promise to your linting tool, probably Sonar, to whom you have whispered hours of sweet nothings to the tunes of Bob Marley: “Don’t worry ‘bout a thing ’cause every little thing gonna be alright…” What once was a good idea has hideously mutated into a giant walking, talking abomination that crawls under your skin and haunts your every step. Under the guise of some sound architectural decisions, you are going to pay down this technical debt, hard. You are going to do microservices, and you are going to ship Docker containers. If this is you, read on.

The biggest modern day monolith of them all: J2EE

Tasked with ripping apart functional applications into microservices, you will battle bikeshedders whilst debating your silly asses off with armies of monolith fandom. During what seems to be a very long coup d’état, this attempt to evangelise a newer, leaner architecture and approach is riddled with skeptics. “What is frontend without JSF or JSP?! How dare you even question server-push technology like XYZFaces? Then how about the JNDI lookups and EJBs?! Surely you cannot replace these things?”

So six years ago, I had the very pleasant experience of pulling out Glassfish in favour of embedded Jetty, replacing I-Hate-My-Faces with some simple JS (nowadays we call it the MEAN stack, or one of its many permutations), and starting to build APIs along microservice principles. Turns out that is what Spotify did at the time too. So there you go, haters. Bottom line: do not use J2EE, no matter what, if you care about having a competitive advantage. But if you need some good reasons, here are some:

  • Most J2EE containers are grounded in the notion of vertical scalability, last I checked. Clustering should be idempotent and stateless, and scaling horizontal.
  • J2EE containers are not cloud native. Just look at their clustering! Unless you feel like having VPNs and private networks across different public clouds or data centers, you can probably just forget it.
  • So let’s put it behind the load balancer? No, most J2EE containers don’t do shared session persistence out of the box.
  • Let’s not kid ourselves, you will customise the crap out of this J2EE container; dropping war files upon war files to fix all its shortcomings.
  • Your sysadmins do not work with war/jar/ear files. They are *nix gurus and deserve to be treated like it. Ship your product like an actual product, sir! Apt/yum/brew is your friend, and please follow the Filesystem Hierarchy Standard (FHS) for goodness’ sake.
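On the shared-session point above, here is a sketch of the idea of externalising session state so app instances stay stateless; the dict is a stand-in for a real external store such as Redis or Memcached:

```python
import uuid

# A dict standing in for an external store (Redis, Memcached, a DB);
# the point is that no app instance holds session state in memory.
EXTERNAL_STORE = {}

class SessionStore:
    """Every instance behind the load balancer reads and writes the same
    external store, so a request can land anywhere with no sticky sessions."""

    def create(self, data):
        session_id = str(uuid.uuid4())
        EXTERNAL_STORE[session_id] = data
        return session_id

    def load(self, session_id):
        return EXTERNAL_STORE.get(session_id)

# Instance A writes the session, instance B reads it back.
sid = SessionStore().create({"user": "alice", "cart": ["floppy-disk"]})
```

This is the property most J2EE containers lacked out of the box: once sessions live outside the process, horizontal scaling and rolling restarts become trivial.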

Decide exactly what you are going to piecemeal

Have a very clear idea of the different categories or lifecycles of the applications you are going to transform. Timing is important, as is showing results. After all, this is the minimum viable product (MVP) approach, naturally. I highly recommend avoiding systems of record to begin with; those are definitely not quick wins and you will be in an arduous match, going the full 5 rounds. Your future competitive advantage does not sit in your ERP or CRM systems. If it does, then um, yeah.

  • Isolate and separate clearly the functionality you are transforming or building
  • Ensure the isolation goes all the way down to infra, this is devops after all
  • Think of how to horizontally scale
  • Think of elasticity
  • Think of shared persistence across network

Java is dead, long live Java!

The trend a few years back was relentlessly hating on Java and Java devs. Evidence shows otherwise. Java is still around and it is going to be around for quite some time to come. No need to switch it out just yet, even after the programming nuclear winter imposed by object-orientation and Java/.NET alike. Good code, a good ecosystem based on tooling, and good solid design patterns go a long way, regardless of application domain or programming language.

The truth is, the same could have been said about node.js. I recall a number of years back, a few colleagues quoted Hacker News regarding the state of node.js, how immature it was and how Java was the “preferred” choice, even though the sysadmin community bagged the crap out of Java at the time. If you make strategic decisions predominantly based on the whims of HN, then you are just as much a plonker as the next troll. What your node.js/Java boys and girls need to remember:

  • Repeatable, testable build pipelines. Think CI/CD.
  • Coding standards and linting, no brainers there
  • Packaging. Do the right thing, treat sysadmins as your end customers.
  • Separate out load balancing or clustering to other applications like nginx or haproxy. TCP stack makes more sense when written in C.
  • Lord forbid you try to do TLS termination in Java. This is really not cool, bro. You have a number of other choices, so do not add this complexity to the landscape. There is no OpenSSL implementation in Java, and OpenSSL is already difficult enough to maintain as it is.
  • Good monitoring and logging practices go a long way.
  • Think network. Think TCP and HTTP.
  • Your JVM will live on top of a kernel. Know them. Tune that JVM and tune that kernel if needed.

So, microservices. Heard of #lagom?

So you have found a good set of business functional requirements to transform into a set of microservices. Heard of #lagom? Maybe Dropwizard or Spring Boot? The choices are probably all OK; when doing microservices, there are simply no bad choices in my opinion. The gains outweigh the means here. The kicker is that there are probably a number of customised endpoints you will need to integrate with. This could be HTTP, something-else-over-TCP or whatever. There could also be JPA or other NoSQL data stores you need to use. Pick your microservice framework components knowing that this is a framework and it can easily grow. The microservice strategy can easily bloat into “milliservices” or plain “services” (SOA?) if you are not careful. So just how do you stop the size of the code base from expanding? Keep distinct business functionalities as separate services and code bases. The sizing is up to you. Also, split common functionalities out into submodules. Both Dropwizard and Spring Boot have a bunch. Lagom, for example, has recently been introduced as a microservice framework for Java, and it does have quite a lot of these connectors already in place. For me, I opted to homebrew our own microservice framework for maximum flexibility, ownership, and performance tuning.

Either way, armed with your chosen messiah of a framework, the idea here is to rain down free non-functional requirements across multiple projects and dev teams. Cost leadership for all!

  • Ease of hooking up to modern monitoring tools with a configurable metrics set. JVM memory, vmstat, iostat, CPU, JVM gc, etc etc
  • Ease of pulling logs out into, say, InfluxDB or something.
  • Connectors to DBs should be submoduled and shared for future projects. Polyglot persistence ftw.
  • API documentation is super important. Do not assume your API users know your API, and make a point of maintaining backward compatibility.
  • Follow semver.
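On the semver point, here is a tiny sketch of what following it buys your consumers: a MAJOR bump is a machine-readable signal of an incompatible change. This simplified parser is an illustration only and ignores pre-release and build metadata:

```python
import re

# Simplified: matches plain MAJOR.MINOR.PATCH only, no pre-release tags.
SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse(version):
    """Split 'MAJOR.MINOR.PATCH' into a comparable integer tuple."""
    m = SEMVER.match(version)
    if not m:
        raise ValueError(f"not a plain semver string: {version!r}")
    return tuple(int(p) for p in m.groups())

def breaking_change(old, new):
    """Per semver, only a MAJOR bump signals an incompatible API change."""
    return parse(new)[0] > parse(old)[0]
```

Because tuples compare element-wise, `parse("1.10.0") > parse("1.9.9")` holds, which naive string comparison gets wrong; that alone justifies parsing versions rather than comparing them as text.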

Keeping your head above water is number one. Hang in there, the good days will come. Just remember: nothing in software is truly new since 1970, it just gets modern marketing hype. And lastly, your fancy new microservice is legacy as soon as you launch. So please do consider the future generations. Let’s end this cycle of spaghetti monster code through old-fashioned craftsmanship.

Yay! My first “enterprisey” blog post!

Not that the world needs any more “enterprisey” blogging, but I thought I would throw my two cents into this arena anyway, as it is a new genre for me. Consider this my feeble attempt to disrupt Gartner from the perspective of the t-shirt brigade 😀 And thus I have started this new blog series, aptly labelled “enterprisey.”

This is a slightly more architectural and strategic take than my usual blogging style. Naturally this is entirely my own opinion and nobody else’s.

For those who are already part of this hipster movement and need a monolith-battle-hardened sparring partner, or if you are keen to jump on for the sake of carrying out tactics such as devops, microservices or polyglot persistence, then stay tuned and watch this space!