#startup reflections: designMode is IDDQD

A few things one should do during the second peak of a global pandemic:

  • Kick off a start-up business
  • Resurrect the old tech blog
  • Remind yourself that content is still king, even if tech concepts are now explained through interpretative dance on TikTok

So here is my second shot at relevance. TIL from the wifey how to toggle god mode in the browser, letting you live-edit and demo design changes. Useful stuff! \o/

document.designMode = 'on'
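
For demos, a toggle can be handier than a hard ‘on’; a minimal sketch you can paste into the browser console (or save as a bookmarklet):

    // Flip designMode on and off; 'on' makes the whole page live-editable
    document.designMode = document.designMode === 'on' ? 'off' : 'on';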

A bit of #devops chinwag with @devopsdayscph

Software is still eating the world and I am super excited to be keynoting at Devopsdays Copenhagen! Come hang out!


Awesome sauce gleaned from #retailcode London 2018

#Retailcode London 2018 is over and it was such a unique hangout with fellow speakers from the retail vertical. I am honoured and thankful to have been invited. Having the opportunity to share my two cents and get feedback from like-minded individuals was awesome!

Jam-packed into two days, information and knowledge sharing flowed and tech round tables ensued. We homed in on the real issues that plague us all, and discussed devops both in detail and in a broader contextual sense. These big guns of retail IT, who were either born in the digital API monetisation era or have completely transformed, shared their heartfelt stories. Along with some new ideas, we swapped our current pain points and hard-earned titbits without hesitation. Long before I embarked on this journey, they already were, and still are, the source from which we draw our inspiration. For me personally, it was also a great hands-on opportunity to evaluate and compare our existing CI/CD stack.

I will summarise my raw key reflections below, anonymised and in no particular order.

  • Breaking down silos is still a thing, and some of us organise by functions.
  • Devops is trust, empowerment and an appetite for change.
  • Devops, when done correctly, reduces TCO.
  • Devops is culture, a view shared by all. And it is time to move on as it is becoming a bit of a red herring.
  • Instead of simply being the “king makers” of a bygone era, we are now a symbiotic part of the business with a firm seat at the table.
  • Encourage the notion of Citizen Developers and map out how to get non-tech onboard being citizen devs.
  • Know ALL your business processes, because once you do, you can pick and choose which to automate and optimise, thus ending business-case guesswork.
  • You want to monitor and correlate business KPIs with infrastructure KPIs.
  • You want this monitoring to be transparent across all departments.
  • One of the retailers has a strict policy of keeping transaction data out of their microservice stack because of the speed they gain in all other areas. Instead, they have a macro/microservice separation, a bit like a pace-layered architecture.
  • Some retailers have a matrix organisation, and C-level people have technical debt and automation as part of their KPIs, thus driving quality and firmly grounding IT goals in their day-to-day.
  • Innovation is based on trust.
  • Procuring Selenium as a service is a novel idea and worth considering.
  • Feature toggles are king: they enable A/B testing and drive even quicker deployments (a minimal sketch follows this list).
  • Having an approval green button for launching/deployment equates to “you are doing it wrong.”
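
To make the feature-toggle point concrete, here is a minimal sketch with entirely made-up feature names and a hypothetical isEnabled helper; a real setup would back this with a config service rather than a hard-coded map:

    // Hypothetical toggles: a per-user bucket for A/B testing,
    // and a plain on/off switch flipped without a redeploy.
    const toggles = {
      newCheckoutFlow: (userId) => hashCode(userId) % 2 === 0, // 50/50 A/B split
      darkMode: () => true,
    };

    // Stable string hash so a given user always lands in the same bucket.
    function hashCode(str) {
      let h = 0;
      for (const c of str) h = (h * 31 + c.charCodeAt(0)) >>> 0;
      return h;
    }

    function isEnabled(feature, userId) {
      const toggle = toggles[feature];
      return toggle ? toggle(userId) : false; // unknown features default to off
    }

    // Usage: route a user into variant A or B at request time.
    console.log(isEnabled('newCheckoutFlow', 'user-42'));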

I have learnt a lot in these two days and am now an eager beaver itching to hack and piecemeal some of these ideas when I get back. Thanks again for having me, #retailcode London! Awesome stuff!

#GDPR + #DDoS = #GDPRDDoS

Ever asked yourself what Jesus would do with your personal and private data (the answer might be here)? And what happens when mere mortals revolt against such an entity by exercising their right of access and right to be forgotten, perhaps all at once? How about a sustained revolt, say over a period of months after the 25th? Could lesser companies be on the verge of a new C10k problem with real-world consequences? And dare I say, a GDPR-flavoured DDoS? GDPR + DDoS = GDPRDDoS, perhaps?

Often the greatest ideas pop up when casually discussing tech amongst close friends. A couple of beers and rounds of board games later, over a quiet weekend, came the realisation of how extensively one’s personal data permeates all the digital services we take for granted today. Seeing that none of my immediate circle of tech friends presently works for a God-like entity capable of predicting our behaviours and futures, we started wondering what the mere deities of the world would do. In case you are wondering, God has a name, and it’s GAFAMA (Google, Amazon, Facebook, Apple, Microsoft, Alibaba).

My prediction: an active, distributed and organised effort to retrieve personal data will occur, and companies that have automated nothing will no doubt get hit the worst. I am coining this very scenario the GDPRDDoS.

Let’s do a little thought exercise together

Let’s assume a single piece of personal data is gathered in one place. This one system will have no problem retrieving such data (with consent, of course). Now let’s assume this data is distributed to three other services, all equipped with different persistence layers. Perhaps the first is structurally backed by a blockchain, another by vanilla SQL but done in an immutable and event-driven fashion, the last being a rebellious NoSQL document store. This level of integration is not uncommon for a microservice landscape and is often considered fairly manageable. Now let’s assume one of the three, the blockchain-oriented service, is procured as SaaS, whilst the NoSQL-backed service is hosted in an EU cloud elsewhere, with operations and development nonetheless outsourced and the vendor pretty much only responding to email-based change requests. Finally, let the good old SQL-backed service be hosted and maintained internally with your own teams. Cos you know, we play it safe with SQL. We also assume that since the first two services are outsourced, they potentially have additional systems that the centralised place has no integration towards.
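
As an aside, this retrieval step is exactly what automation would have to cover. A hedged sketch, assuming a Node 18+ style global fetch and hypothetical HTTP endpoints for the three services above (none of these URLs are real):

    // Fan a subject-access request out to all three services in parallel;
    // one slow vendor must not block the rest, and failures stay visible.
    async function collectPersonalData(userId) {
      const services = [
        { name: 'blockchain-saas', url: `https://chain.example/subjects/${userId}` },
        { name: 'nosql-vendor',    url: `https://docs.example/users/${userId}` },
        { name: 'internal-sql',    url: `https://internal.example/api/users/${userId}` },
      ];
      const results = await Promise.allSettled(
        services.map(async (s) => ({ name: s.name, data: await (await fetch(s.url)).json() }))
      );
      // Partial results still have to be reported, so keep the errors too.
      return results.map((r, i) =>
        r.status === 'fulfilled' ? r.value : { name: services[i].name, error: String(r.reason) }
      );
    }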

With that in mind, and assuming you’re responsible for these systems, let’s say one individual digitally asks for their data on the 25th.

Scenario 1 – When old-school case handling works

Now let’s assume that the system is fairly static and that no dynamic streams of data about users are ever pumped into your systems. Your blockchained service would be in trouble if you had baked personal information into the blocks; nonetheless, we assume they are somehow issued centrally and you are in luck here. Assuming also that you have a fairly decent relationship with the NoSQL-backed vendor, you can turn around a change request in two days. You pay a minor fixed price each time. Lastly, you have a fantastic team managing your SQL-backed service, and a quick query on the existing system is no problem. Great! The cost of retrieving this data is pretty much the time-and-materials cost of the first two vendors plus some hours in the third. Your customer service burns the data onto a carton full of 5” floppy disks and hands it over to the customer with a smile.

Scenario 2 – Scaling up to 10 requests

Same setup, but now 10 requests hit. No problem! With glee, your customer service bundles the 10 requests into the same email to the vendors, waits the two days, pays for the two days and burns 10 cartons’ worth of floppy disks. Assuming you still have an amicable relationship with your tech vendors, you’re fine. Your team is still OK because, dude, c’mon, 10 additional queries? Your turnaround time is slightly increased because handling 10 independent cases is fiddly, but your customer service manages. You sleep soundly at night.

Scenario 3 – 100 requests

Now things get a little interesting. Your blockchain vendor is saying that they might have to invest in some mechanism to replicate the chains and essentially rewrite and verify history. Now your MS Access die-hard, event-sourcing CQRS SQL team is saying the same! “This is supposed to be immutable! We are gonna have to find a way to re-create all histories and projections somehow… But 100 requests is not that bad; we can re-create the DB each time to the nth degree.” Your NoSQL vendor is fine and scales up the bills accordingly. You are getting a little edgy because the cost of retrieving the data is no longer an email or a simple SQL join-select, no longer a simple time-and-materials calculation. You hope that no more requests come in and that the nation is finally satisfied with the knowledge that you haven’t done sweet FA to their personal data. Your customer support is threatening to quit if they have to keep coordinating this manually, and you are twiddling your thumbs, wondering if you should hit up your back-alley hookup for more floppy disks.

Scenario 4 – 1000 requests

At this rate, you know you are out of floppy disks, your customer support is on the verge of quitting, and your blockchain vendor and your internal teams are no longer speaking to you. You generate 1000 emails yourself in the hope that somebody else internally will be able to piece together some kind of data to be handed over to somebody.

Scenario 5 – 10000 requests

You rage quit and attempt to talk yourself into a job with GAFAMA, because you know they have probably invested a bunch in automating all of this stuff, and you have interests other than sending out emails trying to get hold of data!

Scenario 6 – A sporadic spike in requests

Let’s say 1% of your national population requests this on the 25th… #fml

Key takeaways

Though the scenarios are purely theoretical and fictional, they were nevertheless a joy to write. Our data belongs to us, and it is time all consumers took back their rightful ownership. If businesses are not automating, are not sure how, or have systems and vendors that are vastly difficult to integrate with, then brace yourselves: your winter is coming.

Speaking about #tdd with #serverless, #automation and tooling, and #devops at @code_europe! #strangethingsareafoot

Super thrilled to be holding two talks at @code_europe in sunny Warsaw on the 7th of Dec! The first talk will be about tdd with serverless, automation, and tooling. I will touch on CloudFront and Lambda@Edge with a liberal sprinkling of node.js and mocha, ultimately piecing the buzzword alphabet soup together with Jenkins in a live-demo format 🙂 If the demo gods are with us, you are in for some docker-in-docker-in-vagrant-spawning-dockers type of inception mad science.
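
To give a flavour of the tdd-with-serverless bit, here is a hedged sketch (not the actual demo code): a Lambda@Edge-style viewer-request handler unit-tested with mocha, with the handler and URIs entirely made up:

    const assert = require('assert');

    // Handler under test: rewrites a legacy uri at the edge. The event
    // shape mirrors CloudFront's Records[0].cf.request structure.
    function handler(event) {
      const request = event.Records[0].cf.request;
      if (request.uri === '/old-path') request.uri = '/new-path';
      return request;
    }

    describe('edge rewrite handler', () => {
      it('rewrites the legacy uri', () => {
        const event = { Records: [{ cf: { request: { uri: '/old-path' } } }] };
        assert.strictEqual(handler(event).uri, '/new-path');
      });

      it('leaves other uris untouched', () => {
        const event = { Records: [{ cf: { request: { uri: '/keep-me' } } }] };
        assert.strictEqual(handler(event).uri, '/keep-me');
      });
    });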

The second talk aims to cover the usual devops circuit and the curiosities of the macabre: a transformative journey and a deep dive. Having grown up with Bill and Ted’s Excellent Adventure as a budding young lad, I am happy to ground the subject matter in a light-hearted manner this time 🙂 #strangethingsareafoot

Come hear our #devops and #microservices journey in #IFSFconf2017!

Very grateful to have been invited to speak at the International Forecourt Standards Forum (IFSF) Innovation and Collaboration Conf 2017 in Paris this year 🙂 If you are in town, swing by or hit me up on twitter, and let’s talk #docker #microservices #apis and #devops!

Come hear my rant @ Göteborg on the topic of #GDPR

You know you want to 🙂 Come hang for the rant, but stay for Mårten Rånge’s session on Property Based Testing.

update 13.04.2017: slide deck available here: https://github.com/yveshwang/presentations/tree/sthlm/impress.js/gbg

Implementing GDPR & Property Based Testing

Wednesday, Apr 12, 2017, 5:00 PM

Atomize AB
Västra Hamngatan 11 Göteborg, SE


On Wednesday the 12th of April at 17:00, Atomize AB together with SWENUG welcome you to an event where Yves Hwang from Circle K will talk about how he implemented GDPR there. GDPR arrives in 2018, replaces PUL, and affects every company and authority handling sensitive data concerning EU citizens. Yves lives in Norway but hails from Australia…


#GDPR, #devops, and some honest advice

Here is some free and honest advice on the topic of the EU General Data Protection Regulation (GDPR) – because frankly, we are a little tired of the corporate jargon.

Being a good custodian of personal data does not have to be scary or difficult. I happen to think it is quite straightforward, though it can be a fair amount of work. The most important piece of the puzzle for data controllers or processors is that you will require a devops/fullstack team. If you are an old-school enterprise, then you should have gone through some kind of I.T. transformation to embrace and put in place the principles of devops. This is because, while working through the various controls surrounding personal data, chances are things will be missed, particularly if you have a very large portfolio. The only sure remedy is that, as an organisation, you can work in an iterative, lean, agile and collaborative manner.

Some unsorted advice in bullet points below.

  • You should have a crew of fullstack devops superheroes for your customer facing application stack.
  • The definition of a customer-facing application stack includes the full spectrum of interactions between the consumers and your organisation. This includes direct user interactions all the way down to the back office. If you need personal data somewhere along the chain, then it is in scope.
  • As data controller or data processor, or both, you will not sell, distribute or own said data without clearly informing the users and obtaining consent at the very least.
  • You have evaluated if you need a Data Protection Officer (DPO), and if so, you have appointed or sourced one.
  • Customer-facing applications have clear privacy policies and terms and conditions, and naturally your systems follow and abide by them.
  • You may even have content or communiqué explaining how you have secured your customers’ data.
  • Safe Harbour and Privacy Shield are not settled yet. Avoid storing data in US physical locations to play it safe.
  • You should probably have infrastructure as code (Terraform or Puppet etc.) and be a little lean, so you can move data and applications between public clouds or on-prem setups when required.
  • Encryption matters. Don’t be an idiot. Do it at rest, do it in transit, and don’t use dated ciphers.
  • Use bcrypt or scrypt for password hashes, salted of course (a minimal sketch follows this list).
  • Exercising sound ITSec principles is a no-brainer.
  • Spread the use of a unique user ID across all systems as a pseudonymisation and standardisation effort.
  • OAuth2 has a great selection of grant mechanisms that support different ways of authenticating and authorising towards different systems and user-agents.
  • Build and customise your consent process around OAuth2.
  • Nuke data when your users want out, including in the source and integrated systems, even with OAuth.
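
A minimal sketch of the bcrypt advice above, assuming the npm bcrypt package is installed; bcrypt generates and embeds the salt for you, so there is no separate salt column to manage:

    const bcrypt = require('bcrypt');

    // Hash on signup; the cost factor (12 here) doubles the attacker's
    // work per increment, so tune it to your hardware.
    async function storePassword(plaintext) {
      return bcrypt.hash(plaintext, 12); // salt is generated and embedded in the hash
    }

    // Verify on login; compare() re-derives the hash from the embedded salt.
    async function verifyPassword(plaintext, storedHash) {
      return bcrypt.compare(plaintext, storedHash);
    }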

The hard truth is, GDPR is not in any way, shape or form finite or deterministic enough to warrant an engineering approach or a scientific model as the basis for discussion. It is more likely that the process of addressing GDPR will be personalised and unique to each company. Ironically, it is a little like personal data itself. Most importantly, if you do not meet some of these devops requirements, you might want to start there first, and fast.

<rant>
I have been racking my brain trying not to sound like a giant multi-level-marketing douche on the topic of the EU General Data Protection Regulation (GDPR). This is my nth attempt at drafting this blog post, and the writing has gone from selling fear and greed to regurgitating some hallelujah, self-help, Secret-esque scheme rehashing the same old concepts like “consent”, “transparency” and “pseudonymisation” that are supposed to concretely address “privacy by design”, “obligations of data controllers”, “data subject rights” and so on. Well, at least I will portray an image of misguided confidence whilst oozing a ton of leadership, if I may say so myself. Nevertheless, I personally feel some simple, down-to-earth steer is needed on the subject matter, and I have decided to put it out to the universe.
</rant>