This post has languished in the drafts drawer of this blog for yonks – I wrote most of it in June earlier this year. However, it seemed worth getting it out the door before MapCamp 2018, so I’ve tidied it up and, well, here it is. So, when you read this, please try to pretend it’s still June...
This weekend (ahem), news broke that Microsoft had bought Github, for the eye-watering figure of 7.5bn US dollars. This represents a good excuse to use a map to help illustrate a strategy called Innovate, Leverage, Commoditise, or ILC. You can use ILC to help understand how you might grow an ecosystem around an activity, then use that ecosystem to make it look like you’re really innovative and focussed on your users, while really letting others do the hard, risky work of building products and services, and finding a market for you to sell to.
First of all, a disclaimer
I don’t work for Microsoft, nor Github, and nor have I ever worked for them.
And while it’s hard to work in tech without bumping into someone who works for these companies, and while these maps might be more interesting if I could base them on clandestine conversations with these folk about their company strategy, the reality is rather more mundane – this piece is just based on my own experiences of using their products/services, and understanding them from the outside.
So to be clear, these are mainly reckons.
With that out of the way, let’s start.
Let’s make a value chain for using Github
Okay, let’s begin with a value chain, to help us start mapping this all out. You can follow along making your own map, with the same template I used here (please do, and let me know if it helps).
I’m mapping this from the point of view of someone making a digital product for a set of users or customers. In order to continue having users and customers, you need to be able to actually ship a digital product to them, regularly.
This is our key need we’re meeting. I use the term ‘ship’ loosely to mean ‘end up with actual users of said product’, with things like marketing, documentation and so on – not just doing a deploy somewhere.
So, to ship a product, you need to have an actual product available somewhere that people can access, and a way to access it – that’s the right hand side part of the value chain above.
If it’s a digital product, you’ll typically access it over the web, via a web browser, or maybe a mobile app. Although, depending on how tightly integrated your supply chain is, you might rely on some custom client too.
Usually this client will speak to some running code hosted somewhere that you control, and are responsible for putting together – you can think of this as the other ‘branch’ of the value chain on the left.
It’s extremely rare to write all the code for a product yourself these days – products are usually a combination of your own code, in-house, which then depends on a load of 3rd party code libraries and other dependencies.
If you care about getting a product to users more than once, you’ll want to have a repeatable process for deploying or releasing code, and you’ll typically work in some kind of dev/design environment where you can easily make changes or trigger a deploy, fetching the right combination of code from source control.
All of these things rely on servers, which increasingly these days are some kind of virtualised machines, provisioned from some cloud of compute power somewhere. And all this compute relies on lots of power to keep all the infrastructure running, which is why it’s all the way at the bottom.
Mapping evolution in our value chain
The next step once you have a value chain, is to map it horizontally along the evolution axis.
To be honest, the mapped version looks quite close to the value chain, but there are a few differences. Source control is not a new concept, and there’s a decent array of products and services available now that we might use. Let’s put that in the middle, around the ‘product/rental’ column.
Also, generally speaking, because development time is so expensive, you write custom code in-house only when off the shelf or open source tools don’t fit your needs. I’ve represented that by moving it waaaay to the left, so the more common libraries and dependencies are more to the right.
Really, we should probably represent the dependencies as a continuum, from custom all the way to commodity, but hey, all maps are imperfect, but some are useful, and besides, I need that space for later on in this article.
Where Github fits into this
I’ve now updated the map here to show where, as of June 2018, Github offers features or services you might use.
When you push code to Github, you might have triggers to deploy new code in a pipeline, or begin a test run. Doing this typically involves fetching other code from around the internet (often on Github too).
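To make this a bit more concrete, here’s a minimal sketch of what a push-triggered hook might decide to do. This is purely illustrative – the payload shape loosely follows a Github-style push webhook, but the field names and pipeline steps are my own assumptions, not Github’s actual API.

```python
import json

def handle_push_event(payload: str) -> list:
    """Decide which pipeline steps to trigger for a (hypothetical)
    Github-style push webhook payload."""
    event = json.loads(payload)
    steps = ["run-tests"]  # always run the test suite on a push
    # only deploy pushes to the default branch
    if event.get("ref") == "refs/heads/main":
        steps.append("deploy")
    return steps

# example payload, trimmed to the one field we actually look at
print(handle_push_event('{"ref": "refs/heads/main"}'))
# → ['run-tests', 'deploy']
```

The key point for the rest of this piece isn’t the code itself, but that each invocation of a hook like this is an event the platform operator can see and count.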
And, depending on what you use to edit code (maybe the Atom editor, for example), you might be using software provided by Github here as well. If you’re using anything more complex than Notepad to write code, your development environment will typically use some open source software to make authoring code a nicer experience, and this is also often retrieved from somewhere on Github.
The grey shading here highlights where your usage of Github could be considered visible to them. These might cover things like:
- downloading an extension through Github’s software like the Atom editor (or something else, if it’s hosted on Github)
- deploying your own code, triggering activity from other services
- fetching some 3rd party dependency from code hosted on Github
- bookmarking a repo by starring or forking it
When I say visible, I don’t mean that the details of your code are being read by sneaky men at Github. I’m referring to the fact that people operating a platform like Github can reasonably be assumed to know that a feature is being used – like a deploy trigger, or a project being forked – not least because if you ran this yourself you’d typically want to track its use so you can allocate enough resources to meet demand. We’ll come back to this, as it’s quite key to the Innovate, Leverage, Commoditise model mentioned previously.
Coming back to those dependencies as a continuum
Remember when I said how it might be better to represent the dependencies as a continuum, going from left to right? Different bits of code you pull into your application all have different levels of maturity and popularity, so really, it’s better to think of this less as a single point, and more as a continuum spanning genesis on the left all the way to product/rental, well into the middle of the evolution axis on the map.
Mining this continuum for insight
If you a) host a lot of code, and b) have loads of users who generate loads of information about which dependencies are being used by hosting their code on your platform, it’s possible to get an idea of where communities are forming around different activities (i.e. by seeing how projects grow, or how much collaboration takes place around them).
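As a sketch of what that kind of mining might look like: the package names and numbers below are entirely invented, but the mechanism – flag dependencies whose usage is growing fast relative to the rest, with enough absolute volume to suggest a real community forming – is the kind of signal this section is describing.

```python
# Hypothetical fetch counts per dependency, over two periods.
fetch_counts = {
    "left-pad": {"last_month": 9000, "this_month": 9100},
    "webpack":  {"last_month": 1200, "this_month": 4800},
    "tiny-lib": {"last_month": 10,   "this_month": 30},
}

def growth_ratio(counts):
    # guard against division by zero for brand-new dependencies
    return counts["this_month"] / max(counts["last_month"], 1)

def fast_growing(deps, threshold=2.0, min_volume=100):
    """Flag dependencies whose usage is growing quickly *and* has
    enough absolute volume to suggest a real community forming."""
    return sorted(
        name for name, c in deps.items()
        if growth_ratio(c) >= threshold and c["this_month"] >= min_volume
    )

print(fast_growing(fetch_counts))  # → ['webpack']
```

Note that `tiny-lib` tripled in usage but is excluded by the volume floor – a platform operator cares about where communities are forming, not every blip.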
And where there’s a growing base of activity around something useful to you, you’re then able to invest some more time and effort in making it more widely available to the rest of your users.
Because this new activity is either new, or newly accessible to the rest of your users, this can make you look really innovative and customer focussed – it looks like you’re anticipating your customers’ needs, when what you’re really doing is speeding up adoption and widespread use of an activity you’ve already seen a community growing around. This is a much less risky prospect than picking something at random and hoping you guessed correctly that it’s useful to your users.
Back to the map
We’d represent this activity by identifying things that were towards the left in the libraries and dependencies part of the map (in the custom built column) and think of them as being moved to the right, creating new features, products or services in the middle (more in the product/rental column).
You can see this in a fair few places with Github – for example:
Custom domains and SSL on the pages product
Sure, you could set up Nginx and LetsEncrypt to serve static pages over SSL, on a server you own, for a site you run.
Or you could set up some clever scripts using TravisCI which generate static files from a repo somewhere, and push them to some object storage like Amazon S3 or Google Cloud Storage. You could even pay actual money to another service instead of paying in terms of engineer time, and use Netlify, Zeit, or Google Firebase Hosting.
Or you could just let Github do it for you, as part of the service if you host your code with them.
The same applies with security checking of your code. You could invest loads of time and money building a security checking pipeline that code passes through before it makes it to production.
But if you don’t use these, or haven’t heard of them before, it sure is nice (maybe less so for Snyk and co) to have automated security checking bundled in by default with Github now.
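At its simplest, that kind of automated check is just matching your dependency versions against a list of known advisories. The sketch below is a toy version of the idea – the advisory data and package names are invented for illustration, and a real service tracks version ranges rather than exact versions.

```python
# Hypothetical advisory database: package -> versions known to be vulnerable.
ADVISORIES = {
    "example-lib": {"1.0.0", "1.0.1"},
    "other-lib": {"2.3.0"},
}

def check_dependencies(deps):
    """Return (package, version) pairs that match a known advisory."""
    return [(pkg, ver) for pkg, ver in deps.items()
            if ver in ADVISORIES.get(pkg, set())]

project_deps = {"example-lib": "1.0.1", "safe-lib": "0.9.0"}
print(check_dependencies(project_deps))  # → [('example-lib', '1.0.1')]
```

The hard part isn’t this lookup – it’s curating the advisory data, which is exactly the sort of thing a platform hosting most of the world’s open source code is well placed to do.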
A brief segue into ILC – Innovate, Leverage, Commoditise
Alright. Now we have a map, and we’ve established that we can see usage at different parts of the map. It’s time to spend a bit of time on covering the Innovate, Leverage, Commoditise (ILC) model properly, and see how it relates to Github, as a sensing engine, as described by Chris Daniel here:
You can use the ILC model to understand why organisations would invest in creating an ecosystem around a given product or activity, and actively tend to the communities within – and crucially look past all the feel-good ‘we just like being nice and open’ messaging that goes with it in the press releases.
This diagram below from Simon Wardley is admittedly a bit daunting, but I’m hoping we can have a go at reading through it together.
If you provide a thing, and you can track how much it’s being used within the ecosystem, you don’t need to be really intrusive and track specifically how it’s being used to get an idea of how the ecosystem changes – you just need to see which bits are growing, relative to each other, to know where you should be paying attention. Let’s look at these terms individually, and try to relate them to the diagram above.
When you’re providing a service that exposes new hot spots of usage on your platform, and it’s being used to do new things on the left hand side of the map – you end up with a sensing engine of sorts.
Where there isn’t much usage, you know you don’t need to pay so much attention yet, and you can continue to let other people take on all the risk of building something new, or grow a new sector by themselves.
On one level, you’re outsourcing your innovation and all the costs associated to your users.
Referring to that diagram above, you might think of this as activity A on the map: providing free hosting of repos, and all the collaboration features, as widely as possible, to move it to the right.
By doing this, you get to see increasingly quickly if there is usage of your service in a new field (in the case of Github, it might be through deploys, starring or forking repos, or closing pull requests, and so on).
You can think of this as a signal to have a look more closely at what the people responsible for that spike in usage are up to, and identify new activity – you can think of these as activities B, C and D, higher up on the left of the map.
These are activities you didn’t come up with yourself, and might be meeting needs you didn’t know existed. In the diagram above, it might be the case that activities B and C end up going nowhere, but activity D is actually really relevant to your users, and something you can get behind.
While others do a lot of the hard work of discovering the new stuff, you’re holding back, and only exposing the winners to your users, which takes us to the next step.
As we mentioned before – it’s easier to take an activity and make it available to more people, as you know there’s already a market forming you might sell to, than it is to correctly guess where a market will emerge.
When you’re doing this you’re providing leverage to an activity that’s already there, albeit in a relatively small, early-adopter community, and exposing it to a much wider audience.
The concrete Github example might be automated security checks on your code – something you’d previously have had to rely on software you install and run yourself for, or rely on another company for, and which is now available for free, as a default for new projects.
This makes Github’s service more attractive than it was before, and allows the ILC cycle to begin again – where they’re able to take more activities and commoditise them, making them freely available to more users, so more new things can be built.
How the map changes with Microsoft
You could argue that by acquiring Github, Microsoft is now really well placed to do this ILC sensing engine thing, but across much more of the map now.
Microsoft makes ever-increasing amounts of money by selling access to servers, and to the cloud services made available on those servers.
And the more usage it is able to move onto its own cloud, away from Google or AWS, the more it’s able to identify future activities faster than Google or AWS might be able to, and act upon them (or if we’re honest, catch up with AWS, who are so dominant in this field it’s basically grotesque).
So hopefully, this has made it easier to understand the ILC model, illustrated with Microsoft and Github.
If this has interested you, you might also find this piece by Ben Thompson of Stratechery about the acquisition worth a read. In particular, this paragraph at the end really spells out how different the Microsoft of 2018 is compared to the Microsoft nerds have grown to love to hate over the last twenty years:
Microsoft is betting that a future of open-source, cloud-based applications that exist independent of platforms will be a large-and-increasing share of the future, and that there is room in that future for a company to win by offering a superior user experience for developers directly, not simply exerting leverage on them.
Anyway, if you have questions about this piece, drop a comment below, or say hi at MapCamp – I’ll be around, and while my beard is now waay larger than the avatar pic on this blog, I should still be recognisable…