A word of warning: after writing this post I realise that truly understanding it requires more than a basic understanding of the Data Quadrant Model. And of course, if you still don't understand where I am going with it, then it's safe to say that I failed you, my apologies. :-)
A while ago I wrote a blog about the Data Quadrant Model I developed. I use this model in my consultancy and speaking engagements, and I increasingly receive great feedback from organisations that are applying it.
A week ago I wrote about the un-order of quadrant IV. A few days ago I posted the blog about the order of quadrant I as well as the chaos of quadrant III. It's time to describe the battle against the ever-increasing entropy of quadrant II.
Imagine quadrant I as data where the molecules are set in a fixed position: neatly managed, ordered and governed. Now, in quadrant II the molecules are beginning to move. They can be rearranged into a vast number of different positions, huge, perhaps even infinite. Now replace these molecules with data and take into account that data can be copied and re-used over and over again. There is a vast array of possible positions for data...
Back to the Data Quadrant Model...
Quadrant II reflects the contexts1 in which data is used. As I said before: in quadrant II we have multiple versions of the truth.
Let's, for argument's sake, assume that the number of contexts/truths is potentially large. It is safe to say that the number of ways in which data can (and probably must) be re-arranged, aggregated, calculated, inferred or otherwise manipulated to serve this vast array of contexts will be equally (perhaps exponentially) large.
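To make that growth tangible, here is a minimal toy sketch (my own illustration, not part of the model itself). Assume each context independently picks one aggregation, one filter and one calculation; every combination is a potentially different "version of the truth", and every extra dimension of choice multiplies the count again:

```python
from itertools import product

# Hypothetical choices per context (illustrative names, not from the post).
aggregations = ["sum", "avg", "max"]
filters = ["all", "last_year", "region_eu"]
calculations = ["gross", "net", "indexed"]

# Every combination is a distinct derived view of the same source data.
variants = list(product(aggregations, filters, calculations))
print(len(variants))  # 3 * 3 * 3 = 27 distinct "truths"

# Add one more independent 3-option dimension (say, currency) and the
# count triples to 81: the space of re-arrangements grows exponentially
# with the number of dimensions, not linearly.
```

With only a handful of realistic dimensions (stakeholder, period, product line, market...) the number of positions the "data molecules" can occupy quickly becomes unmanageably large.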
What quadrant II aims to achieve is to control the entropy for as long as possible against justifiable costs. It is known that entropy, by definition, will only increase. But there are all kinds of tools, technologies and mitigation measures to extend the time before entropy reaches an unmanageable (or non-cost-effective) state. If the latter happens, we can always return to the 'zero-state' - still present in quadrant I - and start again.
That's the cool thing about data. Entropy in the universe is ever increasing and we can never reset it (unless we break some fundamental rules of physics, or with divine intervention); with data we can...
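The reset idea can be sketched in a few lines (again my own illustration, with hypothetical names): as long as the governed quadrant I source survives, any drifted quadrant II derivative can simply be thrown away and re-derived:

```python
# Quadrant I: the governed 'zero-state' -- a small list of (year, value) facts.
source_of_truth = [("2023", 100), ("2024", 120)]

def derive_report(source, scale):
    """One of many possible context-specific transformations (quadrant II)."""
    return [(year, value * scale) for year, value in source]

report = derive_report(source_of_truth, 2)  # a quadrant II copy...
report.append(("2025", 999))                # ...drifts, e.g. an ad-hoc manual edit

# Reset: discard the drifted copy and re-derive from the zero-state.
report = derive_report(source_of_truth, 2)
print(report)  # [('2023', 200), ('2024', 240)] -- entropy back to zero
```

The design point is that the derivation is a pure function of the quadrant I source: nothing in quadrant II is authoritative, so a rebuild costs only compute, never information.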
Quadrant II is trying to maintain order (just as quadrant I does). The entropy makes this harder, though, and subsequently more energy (= costs) is needed. Where the systems in quadrant I are characterised by their obvious state, the systems in quadrant II are considered to be complicated2.
1 It would be interesting to research what the independent variables for # of contexts could be. Size of the organisation, # of internal & external stakeholders, # differentiation of products and services, dynamics of the (world)market, complexity of business processes, managerial effectiveness, organisation hierarchy...?
2 A system can be very complicated but not complex at all. A system is complex when it has emergent behaviour (quadrant IV).