With regard to 'order' and the second law of thermodynamics, I have discussed the four quadrants of my Data Quadrant Model:
- The order of quadrant I
- The chaos of quadrant III
- The un-order of quadrant IV
- The "order" of quadrant II
Entropy in the Data Quadrant Model is lowest in quadrant I, higher in II, even higher in IV and highest in III. If we do not actively decrease it, we tend to lose the value of (the) data(-platform) and we will find ourselves investing huge amounts of euros to do the same thing over and over again (like Groundhog Day), or spending huge amounts of euros to control an 'out-of-control' beast. Unfortunately, I have seen and still see this a lot.
In an isolated system1, entropy cannot decrease. The universe is such a system: its entropy will only increase (and eventually we all die). However, the Data Quadrant Model is a closed system that exchanges energy with its surroundings, and in a closed system we can decrease entropy locally. How?
There are roughly three important directions in which entropy must be actively decreased:
- Decreasing entropy from III to I
- Decreasing entropy from IV to II
- Decreasing entropy from II to I
(I describe these in the details of this post. Warning: it is not for the faint-hearted.)
Important message: like in physics, decreasing entropy costs energy. The higher the difference in entropy between two systems/quadrants, the higher the energy needed. And yes, you can replace 'energy' with 'costs'.
We now enter the field of Data Management. A prime directive of Data Management is to reduce entropy in data and to keep the data-platform in a sustainable mode where it serves the data-driven and data-centric organisation. It is hard, not cheap and still mostly unknown territory, but if you can make it work, the rewards - in the era of datafication - are huge.
ad1. Decreasing entropy from III to I
Un-managed, un-governed data of unknown quality that is marked as important for the organisation needs to be promoted to quadrant I.
ad2. Decreasing entropy from IV to II
The brilliant insights of double PhDs who have constructed and tested analytical models or discovered interesting patterns (e.g. fraud, data quality, etc.) need to be promoted to quadrant II in order to be productised. This means using these products in a system where they can be scaled, changes are managed and funds are allocated to improve them. Furthermore, we need these brilliant people to discover new stuff, not to maintain the stuff that has already been proven. That needs to be automated. We need to free this scarce resource from the burden of maintenance, version control, etc.
ad3. Decreasing entropy from II to I
This is a tough one, but a most vital one. Data that travels from quadrant I to quadrant II is transformed from basic facts to a context that is needed by someone or something. Rules are executed on the facts and a context is born. A context that is used (and designed) by the various stakeholders of the organisation.
A very simple example might clarify:
- Fact: on May 15, 2015 I ordered a bicycle for 100 USD in Seattle, US
- Fact: on May 15, 2015 the exchange rate USD to EUR is 0.9
- Fact: on May 15, 2014 the exchange rate USD to EUR is 0.6
- Context: Bikes I ordered in EUR with Exchange Rate 15/5/2015: Bike is 90 EUR
- Context: Bikes I ordered in EUR with Exchange Rate 15/5/2014: Bike is 60 EUR
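In code, the separation of facts and context looks roughly like this. A minimal Python sketch (all names and structures are mine, purely illustrative): facts are recorded once and never change, while a context is derived by executing a rule over the facts.

```python
from dataclasses import dataclass
from datetime import date

# Quadrant I: immutable facts, recorded exactly as they happened.
@dataclass(frozen=True)
class OrderFact:
    order_date: date
    item: str
    amount_usd: float

@dataclass(frozen=True)
class ExchangeRateFact:
    rate_date: date
    usd_to_eur: float

facts_orders = [OrderFact(date(2015, 5, 15), "bicycle", 100.0)]
facts_rates = {
    date(2015, 5, 15): ExchangeRateFact(date(2015, 5, 15), 0.9),
    date(2014, 5, 15): ExchangeRateFact(date(2014, 5, 15), 0.6),
}

# Quadrant II: a context is a rule executed over the facts.
def orders_in_eur(orders, rates, rate_date):
    """Derive the 'orders in EUR' context using the rate of a chosen date."""
    rate = rates[rate_date].usd_to_eur
    return [(o.item, round(o.amount_usd * rate, 2)) for o in orders]

print(orders_in_eur(facts_orders, facts_rates, date(2015, 5, 15)))  # [('bicycle', 90.0)]
print(orders_in_eur(facts_orders, facts_rates, date(2014, 5, 15)))  # [('bicycle', 60.0)]
```

Note that the facts never change: the two contexts differ only in which rule inputs (the rate date) were chosen by the stakeholder.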
What if we could solidify/persist this context and give it the same treatment as facts? Ordered, automated, managed and governed (low entropy)? Still with me?
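A hedged sketch of what such 'solidifying' could look like (again, the structure and names are my own assumptions, not a prescribed design): the derived figure is persisted together with the rule version and the fact inputs that produced it, so the stored context remains reproducible and auditable - the same low-entropy treatment the facts get.

```python
from dataclasses import dataclass
from datetime import date

# A persisted context: the derived value plus full lineage back to the facts,
# so it can be re-derived, audited and governed just like a fact.
@dataclass(frozen=True)
class PersistedContext:
    name: str           # which context this is
    rule_version: str   # which rule produced it
    rate_date: date     # which fact(s) it was derived from
    item: str
    amount_eur: float

ctx = PersistedContext(
    name="orders_in_eur",
    rule_version="v1",
    rate_date=date(2015, 5, 15),
    item="bicycle",
    amount_eur=90.0,
)

# Re-derivation check: the stored context must still equal rule(facts).
assert ctx.amount_eur == round(100.0 * 0.9, 2)
```

The re-derivation check at the end is the point: a persisted context that can no longer be reproduced from its facts has silently gained entropy.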
For the abstract thinker, or my fellow data architects: I am mixing two levels of abstraction here and need to differentiate between the two. The Data Quadrant Model on the data-model level (where we differentiate between facts and context) and the Data Quadrant Model on the data-processing level, where we push data back to the system of quadrant I (don't be fooled, we still enforce the separation between fact and context!). The latter deserves a whole chapter....
Working on it....
1 A system is the part of the universe being studied; the surroundings are the rest of the universe that interacts with it. An isolated system exchanges neither energy nor matter with its surroundings; a closed system exchanges energy but not matter.