A colleague sent me this link the other day — http://martinfowler.com/bliki/PolyglotPersistence.html — and I appreciate the author’s point of view. In the discussion that ensued, we started talking about the topics a bit more in depth. One item that came up is whether, in this polyglot world, every application would need to track every other application’s use of every data item. My response was:
While you don’t need every application to know every other application’s use of every data item, you do need (I contend) a unified (mental) model of the business constraints/requirements of the data items which are shared (or related/referenced). This is a gap in the way the original author was describing this polyglot persistence (IMHO) – he seems to leave that as an ‘exercise for the reader’.
If you have two medical insurance applications, “Registration” and “Reimbursement”, then they need to agree on the higher-order (conceptual and/or logical) data model against which both applications would operate. For example, does Reimbursement make a check out to the patient, or to the person who heads the household? If Registration doesn’t have a concept of ‘head of household’, then how would Reimbursement be able to implement that?
The abstraction capability of a web service _does_ enable an application to conceal the nitty-gritty of how exactly it stores its data (and so does SQL), but there still has to be some exposed (and agreed-upon) data model.
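A quick sketch of that separation — internal storage concealed, agreed-upon model exposed. The store layout and field names here (get_patient, _rows, full_name, household_id) are made up for illustration; the backend could just as easily be a document store or a relational table without the contract changing:

```python
# Private storage detail: a flat dict of rows. Consumers never see this
# shape; it could be swapped for any other backend.
_rows = {
    "p1": {"fname": "Ada", "lname": "Lovelace", "hh": "h9"},
}

def get_patient(person_id: str) -> dict:
    """The exposed, agreed-upon data model: a record with
    'person_id', 'full_name', and 'household_id' -- and nothing else."""
    row = _rows[person_id]
    return {
        "person_id": person_id,
        "full_name": f"{row['fname']} {row['lname']}",
        "household_id": row["hh"],
    }

# Usage: consumers depend only on the contract fields.
record = get_patient("p1")
assert record["full_name"] == "Ada Lovelace"
```

The storage can change freely; the exposed model is the part that requires agreement across applications, which is exactly the point above.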
I contend that this agreement at the conceptual/logical level is both the reason that the ‘Shared Database Integration’ pattern has been so favored, and also the reason people rail against it and claim that the model is too monolithic and slow to change. I have rarely encountered an organization that is cognizant of (let alone able to effectively express) the data relationships in its business. This lack of understanding (I claim) is the root cause of the glacial speed at which data models typically evolve.