There’s a never-ending debate about NoSQL vs. RDBMS systems that seems to polarize people who prefer one technology over the other. In fact, it seems that every time I talk to someone who dislikes NoSQL technology in general, the topic of eventual consistency comes up. The idea that not all data in our system might be up to date seems to be one of the concepts that scares DBAs most of all.
I was chatting with some people recently about complex SQL Server configurations, and the topic of replication came up. While replication is a technology with great potential, it seems to have been somewhat neglected by Microsoft, and its implementation is both amazing and brittle. However, if you think about it, replication results in data that isn’t consistent across systems.
I’m sure many would argue that this isn’t an issue, but how many of you have businesses that make decisions or have processes built on data in replicated databases? I’m sure plenty of you do, and most of the time, the data is consistent enough for use by our clients.
There are plenty of ways in which we implement data movement across our databases that produce potentially inconsistent views of the data for our clients. In fact, I’ve had no shortage of discussions with clients who can’t understand why two reports run minutes apart show different results.
Today I’m curious how many of you have systems that your businesses depend on where the data is eventually consistent because of some technology that moves information from one database to another. Perhaps you might even share some of the tricks you use to ensure that delays or problems in your transfer process are detected and fixed before your clients realize just how inconsistent their data might be.
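One common trick for catching transfer delays is a heartbeat (or “canary”) row: a job on the source system updates a timestamp on a schedule, and a monitor on the destination checks how stale that timestamp has become. Here’s a minimal sketch of the monitoring logic in Python; the heartbeat mechanism, the five-minute threshold, and the function names are all assumptions for illustration, not a prescribed implementation.

```python
from datetime import datetime, timedelta

# Assumed setup: a job on the publisher updates a heartbeat row every
# minute, and that row is replicated to the subscriber like any other data.
# The monitor reads the replicated timestamp and compares it to the clock.

def replication_lag(publisher_heartbeat: datetime, now: datetime) -> timedelta:
    """How far behind the replicated copy of the heartbeat row is."""
    return now - publisher_heartbeat

def is_stale(publisher_heartbeat: datetime, now: datetime,
             threshold: timedelta = timedelta(minutes=5)) -> bool:
    """True when the replicated heartbeat is older than the alert threshold."""
    return replication_lag(publisher_heartbeat, now) > threshold
```

A check like this can run on a schedule and page someone when `is_stale` returns `True`, so a stuck distribution agent surfaces as an alert rather than as a confused client comparing two reports.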