BI, Uncertainty, and the Two Watch Paradox – Part 2

In part 1 of this post, I described what I called the "two watch" phase of BI system adoption, when discrepancies between existing reports and the results of a newly implemented BI system cause angst on the part of the client and can bring the process of system acceptance and adoption to a screeching halt. In this second part, we move from describing the problem to asking whether there are constructive steps to help everyone through it. In other words, what can be done to get a client to the point where they feel comfortable putting that old watch back in their pocket for good, accepting the numbers coming out of the BI system as the new single watch that lets them know the time without any lingering doubts?

I think there actually are a couple of things that can help. First, you can discuss up front the questions that are going to get asked anyway once discrepancies between old and new reports appear. What level of accuracy is needed from the new BI system? Does the analysis to be done require correct tallying of every single system transaction? Are the existing reports known to REALLY be accurate? Are they validated on an ongoing basis in any way? What will make the client comfortable about pulling the plug on the old reports for good? What are the essential acceptance criteria for the new system? Raising these issues ahead of time instills confidence, just as surely as raising them only after the results of the new and old systems diverge sounds defensive and instills doubt.

However, the most critical aspect of the old-versus-new comparison game is the ability to pinpoint differences in terms of the specific transactions causing the deltas in the totals. The conceptual cul-de-sac to be avoided at all costs is the deadly standoff where the new system is simply trying to match an aggregate number – "Our old report says 45,039 and the new one says 44,823. That can't be right!" At that point you are shooting in the dark, not knowing whether the old report is any good, or whether there really is a bad business rule or programming error lurking somewhere. It is critical that both the old and new reports be traceable to the individual transactions they represent, so that the transactions causing any discrepancies can be individually evaluated. This can actually turn a negative (the reporting discrepancy) into a positive (increased customer confidence), because individual transactions can be isolated and their inclusion or exclusion explained in terms of the organization's business rules.
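To make that transaction-level reconciliation concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that both reports can be exported as transaction-level CSV detail sharing a transaction ID and an amount column; the file names and column names below are hypothetical and will vary by system.

```python
import csv

def load_transactions(path, id_col="transaction_id", amount_col="amount"):
    """Read a transaction-level report extract into a {transaction_id: amount} dict."""
    with open(path, newline="") as f:
        return {row[id_col]: float(row[amount_col]) for row in csv.DictReader(f)}

# Hypothetical detail extracts sitting behind each report's aggregate total.
old = load_transactions("old_report_detail.csv")
new = load_transactions("new_report_detail.csv")

only_in_old = sorted(set(old) - set(new))  # excluded by the new system's rules
only_in_new = sorted(set(new) - set(old))  # picked up only by the new system
# Shared transactions whose amounts disagree (small tolerance for float noise).
valued_differently = sorted(
    tid for tid in set(old) & set(new) if abs(old[tid] - new[tid]) > 0.005
)

print(f"Old total: {sum(old.values()):,.2f}   New total: {sum(new.values()):,.2f}")
print(f"Only in old report ({len(only_in_old)}): {only_in_old[:10]}")
print(f"Only in new report ({len(only_in_new)}): {only_in_new[:10]}")
print(f"Valued differently ({len(valued_differently)}): {valued_differently[:10]}")
```

Each of those three buckets turns the standoff over a single aggregate number into a concrete conversation: every listed transaction's inclusion, exclusion, or revaluation can be explained, or corrected, in terms of the organization's business rules.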

And finally, at the right time, the client needs to agree to turn off the old reports, to cut the cord to the past. It is always tempting to keep the old ones around "just in case," but that only preserves a lingering organizational dependence that needs to be nipped in the bud. Once the new reports are validated, it should be in with the new and definitely out with the old.

At the end of the day, there comes a moment when that happens: when the existing reports can be safely relegated to the vast scrap heap of obsolete software, and everyone can settle down with the one shiny new watch that delivers the single accepted version of reality. And that day, rather than any other milestone in the BI project, is the real finish line we should keep our eyes on from day one.