When I present these days I often show a photo of an outdoor banner depicting a chimpanzee and the words "99.4% human" (I found this advertisement for the Natural History Museum while in London). My notion being: if 0.6% makes this much difference, no wonder existing information systems are not very intelligent. The more I think about this, the more relevant it seems.
Now think Web 2.0 and mashups.
I often hear in this context the term "good enough" used to describe systems or widgets that, while far from perfect, are at least useful – something like the 80/20 test. And while this type of technology and mindset works fine in many contexts, and at times is well received in the marketplace, there are other contexts where this notion of "good enough" just isn’t.
Here is one blatant example. From time to time at the airport I notice the flight schedule monitors have crashed – displaying an operating system core dump. My thinking then goes: I am glad this "good enough" operating system for the flight schedules is not the platform used for the plane’s flight systems! No debate here … it is just not good enough.
In space travel, a few degrees off and you burn up during re-entry or sail out of the solar system. Oops! No such thing as good enough in this department either.
High-performance, high-reliability, and high-accountability systems like those running in the financial services and communications sectors are no place for this emerging fascination with good enough componentry.
Detecting and preempting bad things (e.g., million-dollar fraud events, big things that go "boom" in the night) is also not well served by the good enough mindset.
With respect to intelligent systems that deliver enterprise awareness (e.g., Perpetual Analytics), again this notion of good enough just isn’t. The DNA difference between chimpanzees and humans seems like another example – where being 0.6% off the mark is not nearly good enough.
As a consequence of this thinking, I find myself spending time addressing subtle little things like Sequence Neutrality because, in this case, systems without such properties accumulate unacceptable biases. So my aspirations keep me focused on how to squeeze as many 0.6% improvements into these enterprise awareness systems as possible.
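To illustrate why order-dependence matters, here is a minimal, hypothetical sketch (the names, similarity rules, and clustering logic below are my own illustrative inventions, not the actual Sequence Neutrality mechanism): a greedy matcher whose clusters depend on the order records arrive, next to an order-neutral matcher built on connected components, which produces the same answer no matter the arrival sequence.

```python
from itertools import permutations

# Hypothetical pairwise similarity: which record pairs "match".
SIMILAR = {frozenset({"Bob Smith", "B. Smith"}),
           frozenset({"B. Smith", "Robert Smith"})}

def similar(a, b):
    return frozenset({a, b}) in SIMILAR

def greedy_merge(records):
    """Order-DEPENDENT: each record joins the first cluster whose
    representative (first member) it matches."""
    clusters = []
    for r in records:
        for c in clusters:
            if similar(r, c[0]):
                c.append(r)
                break
        else:
            clusters.append([r])
    return sorted(tuple(sorted(c)) for c in clusters)

def sequence_neutral_merge(records):
    """Order-INDEPENDENT: build the full match graph and take
    connected components (union-find)."""
    parent = {r: r for r in records}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a in records:
        for b in records:
            if a < b and similar(a, b):
                parent[find(a)] = find(b)
    comps = {}
    for r in records:
        comps.setdefault(find(r), []).append(r)
    return sorted(tuple(sorted(c)) for c in comps.values())

records = ["Bob Smith", "B. Smith", "Robert Smith"]
greedy_results = {tuple(greedy_merge(list(p))) for p in permutations(records)}
neutral_results = {tuple(sequence_neutral_merge(list(p))) for p in permutations(records)}
print(len(greedy_results) > 1)    # True: greedy outcome varies with arrival order
print(len(neutral_results) == 1)  # True: sequence-neutral outcome does not
```

The bias the post worries about is exactly the first print: feed the same three records in a different order and the greedy system quietly settles on a different view of reality.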
[And on a Related Although Out of Place Thought: Since good enough is determined in relation to mission and expectations, maybe something to worry about is a "good enough" component for one mission getting mashed up into some other mission with very different reliability expectations. For example, a match/merge engine designed to detect duplicates in customer lists for direct marketing (where the consequence is more or less duplicate mail in the mailbox) later being applied in a law enforcement setting where action involves dudes with guns – think "danger Will Robinson."]
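The mission-mismatch risk can be sketched in a few lines (the names, thresholds, and crude string similarity below are hypothetical stand-ins for a real match/merge engine): a score that comfortably clears a direct-marketing bar can fall well short of a bar where a false positive means the wrong person gets a visit.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    # Crude string similarity standing in for a real match/merge engine.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical mission-specific thresholds: the tolerable cost of a
# false positive differs wildly between the two settings.
MARKETING_THRESHOLD = 0.80        # worst case: a duplicate catalog in the mail
LAW_ENFORCEMENT_THRESHOLD = 0.95  # worst case: dudes with guns at the wrong door

score = name_similarity("Jon A. Smith", "John A. Smyth")
print(score >= MARKETING_THRESHOLD)        # "good enough" for direct mail
print(score >= LAW_ENFORCEMENT_THRESHOLD)  # not good enough for this mission
```

The point is not the particular numbers but that the threshold is part of the mission, and it does not travel with the component when the component gets mashed up somewhere new.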