Certainty often shifts with observations over time. And this is good.
How smart an organization can be is directly related to the net sum of its perceptions. These perceptions come in the form of observations – observations collected across the various enterprise sensors (e.g., transactional and monitoring systems).
The second most important factor for smarts is the ability to determine how these observations relate to each other – contextualization, for lack of a better word.
But being smart requires much more than available data and good correlation. Two additional critical elements of smart systems are:
1. An ability to make assertions based on new data points
2. An ability to use new data points to reverse earlier assertions
If everything you ever learned were held in your head only as a probability, your ability to think quickly on your feet would be drastically reduced. Deciding that two objects are the same (aka Semantic Reconciliation) is an example of an assertion. Like human thinking, smart systems make assertions based on previous experience.
Smart systems must also be able to undo earlier assertions made in error. If a new observation is evidence that invalidates an earlier assertion, that incorrect assertion must be corrected (there are some caveats – more on this another time).
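To make this concrete, here is a minimal sketch of the assert/retract pattern in Python. The names (`AssertionStore`, `assert_same`, `retract`) and the evidence strings are hypothetical illustrations, not any particular product's API:

```python
# Toy assertion store: records can be asserted to be the same real-world
# entity (semantic reconciliation), and that assertion can later be retracted
# if new evidence invalidates it.

class AssertionStore:
    def __init__(self):
        self.same_as = {}      # frozenset({rec_a, rec_b}) -> supporting evidence
        self.retractions = []  # audit trail of assertions that were un-made

    def assert_same(self, rec_a, rec_b, evidence):
        """Assert that two records refer to the same entity, remembering why."""
        self.same_as[frozenset((rec_a, rec_b))] = evidence

    def retract(self, rec_a, rec_b, counter_evidence):
        """Undo an earlier assertion that new evidence has invalidated."""
        key = frozenset((rec_a, rec_b))
        if key in self.same_as:
            self.retractions.append((key, self.same_as.pop(key), counter_evidence))

    def are_same(self, rec_a, rec_b):
        return frozenset((rec_a, rec_b)) in self.same_as


store = AssertionStore()

# Earlier observation: two customer records share a name and phone number,
# so the system asserts they are the same person.
store.assert_same("cust:101", "cust:202", evidence="same name + same phone")
assert store.are_same("cust:101", "cust:202")

# A later observation (conflicting dates of birth) invalidates that belief,
# so the earlier assertion is un-made rather than left standing.
store.retract("cust:101", "cust:202", counter_evidence="conflicting dates of birth")
assert not store.are_same("cust:101", "cust:202")
```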
Once presented with compelling new data, systems that cannot flip-flop on previous certainties … are dumb. The same goes for humans.
I thought my youngest son’s birthday was the 5th of the month for the first five years of his life – celebrated his birthday on the 5th, filled in forms using this date, and so on. This went on until I saw his birth certificate, where, to my surprise, I discovered his birthday was really the 2nd of the month! Having to explain the change of date to him and others made for an interesting conversation. When I revealed this to my boy, I used the good news and bad news model – the good news being he was a little older than we thought – kids love that. Should I someday get evidence that the birth certificate was in error … well then … I’ll have to revise the ground truth yet again.
With this point of view, when politicians accuse each other of “flip-flopping,” I always laugh to myself. It is the person who refuses to change his or her point of view – despite ANY and ALL new evidence presented – who scares me the most.
When I speak of Sequence Neutrality, I am referring to exactly this characteristic: new data points remedy earlier assertions. The test is simple: had the new data point been known earlier, would any earlier assertion have come out differently – or is there an assertion that could have been made but was not? While this is hard to do on real-time transactional streams at high data rates, it is nonetheless essential behavior, and hence something I have spent years thinking about and working on.
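One way to picture the test – using a hypothetical matching rule and made-up record names, not any real product's logic – is to re-derive all assertions after each arriving observation and confirm that every arrival order lands in the same final state:

```python
from itertools import combinations, permutations

# Toy observations: each one adds an attribute to a record.
OBSERVATIONS = [
    ("rec:A", "ssn", "123-45-6789"),
    ("rec:B", "ssn", "123-45-6789"),   # shared SSN suggests A and B are the same person ...
    ("rec:A", "dob", "1985-07-30"),
    ("rec:B", "dob", "1990-01-02"),    # ... until conflicting dates of birth say otherwise
]

def current_assertions(records):
    """Decide, from everything known so far, which record pairs are the same entity:
    a shared SSN asserts sameness unless the dates of birth conflict."""
    matches = set()
    for a, b in combinations(sorted(records), 2):
        ra, rb = records[a], records[b]
        same_ssn = "ssn" in ra and ra.get("ssn") == rb.get("ssn")
        dob_conflict = "dob" in ra and "dob" in rb and ra["dob"] != rb["dob"]
        if same_ssn and not dob_conflict:
            matches.add((a, b))
    return frozenset(matches)

def process(stream):
    """Incrementally fold in observations, re-deriving assertions after each one
    so earlier conclusions can be made – or un-made – as later data arrives."""
    records, assertions = {}, frozenset()
    for rec_id, key, value in stream:
        records.setdefault(rec_id, {})[key] = value
        assertions = current_assertions(records)   # may add OR retract pairs
    return assertions

# Sequence neutrality check: no matter the arrival order, the final assertions agree.
finals = {process(order) for order in permutations(OBSERVATIONS)}
assert len(finals) == 1
print(finals.pop())   # -> frozenset(): the A/B match was correctly un-made
```

The brute-force re-derivation after every observation is what makes this toy order-insensitive; doing the equivalent efficiently on high-rate transactional streams is the hard part.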
Incremental learning systems must have a high degree of sequence neutral processing, or there is little chance they will be smart.
On a more technical note: because most systems are not smart, sequence-neutral, flip-flopping systems … downstream systems being fed data (e.g., data marts and case management systems) have not been designed for the ever-changing recontextualization of previous reports. A new breed of systems – with different design parameters – is going to be needed to take full advantage of upstream smart systems.
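As a sketch of what those different design parameters might mean in practice, here is a downstream consumer that accepts corrections as first-class messages rather than treating its feed as append-only. The message shapes (`Upsert`, `Retract`) and report ids are hypothetical:

```python
# Sketch of a downstream consumer (e.g., a data mart feed) built to accept
# corrections, not just appends.
from dataclasses import dataclass

@dataclass
class Upsert:
    report_id: str
    payload: dict

@dataclass
class Retract:
    report_id: str   # references an EARLIER report the upstream system has since un-made
    reason: str

class DownstreamStore:
    def __init__(self):
        self.reports = {}

    def apply(self, msg):
        if isinstance(msg, Upsert):
            self.reports[msg.report_id] = msg.payload   # new or revised report
        elif isinstance(msg, Retract):
            self.reports.pop(msg.report_id, None)       # previous report no longer stands

store = DownstreamStore()
store.apply(Upsert("r-001", {"entity": "cust:101=cust:202", "basis": "same name + phone"}))
store.apply(Retract("r-001", reason="conflicting dates of birth observed upstream"))
assert "r-001" not in store.reports
```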