February 01, 2006



Fred M-D


Again, I am definitely a major fan of the SRD/NORA (now IBM Entity Analytics) technology being discussed. I'd like to raise the topic of NONDETERMINISTIC discovery versus discovery through existing, deterministic predictive models...

There's tremendous value in using predictive, existing models -- which one may calibrate by changing the attributes and ranges in the associated model parameters. There are many alarms and discoveries that can be made with this dynamic, yet pre-determined approach.

Then there's the nondeterministic discovery model -- bounded only by the set of all possible permutations -- which aggregates information via numerous affinity analysis heuristics: e.g., temporal clustering, co-occurrence clustering, and other "discovered" affinities (i.e., common attributes not already present in the existing predictive models). Clearly, the computational complexity of discovery heuristics has been prohibitive in the past. But advances in algorithm optimization, along with increased computational power and storage efficiencies, make this discovery approach more and more feasible today.
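To make "co-occurrence clustering" concrete, here is a minimal sketch of one such affinity heuristic: counting which attribute values repeatedly show up together across observations, without any pre-built predictive model. The observation data and attribute names below are entirely hypothetical, and real systems would use far more sophisticated scoring than a raw pair count.

```python
from collections import Counter
from itertools import combinations

# Hypothetical observations: each is the set of attribute values seen
# together in one record (or one time window, for temporal clustering).
observations = [
    {"addr_123", "phone_555", "name_A"},
    {"addr_123", "phone_555", "name_B"},
    {"addr_999", "phone_555"},
]

# Count how often each pair of attribute values co-occurs.
pair_counts = Counter()
for obs in observations:
    for a, b in combinations(sorted(obs), 2):
        pair_counts[(a, b)] += 1

# Pairs seen together more than once become candidate "discovered"
# affinities -- raw material a human analyst could then review and
# promote into the predictive models.
affinities = {pair for pair, n in pair_counts.items() if n > 1}
print(affinities)  # {('addr_123', 'phone_555')}
```

The deliberate split at the end -- machine proposes, human disposes -- matches the "one needs both" point below.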

My experience is that one needs both: computers to discover affinities / clustering models; and humans to pragmatically view these results and select the clusterings that truly make sense for a given domain / problem set. These can then be added to the predictive models -- to make better decisions, etc.

Thoughts / comments / foreshadowing you can share?

Thanks in advance!
Fred M-D

VZ Farrell

"As data changes, the new observation is integrated into the collective knowledge" with the incremental knowledge resulting in insight. Does this assume that each new observation generates all the concomitant relationships explicitly, or that inference is used to implicitly draw from what is explicit in the collective knowledge? I suspect it's the latter or the integration would become onerous. So, in other words, if you know that A=B and the new observation that B=C is integrated, if you are making use of an inference engine, you now automatically know that A=C. One new observation (that C cannot=D) can yield a lot of additional knowledge (A cannot=D, B cannot=D).
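The A=B, B=C example can be sketched with a small union-find structure plus a set of inequality constraints, showing how a single new observation implicitly extends the collective knowledge without recomputing every relationship. This is only an illustrative toy (the class name and entity labels are mine, and it does no consistency checking), not the actual engine under discussion.

```python
class EqualityStore:
    """Toy knowledge store: equalities via union-find, plus
    recorded inequalities between the resulting clusters."""

    def __init__(self):
        self.parent = {}
        self.distinct = set()  # frozensets of roots known to be unequal

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def assert_equal(self, a, b):
        # One explicit observation merges two clusters; every pairwise
        # equality across them now follows implicitly.
        self.parent[self._find(a)] = self._find(b)

    def assert_not_equal(self, a, b):
        self.distinct.add(frozenset((self._find(a), self._find(b))))

    def equal(self, a, b):
        return self._find(a) == self._find(b)

    def not_equal(self, a, b):
        return frozenset((self._find(a), self._find(b))) in self.distinct

kb = EqualityStore()
kb.assert_equal("A", "B")
kb.assert_equal("B", "C")      # one new observation: B=C
print(kb.equal("A", "C"))      # True -- A=C was never stated explicitly
kb.assert_not_equal("C", "D")  # one more: C cannot = D
print(kb.not_equal("A", "D"))  # True -- A cannot = D follows
print(kb.not_equal("B", "D"))  # True -- and so does B cannot = D
```

This supports the "latter" reading: nothing is materialized per-pair; the inference is drawn from what is already explicit in the store at query time.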

Bob Gourley

Jeff, I've always valued this post. Now it's one I'm recommending to all my CTO friends in the federal space. The Xmas terror attack is going to cause lots of folks to rethink our systems, and, although I'm sure that smart humans can find ways to defeat any systems we throw at this, I'm also certain that we can optimize our systems to do better by following the model you lay out here.
