Sensing importance across a sea of dynamic systems with constantly changing data requires the accumulation and persistence of context. (I am using the term persistence here to mean storing/saving what one has observed and learned – in a database for example.)
If a system does not assemble and persist context as it comes to know it … the computational costs to re-construct context after the fact are too high. Therefore, a system will be more intelligent when it can persist context on data streams … and less intelligent when it does not persist context on data streams.
[Sidebar: After explaining this to my lawyer friend Peter Swire he said this is nothing new. He explained, “That is just like the ‘touch it once’ principle from the One Minute Manager book!” Yes, I had to confess, it is that basic – as is everything I conjure up. And, since when have lawyers become so concise?]
It is True: Context at Ingestion is Computationally Preferred
The highest degree of context attainable, per computational unit of effort, is achieved by determining and accumulating context at ingestion. This is achieved by taking every new data point (observation) received and first querying historical observations to determine how this new data point relates. And once this is determined, what has been learned (i.e., how the new data point relates to other known data points) is saved with the new data point.
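The ingestion loop just described — query history for each new observation, record what was learned, persist both — can be sketched in a few lines. This is purely illustrative; the entity names, the attribute-overlap matching rule, and the in-memory store are my own simplifications, not any actual context engine.

```python
class ContextStore:
    """Persists observations together with the relationships discovered at ingestion."""

    def __init__(self):
        self.observations = []   # every (id, attributes) data point seen so far
        self.relations = {}      # observation id -> ids of related observations

    def ingest(self, obs_id, attributes):
        # 1. Query historical observations: here, "relates" is a toy test for
        #    any shared attribute value (set intersection).
        related = [
            prior_id
            for prior_id, prior_attrs in self.observations
            if attributes & prior_attrs
        ]
        # 2. Persist the new observation AND what was learned about it, so
        #    context never has to be reconstructed after the fact.
        self.observations.append((obs_id, attributes))
        self.relations[obs_id] = related
        for prior_id in related:   # record the link in both directions
            self.relations[prior_id].append(obs_id)
        return related
```

With this shape, `store.ingest("b", {"555-1234", "main st"})` would immediately report any earlier observation sharing the phone number, and that linkage is saved rather than recomputed later.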
Smart biological systems do this too. For example, as we humans “sense” the surrounding environment, we assemble these streaming data observations (sights, sounds, etc.) into context at that exact moment. And we do this, with Sequence Neutral processing – whereby the final context is the same despite the order in which observations are processed – at least for the most part.
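One way to see what Sequence Neutrality demands of a context engine: if context is accumulated with an operation that is commutative, associative, and idempotent (set union is the simplest example), every arrival order of the observations produces the identical final context. This toy demonstration is my own, not a description of any particular engine.

```python
from functools import reduce
from itertools import permutations

def merge(context, observation):
    # Set union: applying observations in any order yields the same result.
    return context | observation

observations = [{"saw smoke"}, {"heard alarm"}, {"smelled burning"}]

# Build the final context under every possible arrival order.
results = {
    frozenset(reduce(merge, order, set()))
    for order in permutations(observations)
}
assert len(results) == 1   # all six orderings converge on one context
```

The hard part, of course, is that real context assembly (entity resolution, for instance) is not naturally commutative, which is exactly why Sequence Neutrality is "no trivial feat."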
Not to get too abstract here. But while I have been harping on the importance of creating Sequence Neutral processes – no trivial feat in real-time context engines – I am coming to the conclusion that a few aspects of Sequence Neutrality cannot be handled on data streams at ingestion! While this gives me a sinking feeling about the consequences for Scalability and Sustainability (i.e., no reloading, no batch processing), I am somewhat comforted by the fact that smart biological systems at the top of the food chain themselves go off-line for batch processing (i.e., sleep). I theorize that dreams are in fact a species' effort to re-contextualize the information that could not be ingested with Sequence Neutrality. Because if humans could do this while awake, from a survival and evolutionary standpoint, we would!
With all of this in mind, I believe that many architectures, systems, and processes that originated in the batch world will probably have a hard time emerging as high-context, intelligent systems. Further, I think next-generation intelligent systems will be designed to assemble context on streams. But we have a long way to go toward intelligence on streams before we must resort to off-line processing.
Hi Jeff,
You and I have had this conversation in the past. I find it a fascinating subject (as do you) to consider. The notion of context, in my mind, is like the notion of truth – only valid at a specific point in time. And of course, since truth is subjective, it follows that context is subjective as well.
However, from a computational standpoint, I would have to separate context from truth, in that context is the raw descriptive attributes that make up "agreed upon surrounding information". There is also a level of context (or a hierarchy of context) that breaks down even further – for example, in relation to fight-or-flight as a human instinct.
I think when processing data we should follow the wet model, the neural model – even though we may not know much about it (yet). The first level is fight-or-flight: deterministic algorithms that classify incoming data sets as fight, flight, or relax may be the first level of filters. This of course has to be learned behavior over time, and the neural net or deterministic algorithms must be running consistently 24x7x365, absorbing input.
Beyond that, establishing context is very similar to the sleeping process – sorting, integrating, and re-arranging our interpretation of all the sensory input we've received. Data is the same way, but now we must grapple with the following: FORM AND FUNCTION must be "stuck back together" in order to find true meaning, or in order to use true data correlation and relevancy weightings. As you and I have said before, the brain may be made up of two basic functions – short-term and long-term memory – but the structure underneath is the same (synapses, dendrites, neurons, etc.). And the functions break down into immediate versus nightly processing.
Following the natural world models might lead us to some interesting conclusions.
Cheers,
Dan L
Posted by: Dan Linstedt | August 30, 2006 at 01:43 AM
Dear Jeff
Finally got round to reading your postings. This is to pick up your assertion about data and context.
Recently, we have been thinking about decision-making based on accumulated experience and knowledge. In the process of thinking about knowledge management as a way to amass and distribute the potential knowledge in an organisation, we concluded that one key challenge of trying to record knowledge is that a piece of knowledge is actually understood only in a particular context. In other words, a piece of data, which is understood as knowledge in a particular context, could be understood differently in a different context. And for the purposes of knowledge management, this poses a serious challenge, as one cannot possibly enumerate all possible current and future contexts.
Take another related example at hand. With Gmail, I am meta-tagging my emails, and some mails have more than 5 metatags (like "technology watch", "friends", "concepts", "ideas", etc.). Yet now and then I will add new tags to old emails as I see new contexts for an old email, which can be considered a "data point".
If this is the situation, how can one possibly persistently tag contexts to any data point? In fact, could it not be argued that there would potentially be more contexts than there are data points? (Remember Max Boisot's vase-shaped diagrams in the Singapore Sensemaking Workshop?)
Notwithstanding this needling doubt, we are going to try out the idea of "data finds data, relevance finds user"...
Regards, Jimmy Khoo
Posted by: Jimmy Khoo | October 03, 2006 at 08:16 AM
My first thoughts on this are that creativity has been shown to jump with sleep: sleep yields creative, efficient solutions to processes that would otherwise be carried out in an ungainly but straightforward, pedantic fashion.
Jeff's idea of off-line processing being the time for re-contextualization is the same as my idea of the function for REM sleep (or REM-sleep associated processing) as the time to do the sometimes difficult process of blending new information into old, especially as it requires remodeling the old structure to incorporate new knowledge. Without the ability to erase or minimize old and new erroneous contextual tags, as Jeff puts it, the system would soon bog down with too many tags, or "more contexts than there are data points", as Jeff's colleague Jimmy Khoo wrote. The idea of interleaving new with old information as being a function for sleep is an idea of Bruce McNaughton, whose ideas themselves are influenced by David Marr.
Many studies show that sleep is important for learning complex associative things (making complex associations between items). Francis Crick and his colleague published a purely theoretical Science paper raising the possibility that sleep was for erasing all that unimportant information that passes into our brains each day.
My research shows both are true. The hippocampus, the assembly factory for complex associative memory and responsible for consolidating those memories to the neocortex in a parallel, distributed fashion, reactivates during REM sleep in a manner consistent both with strengthening newly acquired, as yet unconsolidated memories, and with erasing memories that have already been consolidated. Thus REM sleep could serve to recycle unnecessary synapses encoding what has become erroneous information all over the brain. I have yet to look at other structures. However some research in the field of synaptic plasticity during development strongly suggests that REM may be the time for pruning away the unused tags. My theory is that the unique conditions of this state make it the only time such pruning for remodeling normally takes place, during development and on to the end of life.
One final thought. I'm not sure that normally learning animals have much sequence neutral processing. Sequence may not be an important tag, but in some contexts it is good to have the order in which something was learned stored somewhere for reference. That said, much of what we know and do, especially the "how" of it all, needs no sequence tag. People with no hippocampus (bilateral lesion due to mishap) cannot ever remember having learned how to do something after their hippocampal loss (one famous example is learning how to build the Tower of Hanoi as fast as an intact person without ever remembering having seen the game before), but nevertheless their procedural learning is normal. Sleep may serve a role here, too (especially the sleep spindles of stage 2), but it is only just now starting to be uncovered.
Gina
--
Gina R. Poe, Ph.D.
Assistant Professor
Department of Anesthesiology and
Department of Molecular and Integrative Physiology
University of Michigan
Posted by: G. Poe | December 11, 2006 at 07:08 AM
"Sleep researchers at the University of Wisconsin-Madison School of Medicine and Public Health believe it is more evidence for their theory of "synaptic homeostasis." This is the idea that synapses grow stronger when we're awake as we learn and adapt to an ever-changing environment, and that sleep refreshes the brain by bringing synapses back to a lower level of strength. This is important because larger synapses consume a lot of energy, occupy more space and require more supplies, including the proteins examined in this study."
"Sleep — by allowing synaptic downscaling — saves energy, space and material, and clears away unnecessary "noise" from the previous day, the researchers believe. The fresh brain is then ready to learn again in the morning" (http://www.sciencedaily.com/releases/2009/04/090402143455.htm)
Posted by: James Thornton | August 04, 2011 at 11:55 PM
Hi, Jeff
Even though I am always reading your posts, it has been a long time since I posted a comment.
You are describing a world of systems prepared for Helen Nissenbaum's idea of respecting context when managing privacy – an idea that has been included in the Consumer Privacy Bill of Rights.
http://www.amazon.com/Privacy-Context-Technology-Integrity-Stanford/dp/0804752370
http://www.whitehouse.gov/sites/default/files/privacy-final.pdf
So if we are able to retain context linked to customer consent for their data treatment and flows, then Helen Nissenbaum's idea is achievable in a much more consistent way.
Maybe Peter Swire can provide you good advice on this as well ;-p
Regards
Posted by: Álvaro Del Hoyo | February 09, 2013 at 07:15 AM