
November 13, 2006

Comments


Dan Linstedt

Jeff, once again you have got me thinking. Of course you and I - well, we have a lot to think about. Or should I say contextualize on... Anyhow, here are some basic thoughts I have about contextualization as it relates to data attribution. But first, I must say: I completely agree that data should be cleared of privacy and ethics concerns before being centralized by the librarian. However, going back to some of our previous conversations on form, function, and the human mind... I'd like to discuss context and data attribution.

As you know, I've worked on a data architecture centered around the notion of "key" information - that is to say, a single idea, thought, word, or action on which attention is focused (like an index). This key is meaningless by itself - or better yet, it holds different meanings for different librarians. For instance, the mention of a simple date - July 12th, 1954 - may hold relevance for many different individuals, yet by itself it only signifies a point in time.

However esoteric it may seem, a point in time in and of itself is meaningless to an observer outside those boundaries. But I ramble. The first recognition of "context" for specific information has got to be the key, the whole key, and nothing but the key... Getting the librarian to recognize "keys" versus "descriptive information" would, I think, lead to some interesting findings in the foundational construction of a library system.

Now, if I may be so bold as to build on this concept... The KEY (once recognized) can become a building block for context - that is, if we step back and over-simplify the notion of context to mean the description surrounding the key. For instance, July 12th, 1954: someone says, "I got married"; someone else says, "I drove my first car"; someone else says, "the day was cloudy and gray."

All of these provide the foundations of context for a date (one example of a key). From this we branch out into establishing a) associations between keys, b) information describing a single key, and c) information describing the relationship between two or more keys.
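The key/description/relationship split above can be sketched in code. This is a minimal illustration, not Dan's actual architecture; all class and field names here are hypothetical, chosen only to mirror the three categories he lists.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Key:
    """A bare key, e.g. a date: meaningless on its own."""
    value: str

@dataclass
class Description:
    """Category (b): information describing a single key."""
    key: Key
    text: str
    observer: str  # different observers attach different context

@dataclass
class Association:
    """Categories (a) and (c): a relationship between keys, plus
    information describing that relationship itself."""
    keys: tuple
    text: str

# The same key accrues different context from different observers:
date = Key("1954-07-12")
context = [
    Description(date, "I got married", "observer A"),
    Description(date, "I drove my first car", "observer B"),
    Description(date, "the day was cloudy and gray", "observer C"),
]
```

The point of the structure is that the key itself carries no meaning; everything interpretive lives in the descriptions and associations that surround it.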

Beyond keys and relationships comes the real work - understanding, or assigning relevance or meaning. In the real-time world, this notion is like the fight-or-flight response - a very simplistic assignment of "value" and "danger" to information. Many times, as you have said above, some of this information (if not all of it) is processed during sleeping hours and re-shuffled. I believe it is at these times that we really decide how this information relates to our past experiences, assign it to categories, and establish context on the basis of pre-existing "knowledge".

To finish with the contextual piece: keys surrounded by "descriptive information" based on arrival timings are what change our perspectives. Mining this information for trends occurs during the night; those trends are then used by the librarian during the day, in real time, to determine fight or flight and the initial relevance of information.

Now, regarding sequences of neutrality, the only assumptions I can make are: a) the librarian must be smart enough to separate the keys from the descriptions; b) all data or information flowing into this system must be stamped with an arrival time - not to say this is the order in which it arrived, but to have some form of reference for later assimilation; c) the librarian is responsible for categorizing - there must be a master librarian sequence for establishing meaning and definition within and across categories, making sense of a specific viewpoint and assigning relevance based on what the end-user has asked for (i.e., tracking queries as data).

Once the tracking mechanism (queries as data) has been put in place, recategorization (mining) can become self-sustaining over time, learning more and more about how these pieces of information should be connected and how they are utilized.
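The "queries as data" idea can be sketched as follows: log which keys each query touches (stamped with an arrival time, per assumption b above), then a nightly mining pass counts how often key pairs co-occur in queries, so the librarian can strengthen connections between keys that are repeatedly asked about together. All names are hypothetical; this is one simple way to realize the idea, not a definitive design.

```python
from collections import Counter
from itertools import combinations

query_log = []  # each entry: (arrival_time, keys touched by the query)

def record_query(arrival_time, keys):
    """Real-time side: treat each query itself as data to be stored."""
    query_log.append((arrival_time, tuple(sorted(keys))))

def mine_connections(log):
    """Nightly pass: count how often each pair of keys co-occurs
    in queries; higher counts suggest stronger connections."""
    pairs = Counter()
    for _arrival_time, keys in log:
        for a, b in combinations(keys, 2):
            pairs[(a, b)] += 1
    return pairs

record_query(1, ["1954-07-12", "wedding"])
record_query(2, ["1954-07-12", "wedding"])
record_query(3, ["1954-07-12", "weather"])

strength = mine_connections(query_log)
# the pair asked about most often surfaces as the strongest connection
```

Feeding the mined strengths back into how the librarian categorizes is what would make the loop self-sustaining.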

The real trick is then figuring out what might be "not yet seen" that would solve one of the questions posed, or not yet asked.

Does any of this make any sense? My two cents anyhow...

Cheers, and let's chat.
Dan Linstedt

SG

Perhaps you and/or IBM might be interested in some of the proprietary data analysis methodologies we have developed at Strategic IT Security LLC ( www.StrategicITSecurity.com ). The starting point was Discovery Informatics, and the work has progressed/matured far beyond DI. In a nutshell, we address a number of interdisciplinary approaches, including areas such as: 1a. where A+B may or may not equal B+A; 2a. where A+B+C may or may not equal B+A+C; 3a. and other factors that, as we understand it, have not been fully considered or automated. Thank you, SG

The comments to this entry are closed.