
July 01, 2007

Comments


Brian

Re: using context to improve decoding accuracy

Yeah, it happens to me all the time, with all kinds of data. Like you, I notice it most with spoken language. Using context for decoding is part of why hidden Markov models are good at speech recognition.
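To illustrate the point about context: in an HMM, the transition probabilities act as the context that disambiguates acoustically identical inputs. A minimal Viterbi sketch (all words, phone labels, and probabilities below are made-up toy values, not real acoustic-model numbers):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for the observations."""
    # Initialize with the first observation.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best previous state, weighted by transition (the "context").
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("ice", "eyes", "cream")
start_p = {"ice": 0.4, "eyes": 0.4, "cream": 0.2}
# Context lives here: "cream" is very likely after "ice", unlikely after "eyes".
trans_p = {
    "ice":   {"ice": 0.05, "eyes": 0.05, "cream": 0.9},
    "eyes":  {"ice": 0.45, "eyes": 0.45, "cream": 0.1},
    "cream": {"ice": 0.4,  "eyes": 0.4,  "cream": 0.2},
}
# "ice" and "eyes" are acoustically identical, so emissions can't tell them apart.
emit_p = {
    "ice":   {"AY-S": 0.9, "K-R-IY-M": 0.1},
    "eyes":  {"AY-S": 0.9, "K-R-IY-M": 0.1},
    "cream": {"AY-S": 0.1, "K-R-IY-M": 0.9},
}

print(viterbi(("AY-S", "K-R-IY-M"), states, start_p, trans_p, emit_p))
# The transition context resolves the ambiguous first frame to "ice".
```

The emission scores alone can't distinguish "ice" from "eyes"; the transition model (context) breaks the tie, which is exactly the decoding effect described above.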

beecaver

Good point about data demands. Perhaps persisting only "significant" features could reduce storage requirements? Source attribution for the features could be pushed off to cheaper, higher-latency storage media.
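The split could look something like this. A minimal sketch, assuming a score-based significance cutoff; the store names, threshold, and record formats are all hypothetical stand-ins (the "cold" dict standing in for cheap, high-latency media like object storage):

```python
SIGNIFICANCE_THRESHOLD = 0.7  # assumed cutoff; would be tuned per workload

fast_store = {}  # feature -> score (hot, low-latency tier)
cold_store = {}  # feature -> source-attribution records (cheap, slow tier)

def persist(feature, score, sources):
    """Persist a feature, splitting it across storage tiers."""
    if score >= SIGNIFICANCE_THRESHOLD:
        fast_store[feature] = score  # only significant features stay hot
    cold_store[feature] = sources    # bulky attribution always goes cold

persist("name_match", 0.92, ["crm:4411", "billing:7002"])
persist("zip_overlap", 0.31, ["crm:4411"])

# Only the significant feature occupies the fast tier,
# but attribution for both survives in the cheap tier.
```

Lookups then hit the fast tier first and only fall back to the slow tier when someone actually asks "where did this feature come from?", which is the rarer query.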

What do you think about the points made by http://www.identityresolutiondaily.com/70/making-a-case-for-living-context-in-identity-resolution/ ? It seems like an efficient / scalable framework for incorporating "live" updates is a real challenge...

The comments to this entry are closed.