
July 01, 2007




Re: using context to improve decoding accuracy

Yeah, it happens to me all the time, with all kinds of data. Like you, I notice it most with spoken language. Using context to decode ambiguous input is part of why hidden Markov models work well for speech recognition.
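To make the HMM point concrete, here is a minimal sketch of Viterbi decoding, the standard way an HMM uses context (transition probabilities between hidden states) to disambiguate noisy observations. All states, observations, and probabilities below are toy values for illustration, not from any real speech model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for obs.

    Context enters through trans_p: the score of a state at time t
    depends on which state preceded it, not just on the observation.
    """
    # Initialize with the first observation.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s given the previous scores.
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy example: infer hidden weather from observed activities.
states = ('Rainy', 'Sunny')
start_p = {'Rainy': 0.6, 'Sunny': 0.4}
trans_p = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
           'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
emit_p = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
          'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}

decoded = viterbi(['walk', 'shop', 'clean'], states, start_p, trans_p, emit_p)
# decoded == ['Sunny', 'Rainy', 'Rainy']
```

Note that 'clean' alone is ambiguous; the decoder resolves it using the transition context from the preceding day, which is the property the comment is pointing at.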


Good point about data demands. Perhaps persisting only "significant" features could reduce storage requirements? Source attribution for the features could be pushed off to cheaper, higher-latency storage media.
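The tiered-storage idea in the previous paragraph could be sketched roughly as follows. The class name, the significance threshold, and the dict-backed "hot"/"cold" tiers are all hypothetical stand-ins; in a real system the cold tier would be a separate, slower medium rather than an in-memory dict.

```python
class TieredFeatureStore:
    """Illustrative sketch: keep only significant features in fast
    storage, push per-feature source attribution to a cheap tier."""

    def __init__(self, significance_threshold=0.5):
        self.threshold = significance_threshold
        self.hot = {}   # feature -> score; stands in for fast storage
        self.cold = {}  # feature -> source; stands in for cheap, slow storage

    def persist(self, feature, score, source):
        # Attribution is always archived to the cheap tier.
        self.cold[feature] = source
        # Only significant features stay in the fast tier.
        if score >= self.threshold:
            self.hot[feature] = score

    def lookup(self, feature):
        # Fast path: significant features only.
        return self.hot.get(feature)

    def attribute(self, feature):
        # Slow path: fetch provenance from the cheap tier.
        return self.cold.get(feature)

store = TieredFeatureStore(significance_threshold=0.5)
store.persist("pitch_contour", score=0.9, source="sensor_a")
store.persist("background_hum", score=0.1, source="sensor_b")
```

Here the insignificant feature never occupies fast storage, yet its provenance remains recoverable from the slower tier, which matches the trade-off the comment proposes.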

What do you think about the points made by ? It seems like an efficient, scalable framework for incorporating "live" updates is a real challenge...
