
June 03, 2007

Comments

You can follow this conversation by subscribing to the comment feed for this post.

Darryl Williams

Jeff,

Interesting thought piece. This is something that is desperately needed--now. Given the current evolution of data, though, one has to expect that the future of Persistent Context will be a fusion of textual entity data and video entity data: non-obvious relationship data combined with face recognition and other biometrics, for example.

With that said, data is data is data. By digitizing what is a truck, what is a color, what is a crease on a face, Persistent Context should be able to adapt to the point of following an item of interest everywhere. Imagine Persistent Context being used for Amber Alerts, tied into traffic cameras, ATM cameras, and so on.
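To make the fusion idea concrete, here is a toy sketch (not Jeff's actual engine; the class, attribute names, and overlap-based matching rule are all made up) of accumulating observations about an entity from multiple sources and then resolving a new camera sighting against that accumulated context:

```python
# Toy sketch of "persistent context": accumulate observations about an entity
# from different sources, then resolve a new sighting against that context.
# The class, attribute names, and overlap-based matching rule are hypothetical.
from collections import defaultdict

class PersistentContext:
    def __init__(self):
        # entity_id -> set of (attribute, value) pairs observed so far
        self.context = defaultdict(set)

    def observe(self, entity_id, attributes):
        """Fold a new observation (textual record, face match, etc.) into context."""
        for key, value in attributes.items():
            self.context[entity_id].add((key, value))

    def match(self, attributes, threshold=2):
        """Return entity ids whose accumulated context overlaps the new sighting."""
        sighting = set(attributes.items())
        return [eid for eid, known in self.context.items()
                if len(known & sighting) >= threshold]

ctx = PersistentContext()
ctx.observe("person-17", {"plate": "ABC-123", "vehicle": "red truck"})
ctx.observe("person-17", {"face_id": "f-9912"})

# A traffic-camera sighting: does it resolve to an entity we already know about?
print(ctx.match({"plate": "ABC-123", "vehicle": "red truck"}))  # ['person-17']
```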

Israel L'Heureux

Hi Jeff,

I'm new to your blog, but I'm excited at the progress you're making with the small footprint database.

Over at www.assetbar.com, we're trying to tackle similar problems, but we're taking a different technical approach. We've implemented a horizontally scalable database filesystem that can sustain rather large numbers of reads and writes.

We're also distributing the db load across many machines, but whereas something like GFS/Bigtable is optimized for large, sequential reads, we're optimizing for lots of small reads and writes: in particular, for individual user behavior, and for how large numbers of users interact with an "asset" (a file, photo, blog post, etc.).
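Very roughly, that small-read/small-write distribution could be sketched as key-based sharding like this (the node names and hashing choice are hypothetical, not assetbar's actual design):

```python
# Hypothetical key-based sharding of many small reads/writes across machines.
# Node names and the hashing choice are illustrative only.
import hashlib

NODES = ["db-node-1", "db-node-2", "db-node-3"]

def node_for(key: str) -> str:
    """Pick a shard for a key; the same key always lands on the same node."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

# A user's click on an asset is a tiny write routed to the shard owning that key.
print(node_for("user:42"))
print(node_for("asset:achewood-981"))
```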

So by combining a long history of user behavior / activity (one context) with the context of one or more different assets in real time, we're shooting for a web-scale personalization engine / platform. It's a different take on Greg Linden's interesting Findory project.
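The combination of the two contexts could likewise be sketched, loosely, as scoring an asset by overlapping a user's long-lived behavior profile with the asset's own attributes (the names and scoring rule below are invented for illustration):

```python
# Made-up sketch of combining two contexts: a user's long-lived behavior profile
# and an asset's own attributes, scored at request time to rank assets.
def score(user_history: dict, asset_tags: set) -> int:
    """user_history maps a tag to how often this user has engaged with it."""
    return sum(user_history.get(tag, 0) for tag in asset_tags)

user_history = {"webcomic": 12, "humor": 7, "politics": 1}  # long-lived context
asset_tags = {"webcomic", "humor"}                          # per-asset context

print(score(user_history, asset_tags))  # 19 -> rank this asset high for this user
```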

So far, we have a live proof of concept of our non-SQL architecture that is serving nearly 10M assets a month. The content is from an indie web comic called Achewood, by Chris Onstad.

This fall, we'll be launching a new application on our platform that should be widely useful.

In the meantime, I'd love to read your whitepaper and I hope that later this year we'll be able to publish stats and whitepapers of our own.

Again, great project!

Geva Perry

Another approach for those interested in these kinds of systems is Space-Based Architecture, which utilizes an In-Memory Data Grid in conjunction with persistence as needed.

See GigaSpaces' implementation.

http://www.gigaspaces.com
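For readers unfamiliar with the pattern, a minimal sketch of the write-behind idea behind an in-memory data grid might look like this (this is not the GigaSpaces API, just an illustration of serving from memory while persisting asynchronously):

```python
# Illustration of the write-behind pattern behind an in-memory data grid:
# reads and writes are served from memory, persistence happens asynchronously.
# This is not the GigaSpaces API, just a sketch of the idea.
import queue
import threading

class InMemoryGrid:
    def __init__(self, backing_store):
        self.cache = {}                 # hot data lives in memory
        self.pending = queue.Queue()    # writes waiting to be persisted
        self.backing_store = backing_store
        threading.Thread(target=self._flush, daemon=True).start()

    def write(self, key, value):
        self.cache[key] = value         # fast path: memory only
        self.pending.put((key, value))  # durable write happens later

    def read(self, key):
        return self.cache.get(key)

    def _flush(self):
        while True:
            key, value = self.pending.get()
            self.backing_store[key] = value  # a real system would write to a database

store = {}
grid = InMemoryGrid(store)
grid.write("order:1", {"qty": 3})
print(grid.read("order:1"))  # served from memory; the store catches up in the background
```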

MA Wedding Videographer

Great job. You've come a long way.

Nathaniel Eliot

In the interests of full disclosure, I should point out that I'm an open-source fanatic. Unless you're using an OSI-approved license, I'm going to crib from your work to build an open-source engine. If your software is open-source, however, I'll be glad to contribute as much as I can.

That all said, I'd love a copy of that whitepaper, if you're comfortable giving me one. Thank you for this site, either way; it's helped clarify my understanding of systems integration.
