We've recently begun trialing a few frontend performance monitoring services - Keynote and Gomez - while trying to get the most out of Watchmouse. They have their individual pros and cons, and when they report sporadic issues, it can be difficult to correlate them with actual user experiences (how many users were affected, where, and to what extent?). The dearth of data around end-user page load times (and things like domComplete) is a major blind spot.

Now that /event messages are flowing from bits to both kraken and vanadium, I think an initial in-house system to analyze page load times as measured by actual users could be rapidly prototyped and would trump the trials above. This may already be an eventual deliverable for kraken, but given the drive behind the current trials, why wait?

The client side would be simple JS: for n% of page views from a supported browser (IE >= 9, Chrome >= 6, Firefox >= 6, Android >= 4.0), fire off an event request containing everything relevant from the window.performance.timing object (https://developer.mozilla.org/en-US/docs/Navigation_timing).
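Something like the rough sketch below - the /event endpoint shape and the 1% sample rate are placeholders, not the real bits interface:

    // Hypothetical sketch: posting to /event and sampling 1% are
    // placeholder choices, not the actual bits contract.
    (function () {
      // Bail out on browsers without the Navigation Timing API.
      if (!window.performance || !window.performance.timing) return;

      // Sample n% of page views (here n = 1).
      if (Math.random() >= 0.01) return;

      window.addEventListener('load', function () {
        // loadEventEnd isn't populated until after load handlers
        // finish, so defer one tick before reading the timings.
        setTimeout(function () {
          var t = window.performance.timing;
          var payload = {
            url: document.location.href,
            navigationStart: t.navigationStart,
            domainLookupStart: t.domainLookupStart,
            domainLookupEnd: t.domainLookupEnd,
            connectStart: t.connectStart,
            connectEnd: t.connectEnd,
            responseStart: t.responseStart,
            responseEnd: t.responseEnd,
            domInteractive: t.domInteractive,
            domComplete: t.domComplete,
            loadEventEnd: t.loadEventEnd
          };
          // Fire-and-forget event request.
          var xhr = new XMLHttpRequest();
          xhr.open('POST', '/event', true);
          xhr.setRequestHeader('Content-Type', 'application/json');
          xhr.send(JSON.stringify(payload));
        }, 0);
      }, false);
    })();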

On the backend, perhaps some frequent, periodic processing around GeoIP lookups and ISP (or other network path) determination before the events go into a data store, from which we pull structured data for pretty numbers and pictures. The end result should help identify everything from JS/DOM performance issues after a release to who we should peer with and where we should provision our next edge cache center.
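As a sketch of the enrichment step, assuming events arrive as JSON with the raw timing fields plus a client IP - the enrich() shape is a placeholder, and geoip-lite is just one example lookup library:

    // Hypothetical enrichment step before events hit the data store.
    var geoip = require('geoip-lite');

    function enrich(event) {
      var t = event; // raw window.performance.timing fields
      var geo = geoip.lookup(event.clientIp) || {};
      // ISP/ASN determination would need a separate database
      // (e.g. MaxMind's ASN data) - omitted here.
      return {
        url: event.url,
        country: geo.country,
        region: geo.region,
        city: geo.city,
        // Derived durations in ms - the raw fields are epoch
        // timestamps, so deltas are what we actually chart.
        dnsMs: t.domainLookupEnd - t.domainLookupStart,
        connectMs: t.connectEnd - t.connectStart,
        ttfbMs: t.responseStart - t.navigationStart,
        domCompleteMs: t.domComplete - t.navigationStart,
        pageLoadMs: t.loadEventEnd - t.navigationStart
      };
    }

    module.exports = enrich;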

My main questions right now:

- Would vanadium or kraken be better suited for building this sooner rather than later (within a few weeks)?

- Would anyone like to help? (David, your guidance around coding the frontend visualization would be highly valued, even if you don't have a day or two to personally throw at it.)

Asher