That's great! Patrick has also coded up something around the Navigation Timing API and taken some approaches I really like (base64-encoding the entire window.performance.timing object and including its sha1 in the /event request, to help ward off garbage data being dumped in), and he's on the way to having a standalone MediaWiki extension. A quick sync-up should ensure that the results meet everyone's needs.

-A

On Thu, Nov 29, 2012 at 4:20 PM, Ori Livneh <ori.livneh@gmail.com> wrote:
Hey Asher,

We have something like this in place for mobile at the moment -- it's (somewhat uselessly) measuring time to DOMReady and DOMContentLoaded, but Jon (CC'd) and I were going to migrate it to the Navigation Timing API sometime this week. Happy to help in whatever way. The Kraken / Vanadium question isn't crucially important, since the data is going to both by default anyhow.

O


On Thursday, November 29, 2012 at 3:40 PM, Asher Feldman wrote:

> We've recently begun trialing a few frontend performance monitoring services (Keynote, Gomez, and trying to get the most out of Watchmouse). They have their individual pros and cons, and when they report sporadic issues, it can be difficult to correlate them to actual user experiences (how many users were affected, where, and to what extent?). The dearth of data around end-user page load times (and things like domComplete) is a major blind spot.
>
> Now that /event messages are flowing from bits to both Kraken and Vanadium, I think an initial in-house system to analyze page load times as measured by actual users could be rapidly prototyped, and could trump the above trials. This may already be an eventual deliverable for Kraken, but given the drive behind the current trials, why wait?
>
> The client side would be simple JS: for n% of page views from a supported browser (IE >= 9, Chrome >= 6, Firefox >= 6, Android >= 4.0), fire off an event request containing everything relevant from the window.performance.timing object (https://developer.mozilla.org/en-US/docs/Navigation_timing).
>
> On the backend, perhaps some frequent periodic processing around GeoIP lookups and ISP (or other network path) determination before the data goes into a store from which we pull structured data for pretty numbers and pictures. The end result should be able to help identify everything from JS/DOM performance issues after a release, to who we should peer with and where we should provision our next edge cache center.
>
> My main questions right now:
>
> - Would Vanadium or Kraken be better suited for building this sooner rather than later (within a few weeks)?
>
> - Would anyone like to help? (David, your guidance around coding the frontend visualization would be highly valued even if you don't have a day or two to personally throw at it)
>
> Asher
>
> _______________________________________________
> Analytics mailing list
> Analytics@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/analytics
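The client-side collection Asher describes above could be sketched as follows: for a sampled fraction of page views on a browser that exposes the Navigation Timing API, copy the numeric fields of window.performance.timing into a fire-and-forget request to the event endpoint. The sampling rate, endpoint path, and query format here are assumptions for illustration only.

```javascript
var SAMPLE_RATE = 0.01; // n% of page views; exact rate to be decided

// window.performance.timing is a host object, so copy its numeric
// fields into a plain object before serializing.
function collectTiming(timing) {
  var out = {};
  for (var key in timing) {
    if (typeof timing[key] === 'number') {
      out[key] = timing[key];
    }
  }
  return out;
}

function maybeSendTiming() {
  if (!window.performance || !window.performance.timing) {
    return; // unsupported browser: no Navigation Timing API
  }
  if (Math.random() >= SAMPLE_RATE) {
    return; // this page view is not in the sample
  }
  var data = collectTiming(window.performance.timing);
  // Fire-and-forget beacon via an image request to the /event endpoint.
  new Image().src = '/event?timing=' + encodeURIComponent(JSON.stringify(data));
}

// Wait for onload (plus a tick) so loadEventEnd and friends are populated.
if (typeof window !== 'undefined') {
  window.addEventListener('load', function () {
    setTimeout(maybeSendTiming, 0);
  });
}
```

Sampling on the client keeps the /event volume proportional to traffic, and the guard on window.performance doubles as the supported-browser check.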



