Moving this discussion to mobile-l. Ori, are you on mobile-l?
Let's also be mindful to model realistic connection scenarios (2G, 3G, and congested wifi); client-side or router rate limiting is the easiest way to do this. Shaving 3-4 seconds off fast connections makes a world of difference for getting the user to the point where they can interact with the page, and so does shaving 10-15 seconds off slow ones.
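For a concrete starting point, here's a minimal sketch of that kind of throttling using Puppeteer to drive Chrome's DevTools Protocol. The latency and throughput numbers are rough stand-ins for a poor 3G link, not calibrated profiles:

    // Emulate a slow connection via the Chrome DevTools Protocol and
    // time how long the load event takes. The numbers below are rough
    // approximations of a poor 3G link, not calibrated profiles.
    import puppeteer from 'puppeteer';

    async function timeLoadOnSlowLink(url: string): Promise<number> {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      const client = await page.target().createCDPSession();
      await client.send('Network.emulateNetworkConditions', {
        offline: false,
        latency: 400,                         // round-trip time in ms
        downloadThroughput: (400 * 1024) / 8, // ~400 kbit/s, in bytes/s
        uploadThroughput: (100 * 1024) / 8,
      });
      const start = Date.now();
      await page.goto(url, { waitUntil: 'load' });
      const elapsed = Date.now() - start;
      await browser.close();
      return elapsed;
    }

    timeLoadOnSlowLink('https://en.m.wikipedia.org/wiki/Chess')
      .then((ms) => console.log(`load event fired after ${ms}ms`));

Router-level shaping is closer to reality, but something like this is easy to wire into an automated run.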
-Adam
On Thu, Jun 25, 2015 at 5:10 AM, Joaquin Oltra Hernandez <jhernandez@wikimedia.org> wrote:
Hi,
Ori Livneh, Sam Smith, Adam Baso, and Joaquin Hernandez met to talk further about the performance work being scheduled for next quarter.
I've gone through my notes and written down what I learned, and wanted to share it to create a shared understanding of how this planning is going. I'm not sure whether this should go to other, more public mailing lists at this stage; feel free to forward it wherever you think it should go.
- Don't brainstorm and blindly implement ideas. Any ideas we come up with will usually imply complex changes (like only loading the lead section) but won't have the expected return.
- Measure and report; identify key metrics.
- The outcomes of performance work are hard to predict beforehand; you never know how it's going to unfold.
- Suggested workflow: 1. Measure and analyze. 2. Formulate a hypothesis based on concrete data. 3. Implement the hypothesis and go to 1.
- Given the previous point, being effective on performance goals will take a continued effort through the quarter rather than a laser-focused half-quarter push.
- Broad first-sight insights:
  - Server-side time (only a factor on cache misses or for logged-in users) is negligible compared to other factors.
  - Browser-side performance is the mobile site's biggest bottleneck: roughly half the time goes to parsing scripts (~2s) and rendering (~3s) (see the sketch below).
  - It looks like there are wins to be gained from optimizing browser performance.
  - We need to research and communicate before hypothesizing.
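To make that parse/render breakdown concrete, here's a rough client-side sketch using the Navigation Timing API (run in the browser console). The bucket labels are our own, and the timestamps only approximate where parse time ends and render time begins:

    // Rough breakdown of where page-load time goes, via the Navigation
    // Timing API. Bucket names are our own labels; domInteractive and
    // loadEventStart only approximate the parse/render boundary.
    const [nav] = performance.getEntriesByType(
      'navigation'
    ) as PerformanceNavigationTiming[];
    console.table({
      network: nav.responseEnd - nav.startTime,        // fetch + response
      parse: nav.domInteractive - nav.responseEnd,     // HTML/script parsing
      render: nav.loadEventStart - nav.domInteractive, // layout, subresources
    });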
- Tools:
  - Besides Grafana dashboards using the Graphite data (see the metric-push sketch below), we'll coordinate with the Performance team to track other types of metrics coming from other tools, like:
    - Speedcurve.com for browser/front-end performance reports.
    - Maybe sitespeed.io for tracking regressions.
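For numbers that don't already flow into Graphite, pushing a custom timing over Graphite's plaintext protocol is just one line over TCP. A sketch, where the host and metric path are placeholders:

    // Push one value to Graphite over its plaintext protocol:
    // "<metric path> <value> <unix timestamp>\n" to port 2003.
    // The host and metric path below are placeholders.
    import { createConnection } from 'net';

    function sendToGraphite(path: string, value: number): void {
      const timestamp = Math.floor(Date.now() / 1000);
      const socket = createConnection(2003, 'graphite.example.org', () => {
        socket.end(`${path} ${value} ${timestamp}\n`);
      });
      socket.on('error', (err) => console.error('graphite send failed:', err));
    }

    sendToGraphite('mobile.frontend.load_ms', 1234);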
- How to do it:
  1. Measure; establish baseline data (see the median sketch after this list).
  2. Formulate a hypothesis.
  3. Implement and measure locally. If the change looks good, deploy it.
  4. Measure with the change deployed. Evaluate the impact. Write down the results.
  5. Go to 1 or 2.
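As a sketch of steps 1 and 4, taking the median of several runs keeps one noisy load from skewing the before/after comparison. measureLoadMs here stands in for any load-timing helper, e.g. the Puppeteer one earlier in this thread:

    // Collect repeated load timings and summarize with the median, so
    // baseline and post-deploy numbers aren't skewed by a single outlier.
    // measureLoadMs is assumed to be a helper like timeLoadOnSlowLink above.
    async function baselineMedian(
      url: string,
      runs: number,
      measureLoadMs: (url: string) => Promise<number>
    ): Promise<number> {
      const samples: number[] = [];
      for (let i = 0; i < runs; i++) {
        samples.push(await measureLoadMs(url));
      }
      samples.sort((a, b) => a - b);
      const median = samples[Math.floor(samples.length / 2)];
      console.log(`median over ${runs} runs: ${median}ms`, samples);
      return median;
    }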
Going forward we'll communicate regularly with the Performance team about our data, hypotheses, plans, and results, so we can stay in sync and help each other. (Is there a performance mailing list? Should we use wikitech-l?)
Regarding numeric targets: given what we learned in these meetings, it's going to be hard to put numbers on the goals to reach. We'll do our best and report back.
Cheers
ps: If I've gotten anything wrong or missed anything in my notes, please call it out and correct me!